The Irrational Ape


This is a very entertaining book, full of anecdotes, humour and relevant real-life stories that drive the author’s points home. I found the last chapter, on the various ‘how to…’ guides, useful not only for the general public but also for post-graduate students. They will be well equipped to critically appraise the literature in their fields after finishing it.

Why is critical thinking more important now than ever?

The world has changed. The majority of our information comes from social media or other online sources, much of it of dubious quality. Almost 60% of articles shared on social media are shared by people who haven’t even read them. A massive 2018 study published in Science found that ‘Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information’. Emotional content was a predictor of how widely shared an item would be, and false narratives were crafted to elicit disgust, fear and direct anger. Low-quality content is generated at an exponential rate. Information is now abundant; our ability to determine what is likely to be true, however, has not kept pace with its ever-growing volume.

Sense About Science runs a laudable ‘Ask for Evidence’ campaign, supporting people to query claims on everything from healthcare to public policy. Simply learning to suspend acceptance of a particular narrative until it has been independently confirmed is a hugely beneficial habit we can all adopt. There is good evidence that analytical thinking reduces acceptance of pseudoscientific bullshit, and that encouraging people to reflect rather than intuitively accept a statement makes them far more likely to spot dubious sentiment.

He, who will not reason, is a bigot; he, who cannot, is a fool; and he, who dares not, is a slave.
— William Drummond

On availability heuristic

The availability heuristic is, in effect, a mental shortcut that, when we evaluate a concept or form an opinion, relies on the immediate examples that come most easily to mind. It pivots on the assumption that if something is easy to recall, it must be important, or at least more important than alternative explanations. The easier it is to recall information, the greater stock we place in it. But the mere fact that some information is recent or memorable doesn’t make it true, nor are any conclusions drawn from this shortcut watertight.

A desire to find universal causes for things is understandable. We have an intrinsic desire for simple narratives, where cause and effect are clear and well defined. Yet, in the interwoven machinery of reality, this is often the exception rather than the rule.

On supplements and natural products

Uranium and arsenic are ‘natural’ but you would be ill-advised to sprinkle them on your breakfast cereal. The simplistic conflation of natural with healthy or good is a non sequitur, fatally scuppered by the equivocal adjective ‘natural’.

On motivated reasoning

Motivated reasoning demands impossibly stringent standards for any evidence contrary to one’s beliefs, while accepting uncritically even the flimsiest evidence for ideas that suit one’s needs. Rather than rationally evaluating evidence that might confirm or refute a belief, it uses our biases to look only at evidence that fits what we already believe and to dismiss that which unsettles us. It is closely related to confirmation bias, our tendency to seek, remember and frame information in a way that agrees with our preconceived beliefs and world-views, while minimising contradictory information.

On dealing with people with motivated reasoning

The problem is that this well-meaning and considered ‘information-deficit’ approach hinges on the presupposition that the intended audience is basing their position on the balance of evidence. If the motivations underlying vehement protestations are ideological in nature, then such a well-meaning endeavour will always be in vain.

Politicians use statistics in the same way that a drunk uses lampposts – for support rather than illumination.
— Andrew Lang

Lies, damn lies and statistics

Don’t throw the baby out with the bathwater: statistician Frederick Mosteller noted that ‘while it is easy to lie with statistics, it is even easier to lie without them’. At their best, statistics are incredibly useful for quantifying life in an uncertain world. At their worst, devoid of context and understanding, they can be mystifying and misleading.

Why does snake oil sell?

Regression towards the mean is the observation that when the first measurement of a variable is extreme, the next measurement tends to be closer to the average. For example, people usually seek help when their symptoms are at their zenith. This is an extreme state, and with the passage of time it recedes to a more normal baseline. Yet many still attribute their recovery to long-debunked folk medicine rather than to the phenomenal talents of their own immune system.
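The effect is easy to see in a toy simulation (my own illustration, not the book’s; the symptom scale and numbers are invented). People are ‘selected’ because their first reading was extreme; a second, independent reading drifts back towards the true average with no treatment at all.

```python
import random

random.seed(42)

def measure(true_level=5.0, noise=3.0):
    """One noisy measurement of symptom severity on a hypothetical 0-10-ish scale."""
    return true_level + random.gauss(0, noise)

# Simulate people who seek help only when symptoms look extreme (> 9),
# then take a second, independent measurement later with no treatment.
first, second = [], []
for _ in range(100_000):
    m1 = measure()
    if m1 > 9.0:                  # selected because the first reading was extreme
        first.append(m1)
        second.append(measure())  # independent follow-up reading

avg_first = sum(first) / len(first)
avg_second = sum(second) / len(second)
print(f"average at first visit: {avg_first:.2f}")
print(f"average at follow-up:   {avg_second:.2f}")  # falls back near the true mean of 5
```

Any folk remedy taken between the two readings would appear to ‘work’, purely because the follow-up average regresses towards the baseline.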

How to evaluate a paper or scientific claim

1. The smaller the study, the less likely the research findings are to be true. If the sample is small, the chance of the group being representative is low.

2. The smaller the effect sizes, the less likely the research findings are to be true. Effect size is a measure of how strong the phenomenon is. If the effect size is tiny, effects may not be important enough to be useful practically. 

3. If an experiment generates lots of possible relationships, then by chance alone some of these might be false positives. With lots of possible correlations to examine, it is too easy to cherry-pick those which might, by chance alone, show a possible statistical connection. This is colloquially called a ‘statistical fishing expedition’.

4. The greater the flexibility in designs, definitions, outcomes and analytical modes, the less likely the research findings are to be true. If one allows more leeway in definitions, bias can creep in and a ‘negative’ result can deftly be manipulated into a false positive. Clinical trial registries were developed to alleviate this problem by having researchers clearly define and document designs, definitions and outcomes before a study is conducted. Changing them to milk the results after the study is done is difficult, as there is a paper trail in the registry.

5. The greater the financial and other interests and prejudices, the less likely the research findings are to be true. In the biomedical field especially, conflicts of interest often arise between funders and results, inviting bias. The conflict of interest does not have to be financial; scientists are not immune to ideological devotion to certain ideas and this can alter results.

6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true. While more investigation of a certain area should, in principle, increase the quality of the findings, the opposite occurs when groups compete aggressively. In such cases, time becomes of the essence, and research teams might be inclined to publish prematurely, leading to an excess of false positive results. This phase of research is termed the ‘Proteus phenomenon’, capturing the rapid alternation between extreme research claims and equally extreme refutations.
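Points 1 and 3 above can be sketched numerically. In this toy simulation (my own illustration, not the book’s), many small two-group comparisons are run where every true effect is zero; at the conventional 5% significance level, roughly one in twenty still comes up ‘significant’ by chance alone.

```python
import math
import random

random.seed(1)

N_PER_GROUP = 10       # small samples, as in point 1
N_COMPARISONS = 2000   # many relationships examined, as in point 3

# Both groups are drawn from the same N(0, 1) distribution: every true effect is zero.
sigma_diff = math.sqrt(2 / N_PER_GROUP)   # sd of the difference in group means
threshold = 1.96 * sigma_diff             # two-sided 5% cut-off (z-test, known variance)

false_positives = 0
for _ in range(N_COMPARISONS):
    a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    diff = sum(a) / N_PER_GROUP - sum(b) / N_PER_GROUP
    if abs(diff) > threshold:
        false_positives += 1   # 'significant' purely by chance

print(f"spurious 'discoveries': {false_positives} of {N_COMPARISONS}")
# roughly 5% of comparisons look 'significant' despite there being no real effect
```

A fishing expedition over thousands of untested hypotheses is therefore guaranteed to net some apparent ‘findings’, which is exactly why pre-registration and replication matter.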

On totality of evidence and the basic principle of meta-analysis

A study in isolation is simply a single data point. Ideally, it is accurate, but for various reasons it might be flawed. What really matters is the complete picture, the trends that emerge when results and analyses are pooled. This is why, for example, evidence for human-mediated climate change or the safety of vaccines is so overwhelming: data from thousands of studies and theoretical models all point to the same conclusion. Conversely, climate-change deniers or anti-vaccine activists who clutch at single or weak studies are being disingenuous; cherry-picked studies in isolation simply do not trump overwhelming evidence.
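The pooling idea can be illustrated with a minimal fixed-effect meta-analysis sketch (the study numbers below are invented for illustration; real meta-analyses also assess heterogeneity and bias). Each study’s estimate is weighted by the inverse of its variance, so precise studies count for more, and the pooled estimate is tighter than any single study’s.

```python
# Hypothetical effect estimates and standard errors from five small studies.
studies = [
    (0.30, 0.25),  # (effect estimate, standard error)
    (0.10, 0.30),
    (0.45, 0.20),
    (0.05, 0.35),
    (0.25, 0.15),
]

# Fixed-effect inverse-variance pooling: weight each study by 1 / SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

No single study here is decisive, but together they narrow the uncertainty considerably, which is why a cherry-picked outlier cannot trump the pooled picture.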

Scientific knowledge is always provisional, and our acceptance of findings should be proportional to the strength of evidence offered. New discoveries constantly refine our understanding, and theoretical insights act as a compass for discovery, rendering science ultimately self-correcting.

How to differentiate science from pseudoscience

There are some vital things we can consider when confronted with anything purporting to be science. A non-exhaustive list might include: 

• Quality of evidence: Scientific claims are underpinned by supporting data and clear description of the methodology used. If, however, a claim relies largely on anecdote and testimonial, it should be considered suspect. 

• Authority: Scientific claims don’t derive their authority by virtue of coming from scientists. A scientific claim’s acceptance stems from the weight of the evidence behind it, not from the people making it.

• Logic: If an argument is presented, every link in the chain must connect, not just a few. Overly reductive claims that suggest single causes or cures for complex situations or conditions should also be treated sceptically. 

• Testable claims: Falsifiability is paramount to gauging the validity of a claim. If it cannot be proven wrong, then it is not scientific. Similarly, science pivots on reproducibility. That which cannot be verified by independent investigation is likely to be pseudoscience. 

• Totality of evidence: The hypothesis must consider all the evidence and not just cherry-pick only corroborating evidence. If the claim is consistent and compatible with all the evidence to date, then it is usually reasonable to accept it provisionally. If, however, it clashes with the weight of previous data, testable reasons for this disconnect must be suggested. 

• Occam’s razor: Does the claim rely on a multitude of supplementary assertions? If an alternative hypothesis better explains the available data, strong evidence would have to be provided to justify additional assumptions. 

• Burden of proof: The onus is always on those making the claim to support it rather than for others to disprove it. Attempts to shift the burden of proof are a warning sign of bad science. Claims that pivot on special pleading to justify a lack of evidence (including claims of conspiracy) are hallmarks of pseudoscience.

How to evaluate the strength of arguments

• Reasoning: Do the premises lead to the conclusion presented or is something askew in the reasoning? To be valid, every link in the chain of argument must connect seamlessly to the others. If following the argument through to its logical conclusions yields contradictions or absurdity, it’s a warning to be cautious. The premises themselves are vital too; are they reasonable and well supported or do they disintegrate under interrogation? If the premises wither in the light of enquiry, the conclusion that stems from them can usually be dismissed. 

• Rhetoric: What kind of argument is being made? Authority alone is no substitute for evidence. Narratives that reduce complex situations down to a simple cause ought to be considered with caution, as should those that force a complicated spectrum of views into an artificial binary. The onus to prove a claim is always on the one asserting it, and approaches that rely solely on denigrating or smearing an opponent prove nothing. 

• Human factors: What biases might be at play in different accounts? None of us is immune to instances of motivated reasoning or confirmation bias. Determining whether a position is reasoned or ideologically driven is imperative. Is the argument put forward based on cherry-picked information to support a particular point of view? When the evidence at hand is subjective or anecdotal, we cannot overlook the fact that perception and memory are imperfect. 

• Sources: Where does the information come from? Does it come from reliable, verifiable sources? Assertions that cannot be traced back to a reliable source should not be seriously considered. The information we acquire is often shaped by our own echo chambers and ideology. We must take pains to verify whether it is fair-handed or merely reflective of what we want to hear.

• Quantification: Can the claim be quantified? If numbers are presented, the context for those figures is vital. The difference between relative and absolute risks must be kept in mind, and we must compare like with like. And, as always, the mantra that correlation does not imply causation must never be forgotten. 

• Science: Is the claim testable? Can it be falsified, at least in principle? If the claim presents a seemingly scientific hypothesis, is it based on reputable work? If scientific data is presented, does it reflect the consensus view (totality of evidence) or cherry-picked outliers? Is the supporting data strong enough to support the conclusion? If the data can be equally well explained by another hypothesis with fewer assumptions, Occam’s razor suggests caution.
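The relative-versus-absolute-risk point under Quantification is worth a worked example (the figures are invented for illustration). A headline claiming a treatment ‘halves the risk’ can describe a tiny absolute change:

```python
# Hypothetical: a treatment 'halves the risk' of a rare outcome.
baseline_risk = 0.002   # 2 in 1,000 affected without treatment
treated_risk = 0.001    # 1 in 1,000 affected with treatment

relative_risk_reduction = (baseline_risk - treated_risk) / baseline_risk
absolute_risk_reduction = baseline_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"relative risk reduction: {relative_risk_reduction:.0%}")  # 50%
print(f"absolute risk reduction: {absolute_risk_reduction:.1%}")  # 0.1%
print(f"number needed to treat:  {number_needed_to_treat:.0f}")   # 1000
```

Both numbers are true, but ‘halves the risk’ and ‘helps one person in a thousand’ leave very different impressions, which is why context for figures is vital.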

Effecting change doesn’t require swaying the entire world, only shifting the conversation towards evidence and reason. But to change minds and hearts, we must not only offer better arguments, but remind people on a visceral level why it matters.
— David Robert Grimes, The Irrational Ape
Chankhrit Sathorn