Wrong: Why Experts Keep Failing Us

Evidence-based policy rests, in part, on the assumption that the effectiveness of certain programs can be determined through rigorous research. But what if the research — not just on social programs, but science in general — is usually wrong?

That is the provocative question asked by David H. Freedman in his book Wrong: Why Experts Keep Failing Us — And How to Know When Not to Trust Them.

Freedman’s book, published in 2010, predates the debates over fake news that contributed to the more recent decline in trust in the media and most other societal institutions (as measured by Gallup). Freedman’s motivation, however, was not ideological: it was to improve our understanding of the flaws inherent in expert opinion so that we can come closer to the truth.

Freedman’s story begins with John Ioannidis, a doctor and researcher at Johns Hopkins University and the National Institutes of Health who spent time in the 1990s reading medical journals to see how patients fared with certain treatments. According to Freedman:

In examining hundreds of these studies, Ioannidis did indeed spot a pattern — a disturbing one. When a study was published, often it was only a matter of months, and at most a few years, before other studies came out to either fully refute the findings or declare that the results were “exaggerated” in the sense that later papers revealed significantly lesser benefits to the treatment under study. Results that held up were outweighed two-to-one by results labeled “never mind.”

… They exhibited the sort of wrongness rate you would associate more with fad-diet tips, celebrity gossip, or political punditry than with state-of-the-art medical research.

When Freedman asked him about these findings, Ioannidis responded:

“The facts suggest that for many, if not the majority, of fields, the majority of published studies are likely to be wrong,” he says. Probably, he adds, “the vast majority.”
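
The arithmetic behind this claim is less radical than it sounds. As a rough illustration (mine, not Freedman’s or Ioannidis’s), suppose that only a small fraction of the hypotheses researchers test are actually true. Then even honestly conducted studies using conventional significance thresholds will produce false positives at a rate that rivals their true positives. The numbers below (10 percent true hypotheses, a 5 percent false-positive rate, and 50 percent statistical power) are assumptions chosen purely for illustration:

```python
# Illustrative sketch, not from the book: how base rates alone can make
# a large share of "significant" findings wrong. All inputs are assumed.
prior_true = 0.10  # assumed: fraction of tested hypotheses that are true
alpha = 0.05       # conventional false-positive rate (p < .05)
power = 0.50       # assumed: typical statistical power of a study

true_positives = prior_true * power         # real effects detected
false_positives = (1 - prior_true) * alpha  # null effects "detected"
ppv = true_positives / (true_positives + false_positives)

print(f"Share of significant findings that are real: {ppv:.0%}")
# -> about 53%; lower the power or add a little bias toward positive
#    results and the majority of published findings come out wrong
```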

What are the sources of this wrongness and what can we do about it?  Freedman spends the entire book attempting to answer those questions.


Why Experts Get Things Wrong

In Freedman’s telling, the failings of expert opinion can be traced to a variety of sources, but most of them fall into one of the following categories:

Researcher Bias: Many of the largest threats to the credibility of research involve various forms of researcher bias. Simply stated, if a researcher is motivated to find a certain set of results then he or she will be more likely to find them.

If a scientist wants to or expects to end up with certain results, he will likely achieve them, often through some form of fudging, whether conscious or not. Bias exerts a sort of gravity over error, pulling the errors in one direction, so that the errors tend to add up rather than cancel out. Francis Bacon noted in the late sixteenth century that preconceived ideas shape observation, causing people, for example, to take special notice of phenomena and measurements that confirm a belief while ignoring those that contradict it.
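
Bacon’s point, that biased errors add up rather than cancel, is easy to demonstrate numerically. The sketch below (my illustration, not Freedman’s) compares repeated noisy measurements with and without a small systematic pull: random noise averages away as measurements accumulate, while the bias survives no matter how many measurements are taken.

```python
import random

random.seed(1)
true_value = 10.0
n = 10_000

# Unbiased noise: errors are symmetric around zero, so averaging many
# measurements cancels them out.
unbiased = [true_value + random.gauss(0, 1) for _ in range(n)]

# Biased noise: a small systematic pull (an assumed 0.5 units, standing
# in for conscious or unconscious fudging) shifts every measurement the
# same way, so the errors accumulate instead of canceling.
bias = 0.5
biased = [true_value + bias + random.gauss(0, 1) for _ in range(n)]

print(f"unbiased mean: {sum(unbiased) / n:.3f}")  # -> close to 10.0
print(f"biased mean:   {sum(biased) / n:.3f}")    # -> close to 10.5
```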

These biases can come from a variety of places. One is financial interest:

[M]ost of us already have the good sense to be at least a bit suspicious of industry-driven research, to the extent that we’re capable of identifying when research is industry funded. But I’ll offer one tidbit: a 2003 Journal of the American Medical Association review of conflict-of-interest meta studies involving some 67 conflict-of-interest studies and some 398 other research reports confirmed a strong correlation between industry sponsorship and positive findings.

Financial incentives may play a smaller role in the social sciences, but ideological bias may play a greater one. Some organizations, such as Heterodox Academy, have been working to highlight the issue.

Other incentives in academia may also create a bias toward positive findings. “Researchers need to publish impressive findings to keep their careers alive,” writes Freedman. “Researchers who don’t publish well-regarded work typically don’t get tenure and are forced out of their institutions.”


Poorly Conducted Research: Another threat to research is simple ineptitude. Studies can suffer from poor measurement, poor analysis, and various other threats to internal and external validity.


Publication Bias: In academia, most research does not see the light of day until it has been published in a recognized journal. But journals themselves suffer from various forms of publication bias. These include a bias toward research that contains groundbreaking new findings and a corresponding bias against research that produces null findings (which occur far more frequently and are just as important).
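
This filtering also helps explain the “exaggerated” results Ioannidis noticed: when journals publish only the studies that clear a significance bar, the published estimates of a small true effect are inflated on average, and later, larger studies appear to walk them back. A rough simulation of that selection effect (my sketch, with assumed numbers, not an analysis from the book):

```python
import random
import statistics

random.seed(2)
true_effect = 0.2      # assumed: a small but real treatment effect
se = 0.15              # assumed: standard error of each small study
threshold = 1.96 * se  # an estimate must clear roughly p < .05 to publish

estimates = [random.gauss(true_effect, se) for _ in range(100_000)]
published = [e for e in estimates if e > threshold]

print(f"true effect:                {true_effect}")
print(f"mean published estimate:    {statistics.mean(published):.2f}")
print(f"share of studies published: {len(published) / len(estimates):.0%}")
# -> published studies report nearly double the true effect, so larger
#    follow-ups find "significantly lesser benefits", the pattern above
```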

Replication studies are rarely published and the peer review process, which is intended to serve as a quality check, often does not live up to its billing. According to Freedman:

When, as a test, 221 of the British Medical Journal’s frequent referees were sent an article purposefully tainted with eight presumably detectable problems, the reviewers managed to catch an average of two.
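
Taken at face value, that is a per-reviewer catch rate of about 25 percent for any given error. Under the strong (and surely oversimplified) assumption that reviewers catch errors independently, a quick calculation shows how easily errors survive an entire review panel:

```python
catch_rate = 2 / 8  # per-reviewer odds of spotting an error, per the BMJ test

for reviewers in (1, 2, 3, 5):
    survives = (1 - catch_rate) ** reviewers  # assumes independent reviewers
    print(f"{reviewers} reviewer(s): {survives:.0%} chance an error slips through")
# -> 75%, 56%, 42%, 24%: even with three reviewers, a typical error
#    still has roughly a 40 percent chance of getting through
```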

The peer review process may also itself be a source of bias:

[R]esearchers sometimes line up to grouse that peer review offers preferential treatment to the work of scientists who already have some weight in their fields, in part because heavy-hitting scientists often abuse their own peer-review duties by arbitrarily dinging anyone who challenges them. “Prestigious investigators may suppress via the peer-review process the appearance and dissemination of findings that refute their findings, thus condemning their field to perpetual false dogma,” states Ioannidis.


Groupthink: The job of sorting through and interpreting this research often falls to the academic and research communities, but relying on academic or scientific consensus is fraught with the perils of groupthink. According to Freedman, “groups are frequently dominated not by people who are most likely to be right but rather by people who are belligerent, persuasive, persistent, manipulative, or forceful.”

He writes:

Once a majority opinion is formed, even highly competent, confident people are reluctant to voice opinions that go against it, thanks to the notion, drilled into our heads from elementary school up through the workplace, that forging cooperation and agreement is critical.

“Groups amplify bias, squash minority points of view, and can even overcome the correct view when it’s the majority view,” said Robert MacCoun, a UC-Berkeley decision-making expert.


Getting It Right

Given all the ways that research and expert opinion can go wrong, should we just throw our hands up in despair? No, says Freedman. He suggests that more trustworthy research and expert opinion tends to share the following characteristics:

  • It avoids conflicts of interest
  • It is not overly simplistic
  • It is not overly provocative or groundbreaking
  • It is supported by many large, careful studies, not just one
  • It is heavy on qualifying statements
  • It is candid about opposing information
  • It is a negative finding

According to Freedman, expert advice that is more likely to be right, or at least on the right track, “will be complex, it will come with many qualifiers, and it will be highly dependent on conditions … in other words, good expert advice will be at odds with every aspect of the sort of advice that draws us to it.”
