That’s Just Wrong: Celebrity Gossip, Fad Diets, and Evidence-based Practices

In my last column, “Ducks, Data, and Evidence-based Politics,” I wrote about how central evidence of results will be for nonprofits that want to survive the new era of permanently tight budgets.

Much of this evidence, of course, comes from academic literature and major studies by prestigious organizations, often under contract to the federal government. That may be a problem, according to David Freedman, author of a new book entitled “Wrong: Why Experts Keep Failing Us—And How to Know When Not to Trust Them.”

Freedman starts the book with a story about John Ioannidis, a doctor who began methodically reviewing hundreds of medical journal studies while holding appointments at the National Institutes of Health and Johns Hopkins in the mid-1990s.

According to Freedman, “Ioannidis did indeed spot a pattern—a disturbing one. When a study was published, often it was only a matter of months, and at most a few years, before other studies came out to either fully refute the finding or declare that the results were ‘exaggerated’ in the sense that later papers revealed significantly lesser benefits to the treatment under study. Results that held up were outweighed two-to-one by results destined to be labeled ‘never mind.’”

These studies “exhibited the sort of wrongness rate you would associate more with fad-diet tips, celebrity gossip, or political punditry than with state-of-the-art medical research.”

Wisecracks about political punditry aside, this has disturbing implications for the trend toward evidence-based practices, particularly when funding decisions hang on the results. Freedman points out several sources of error, including:

  • Which Numbers Count? Numbers imply objectivity, but the choice of which numbers to use is often subjective. This comes through in magazine rankings of colleges and universities, for instance, where different magazines rank the same schools differently based on which data they choose to use and how heavily they weight it.
  • Problems of Determining Cause and Effect: Studies routinely find correlations, but showing that one thing actually causes another is far harder. (By the way, the rooster’s crowing really does cause the sun to come up.)
  • Animal Studies: Studies of depression are often conducted on rats instead of humans. But there is a big difference between the two. For example, while human moms tend to like clean rooms, rat moms will often eat their young after their cages are cleaned. (Kids, maybe you should keep that room messy, just to be on the safe side.)
  • Publication Bias: Who is paying for these studies? And even if there is no issue with sponsorship, the journals themselves are biased toward studies that show positive results. Researchers subject to “publish or perish” pressures are not rewarded for studies that find no effect, even though such findings may add just as much to our understanding of the issues. Replication studies are almost unfundable.
  • The Difficulty of Challenging Common Wisdom: People often point to the “wisdom of crowds” when observing collective decision making, but too little attention is paid to the “idiocy of crowds.” Groupthink is a powerful force, and researchers are not immune to it. Those who challenge dominant or majority opinions can find themselves marginalized.

Given all these possibilities for error, Freedman argues that readers place far too much confidence in expert opinion. Numerical measurements merely hide the vast uncertainty that lies beneath the numbers.

“Numbers add a sense of precision and authority to an observation, even if it is entirely illusory. Anyone can insist that one pain reliever works better than another, but surely only a well-informed expert would be in a position to claim that a pain reliever reduced patient discomfort by 73 percent, compared to 46 percent for another medication. In fact, people are almost three times more likely to believe an expert finding when it’s presented in terms of numbers. (Just kidding).”

Given this panoply of wrongness, is there any hope for finding the truth? Of course. One way is simply to recognize all of these problems and work to rectify them.

And of course there is this final piece of wisdom. At the end of his book, Freedman devotes an entire appendix to explaining the ways that his book may be, well … wrong.

This column first appeared in the Alliance for Children and Families magazine.

