Canary in a Coal Mine? SAMHSA’s Clearinghouse Signals Larger Threat to Evidence-based Policy

It started as a simple story. Once again the Trump administration had demonstrated its reputed disdain for facts and evidence. This time it had revoked the contract of one of the federal government’s top evidence clearinghouses — one that reviewed studies of mental health and drug treatment programs to determine their effectiveness.

The decision, which came quietly during the Christmas holidays, seemed to further prove that the administration cared nothing about facts, nothing about evidence, and little about evidence-based approaches to the opioid epidemic, which it had elevated to a White House-level priority.

And then the story began to fall apart. An independent review of the clearinghouse had revealed substantial problems with its ratings, including significant potential conflicts of interest. The newly appointed SAMHSA director, who terminated its contract, echoed those criticisms in a strident public statement.

But underneath the charges and counter-charges, there was a quieter, lurking story — one of widespread problems and alleged corruption in medical research. It is an important story, one that could be a harbinger of growing threats to the evidence-based movement as a whole.


Problems with SAMHSA’s Evidence Clearinghouse

SAMHSA’s evidence clearinghouse, the National Registry of Evidence-based Programs and Practices (NREPP), was first created in 1999 in the wake of growing interest in evidence-based medicine. It was an early federal foray into evidence reviews, coming years before other federal clearinghouses like the What Works Clearinghouse at the Department of Education.

After its initial creation, SAMHSA’s clearinghouse was modified and revised several times, most recently in 2015, when its screening criteria were updated. In the aftermath of these changes, the number of programs reviewed and included in the clearinghouse grew rapidly.

This growth drew the attention of Dennis Gorman, a professor of epidemiology and biostatistics at Texas A&M, who examined the underlying studies that NREPP had used to make its decisions. In 2017, he published an article in the International Journal of Drug Policy that sharply criticized the clearinghouse for its poor quality standards.

According to Gorman, the large majority of the new programs approved by the clearinghouse were based on questionable studies, many of them marred by significant conflicts of interest. The problems included:

  • Single Study Approvals: Of the 113 approved new programs, more than half (67) were approved on the basis of a single published article (51), non-peer-reviewed online report (4), or unpublished report (12). Fewer than half (46) were based on two or more published reports.
  • Questionable Methodology: Many of the studies featured common and easily identified design flaws, including very small and non-representative samples, high rates of study attrition, and short follow-up periods.
  • Conflicts of Interest: Most of the approved programs (87) were based on studies or materials authored or co-authored by someone associated with the program being studied.

Today, with an ever-growing list of programs that claim to be evidence-based, clearinghouses are intended to be a stamp of approval, allowing users to sort the wheat from the chaff. But according to Gorman, this was not happening:

As the number of programs grows, these [problems] are increasingly difficult to identify.

Worse still, the current NREPP review process essentially equates any such quality interventions with those that have been evaluated by the individual who developed and disseminates the program using a very small, self-selected sample, and in which the findings of the evaluation have appeared only in an internal report or an unpublished manuscript or a pay-to-publish online journal.

It even includes interventions that employ therapeutic practices such as thought field therapy and eye movement desensitization that are considered potentially harmful and supported only by pseudoscience (Lilienfeld, 2007).

Gorman suggested the following changes to the clearinghouse’s procedures, which SAMHSA may (or may not) be considering in the aftermath of its decision to cancel the contract:

  • Improving the transparency of its review process;
  • Providing detailed declarations of financial conflicts of interest by program developers who evaluate their own programs;
  • Requiring truly independent replication studies;
  • Assigning most significance to results from studies appearing in journals that adhere to rigorous publication standards, such as requiring preregistration of analysis plans and data and materials sharing; and
  • Putting a mechanism in place (such as Registered Reports) that clearly distinguishes exploratory research from hypothesis testing.


Broader Problems in Evidence-based Medicine

The Gorman review of NREPP comes as medical research and evidence-based medicine in general have both been subjected to increased criticism.

Most of these problems appear to be driven by profit motives in the healthcare industry, which has increasingly involved itself in all phases of health research and evidence reviews. According to critics, the industry’s involvement has been facilitated and worsened by misaligned incentives also present in the academic community and the academic publishing industry.

It did not start out that way. In the movement’s early days in the 1990s, its proponents were as enthusiastic as proponents of evidence-based approaches in other fields are today. According to one review:

It is more than 20 years since the evidence based medicine working group announced a “new paradigm” for teaching and practicing clinical medicine. Tradition, anecdote, and theoretical reasoning from basic sciences would be replaced by evidence from high quality randomized controlled trials and observational studies, in combination with clinical expertise and the needs and wishes of patients.

Evidence based medicine quickly became an energetic intellectual community committed to making clinical practice more scientific and empirically grounded and thereby achieving safer, more consistent, and more cost effective care.

Achievements included establishing the Cochrane Collaboration to collate and summarize evidence from clinical trials; setting methodological and publication standards for primary and secondary research; building national and international infrastructures for developing and updating clinical practice guidelines; developing resources and courses for teaching critical appraisal; and building the knowledge base for implementation and knowledge translation.

But, according to many critics, those hoped-for changes have gone off track. Today, the health care industry (principally pharmaceutical companies and medical device makers) is involving itself in every phase of the evidence-building process, including:

  • Industry-sponsored Studies: Private industry provides funding for, and often designs and controls, a large portion of the most influential medical studies. Independent reviews have found that industry-sponsored drug studies are, on average, four times more likely than trials sponsored by nonprofit organizations to favor the sponsored drug. While such studies appear to be methodologically sound, they frequently compare results against an inactive or straw-man comparison group, which makes the results appear stronger than they are. Many are over-powered so that they detect very small effects that are statistically significant but of little practical importance (a brief illustration follows this list). Research that does not produce the desired result is commonly swept under the rug, and access to study data is often limited. Replication efforts are rare, but when they occur they commonly refute the original study, with even the most prominent studies subject to high rates of replication failure. Some critics estimate that as much as 85 percent of health research funding is wasted.
  • Industry-funded Evidence Reviews: The industry frequently funds evidence reviews (meta-analyses) that assemble findings across many studies. However, independent analyses of these industry-sponsored reviews have found that they are typically of much lower methodological quality than other meta-analyses and considerably more likely to omit details about methodological problems and conflicts of interest in the underlying studies.
  • Questionable Cost-Effectiveness Evaluations: Most published analyses report favorable cost-effectiveness ratios for the studied intervention, and industry-funded analyses are even more likely to do so.
  • Problems in the Medical Journal Industry: Prominent academic journals have drawn repeated criticism for failing to adequately review the quality of the research they publish, flaws that remain an ongoing focus of the International Committee of Medical Journal Editors.
  • Industry-influenced Practice Guidelines: Clinical guidelines, which substantially influence medical practice, are reputedly based on rigorous evidence and are frequently endorsed by recognized authorities. However, they face substantial problems. Most scientists involved with the creation of such guidelines receive funding from industry sources, including research grants, honoraria for speeches, or consultancy fees from pharmaceutical and related industries. Such influence is supplemented by tens of billions of dollars spent annually on direct marketing to physicians, including industry-offered gifts of equipment, educational textbooks, luxury travel, and free meals.
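
To make the significance-versus-importance problem concrete, here is a minimal Python sketch of an over-powered trial. The scenario is invented for this post (a hypothetical symptom score, a half-point effect, and 50,000 participants per arm), not data from any study cited above; it simply shows that with a large enough sample, a clinically trivial difference still produces a vanishingly small p-value.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical outcome: a 0-100 symptom score. The "new treatment" improves
    # it by only 0.5 points -- far below any clinically meaningful threshold.
    n_per_arm = 50_000
    control = rng.normal(loc=50.0, scale=10.0, size=n_per_arm)
    treatment = rng.normal(loc=50.5, scale=10.0, size=n_per_arm)

    t_stat, p_value = stats.ttest_ind(treatment, control)

    print(f"mean difference: {treatment.mean() - control.mean():.2f} points")
    print(f"p-value: {p_value:.2e}")
    # The tiny difference comes out highly "statistically significant" only
    # because the sample is enormous; significance alone says nothing about
    # whether the effect matters to patients.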

One anecdote from John Ioannidis, co-director of the Meta-Research Innovation Center at Stanford (METRICS), which is devoted to promoting reproducibility in research, illustrates many of these problems:

Having worked in many different clinical fields, my identity was often mistaken. Some CROs recruiting patients for industry trials believed that I was a clinic chief or chair in cardiology, rheumatology, or other clinical fields. I would get invitations in my fax machine running “Dear Professor Ioannidis, we know that you are a great interventional cardiologist and your clinic is one of the best. Would you be interested to participate in the X trial…”

For fun, one day I called back the contact number. I mentioned that I had received that kind invitation and wanted to find out how I could join the research. The person at the other end of the phone line promised me authorship in the randomized trial; the more patients I could recruit, the better my authorship position. I asked to see the protocol and comment on it.

The answer was clear and immediate. “Oh, the protocol, why should you worry about the protocol? The sponsoring company has taken care of the protocol already and will also take care of writing the paper. You don’t need to worry about that minor stuff. You shouldn’t waste time with the protocol or editing drafts. We will put your name as an author on the papers, no worries. This is what all prestigious clinical researchers do.”

Such conflicts are well-known in medicine. The National Academy of Sciences has published comprehensive recommendations for addressing them, but thus far they have either not been implemented or been implemented inadequately.


Broader Implications for Evidence-based Policy

Problems in evidence-based medicine, which is years or decades ahead of evidence-based policy in other fields, have broader potential implications. If the existing research base in other policy areas were similarly corrupted, it would substantially undercut support for the evidence-based movement as a whole.

The comparison is not perfect. Many of the flaws present in evidence-based medicine may be peculiar to the industry. The profit motive, coupled with trillions of dollars spent annually on health care, creates substantial incentives to corrupt the underlying evidence base.

However, the credibility of research in other policy areas is vulnerable for other reasons. No field is immune to poorly conducted research. Moreover, while the profit motive may not be present, research in other fields may be subject to ideological pressures and confirmation bias that produce similarly poor results. Replication efforts in social science fields, including behavioral economics, social psychology, and political science, have likewise found high rates of failure to replicate.

The answers to these problems are likely to be similar: better implementation of researcher best practices, greater research transparency, improvements in academic journals, and greater investments in research clearinghouses, including efforts to address some of their known shortcomings.

Evidence-based policy continues to hold promise for substantial improvements in outcomes across major domains of public policy — including health, education, social services, criminal justice, and foreign aid. But for that promise to be fulfilled, advocates must be honest and vigilant about emerging threats like those that are already present in medical research.

[Embedded video: “Scientific Studies: Last Week Tonight with John Oliver” (warning: language)]
