Addressing the Research “Replication Crisis”: Evidence-based Policy’s Hidden Vulnerability

One of the central tenets of evidence-based policy is that programs and policies that are deemed “evidence-based” should be replicable in other settings.

This is not always easy: even interventions backed by solid evidence can be hard to replicate. There are many potential reasons, including differences in target populations and settings, poor implementation, and insufficient fidelity to the original model.

But the challenge becomes even more difficult if the original research is suspect. As it turns out, this is a potential problem for all kinds of research, much of which is in the throes of a full-blown replication crisis. Why does that matter? If the research is shaky, then the entire edifice of evidence-based policy comes crashing down.


The Replication Crisis in Research

To be clear, this particular replication crisis is not about replicating an evidence-based program that is based on strong research. That is an implementation problem, which is its own challenge. No, this replication crisis is about problems with the underlying research itself.

The research replication crisis is a problem not just for basic science but for the social sciences, too. For example, one recent attempt to replicate the findings of 100 peer-reviewed psychology studies found that only 39 could be reproduced. Empirical economics has faced a similar crisis. Stanford professor John Ioannidis, a long-time critic of weak research practices, has argued that the primary culprits are poorly designed studies and researcher bias.

How do we solve these problems? One way is to review the studies to determine their rigor. That is the central mission of evidence clearinghouses, several of which are run by the federal government. (Other reviewers like the Campbell Collaboration are privately run.) Unfortunately, some of the clearinghouses have been experiencing challenges of late.

Their job could be made much easier, however, if researchers themselves implemented best practices that would help show that their research is credible. Such practices could promote both greater transparency and wider replication, which remains a relatively rare undertaking.


What Can Be Done?

Enter MDRC, one of the nation’s best evaluators. One of their researchers, Rachel Rosen, wrote an interesting and timely blog post earlier this month describing what her organization is doing to address the problem.

Their efforts are worth a look. They include:

  • Pre-registering Research Plans: Pre-registering studies helps ensure that negative or null results are not placed in the circular file (i.e., the trash) if they do not come out the way the researcher (or the researcher’s funder) wants them to. It can also discourage the practice of “p-hacking,” where researchers search their data after the fact for a result that seems statistically significant but may have occurred by chance (a short simulation after this list shows why that practice is so misleading). Where can studies be pre-registered? Options include the American Economic Association’s registry for randomized controlled trials, the Society for Research on Educational Effectiveness registry, the Open Science Framework, and the federal government’s ClinicalTrials.gov registry.
  • Requiring More than Statistical Significance: MDRC applies a variety of statistical techniques, not only to assess statistical significance but also to measure the magnitude of the associated impact (for example, by estimating effect sizes; see the second sketch after this list). They also include implementation studies to cast additional light on the final evaluation results, good or bad. (MDRC’s Implementation Research Incubator is an interesting recent addition to the field.)
  • Sharing Data: MDRC also produces public use data files, which allow other researchers to reproduce their work.
  • Conducting Replication Studies: MDRC has also begun conducting replication studies of their own best-known earlier work. That helps address a wider problem: replication efforts are not widely conducted (or funded, for that matter).
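
Rosen’s post describes these practices in general terms; to make the p-hacking problem concrete, here is a minimal simulation sketch. Everything in it is hypothetical (the intervention, the twenty outcomes, and the sample sizes are invented for illustration), and it is not drawn from MDRC’s work. It shows that when a program has no true effect at all, searching many outcomes for one that clears p < 0.05 turns up a “significant” finding most of the time.

```python
# A minimal sketch of why p-hacking misleads, using invented data.
# The simulated program has NO true effect on any outcome, yet studies
# that "search" many outcomes usually find something "significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 1_000    # simulated studies
n_outcomes = 20      # outcomes searched within each study
n_per_group = 50     # participants per group
alpha = 0.05

studies_with_false_positive = 0
for _ in range(n_studies):
    # Treatment and control are drawn from the SAME distribution,
    # so the true effect on every outcome is exactly zero.
    treatment = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    control = rng.normal(0.0, 1.0, size=(n_outcomes, n_per_group))
    p_values = stats.ttest_ind(treatment, control, axis=1).pvalue
    # "P-hacking": report the study if ANY outcome looks significant.
    if p_values.min() < alpha:
        studies_with_false_positive += 1

share = studies_with_false_positive / n_studies
print(f"Share of no-effect studies with a 'significant' finding: {share:.0%}")
# Expect roughly 1 - 0.95**20, i.e., about 64%, far above the nominal 5%.
```

Pre-registration blunts this by committing the researcher to the outcomes and tests before the data are seen.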

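The second practice, reporting impact alongside significance, can also be shown with a small sketch. Cohen’s d is one common effect-size measure (an illustrative choice here, not a claim about MDRC’s actual methods), and the invented data below show why it matters: with a large enough sample, even a trivially small impact clears the significance bar.

```python
# A minimal sketch of reporting an effect size (Cohen's d) alongside
# a p-value, using invented data with a very small true effect.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) +
                  (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
treatment = rng.normal(0.08, 1.0, size=5_000)  # tiny true effect
control = rng.normal(0.00, 1.0, size=5_000)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"p-value:   {p_value:.5f}")                       # "significant" at this n
print(f"Cohen's d: {cohens_d(treatment, control):.3f}")  # but a very small effect
```

A large evaluation can thus produce a statistically significant but practically negligible result, which is exactly what reporting effect sizes is meant to surface.
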
Addressing the replication crisis will be vital if the evidence-based policy movement is to be successful. Continued investments in clearinghouses and registries are one important strategy. But so too are basic best practices for researchers.


Disclosure: The Laura and John Arnold Foundation (LJAF), a funder of SIRC, is also a funder of efforts to improve scientific replication through its Research Integrity division. LJAF had no input on this story. SIRC maintains independent editorial freedom and control over the selection of all content.
