One of the central tenets of evidence-based policy is that programs and policies that are deemed “evidence-based” should be replicable in other settings.
This is not always easy; even interventions backed by solid evidence can be hard to replicate, for reasons that include differences in target populations, different settings, poor implementation, and insufficient fidelity to the original model.
But the challenge becomes even more difficult if the original research is suspect. As it turns out, this is a potential problem for all kinds of research, much of which is in the throes of a full-blown replication crisis. Why does that matter? If the research is shaky, then the entire edifice of evidence-based policy comes crashing down.
The Replication Crisis in Research
To be clear, this particular replication crisis is not about the difficulty of replicating an evidence-based program that rests on strong research; that is an implementation problem, and a challenge in its own right. No, this replication crisis is about problems with the underlying research itself.
The research replication crisis is a problem not just for basic science but for the social sciences, too. For example, one recent attempt to repeat the findings of 100 peer-reviewed psychology studies found that only 39 could be reproduced. Empirical economics has faced a similar crisis. Stanford professor John Ioannidis, a long-time critic of prevailing research practices, has argued that the primary culprits are poorly designed studies and researcher bias.
How do we solve these problems? One way is to review the studies to determine their rigor. That is the central mission of evidence clearinghouses, several of which are run by the federal government. (Other reviewers, such as the Campbell Collaboration, are privately run.) Unfortunately, some of the clearinghouses have been experiencing challenges of late.
Their job would be much easier, however, if researchers themselves adopted best practices that demonstrate the credibility of their work. Such practices could promote greater transparency and encourage wider replication, which remains a relatively rare undertaking.
What Can Be Done?
Enter MDRC, one of the nation’s best evaluators. One of its researchers, Rachel Rosen, wrote an interesting and timely blog post earlier this month describing what her organization is doing to address these problems.