School Improvement Grant Program’s Failure Points to Evidence-based Policy as an Answer

Last month Mathematica Policy Research released a tough report on the effectiveness of the Obama administration’s School Improvement Grants (SIG) program. The conclusion? After $7 billion spent, the program had no effect on student achievement in some of the nation’s most poorly performing schools.

The study’s results spurred a round of “I told you so” responses from some analysts, including Andrew Smarick of the conservative American Enterprise Institute, who suggested that SIG might be the “greatest failure in the history of the U.S. Department of Education.” According to Smarick:

The results are almost too much to believe. How in the world do you spend billions and billions of dollars and get no results—especially after Secretary Duncan promised it would turn around 5,000 failing schools and hailed it as the biggest bet of his tenure?

Probably the only thing more remarkable than the scope of this program’s failure is that this outcome was absolutely, positively, unavoidably predictable.

Smarick had previously argued that the answer was to close poorly performing schools (which was one of the options under SIG, although it was rarely used) and start from scratch. Smarick pointed to successful charter school networks like KIPP as a more promising alternative.

Is Smarick right?  Are turnaround efforts a complete waste of time and money?

Probably not.  There are plenty of reasons to think that this conclusion is premature. First, the Mathematica study’s conclusions do not quite paint the definitive picture of failure among school turnarounds that has been widely reported.  Second, there have been other examples of success in the turnaround space that are not associated with SIG.

Why Did SIG Fail?

Understanding the Mathematica study's conclusions requires a little digging into its results, some of which have not been widely reported.  One important point made by the study is that the level of evidence behind SIG-backed education practices was not very strong:

Though research on SIG is limited, a large body of literature examines the effectiveness of the school improvement practices promoted by SIG and school turnaround more broadly.  Overall, this literature provides mixed evidence on whether these practices improve student outcomes.

Second, regardless of how evidence-based these SIG-promoted practices may have been, SIG did not do much to expand their use.  The study found that SIG schools on average adopted 22.8 of 35 identified SIG practices — versus 20.3 practices in non-SIG comparison schools.  The difference between the two groups was not statistically significant.

In short, the Mathematica study found that the SIG program produced only a slight (and statistically insignificant) increase in the use of these unproven practices, and that, unsurprisingly, this did not translate into meaningful changes in student reading and math scores, high school graduation, or college enrollment.

In other words, while the Mathematica study is an indictment of the SIG program, it says very little about whether schools can be turned around successfully.  It only says that SIG was poorly designed, poorly implemented, or both.

The Importance of Evidence

SIG (and the Mathematica study) are not the last word on school turnarounds.  One notable reaction to the study came from Robert Slavin, the Director of the Center for Research and Reform in Education at Johns Hopkins University. Slavin is associated with Success for All, one of just a few whole-school turnaround models that actually possess a significant evidence track record.

According to Slavin:

What else could SIG have done?

SIG could have provided funding to enable low-performing schools and their districts to select among proven programs. This would have maintained an element of choice while ensuring that whatever programs schools chose would have been proven effective, used successfully in other low-achieving schools, and supported by capable intermediaries willing and able to work effectively in struggling schools.

Ironically, SIG did finally introduce such an option, but it was too little, too late. In 2015, SIG introduced two additional models, one of which was an Evidence-Based, Whole-School Reform model that would allow schools to utilize SIG funds to adopt a proven whole-school approach. The U.S. Department of Education carefully reviewed the evidence and identified four approaches with strong evidence and the ability to expand that could be utilized under this model. But hardly any schools chose to utilize these approaches because there was little promotion of the new models, and few school, district, or state leaders to this day even know they exist.

Interestingly, there was another federally funded program that took a very different approach and experienced different results.  Unlike SIG, this program — originally called the Investing in Innovation (i3) fund, but since reworked as part of the Every Student Succeeds Act and now called the Education Innovation and Research (EIR) program — produced some notable successes.

One was Success for All, the model that Slavin is associated with, which produced positive effects in scale-up efforts funded by the program.  Another was KIPP (one of Smarick's favorites), which also received scale-up funding and produced solid results.  Both examples also seemed consistent with earlier research suggesting that strong intermediaries may be needed to successfully scale evidence-based programs.

But the EIR program's successes were not limited to whole-school turnarounds or charter school efforts. It also funded an array of other, smaller evidence-based initiatives, several of which were backed by similarly rigorous studies.

So far, the EIR program has produced 13 evidence-based program models with positive impacts, including in reading and literacy, kindergarten readiness, STEM (science, technology, engineering, and math), the arts, charter schools, distance learning in rural communities, college preparation, and teacher professional development.  If the program maintains its current rate of success, it will generate another 39 evidence-based models in the coming years from grants that are still in progress.

This track record also meshes with bipartisan changes that were instituted by Congress when it replaced No Child Left Behind. According to Slavin:

The old SIG program is changing under the Every Student Succeeds Act (ESSA). In order to receive school improvement funding under ESSA, schools will have to select from programs that meet the strong, moderate, or promising evidence requirements defined in ESSA. Evidence for ESSA, the free web site we are due to release later this month, will identify more than 90 reading and math programs that meet these requirements.

A Different Future

These provisions in ESSA and the related efforts mentioned by Slavin are just a beginning, but they paint a very different picture from the one suggested by SIG. Rather than promoting largely unproven practices, as SIG did, federal policy now reflects a bipartisan commitment to testing and promoting models and practices backed by rigorous evidence that they work.

This new strategy does not favor any one set of ideological approaches; it favors whatever works, including some approaches (like high-performing charter schools) that are likely to draw support from the new administration.

If the Trump administration maintains this bipartisan commitment to evidence, SIG's failure may end up being an aberration. It may turn out to be not the predictable failure that some foresaw, but rather a hard-earned lesson about the importance of evidence, and just one bump among many on a much longer road that will eventually produce meaningful improvement in our nation's schools.
