Investing in Innovation (i3): Strong Start on Evaluation and Scale, But Greater Focus Needed on Innovation

The Social Innovation Research Center has today released a report examining the early progress of the Investing in Innovation (i3) program. The executive summary follows below.


This report is an evaluation of the Investing in Innovation (i3) program, a tiered-evidence grantmaking initiative at the U.S. Department of Education. The program's primary purpose is to support the development, testing, and scaling of field-initiated programs for high-need students in K-12 education.

Created in 2009, the program has provided over $1.4 billion in grants for education projects, including those focused on kindergarten readiness, student achievement, decreasing dropout rates, and turning around low-performing schools. In late 2015, the Every Student Succeeds Act replaced i3 with the Education Innovation and Research (EIR) program, which retains most of i3's original features.

This report reviews the program’s early progress. Its findings are based on a review of publicly available final project evaluations, internal performance reports obtained through a Freedom of Information Act (FOIA) request, and interviews with current and former officials from the U.S. Department of Education, i3 project directors, and several national experts in education.

The report includes an assessment of the program’s overall results, its contributions to the knowledge base, and lessons learned from launching, implementing, evaluating, and scaling i3-funded projects. The remainder of this executive summary provides highlights from the full report.

Early Results

Evaluation Results: As of January 1, 2017, final evaluations had been released for 44 i3 projects. Of these, 13 (30 percent) reported positive impact findings, and another seven (16 percent) produced mixed results, with positive effects on at least one measure.

As expected, the program's scale-up and validation grants, which required more evidence, produced positive impacts at a higher rate (50 percent) than its development grants, which required less evidence (20 percent). Although comparisons should be made cautiously, these success rates appear to exceed those typical of other areas of education research.

Affected Issues in Education: The top 13 evaluations, all of which are based on randomized controlled trials (RCTs) or quasi-experimental designs (QEDs), have demonstrated positive effects for programs in reading and literacy, kindergarten readiness, STEM (science, technology, engineering, and math), the arts, charter schools, distance learning in rural communities, college preparation, and teacher professional development.

Evaluation Pipeline: If the current rate of positive impact findings (30 percent) is sustained, the 172 grants made under the program (2010-2016) will ultimately yield roughly 52 final evaluations with positive results, or four times the 13 that have been released to date.

Scaling Evidence-based Initiatives: Results have been released for four scale-up grants, the largest of the i3 grants, which are intended to expand programs backed by the highest levels of evidence. All four scale-up grantees – KIPP, Teach for America, Success for All, and a Reading Recovery program launched by Ohio State University – expanded their evidence-based programs, although some missed their self-identified growth targets.

Two of the four expanded with positive impact findings in their evaluations, while the other two did so with mixed findings.

These results appear consistent with earlier research suggesting that strong intermediaries may be needed to successfully scale evidence-based programs in low-performing schools. As a group, the scale-up grantees performed better than local school districts that also received i3 grants but acted largely on their own.


Recommendations

While the i3 program (now EIR) appears to be achieving many of its intended objectives, it could be improved in the following ways:

EIR Should Rework Its Early-phase Grants to Better Support Genuine Innovation: While i3-funded projects have produced positive effects at higher rates than has been typical in education research, the program's support for new and innovative programs appears to be one of its weakest features. Such projects were supported through the program's lowest-tier grants. While some of these grants have produced positive results, they appear to have generated few, if any, groundbreaking innovations.

The new EIR program has taken steps to address this issue by being more supportive of flexibility and continuous improvement in the early-phase grants, but more is needed. The selection process for these grants should be reworked, with greater reliance on national experts who are aware of gaps in existing research and can more readily identify true innovations. Early-phase grantees should also be offered more tailored technical assistance that better connects them to experts in their respective fields of interest.

EIR Should Support Faster Research: Final evaluation results for most of the first-year grants, which were awarded in 2010, did not become available until 2016. While some research takes more time, six years is too long to wait for results in most cases.

Much of this delay stems from the program's simultaneous scaling expectations, which slow research as new staff are hired and new initiatives are launched in new schools.

The pace of research could be hastened for early-phase and mid-phase grants by providing more grants to programs that already have operations underway in multiple schools and do not require further expansion. The program should also offer lower-cost, short-duration grants like those that have been funded by the Institute of Education Sciences.

EIR Should Connect to and Leverage Other Publicly-funded Education Programs: As noted earlier, the first cohort of scale-up grantees expanded their programs with either positive or mixed effects. One major lesson of these efforts seems to be that successfully scaling evidence-based programs may require the involvement of high-capacity intermediaries like those that have been funded by i3.

To date, demand for evidence-based programs and models has been weak, but the Every Student Succeeds Act has laid the groundwork for increased use of evidence through several of its provisions, including reworked state accountability measures and new evidence definitions that apply to formula-funded and competitive grant programs. The Department of Education is providing guidance to states and local school districts on how to implement the evidence provisions of the new law.

Given the increased importance of these efforts, the limited size of i3’s (now EIR’s) budget, and the apparent importance of high-capacity intermediaries, the Department may wish to consider ways to better integrate EIR with these other efforts by providing incentives to applicants that can leverage other federal, state, and local program funds.


Read the Full Report.

