Foster Care Innovation Initiative Charts a Different Path to Evidence

While momentum appears to be building for the increased use of evidence in social policy, the ride has occasionally been a bumpy one, with pushback both on and off Capitol Hill over the appropriate definition of evidence. Some fear that these disagreements — dubbed the “causal wars” — could derail the movement entirely.

One possible answer might be found in an Obama administration program intended to test new ways to find homes for the most disadvantaged children in the foster care system. The program, called the Permanency Innovations Initiative (PII), is similar to its better-known cousins, such as the Social Innovation Fund and the Investing in Innovation (i3) program. Like them, it is focused on building the social policy evidence base, but it is going about the task in a different way.


The Causal Wars

Disagreements over what constitutes evidence in social policy have been occurring for years, with some arguing for the superiority of evaluations that include randomized controlled trials (RCTs), while others have argued for a broader evidence base (for an example, see The Evidence Debate). Reasons for these disagreements have ranged from disputes over the validity, ethics, and funding of various research methods to their impact on overall funding levels for social programs themselves. The arguments are sometimes quite heated, leading some to call them the “causal wars.”

One recent battle broke out in the field of child welfare in 2011. That year, the Senate Finance Committee considered legislation that would have prohibited the use of RCTs in state-level child welfare demonstration projects. In the end, the proposed legislative language was watered down and the Children’s Bureau, which oversees such projects, was directed to remain neutral on whether they included RCT evaluations.

“The fact that the ability to mount RCTs in the evaluation of Title IV-E waiver demonstrations was nearly lost should serve as a wake-up call,” said Mark Testa, a professor at the University of North Carolina’s School of Social Work. “It’s time to negotiate a ceasefire in the causal wars.”

Interestingly, the proposed prohibition on RCTs, which was pushed by a variety of child welfare practitioners and administrators, came on the heels of a substantial win for supporters of experimental research. In the 1990s, the Illinois Department of Children and Family Services, which, like its counterparts in many other states, was witnessing substantial growth in the number of children removed from their families and placed in foster care, began exploring ways to reverse the trend. One of the promising practices it identified was placing children with relatives and providing them and other foster parents with a stipend to cover their costs if they became the children’s permanent legal guardians.

The strategy, called guardianship assistance, was subjected to rigorous RCT-based testing in three different states (Illinois, Tennessee, and Wisconsin), which verified its positive effects. The results of these studies eventually contributed to Congress enacting a national guardianship assistance program in 2008.

This outcome, in some ways a prototype for evidence-based policy, was far from assured. At the time, said Testa, some practitioners and administrators argued that the success of guardianship assistance was coming at the expense of reduced reunifications of children with their biological parents. The rigorous nature of the RCTs helped overcome that perception.

While subsidized guardianship represented a substantial win in the eyes of many, in some ways it has been a relatively isolated one. As of February of this year, only 27 of 325 child welfare programs (just 8 percent) catalogued by the California Evidence-Based Clearinghouse for Child Welfare met its criteria for being “well supported by research.” According to a report released by the Children’s Bureau, frontline practices in child welfare are too often “piloted with limited evaluation, and untested interventions are hastily adopted and spread in response to politics, poor agency performance, or public pressure.”


The Permanency Innovations Initiative

The dearth of evidence in the child welfare field was one of the driving forces behind a demonstration program called the Permanency Innovations Initiative (PII), which was launched in 2010. Like other innovation initiatives begun in the early years of the Obama administration, it was designed primarily to build the evidence base. In this case, however, the focus was quite specific: reducing the time spent in foster care by the children most at risk of spending the rest of their childhoods as wards of the state.

The choice to focus on these children was partly due to longer-term trends in the foster care system. From 1982 to 1995, the total number of children in foster care grew dramatically, from 435,000 to nearly 710,000. But a number of policy changes implemented in the late 1990s and afterward reversed that trend. As time passed, fewer children entered the system, or were quickly diverted, and the number of children in foster care dropped.

But as gains were being made for foster children overall, a smaller, more disadvantaged group was being left behind. Older children, particularly those who had been in care the longest, were still too likely to age out of the system, often with little support and tragic consequences. Their life trajectories disproportionately led to teenage pregnancy, homelessness, and involvement with the criminal justice system.

The PII program was designed to test new ways to address the needs of these hardest-to-serve children. Jointly overseen by the Children’s Bureau and the Office of Planning, Research and Evaluation (OPRE) in the Administration for Children and Families, it has provided nearly $100 million in funding over five years for the following five demonstration projects:

  • California Partners for Permanency (CAPP): This project, operated by the California Department of Social Services, has developed a set of culturally responsive practice and system changes to improve permanency outcomes for children facing the highest risk of long-term foster care – particularly African-American and Native American children.  Partnering with local communities and tribes, the project has developed a Child and Family Practice Model that includes culturally sensitive engagement; empowerment of family, tribal, and community networks; and the use of culturally based healing practices and practice adaptations. The model is being implemented system-wide in Humboldt, Fresno, and Santa Clara counties and in two offices in Los Angeles.
  • Illinois Trauma Focus Model for Reducing Long-Term Foster Care: This project, operated by the Illinois Department of Children and Family Services, is focused on youth aged 11-16 with histories of trauma and/or emotional-behavioral issues who have been in foster care for two years. It is testing an intervention called Trauma Affect Regulation: Guide for Education and Therapy (TARGET), which addresses the special needs of these youth, their biological parents, and foster parents.
  • Kansas Intensive Permanency Project (KIPP): This project, overseen by the University of Kansas School of Social Welfare, is testing an existing evidence-based model called the Parent Management Training – Oregon Model (PMTO) with children experiencing serious emotional disturbance (SED). The project is working to change ineffective parenting practices, such as coercion, and to connect parents with community resources such as mental health and substance abuse treatment.
  • Nevada Initiative to Reduce Long-Term Foster Care: This project, overseen by the Washoe County Department of Social Services, is testing two different interventions tailored to three high-risk populations in Nevada. The first intervention, SAFE-FC, combines a safety assessment with community-based family services. The second, Family Search and Engagement (FSE), is being tested for children whose parents are unable or unwilling to work toward reunification.
  • Recognize Intervene Support Empower (RISE): This project, operated by the Los Angeles LGBT Center, is focused on improving outcomes for lesbian, gay, bisexual, transgender, and questioning (LGBTQ) children and youth in the foster care system by identifying them appropriately and providing training to parents and staff to reduce heterosexism and anti-gay and anti-transgender bias. RISE has designed care coordination services for LGBTQ youth focused on education about and support for LGBT identity, with the aim of increasing family acceptance.

A sixth demonstration project in Arizona ceased participation in the program in June 2013.


The “PII Approach”

While the primary purpose of PII is to devise and test innovative ways to promote permanent placements for disadvantaged children in the foster care system, one of its most striking features may be its overall approach to innovation and evaluation.

Two elements in particular stand out. First, it has taken a deliberate approach to developing, adapting, and testing its innovations before subjecting them to full RCT-based evaluations. Second, it has incorporated aspects of an emerging field called implementation science, which is intended to reduce some of the barriers to replicating evidence-based programs. Taken together, the program’s design features seem like a direct response to the “causal wars” that have been raging for years.

“We are making up a controversy that doesn’t exist,” said Testa. “There is nothing inherent in a RCT that doesn’t allow you to do the rich descriptive work in an observational study. What implementation science has done is make it more transparent.”

Although the program’s documentation describes it in somewhat different (and more technical) terms, the strategy, dubbed the PII Approach, includes the following components and phases:

  • Initial Exploration: The first phase focuses on exploring and understanding the issues being addressed, the high-need target populations (often identified using data mining), and the desired outcomes. It includes literature reviews, identification of possible evidence-based interventions, development of falsifiable logic models, identification of comparison groups, and early-stage usability and feasibility studies.
  • Program Development (Formative Phase): The second phase focuses on choosing and further developing a specific intervention through a process of testing, fine-tuning, and continuous improvement. This phase also includes measuring program components that the field of implementation science has identified as potentially critical to later replication, such as the level of participant exposure to the program (dosage) and program fidelity. It also includes a preliminary “formative evaluation” using a small sample of program participants (commonly around 60-100) to test the model’s potential. The expected end product of this phase is a stable intervention that is ready for testing — complete with a program manual, logic model, fidelity criteria, training requirements, checklists, and other components needed to replicate and evaluate it properly.
  • Formal Evaluation (Summative Phase): This third phase involves a rigorous “summative” evaluation of the intervention developed in phase two, preferably using a randomized controlled trial or, where that is not feasible, a quasi-experimental design, along with an associated cost study. While the second phase involves constant tweaking and refinement, this third phase focuses on remaining faithful to the tested model. A phrase commonly repeated by those involved in the program is “no peeking, no tweaking.” As one PII grantee noted, the goal is to evaluate how the intervention is working. “You can’t do that if you have a constantly moving target.”
  • Replication: This fourth phase focuses on expanding, replicating, and adapting the tested model for different target populations, settings, and geographic locations. During this phase, the focus shifts from determining whether the original program worked to why, when, where, and for whom. It tries to determine which aspects of the initiative were critical and must be replicated faithfully and which may be changed or adapted. This phase also includes further developing coaching, data systems, and training materials that will be used for replication. While none of the PII grantees have reached this stage, the PII program has planned for it by creating a Dissemination Committee that is intended to coordinate communication with the broader child welfare community.
  • Continuous Quality Improvement (CQI) and Ongoing Evaluation: This last phase, which is fleshed out more fully in the Children’s Bureau’s recently released Framework to Design, Test, Spread, and Sustain Effective Practice in Child Welfare, recognizes that because programs exist in a dynamic world where leadership, budgets, legislation, and knowledge change over time, the tested programs must change too. As they change, they must be subjected to additional evaluations to confirm their continued impact and discontinued when they are no longer effective. Due to the long-term and ongoing nature of the activities in this phase, they will likely take place after the five-year PII program has been completed.

All of the PII grantees, now entering their fifth year, appear to be somewhere between the second (formative) and third (summative) phases. All went through extensive training in the program’s early years, with guidance from the evaluators, JBS International, and the National Implementation Research Network (NIRN) at the University of North Carolina at Chapel Hill.

Final “summative” evaluations for most of the five PII projects, as well as the PII program as a whole, are being designed and conducted by an evaluation team led by Westat in partnership with James Bell Associates and Ronna Cook Associates. These final evaluations are expected to be publicly released sometime in 2016.

In interviews, personnel at or associated with all five of the grantees described their experience to date as positive. All said their early results appeared promising, but several cautioned that they would not know with any certainty until the final evaluations were complete.

Testa, who conducted evaluations of the subsidized guardianship program in Illinois with Westat and is now serving as principal investigator of the PII evaluation as well, affirmed this point, saying that the premature release of results based on incomplete data or analysis was a potential problem PII was working to avoid.

Nevertheless, all of the grantees reported learning substantial lessons along the way. One is that adapting existing models may be easier than developing new ones from scratch. The KIPP program in Kansas, for example, adapted a top-tier evidence-based program and appears to be coming in ahead of schedule.

In other cases or for other populations, however, the general paucity of existing evidence in child welfare may mean that developing new models is the better (or only) choice. Moreover, implementing system-level changes — including those involving resource allocation, policies, and procedures — may take more time.

Several of the grantees said that participating in PII also brought additional benefits, such as a better understanding of implementation science, administrative data, and organizational cultures. “But watch out when you ask someone’s opinion,” one joked, “because you just might get it.”


Implications for Evidence-based Policy

While it may be too early to draw conclusions about the individual PII projects or the innovations they are testing, the program as a whole may still provide some lessons for the broader field of evidence-based policy.

One is that creating evidence takes time. When the PII evaluations become available in 2016, nearly six years will have passed since the grants were first announced in October 2010 — and even more if the time spent planning the initiative is counted.

This is consistent, however, with the time spent in other innovation initiatives, like the Social Innovation Fund and Investing in Innovation (i3) program. Early results for those programs are expected to begin rolling out next year. Moreover, substantially less time has been invested in PII than was spent developing, testing, and replicating the guardianship assistance program, which was widely considered a success.

It may prove to have been time well spent. Earlier this year, in an article about the i3 program, Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University (who also is associated with one of the i3 grantees, Success for All), wrote about the forthcoming evaluations of that program:

I’m not sure if policymakers or educators are ready for what is about to happen. If most i3 validation and development projects fail to produce significant positive effects in rigorous, well-conducted evaluations, will opinion leaders celebrate the programs that do show good outcomes and value the knowledge gained from the whole process, including knowledge about what almost worked and what to avoid doing next time? Will they support additional funding for projects that take these learnings into account? Or will they declare the i3 program a failure and move on to the next set of untried policies and practices?

The PII program will not be immune to these same political dynamics. This makes the PII Approach — with its emphasis on exploration and careful development of innovative programs — even more important. Will its deliberative approach produce better results? When the final evaluations are issued in 2016, they may render a verdict not just on the individual PII projects, but on the PII Approach itself.

Testa is more sanguine. “When the guardianship assistance program was first begun in Illinois in the 1990s, it was one of three or four initiatives that were launched at the same time,” he said.  “No one remembers the others. They only remember the one that worked.”
