Building Evidence in Human Services: An Interview with ACF’s Naomi Goldstein

While many federal agencies have been building and increasing their use of research-based evidence in funding and other policy decisions, the Administration for Children and Families (ACF) is considered one of the leaders.  Programs within ACF’s purview are wide-ranging, including Head Start, child care, child support, child welfare, adolescent pregnancy prevention, and many others.

How is evidence being built and used for these programs? Where are these programs headed?  We asked Naomi Goldstein, Director of ACF’s Office of Planning, Research and Evaluation (OPRE).


SIRC: Let’s start with the big picture. Very early on, the Obama administration made a point of emphasizing evidence in federal programs, including both building the existing evidence base and using that evidence where it is available. Earlier this year, your office published an updated evaluation policy in the Federal Register that reflected that same dual focus. Can you tell us more about ACF’s strategy for building and using evidence in its programs?

Naomi Goldstein: Actually, ACF has a long history of building and using evidence under many administrations, with the longest track record in the areas of welfare and early childhood programs. A recent book called Fighting for Reliable Evidence describes “forty-five years of uninterrupted, large-scale random-assignment studies that assessed the effectiveness of [welfare] reform initiatives.” Full disclosure: the authors are Howard Rolston, my predecessor as director of ACF’s Office of Planning, Research, and Evaluation, and Judy Gueron, the former president of MDRC, a firm that has done many evaluations under contract to OPRE.

This history is worth mentioning for a few reasons. First, building knowledge to improve social programs is a long-term enterprise. While some questions can be answered quickly, in other cases evaluations may take years to plan and carry out. Second, really robust, useful knowledge requires ongoing portfolios of work, not just one-off investigations of isolated questions. Third, it takes time to develop both the capacity for rigorous, relevant evaluation and a broad culture that values evidence. ACF has been recognized by the Government Accountability Office as an agency with a “mature evaluation capacity” and an “evaluation culture.”

It’s because of that history, capacity, and culture that ACF and OPRE have been able to respond to the administration’s emphasis on evidence. In the last several years we have taken on new activities related to teen pregnancy prevention, home visiting, health professions training pathways for low-income people, and other areas. Ron Haskins and Greg Margolis of the Brookings Institution pointed this out in their recent book, Show Me the Evidence, when they noted that “OPRE was well suited for the evaluation role [in home visiting] because of its long history of rigorous evaluation.”

ACF’s evaluation policy is based on five principles: rigor, relevance, transparency, independence, and ethics. I could go on at length about each of these, but instead I’ll mention just a few key points. Rigor means we try to get as close to the truth as we can by using the best scientific methods available. Relevance is just as important, because accurate information that’s irrelevant to program and policy decision-making is just not useful. We aim to build strong partnerships with program staff, policy-makers, and service providers, both in figuring out what questions to address and in interpreting the findings. We’ve been working on improving our dissemination activities so that we can make information available in ways that are accessible and useful. Transparency means, among other things, that we will release evaluation results regardless of the findings. And independence is important to preserve objectivity.

SIRC: Some of the other federal agencies and departments have clearinghouses that are evaluating the existing evidence base in their respective fields. Examples include the What Works Clearinghouse at the U.S. Department of Education and the Clearinghouse for Labor Evaluation and Research (CLEAR) at the U.S. Department of Labor. Are there any similar plans for a clearinghouse within ACF?

Naomi Goldstein: I’m so glad you asked. We have created several clearinghouses and systematic evidence reviews, with more on the way.

The Home Visiting Evidence of Effectiveness review, which we call HomVEE, is a systematic review of the evidence on home visiting service models for low-income families with pregnant women or young children. By law, grantees of the federal Maternal, Infant, and Early Childhood Home Visiting program must reserve three-quarters of their funds for models with evidence of effectiveness. This review determines which models qualify.

We’ve done similar reviews of responsible fatherhood and healthy marriage and relationship programs, and we have a review of employment programs underway. We also sponsor a couple of broader clearinghouses, one called Child Care and Early Education Research Connections, and one called the Self-Sufficiency Research Clearinghouse.

We talk often with our colleagues at DOL, Education, and other agencies that sponsor evidence reviews and clearinghouses. We share information and methods so we don’t each reinvent the wheel.

SIRC: Some of the coordination of evidence-related work within the administration seems to be coming from above, specifically the White House Office of Management and Budget. But there also appears to be a lot of cross-agency, horizontal cooperation going on. For example, there appears to be a cross-agency effort to develop a “common framework” for evidence — and ACF appears to be at that table. What is this and what can you tell us about how these efforts will affect ACF programs?

Naomi Goldstein: Yes, there is a lot of inter-agency cooperation. In some cases we collaborate on specific projects. For example, we are working closely with DOL on evaluations of subsidized and transitional employment programs for low-income workers, even using common survey questionnaires so that we’ll be able to learn more by comparing results across different programmatic approaches, settings, and populations. At a broader level, I can say that I learn something every time I have a chance to talk with evaluation colleagues in other agencies – maybe an idea about applying an emerging research method, or an idea about staff training, or how to structure evaluation contracts.

SIRC: So far, we have focused a lot on building the evidence base. But let’s talk more about how that evidence may be used and rolled out more broadly to the many ACF-funded programs. There are a number of big-budget funding streams. How will evidence begin to be integrated into those larger discretionary and formula grant programs?

Naomi Goldstein: It’s challenging to design and carry out rigorous, relevant research and evaluation. But figuring out how best to use evidence from those studies can be even more challenging. Results are sometimes complicated and confusing. And there will never be enough rigorous evidence to fully inform every decision. Evidence from research and evaluation has to be combined with many other kinds of knowledge, practical considerations, and values: experience and judgment, expert opinion, anecdote, politics, public opinion, capacity, and cost all count in decision-making.

Communication between researchers and evaluators on the one hand, and policy and program staff on the other, can be hard work. In spite of many shared goals, people working in these different spheres may be subject to different types of pressures, operate on different timelines, and use different jargon. Relationships matter. Trust matters. Like many other worthwhile things, these can take time to build, and commitment to maintain.

ACF is a big agency, responsible for several dozen programs, each with its own statutory language, goals, staff, history, service population, and structure of state/federal/local responsibilities. These contexts influence how evidence can best be integrated into policy and practice. Nevertheless, across ACF programs I am seeing an increase in focus on the creation and use of evidence.

For example, ACF’s standard template for funding opportunity announcements now includes options for applicants to propose either to collect performance management data for use in continuous quality improvement or to conduct a rigorous evaluation. More and more, I am seeing ACF programs investing in building and using evidence.

One example is the Permanency Innovations Initiative, which you highlighted in a recent blog post. Another is the new Head Start Designation Renewal System, a major initiative under which grantees that meet certain conditions must compete for continued funding. And the Community-Based Child Abuse Prevention program has set a performance goal to increase the percentage of funding that supports evidence-based and evidence-informed child abuse prevention programs and practices.

SIRC: Some evidence advocates say that the full power of administrative data is only beginning to be realized, and that its use will accelerate once evaluators and practitioners begin to tap the existing treasure troves of administrative data. The White House Office of Management and Budget released guidance on the use of administrative data earlier this year.

ACF seemed to make its own contribution in August when it released a new confidentiality toolkit, which was intended to help human services providers navigate the confidentiality and data security requirements of various ACF programs. What can you tell us about how administrative data can be used to build evidence in ACF programs? Where are some of the low-hanging fruit and/or early wins we might see coming out of these efforts?

Naomi Goldstein: I agree that administrative data holds enormous promise as a tool for evaluation, performance management, quality improvement, and related activities. We already use administrative data in evaluations whenever we can, such as employment data from state Unemployment Insurance systems or the National Directory of New Hires; health outcomes from state birth and Medicaid records; and data on child maltreatment reports and services from child welfare records.

But using administrative data isn’t always as easy as you might think. For one thing, to get access to administrative data in more than one jurisdiction – say, multiple states – you have to negotiate agreements with each jurisdiction. For a large, geographically dispersed study, this can cost a lot of time and effort. Completeness and quality of data can be another limitation.

I hope that the more we harness administrative data for useful purposes, the smoother the path will become. For example, data use agreements will become more routine and standardized, and shortfalls in quality or completeness will be remedied if the data are increasingly seen as useful rather than merely required.

As one effort to enhance the use of administrative data, we recently awarded a grant to the University of Chicago for a Family Self-Sufficiency Data Center. Among other activities, the Center will acquire state and local administrative data, clean it and link it to other records, make it available to potential users, and provide technical assistance and training to both users and providers of data.
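To make the “clean it and link it” step concrete, here is a minimal sketch of a deterministic record linkage in Python. Everything in it (the file names, column names, and exact-match rule) is a hypothetical illustration, not a description of the Data Center’s actual pipeline; real linkages of administrative data typically involve probabilistic matching, clerical review, and strict data-security controls.

```python
# Hypothetical sketch: clean two administrative extracts and link them
# deterministically. File names, column names, and the matching rule are
# illustrative assumptions, not the Data Center's actual process.
import pandas as pd

def clean_ids(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize the fields used as linkage keys."""
    df = df.copy()
    df["last_name"] = df["last_name"].str.strip().str.upper()
    df["dob"] = pd.to_datetime(df["dob"], errors="coerce")
    # Drop records whose keys are too incomplete to link reliably.
    return df.dropna(subset=["last_name", "dob"])

# Hypothetical extracts: TANF case records and state UI wage records.
tanf = clean_ids(pd.read_csv("tanf_cases.csv"))
wages = clean_ids(pd.read_csv("ui_wages.csv"))

# Deterministic link on exact last name plus date of birth.
linked = tanf.merge(wages, on=["last_name", "dob"], how="left",
                    indicator=True)

match_rate = (linked["_merge"] == "both").mean()
print(f"Matched {match_rate:.1%} of TANF records to wage records")
```

A deterministic, exact-match link like this is the simplest possible case; inconsistent identifiers across systems are one reason the data use agreements and quality issues Goldstein describes above loom so large in practice.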

SIRC: Is there anything else you would like to share about the future of evidence in ACF programs?

Naomi Goldstein: I think the future is bright. I can’t say enough good things about the talented, committed staff in OPRE and throughout ACF. I see evaluation staff getting more sophisticated all the time, not just about evaluation methods, but also about how to collaborate with program partners toward our shared goal of making ACF’s services more effective. And I see ACF program staff getting more sophisticated about how to use evidence and also about how to work with evaluators both in developing questions and in interpreting findings.
