What We Don’t Know about Evidence-based Programs

In recent years, both government and private funders have called for greater use of evidence-based programs and practices to serve children and youth. This is generally a good thing: we should fund programs that have at least some potential to improve the lives of children and youth. The good news is that there is a significant, and growing, number of programs with demonstrated positive outcomes. For example, Child Trends maintains a "What Works" database of more than 600 experimentally evaluated programs, which includes programs that have demonstrated positive, mixed, and negative impacts on children's and youth's outcomes.


The bad news is that it is far from clear what distinguishes effective programs from ineffective ones. In other words, we don't know much about why programs do or don't work. In 2007, the late Douglas Kirby wrote Emerging Answers: Research Findings on Programs to Reduce Teen Pregnancy and Sexually Transmitted Diseases. In this update of his 1997 report, he emphasized how much the number of rigorous impact studies had increased over ten years. Yet throughout the report he noted how little we know about program implementation. Indeed, one of Kirby's recommendations for the field was to "Provide much more complete program descriptions in published articles, as well as more informative process evaluations, to help reviewers ascertain why some programs were effective and others were not."

Unfortunately, although progress continues to be made on impact studies, relatively little progress has been made in studying program implementation across a variety of programs, including teen pregnancy prevention programs. For example, Dr. Mary Terzian, a colleague at Child Trends, has been reviewing teen pregnancy prevention programs that work for Latino adolescents (forthcoming), and she found very little information on implementation in the evaluations. Recently, I reviewed a comprehensive list of reproductive health programs from Child Trends' database of programs that work, and don't, to improve children's and youth's outcomes. Those studies support a few clear conclusions about what works and what doesn't, but many strategies work in one program and not in others. Similarly, both short-term "light-touch" and long-term intensive programs have evidence of effectiveness, yet both short- and long-term programs also appear among those that do not work. Without detailed information about program operations, it is not possible to disentangle the features that matter greatly from those that don't.

Interestingly, current replications of evidence-based programs place a priority on ensuring that the programs are implemented with fidelity to the program model, in order to improve the chances that the program's effects can be reproduced in other settings. This is important, but it skips a step: if we don't understand what made a program effective in the first place, it is difficult to replicate the effects that made the program desirable. And the evidence is clear: program replications tend to show smaller effects. A better understanding of program implementation will improve the chances for successful replications.

Karen Walker, Ph.D., Senior Program Area Co-Director, Youth Development



Filed under Children

2 responses to “What We Don’t Know about Evidence-based Programs”

  1. We have implemented a program in a variety of ways in 63 out of 67 county children and youth agencies in Pennsylvania. Your blog post speaks to a specific concern we have now that we are attempting to define the program within the framework of EBP in order to facilitate replication in other states. A needs-based, tailored approach has worked for us in the past, but EBP tells us to take a one-size-fits-all approach. Thank you for your suggestions for finding a happy medium.

  2. Kimberly Massey

    I believe the use of evidence-based programs is important. However, using a rigorous research design in the field is often more cumbersome and resource-restrictive than many NPOs can take on. What I find interesting is that even the most “researched” programs are leaving out important indicators: have none of these programs been around long enough to show statistical evidence of whether there has been a reduction in teen pregnancy/birth rates since they began (in reference to school/community-based curricula)?

    My agency created a program that was presented to a large population of 8th and 9th graders across a county over a five year period. The pre/post test results demonstrated an improvement in knowledge and a desire to change behavior to choose abstinence. During that time, this county had the most consistent drop in teen birth rates in the state, and exceeded the state’s average for reduction in teen birth rates. When the program was discontinued, the teen birth rate immediately began to rise, and leveled out to match the state rate. I believe there is enough evidence in the numbers I have collected to begin a more rigorous evaluation of this program as evidence based, and you can bet, we will watch the impact on actual birth and pregnancy rates!
