When youth-serving programs and their funders make their evaluations transparent, the entire field benefits from the lessons learned, even when the findings are negative or confounding. So programs of all types stand to benefit from the public airing of evaluation data from the Community Partnerships for Protecting Children (CPPC) initiative, which produced less than stellar results.
The Edna McConnell Clark Foundation (EMCF) launched CPPC in 1996 to demonstrate how child protection agencies could join forces with local communities to more effectively protect children from abuse and neglect. The Chapin Hall Center for Children at the University of Chicago evaluated the eight-year effort.
In a refreshingly collegial move, former EMCF President Michael Bailin and Frank Farrow, director of the Center for the Study of Social Policy (CSSP), released their own four-page statement to accompany the nearly 300-page evaluation. EMCF was the main funder, while the center provided technical assistance.
They wanted to help others build on the initiative’s findings. “We just wanted to say, ‘Look, this was a complicated effort. Here’s what we learned from this that makes sense and has been helpful, and here’s what we learned that raises some questions,’ ” Bailin says.
CPPC’s Noble Goal
For various reasons, the national rate of abuse and neglect reports more than quadrupled between 1976 and 1992 – from 10 reports per 1,000 children to 45 reports, according to the Chapin Hall report, “Creating Community Responsibility for Child Protection.” That growth made it difficult for public agencies to conduct adequate assessments and strained the availability of therapeutic resources for those who needed them most.
The initiative aimed to spread the responsibility for child safety among the people and organizations that knew families and were best able to respond to them, according to Susan Notkin, director of CSSP’s Center for Community Partnerships in Child Welfare. The center took over the community partnerships initiative in 2002, after a change in EMCF’s funding focus.
The main goal was to increase child safety by developing individualized courses of action for high-risk families; creating neighborhood networks of formal and informal support services; changing the policies, practices and culture of local child protective service agencies; and establishing local decision-making bodies of agency and community representatives. It was hoped that progress would be reflected in fewer child maltreatment reports and out-of-home placements.
During Phase I (1996-2000), four sites with relatively high rates of maltreatment and histories of child protection reform were chosen to implement CPPC strategies and meet a set of performance benchmarks: Cedar Rapids, Iowa; Jacksonville, Fla.; Louisville, Ky.; and St. Louis.
Although Chapin Hall’s evaluation of Phase I found only adequate implementation success and a general failure to establish partnerships, EMCF agreed to fund Notkin’s center to complete Phase II. From 2000 to 2004, the sites were “encouraged to deepen, institutionalize and expand their efforts,” according to the report.
With an eye toward the possible replication of the program, CSSP and Chapin Hall enriched Phase II evaluation measurements in order to “garner information that might have application for ongoing efforts,” the report says.
“We felt that the effort was important enough and significant enough that we needed to be learning as we went along … and making sure that we were sharing those lessons with the field,” Notkin says.
Chapin Hall did not respond to requests to discuss the project.
Getting Lemons
The Phase II evaluation was a complex quasi-experimental design involving a six-month longitudinal study of 331 families and their caseworkers; a longitudinal analysis of administrative child welfare data; surveys of agency managers, local child welfare workers and supervisors, and CPPC volunteers; and interviews with CPPC site leaders and national planners.
More than nine in 10 CPPC staff at the four sites said they felt the individualized courses of action (ICA) process effectively addressed child safety, and most families in that process reported “modest improvements” in their situations after six months.
However, there were no consistent trends in the number of maltreatment reports or placements across the four sites. In addition, CPPC sites linked families with needed services only 65 to 85 percent of the time, presumably due to a shortage of service providers. And while there were improvements in familiarity among agencies and service providers, the overall number of shared activities across agencies remained low.
Also, evaluators had trouble assessing CPPC’s impact on community feelings toward shared responsibility for child protection. Community members repeatedly declined to participate in telephone surveys or were unreachable, the report says, and none of the ICA families reported receiving any new informal supports from their extended families, neighbors or communities.
Making Lemonade
In the end, CPPC “did not demonstrate consistent impacts” on the number of child abuse reports, subsequent maltreatment reports or out-of-home placements across the four sites, according to the evaluation report.
Bailin and Farrow’s statement points to the evaluation’s most positive findings: Cases that received the most intensive CPPC service, the individualized courses of action, showed improvements in participants’ perceptions of case progress and reductions in their levels of depression and stress; and public child welfare workers reported increased feelings of job satisfaction, role clarity and commitment to CPPC practices.
Ultimately, Bailin and Farrow concede, “The fundamental objective of the CPPC initiative was to improve child safety … [T]hat did not happen.”
“However,” they are quick to follow up, “there is much that has been learned.”
Bailin and Farrow hope to move the field forward by sharing the next steps planned by the Center for Community Partnerships in Child Welfare: re-examining the community partnership theory and the factors that may undermine community buy-in; focusing on gaps in and obstacles to on-site implementation of the design model; and disseminating five lessons that warrant “close attention by future researchers.” Those lessons are:
• Evaluate outcomes and replicate models only after an initiative has reached full operating strength.
• Undertake cross-site evaluation only after the theory underlying the initiative has been fully formulated and translated into clear guidance for implementation.
• Factor in the weaknesses and limitations inherent in using administrative data when drawing conclusions about program effectiveness.
• Deliberately phase in and sequence complex interventions to allow for distinct evaluation of those components.
• When evaluating community-based initiatives that allow programs to tailor their efforts to local circumstances, use site-specific evaluation approaches. (See Evaluation Spotlight, February.)
Notkin says that “program innovators, especially when foundations are involved, [have] a responsibility to be capturing the lessons we’re learning … so that the field can adapt and grow.”
Contact: Notkin at susan.notkin@cssp.org. “Creating Community Responsibility for Child Protection: Findings and Implications from the Evaluation of the Community Partnerships for Protecting Children Initiative” is available at www.cssp.org/center/community_partnership2.html.