A Data Shortage Yields Unplanned Lessons

When the researchers responsible for evaluating the federal Title V Community Prevention Grants Program realized that virtually none of the program’s grantees had the capacity to collect and feed evaluation data to their study, their first reaction, according to their final report, was to “provide more training and technical assistance.”

But faced with the news that such an approach would cost an estimated $3 million and delay the evaluation by more than a year, they did what youth programs have done for decades.

“We decided to work with what we had,” said Susan Chibnall, a former managing associate at Caliber Associates, the group’s project manager for the Title V evaluation. Chibnall also co-authored the “National Evaluation of the Title V Community Prevention Grants Program,” released by the U.S. Office of Juvenile Justice and Delinquency Prevention (OJJDP) in August.

In what was probably the most applicable of the many lessons learned over the course of this eight-year, multimillion-dollar, multi-site evaluation, the researchers stumbled on a key tool for anyone working with the funding, design, implementation or evaluation of community youth programs: Be flexible.

A Sure Thing

The Title V program was based on a risk- and protection-focused model of prevention called Communities That Care. The model brings communities and agencies together to reduce the risk factors (such as drug use) that act as predictors of problem adolescent behaviors, and enhance the protective factors (such as social bonding) known to act as buffers to such behavior.

From 1994 through 2002, the federal government distributed $231 million through the Title V program to more than 1,500 grantees in 49 states. The awards are paid to State Advisory Groups (SAGs) – gubernatorially appointed juvenile justice advisory boards that help state officials administer federal funds.

The SAGs, in turn, approve subgrants to city-, county- and community-level agencies that provide various delinquency prevention programs.

All subgrantees are required to be SAG-certified, assemble an advisory board of community stakeholders, submit a three-year delinquency prevention plan and provide a match of 50 cents to the grant dollar in cash or in-kind services.

Changing Strategies

The researchers knew from the outset that the Title V program wasn’t a candidate for a traditional experimental/quasi-experimental evaluation design.

Among the problems: the horizontal (cross-agency) and vertical (federal, state, local) complexity of the model; contextual issues, such as demographics, that varied across communities; the flexible and evolving nature of prevention strategies and activities; the possible broad range of outcomes (everything from reduced drug use to increased volunteering); and the absence of comparison or control groups.

When the evaluation began in 1998, the researchers conducted “evaluability” assessments at 11 sites in six states to determine the capacity of the sites to collect and report data. The report says the results were “not encouraging.”

“There wasn’t a lot of accountability. People were just sort of doing good things for kids,” Chibnall said.

Among the problems discovered by the researchers: Grantees and subgrantees were unfamiliar with the basic Title V model and the data collection tool provided with the model, even though some had been applying for and receiving Title V funds for nearly four years. The sites generally didn’t have local evaluation and data collection plans or resources. And they weren’t held adequately accountable by the SAGs, which – untrained themselves in the rigors of evaluation methodology – were willing to accept unscientific, anecdotal quarterly progress reports as proof of effectiveness.

At some point, according to the report, the major research question became: “Given enough training and technical assistance, could these communities collect and report data as required by the national evaluation?”

“We went to OJJDP and said, ‘Look, the level of technical assistance that they are going to need in order to implement this fully … is really cost-prohibitive,’ ” Chibnall said. “So we said, ‘Let’s just go in and see what we can do from here.’ ”

The final report provides detailed case-study descriptions of the communities surrounding each of the 11 grantees, their mobilization and collaboration efforts, their initial assessments and planning, the implementation of their prevention strategies, and their monitoring and evaluation activities.

It also underscores the need for a more collaborative approach to evaluation early in the implementation process. That approach views the community – including program staff, participants, parents, agency leaders and program advisers – as the “end-users” of evaluation data. It also holds that the primary use of evaluation information should be program improvement and decision-making, and that evaluations must recognize the uniqueness of each community and the social, political, economic and cultural contexts in which each program exists.

Lessons Learned

The study emphasizes that while the national evaluation “provided opportunities to learn about how communities plan and implement local delinquency prevention initiatives,” it also provided “somewhat unanticipated” opportunities to learn about evaluation itself.

Among the lessons:

• Broaden the definition of success. The authors say it’s important to remember that communities start in different places. “We need to focus on what it makes sense to evaluate and hold people accountable for,” Chibnall said. “In some of the places we were, people [from different agencies and programs] had never come to the table before. Just doing that was really, really big for them.”

• Encourage communities to start small and build on success. The study encourages – and provides examples of – the development of manageable plans to which programs and strategies are added as needed.

• Provide ongoing technical assistance. It’s expensive to pay for ongoing evaluation training and technical support, so “it doesn’t really get done right,” Chibnall said. Inviting trainers to a site just once, never to have them return, “just doesn’t work,” she said, especially in programs where high staff turnover may leave an organization without trained personnel within months of a training event.

• Emphasize program evaluation and risk-factor tracking. Positive effects may not be visible for years, and the small size of many programs makes it unlikely that they alone could affect city- or county-wide measures of such risk factors as juvenile arrests or alcohol purchases. The study emphasizes that documentation at least allows a theoretical link between program outcomes and anticipated changes in risk factors.

• Build state-level evaluation capacity to monitor and support local evaluation. Chibnall said that because state agencies fail to train their staff on evaluation methodology and “what good evaluation looks like,” grant reviewers “have no idea what they’re looking for regarding outcome data.” She recommends that every state agency have a trained grantee evaluation group.

• Mandate and fund evaluations. Understanding that program directors are often reluctant to spend money on evaluation at the expense of direct services, the authors suggest that grantees be required to set aside a percentage of their awards for evaluation activities, such as hiring an evaluator.

• Require the use of evidence-based practices. EBPs such as Multi-Systemic Therapy and Functional Family Therapy come with their own evaluation instruments that simplify and jump-start the evaluation process. But Chibnall urges caution:

“There’s a place for [EBPs], but it’s not possible for everyone all the time. When you’re in Valentine, Neb., and the kids are driving across the border to buy booze, Multi-Systemic Therapy is not going to work. They need something for their kids to do. We need to keep our wits about us and think about it a little more broadly.”
