Guest Opinion Essay

The Uses and Misuses of Evaluations

There’s hardly a youth program manager in the country who doesn’t understand the uses and misuses of evaluation studies.

Surely all understand that evaluation is supposed to be about accountability, and accountability is the mantra (even obsession) in funding circles, on boards and increasingly in federal social policy – if we can believe the Bush administration with its emphasis on supporting only programs that pass muster in scientific evaluations.

(Or maybe this is a ruse to cut programs, since those clever Bush people know that few programs have been deemed “successful” as a result of rigorous studies. Who knows what mysterious thoughts lurk inside the Beltway?)

Accountability can be both frightening and healthy.

Nothing is worse than being held accountable for goals over which you, as a manager, have no control. Remember those poor Head Start directors who may have done an excellent job with preschool 3- and 4-year-old tykes, only to find the researchers wanted results that held up once the kids were in primary school? “Hey,” they thought, “I don’t control public education!”

It’s also frightening because, over the history of the evaluation industry, too many evaluators have disrespected program managers, asked questions irrelevant to management, measured the wrong things or measured them on a useless schedule, and never even shared the finished report with the agencies they evaluated.

On the healthy side, performance-based management is about having information that is measurable and connected to real outcomes, linked in a logical way back to the program’s design and strategies. What manager doesn’t see the value in this?

I am involved in three evaluations that show how smart leaders use evaluation for management.

In one program, the National Foundation for Teaching Entrepreneurship (NFTE) is revamping its assessment tools to better evaluate its self-employment/entrepreneurship curriculum and the youth who use it. Most importantly, NFTE leaders have come to realize that their program is as much about youth development as about business ownership, so they need to revise their pre- and post-program tests to reflect their changing understanding of themselves and their mission.

The second example is YouthBuild USA, where the Center for Youth and Communities here at Brandeis University has just completed a soon-to-be-released survey of 900 YouthBuild graduates, most of whom are in their early 20s. Any manager would want these kinds of details: the barriers and circumstances enrollees faced before they joined the program; whether the program really serves the people it says it serves; whether people who complete the program are doing well or poorly across a range of outcomes, and how that varies by type of participant; participants’ assessments of program services; sources of help the young people have used since completing the program; and the ways that various indices of “success” relate to pre-program characteristics, program experiences and post-program supports. YouthBuild USA management will use this fundamental information to expand and improve graduate services, among other things.

The final example is the Steppingstone Foundation in Boston. This program offers disadvantaged youth in the Boston schools (“Steppingstone Scholars”) the opportunity to work through an incredibly rigorous program of academic preparation after school, on weekends and in the summer. It is aimed at helping the youths gain entry to independent middle and high schools and to Boston’s highly competitive exam schools. An upcoming report from Brigham-Nahas Associates tracked participant and parent perspectives and was able to show precisely who enrolls in the program; the key and secondary outcomes; the perceived impacts of various program components; and the roles played by other sources of help, such as families and community institutions.

NFTE, YouthBuild USA and Steppingstone have all integrated, or plan to integrate, evaluation into the routine of their programs by hiring what might be called “chief organizational learning officers.” These aren’t soulless data clerks. They are sophisticated members of the management team who can connect evaluation to every part of the organization: linking it to strategy, helping their peers learn from systematic information and responding to external demands in an era of relentless emphasis on performance and accountability.

Andrew Hahn is a professor at the Heller School for Social Policy and Management at Brandeis University and directs evaluation and policy analysis projects at the school’s Center for Youth and Communities. Contact: ahahn@brandeis.edu.
