The Big Lie


You know all that evidence that’s come out in recent years showing that after-school programs boost academic achievement?

Researcher Robert Halpern has a name for it: the Big Lie.

In his soon-to-be-published monograph, “Confronting the Big Lie: The Need to Reframe Expectations of After-School Programs,” Halpern swims against the tide. He argues that funders and researchers should abandon narrow academic expectations for after-school programs and undertake a different style of evaluation – one that considers both the “breadth of developmental tasks of children of different ages, and of those tasks after-school programs are best suited to help address.”

An after-school evaluator and a professor at the Erikson Graduate School of Child Development in Chicago, Halpern says “fear” has driven after-school programs away from their natural strengths, pushing them to reshape themselves to show results that stray from their core objectives.

“Everybody’s afraid that they’ll be deemed irrelevant” if they don’t, he says.

He calls his paper a cry to other evaluators and to stakeholders that judging after-school programs primarily by academic measures “is way out of hand.”

“The only reason … I argue so hard for the distinctive identity and role of after-school programs is because it’s been so lost,” Halpern says. “We shouldn’t confuse our agendas. After-school programs need to be respected as a normative developmental support. It’s especially critical now, because other institutions and settings aren’t willing and able to address the range of normal developmental needs that kids have.”

The New Agenda

Halpern touches on a sensitive subject that’s tied to the recent evolution of after-school programs. For decades, those programs focused primarily on supports such as snacks, enrichment activities, informal social play and a little homework help. Despite occasional forays into programming aimed at concerns such as delinquency, after-school programs operated at the margins of social and political consciousness.

That changed in the 1990s, when increasing academic achievement emerged as a national priority. Government agencies and foundations launched initiatives that tied funding for after-school programs to expectations that the programs would boost academic achievement in measurable ways.

Halpern says the programs suddenly found themselves with a lot of new “friends” – constituencies that redefined the after-school mission on their own terms.

Longstanding traditional providers, such as Boys & Girls Clubs of America and the YMCA, were “caught off-guard,” Halpern writes in his monograph. Programs couldn’t quickly “come together to develop the simple, resonant, problem-oriented storyline demanded of a public issue in American life.”

Providers “were inclined to continue arguing for after-school programs in broad developmental terms,” Halpern writes. “But they also knew that a meaningful share of scarce resources would not be secured by arguing that low- and moderate-income children deserved the same access to fun, enrichment and challenge as their more advantaged peers.”

Instead, he writes, providers were compelled to “make promises about academic effects that the providers knew were unrealistic” given their programs’ scope and focus. What’s more, he says, providers were rarely given money to develop evaluations that reflected the positive, nonacademic outcomes of their individual programs.

With stakeholders eager to justify after-school funding based on the new academic agenda, evaluators – forced to economize on both time and money – used tests administered by schools as part of their evaluations. But those tests were designed to measure the effects of a child’s schooling on academic achievement.

According to Karen Walker, director of research at Public/Private Ventures and an evaluator of several after-school programs – including San Francisco Beacons and the California-based Communities Organizing Resources to Advance Learning initiative – evaluators are still pressured by funders and policy-makers to use standardized scores, “because it’s relatively inexpensive to get them, they are available, and they’re what education departments at the state and the federal level are using to assess progress in education.”

Consequently, evaluators have been working with measures and data that have “nothing to do with the after-school programs they are evaluating,” Halpern says.

“You can only strictly separate academic effects from other effects when you define ‘academic’ in the narrow way in which we conceive of and define it today,” he says. “If kids, for example, come to love and enjoy reading more, and identify a particular kind of literature that they like a lot and have [access] to through participation in a program, that’s an academic effect, but not in the way we currently define academic effect.”

What to Measure?

When setting out to measure the effects of after-school programs on children, Halpern argues for more consideration of context.
He says evaluators should first consider the range of developmental needs and tasks that preoccupy children of different ages, then determine which of those needs and tasks the program is best suited to address and what kinds of experiences the kids are having in the program day in and day out. He suggests that evaluators regularly observe and talk with staff and children in a program for several months before settling on where to focus their outcome measurements.

“The strongest research comes from identification with the subject of your research,” he says.

Wouldn’t that skew the evaluator’s objectivity? “Objectivity is way overrated,” says Halpern. “Your biasing and identification are what allow you to get at the essence of the program and the kids’ experiences in the program.”

He believes “the worst suck-ups … are when these evaluators of these high-stakes initiatives find nothing in the way of academic effects and then struggle to find language to cover their butts.”

Hold on, says Walker of Public/Private Ventures. She says evaluators “can’t say ‘this program absolutely doesn’t work.’ Who wants to say a program is useless?”

She says evaluators investigate until they uncover even modest academic effects, such as effects on subgroups or changes that might lead to academic benefits down the road.

“If a program effects change in a subgroup of youth, that says to me that you might want to target your program more effectively to the kids who benefit,” she says. “And that’s not covering your butt. That’s using your information to speak to social policy.”

However, she says that “programs really do need to be clearer about what they want to achieve, and to the extent that they can, be straightforward with their funders,” about their objectives.

Even better, says Halpern, would be to stop concentrating so much on inappropriate outcome measures.

“If we [evaluators] think that the scientific or developmental or moral arguments” for funding after-school programs as distinct developmental institutions “are too modest, subtle or ambiguous, then we are free to tell the politicians and funders whatever we want,” Halpern concludes in his monograph. “There is not much difference between a small lie and a big one.”