Granger: “Front-line workers want to do the right thing . . . They need some help and support to do that.”
In the quest for the most effective after-school program recipe, program managers, funders and researchers have done just about everything: They’ve tinkered with curricula, studied the characteristics of participants and their families, examined staff recruitment and retention, measured the frequency and duration of participation, and set standards for academic, social and behavioral outcomes.
The result? An ever-increasing body of literature that seeks to define what enables some after-school programs to positively influence the lives of youth. But according to Robert Granger, president of the William T. Grant Foundation, the literature is behind the curve.
He says practitioners have reached a consensus about the elements of effective program practices. The best shot at creating high-quality standards, he says, lies in research that observes, defines and refines the qualities of effective youth workers.
“I don’t see [the issue] as do these programs make a difference or not,” Granger says. Rather, he phrases the issue as: “Front-line workers want to do the right thing, and they need some help and support to do that.”
Granger’s examination of three recent empirical meta-analyses, each of which combined statistical results from multiple studies, shows that “on average [after-school] programs have positive impacts on important academic, social and emotional outcomes.” His analysis appears in the April issue of Social Policy Report, published by the Society for Research in Child Development.
But when Granger dug deeper into what the studies were measuring, how they were measuring it and how individual programs were performing, he found something else at work.
Distilling Research
The three meta-analyses – referred to here by the names of their lead researchers, S.G. Zief, P.A. Lauer and Joseph Durlak – documented results ranging from significant, consistent positive effects on specific measures such as test scores to a complete absence of effects for whole clusters of programs.
Zief looked at the effects of programs that included academic and recreational activities and found no effects on academic, social or emotional outcomes in any of the five studies reviewed.
Lauer reviewed 35 studies of the effects of out-of-school time academic programs on at-risk youth and found, on average, small improvements in reading and math achievement.
Durlak looked at 66 studies of the effects of after-school programs on personal and social skills and found, on average, small but consistent improvements in achievement test scores, school grades, positive social behaviors, feelings about school and youths’ views of themselves, along with reductions in problem behaviors and drug use.
Although Lauer and Durlak each found positive effects “on average,” many studies of individual programs found no significant effects when compared with control groups. Nor were the findings always positive across every outcome measured within a single study.
Granger writes that such discrepancies show that rather than characterizing individual programs as high- or low-quality, we need to see “quality as something that varies within a program.”
The question, he writes, is: “Why do some programs create effects while others do not?”
The analyses by Lauer and Durlak had enough variation in outcomes and large enough sample sizes to look for predictors of positive or negative impact. Lauer examined factors affecting math and reading scores that could be shaped by policy and practice, such as program duration, the grade level of participants, the program’s focus (academic vs. academic/social), grouping strategy and the methodological quality of the study.
Again, no clear pattern emerged across outcomes. For example, both one-on-one tutoring and a mixed-group strategy improved reading outcomes, but one-on-one tutoring did not improve math scores.
Durlak took a different approach. He divided the studies into two groups: those whose programs focused on specific skills and used sequenced learning activities and active forms of learning (such as practice and role-playing) to teach those skills, and those that did not. He used the acronym SAFE (Sequenced, Active, Focused and Explicit) to describe programs with those features.
On average, the cluster of programs that had SAFE features showed positive effects for every outcome but school attendance, and the non-SAFE cluster showed no positive effects for any outcome.
Granger concludes that because different program features appear to matter at different times, “something else is driving the results.”
The “Real Strength”
Granger theorizes that the presence or absence of SAFE features is one possible explanation for the discrepancy in the findings of the three meta-analyses. That, he writes, is a “much better predictor of program effectiveness than other structural features discussed in the literature.”
But while the strength of the three meta-analyses he examines lies in their ability to relate program features to “experimental or quasi-experimental estimates of net changes in performance” among the youth participants, Granger speculates that the real strength of after-school programs probably lies elsewhere.
While the best research studies take into consideration the resources, standards and curricula available to programs, Granger says that “what separates out the programs that seem to be getting good results from those that are not … has to do with staff/kid interactions and the nature of those interactions; how supportive, warm, focused and growth-producing they are.”
Durlak, a professor of psychology at Loyola University Chicago and the lead researcher on the SAFE meta-analysis, supports Granger’s assessment in an accompanying article in Social Policy Report. He writes that more than 5,000 studies on program diffusion/dissemination indicate that a program’s “most important resources … are the [staff] people, and their talents and values.”
“If you want to influence real-world practices,” he writes, “you must make a concerted effort to inform and collaborate with front-line providers.”
According to Granger, a new generation of observational studies – in which researchers record what they witness at after-school programs – offers the best bet for data collection at that level.
While not yet perfected, observational measures of interpersonal interactions “attempt to say what actually goes on inside a successful program that will make kids do better,” Granger says. It is much like trying to characterize “what makes a good teacher” after research has established that teachers affect their students’ academic achievement.
Granger believes the practice of documenting behaviors of front-line youth workers is poised to have a big impact on the quality standards and outcomes of after-school programs.
In 2007, the Forum for Youth Investment released a review of nine observational instruments used to assess social norms, physical and psychological safety, skill-building opportunities, and program routine and structure – core concepts nearly identical to the SAFE features that Durlak’s analysis found most effective. While observational instruments are still in the early stages of development, the Forum review found that practitioners believe the measures yield data that can help them improve programs.
Such front-line measures are especially important given the field’s largely transient, part-time labor force, which often lacks pre-service training, Granger says. What’s most promising, he says, is to “work with site directors to help them be better ongoing coaches and staff developers for young people” by conducting “direct, ongoing staff development while they [the staff] are working with kids.
“It is very useful for people in any job to have a sense of ‘What should I be doing to do this job well?’ ”
Youth workers, Granger notes, are often the first to pose that question.
Contact: Granger (212) 752-0071; report is available at www.srcd.org/documents/publications/spr/spr22-2.pdf.