The recent economic crisis has intensified challenges throughout the nonprofit sector. Nowhere is the strain felt more than in the field of out-of-school time (OST), where an “Accordion Effect” (simultaneous top-down pressure for accountability and bottom-up pressure to meet the ever-growing demands of the hardest-hit communities) has squeezed the quality out of programs.
With resources scarce, funders and donors have begun to rely heavily on evaluations to help make critical decisions about where those resources go. In the quest for certainty, such terms as “evidence-based,” “research-based” and “data-driven” have become commonplace.
However, what do these terms really mean in the lives of most practitioners? I would argue not much, as there are almost no resources to help them make these practices a reality.
To bring these terms to life, we would need to address several key challenges, including:
- Expensive experts: I hate to say it but … evaluation is expensive! Increasingly, midsize and even small nonprofits are expected to keep full-time professional evaluators on staff or to hire outside consultants to prove program impact.
- Measurement misfits: Driven by funding demands, program staff spend precious time and resources capturing mandated data (report cards, test scores, attendance records and the like), knowing full well that these metrics neither tell their full story nor are fully attributable to their programs. Youth work is reduced to the results that are easiest to measure, not necessarily the ones that matter most.
- Big fat data: Evidence-based programming is built by studying the programs that generate big datasets, so it highlights only practices that have been used “at scale.” This excludes the majority of smaller, often more innovative programs from the narrative about “what works.”
- Analytic agitation: Research and evaluation reports are written by and for academics and funders. The best studies answer the question: Did the program produce statistically significant gains for the average youth? Even so, they leave practitioners with little practical guidance on how to improve outcomes for ALL youth, including those hardest to serve.
- Data delays: From start to finish, evaluation takes a long time. By the time findings become available, program planning for the next cycle is already complete. And because most nonprofits can afford an evaluation only every few years, the lessons are quickly outdated.
Why, in an era so rich in information and technology, aren’t we solving these problems? My current work centers on this question and shapes my recommendations for the OST field and for those who evaluate its work. These recommendations include:
- Measure what matters: Today, cutting-edge research on Social Emotional Learning (SEL) and Positive Youth Development (PYD) allows us to focus on what matters most to practitioners: the development of the whole child. Recent studies allow us to use PYD and SEL outcomes as proxy measures for increased academic gains, decreased risk behaviors and overall thriving. I suggest the field begin using these measures broadly to increase the quality and usability of data for individual programs while expanding opportunities for shared learning.
- Value rather than evaluate: If we paired these shared metrics with next-generation analytics (such as predictive and prescriptive statistics and machine learning), we could learn far more about what “works” for each and every youth. In this way, we could begin valuing all the various pathways toward successful outcomes rather than just those taken by the “average” youth.
- Provide timely insights at a low cost: We need to take advantage of new technologies that let us gather data, analyze it immediately and feed it into tailored reports with tips for how to best meet each youth’s needs; a minimal sketch of such a collect-analyze-report loop follows this list. Such technologies would increase data utilization and ultimately the impact on youth. Best of all, they would drastically decrease the cost of evaluation.
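To make that last idea concrete, below is a minimal sketch, in Python, of the kind of collect-analyze-report loop described above. Every specific in it (the SEL survey domains, the scores, the threshold of 3 and the tip text) is invented for illustration only; it is not Algorhythm’s product or any real program’s data.

```python
from statistics import mean

# Hypothetical post-session SEL survey scores (1-5 scale) for two youth.
# Domain names and values are invented for this sketch.
survey = {
    "youth_001": {"self_management": 4, "social_awareness": 2, "optimism": 3},
    "youth_002": {"self_management": 2, "social_awareness": 4, "optimism": 4},
}

# Invented example guidance keyed to low-scoring domains.
TIPS = {
    "self_management": "Open each session with a brief goal-setting check-in.",
    "social_awareness": "Rotate this youth through varied small groups.",
    "optimism": "Close each session by naming one concrete success.",
}

def report(scores, threshold=3):
    """Flag domains scoring below the threshold and attach a practice tip for each."""
    flagged = [domain for domain, score in scores.items() if score < threshold]
    return {
        "average": round(mean(scores.values()), 2),
        "tips": [TIPS[domain] for domain in flagged],
    }

# Analyze immediately and print a per-youth report with actionable tips.
for youth, scores in survey.items():
    print(youth, report(scores))
```

The toy logic is beside the point; the turnaround is the point. Data entered today can shape tomorrow’s session, rather than arriving in a report long after the program cycle has ended.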
Only when front-line workers’ questions are at the center of the discussion does it become possible to deliberate meaningfully on ideas such as data-driven decisions and evidence-based practices. As a collective, we have the potential to generate useful information that improves both our impact and the field.
Kim Sabo Flores is senior vice president at Algorhythm. Contact her at info@algorhythm.io.