School-based mentoring – a cornerstone of the rapidly expanding mentoring field – took a big hit last month when President Obama announced plans to eliminate funding for the federal Student Mentoring Program, citing a new study that says the program has had little impact.
The evaluation marks the third time the government has studied the program since it was authorized in 2002 as part of the No Child Left Behind legislation – and the third time the results have been disappointing.
Beyond the federal program, the new developments raise questions about school-based mentoring overall. Big Brothers Big Sisters of America (BBBSA) – which has invested more effort in the approach than anyone – has revamped its school-based program and issued a statement about the Obama budget plan, saying, “We believe that well-run school-based mentoring programs can and do have real impact.”
The federal government has had trouble proving that. The first study of the federal program, released by the Government Accountability Office in 2002, exposed a lack of planning and monitoring, and found no way to determine outcomes because no evaluation system was in place.
The second study, released in 2006, found that the federal effort duplicated other programs and that there were still no evaluations.
The newest study, a full-blown evaluation by Abt Associates, found that the $300 million program doesn’t make much difference. It reiterated the organizational and monitoring problems identified in the earlier studies.
In addition, a study of several BBBSA programs, released in 2007, found that their impacts were few and short-lived, and almost entirely academic. (See “School-based Mentoring: Does it Make the Grade?”, September 2007.)
Because it is built around the school structure and calendar, the school-based mentoring approach is inherently limited in its ability to create the close, long-term relationships that are the hallmark of high-quality mentoring. The studies raise the possibility that school-based mentoring is simply better than nothing, offering a short-term academic and interpersonal boost for kids who need more of both, while not producing measurable impacts that extend beyond that immediate help.
In some ways, the federal Student Mentoring Program was set up to fail. The approach was promoted as one way to alleviate the dropout problem, the gang problem, the disconnected youth problem, etc., long before there was any evidence of whether it worked. The Department of Education threw money at grantees but provided little direction or oversight, and doesn’t appear to have grasped the severity of the program’s problems or to have made required changes.
BBBSA, on the other hand, announced in September that 20 of its agencies would participate in a pilot program to create an Enhanced School-Based Mentoring standard. One objective is to increase the average mentor-student match length to 15 months, BBBSA said.
BBBSA says its school-based programs reach 125,000 youths. Some of them are funded by the federal funding stream that Obama wants to eliminate in his 2010 budget.
Mentoring advocates and congressional supporters vow to overturn that plan.
The Student Mentoring Program awards competitive federal grants to schools and community- and faith-based organizations to provide mentors to at-risk children in grades four through eight at their schools. Since 2004, the federal government has made program grants totaling nearly $300 million to such organizations as the Los Angeles Unified School District, Catholic Charities of Denver, Big Brothers Big Sisters of Greater Miami and the Cincinnati Youth Collaborative.
School-based mentoring programs have proliferated for a variety of reasons: The use of school facilities keeps costs low, students are readily accessible during school hours, and school resources allow a focus on academic activities.
Rather than legislate any particular program practices for grantees, NCLB required only that funds be used for activities to improve interpersonal relationships, increase personal responsibility and community involvement, discourage risk behaviors and delinquency, reduce dropout rates and improve academic achievement, according to the Abt report, Impact Evaluation of the U.S. Department of Education’s Student Mentoring Program.
These design flaws, and the failure to address those flaws, ultimately resulted in weak evaluation outcomes, said Lawrence Bernstein, a principal associate at Abt and the project director of the evaluation study.
“The legislation … and the program guidance … said to focus on the academic and social needs of students,” he said. “Beyond that, there weren’t any prescriptive protocols for how people were going to conduct their mentoring activities, or how they were going to supervise their mentors, or how they were going to train their mentors.”
Of the 255 Student Mentoring Program grantees that received federal funding in 2004 or 2005, only 32 met the research team’s selection criteria regarding their ability to recruit an adequate number of students and their commitment to data collection requirements. Those 32 grantees recruited 2,573 students in two cohorts (Fall 2005 and Fall 2006) who were randomly assigned either to a mentoring services treatment group (1,272) or to a control group that did not receive services from a Student Mentoring Program grantee (1,301). Students assigned to the control group were free to seek out other mentoring services.
The mentors included both teenagers and adults.
The student sample was 53 percent female, 41 percent African-American and 31 percent Hispanic. The average age was 11. Nearly nine in 10 were eligible for free or reduced lunch, and 52 percent came from two-parent households. Sixty percent of the students were below proficiency in reading, math or both at the beginning of the school year, and 25 percent were at risk for delinquency as defined by self-reported behavior. Twenty-six percent of the students said they had mentors the previous school year.
What They Looked For
The evaluation measured 17 outcomes based on the requirements of the authorizing legislation, as represented by the impact of the programs in three “domains”:
• Interpersonal relationships with adults, personal responsibility and community involvement.
• School engagement (e.g., attendance, positive attitude toward school) and academic achievement.
• High-risk or delinquent behavior.
The researchers looked at self-reported measures, such as “future orientation,” school efficacy and delinquency, and school-reported measures, such as absences, grades, test scores and misconduct.
What They Found
The Student Mentoring Program did not lead to statistically significant impacts for students as a whole in any of the three outcome domains – interpersonal relationships/personal responsibility; school engagement/academic achievement; or high-risk/delinquent behavior.
Some subgroups did show small but statistically significant effects, with impacts differing by gender and by age group.
One possible reason for the paltry effects: The programs were slow to deliver services. “We assumed, and the [Department of Education] assumed, that programs would hit the ground running in September … that all of the students would be matched and would begin receiving mentoring from day one,” Bernstein said. “That was not the case.”
Among the key findings:
• One in 10 mentors reported not having undergone a reference or background check, despite that being a requirement of the grant.
• While 96 percent of the mentors reported receiving an average of 3.4 hours of pre-match training, only 41 percent said that ongoing, post-match training was available.
• Most mentors reported meeting one on one with their students for one hour on a weekly basis. However, the programs took, on average, 81 days from the start of the school year to match students and mentors, resulting in an average relationship length of 5.8 months.
• Seventeen percent of the students assigned to the treatment group never received a mentor, and about one-third of the control group ended up seeking and receiving mentoring services from a community-based program – conditions that “diluted” the impact of the treatment, but which are not atypical for evaluations in which participation can’t be mandated, and in which it would be unethical to prevent control group participants from seeking alternative services.
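The dilution the researchers describe can be illustrated with simple intent-to-treat arithmetic. The sketch below is not from the Abt report: the receipt rates come from the article’s reported figures, while the “true effect” of mentoring is a hypothetical placeholder.

```python
# Illustrative sketch (not from the Abt report): how non-receipt in the
# treatment group and crossover in the control group dilute a measured
# intent-to-treat (ITT) contrast. `true_effect` is a hypothetical value;
# the receipt rates are taken from the article's reported figures.

def itt_effect(true_effect, treat_receipt_rate, control_receipt_rate):
    """Measured group contrast when assignment differs from receipt.

    Simplifying assumption: mentoring's effect accrues only to students
    who actually receive it, and equally in either group.
    """
    return (treat_receipt_rate - control_receipt_rate) * true_effect

# 17 percent of the treatment group never received a mentor (83 percent did);
# about one-third of controls found mentoring elsewhere.
diluted = itt_effect(true_effect=1.0,
                     treat_receipt_rate=0.83,
                     control_receipt_rate=1 / 3)
print(round(diluted, 2))  # prints 0.5
```

Under these simplified assumptions, even a genuinely effective program would show only about half its true effect in the group comparison, which is one reason evaluators treat such dilution as expected rather than as a flaw of the study.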
Such site-level findings “should not have come as a surprise to the field; however, they did come as a surprise to the program office” at the Department of Education, Bernstein said.
“The average student received about 25 hours of mentoring. And you have to ask yourself [is] … ‘did the structure of the program, the way it was implemented – did it give the mentors a fair chance to do their work and for the programs to find effects?’ ” he said.
“I would probably say that that wasn’t the case.”