What is a juvenile recidivism rate?
In Maine, it’s a measure of how many juveniles are adjudicated again in juvenile court or convicted in adult court. In Missouri, it measures only those juveniles who are re-incarcerated, and does not count any juveniles who reoffend while they are still on parole.
“You’ve got states that do only first-time offenders, or only juvenile corrections,” said Kim Godfrey, deputy director of the Council of Juvenile Correctional Administrators (CJCA), on different states’ approaches to the measure. “That is what it is, but it’s not the entire picture.”
No matter how it’s calculated, juvenile justice advocates and state agency leaders cringe at the idea that their services and youths are judged by such an inherently negative standard. But in a field where evidence of effectiveness is increasingly asked for, recidivism figures are frequently requested by media, legislators and governors’ offices for three reasons:
1) It reflects what the public most wants to know: whether a juvenile program or service is preventing juvenile crime.
2) If a youth has not been arrested again, it is generally an indication that he or she is making better decisions (although some just get better, or luckier, at not getting caught).
3) It is easy to calculate compared to other things that would indicate success, such as educational achievement, employment or family life. Recidivism can be calculated from specifically recorded events like arrests or convictions; other indicators often require intense tracking or self-reporting.
A cadre of JJ leaders agreed in 2005 that if recidivism has been fated to become a primary gauge of the system, it should at least become a better-developed gauge. It is that meeting that spurred an effort now led by CJCA to establish a more standard and thoughtful approach to gauging recidivism.
Recidivism has existed as a correctional calculation for decades, but only recently has a majority of states begun to track it formally in connection with juvenile justice. At the moment, most track the most basic population: youth who end up in locked facilities. A look at the numbers, based on state reports in CJCA’s 2009 Yearbook:
–40 states (including Puerto Rico and D.C.) track it as of 2007, up from 26 in 2005.
–90 percent of states that track recidivism use the measurement for evaluation and assessment; 80 percent use it for agency planning.
–20 states measure recidivism only for juveniles leaving secure care; only four states report measuring recidivism based on the program or facility a juvenile attended.
–No states attempt to calculate recidivism rates grouped by specific charges (i.e., how many juveniles adjudicated for armed robbery reoffended).
To explain why many think recidivism needs to evolve as a measurement, allow us to analogize between sports and juvenile justice.
One of the best things about sports stats is that they are set up to facilitate intelligent debates. There are stats that allow for easy comparisons of a player on the St. Louis Cardinals and the Kansas City Royals.
Those stats are kept by one national league, and everyone uses the same calculations to arrive at them. The slugging percentage of a hitter on the Cardinals is calculated the exact same way as that of a hitter on the Royals. And what those stats do not tell you about a player or team can mostly be sussed out by the naked eye, because there are cameras and spectators at every game.
When evaluating juvenile justice programs? Not the case. There are no fans, no cameras, and rarely is there a reporter around to observe regularly. But that doesn’t stop a governor or state legislators from wanting to know if that state’s JJ agency is performing well. And the first question, said Godfrey, is always about recidivism.
“The next question is, ‘How do we [compare] to the state next door, or a state with a similar population?’ ” said Maine’s Barry Stoodley, one of the longest-tenured state juvenile justice directors in the nation.
So the stats matter, because they generally stand alone with little visual accompaniment. That makes the lack of something like a National Juvenile Justice League, making everyone calculate statistics the same way, a problem.
States’ recidivism measurements are all over the place. Some states count juveniles who are on parole, others start measuring once parole is complete; some states track juveniles for two years, some three. About half the states that track recidivism do not count a former juvenile who was later arrested and sent to the adult system, which is pretty baffling.
Appropriately, it is CJCA that undertook a federally funded project to help states move toward a national standard for measuring juvenile recidivism. Money for the project was tacked on to CJCA’s usual grant from OJJDP to help juvenile facilities meet performance-based standards (PbS).
State directors “do want a national consensus on effective ways to measure it,” Godfrey said. But “unless studies use the same population, definition and time frame,” nobody will fully rely on recidivism as a comparative measuring stick.
The goal was to “try and devise a way of standardized measurement, and standardized reporting that transcend a variety of organizational structures,” said Stoodley, who leads the CJCA committee on recidivism. This is crucial if there is to be any hope of one day establishing a national standard, he said, because the responsibilities assigned to state agencies differ greatly. Some handle everything from pretrial diversion to commitment, some do probation, and some deal only with adjudicated youth.
Godfrey and the committee have developed a three-part model for developing a recidivism calculation. The first phase involves coupling new adjudications with at least one other point-in-time measurement (arrest, for example) to track recidivism for offenders who have been released from a secure facility. CJCA recommends a 24-month time frame.
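As a rough sketch of what that first-phase calculation could look like in practice (the helper names, the event lists and the day-count approximation of 24 months below are our own assumptions for illustration, not CJCA’s specification):

```python
from datetime import date, timedelta

# Approximation of the 24-month follow-up window CJCA recommends.
FOLLOW_UP = timedelta(days=730)

def is_recidivist(release: date, new_events: list[date]) -> bool:
    """True if any new adjudication or arrest falls within the
    follow-up window after release from a secure facility."""
    return any(release < e <= release + FOLLOW_UP for e in new_events)

def recidivism_rate(cohort: list[tuple[date, list[date]]]) -> float:
    """Share of released youth with at least one qualifying event
    inside the window. Each cohort entry pairs a release date with
    that youth's post-release event dates."""
    hits = sum(is_recidivist(rel, events) for rel, events in cohort)
    return hits / len(cohort)
```

The point of the sketch is that the rate is only meaningful once three things are pinned down the same way everywhere: which events count (the “coupled” measurements), which youths are in the cohort, and how long the window runs.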
Once a sound measurement is in place for securely committed offenders, Stoodley said, the second part of the strategy is to extend recidivism tracking to probation populations and, ideally, youth who are either diverted or sentenced to participation in community programs.
The third aspect would be to develop a scoring mechanism, so that recidivism measures would also weigh what Stoodley calls criminogenic risk factors for each youth. This, Stoodley said, helps account for the fact that some jurisdictions simply have youth who are more likely to reoffend than other jurisdictions.
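One hypothetical way such a scoring mechanism might work (this is purely illustrative, not the committee’s actual formula) is to compare a jurisdiction’s observed recidivism against what its youths’ risk scores would predict:

```python
def risk_adjusted_index(outcomes: list[bool], risk_scores: list[float]) -> float:
    """Observed recidivists divided by the number the risk profile predicts.

    Values below 1.0 suggest a jurisdiction outperformed its youths'
    criminogenic risk profile; values above 1.0 suggest it underperformed.
    """
    observed = sum(outcomes)        # youths who actually reoffended
    expected = sum(risk_scores)     # reoffenses the risk scores predict
    return observed / expected
```

Under a scheme like this, a jurisdiction full of high-risk youth would no longer look worse than a low-risk jurisdiction simply because of whom it serves.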
A forthcoming white paper by Temple University Professor Phil Harris will discuss the CJCA committee’s findings on recidivism measurements and its suggested models.
No amount of improvement can change the fact that recidivism is probably already relied upon too much, and may end up becoming even more powerful if made more precise. It’s not by any means a perfect measurement of effectiveness.
Lots of juveniles will make a bunch of stupid decisions before they make the right ones. It could very well be that Program X gets such an offender after, say, his fourth offense, has a profound effect on him, but he gets picked up two more times before he gets it together for good.
Maybe that youth goes on to be a successful adult, a good father, and he personally feels that Program X got him on the right path. But statistical history will show that the program failed with him: he completed the program and then reoffended.
Longtime JJ researcher Jeff Butts, now the vice president of Public/Private Ventures, thinks recidivism is an imperfect indicator of effective juvenile justice services because it’s prone to the effects of bureaucracy and procedure. He e-mailed us a hypothetical example in which two jurisdictions would use a standardized recidivism measurement, but the results probably would not tell the true story. Butts’ thoughts:
What if recidivism were defined as “a new arrest within 12 months of the previous court disposition”? And what if Community A makes heavy use of informal diversion to keep kids out of court while Community B sends nearly every case to court? In Community A, only the most serious cases would have a previous court disposition. Their denominator would be smaller, and thus they would have a higher recidivism rate than Community B. Community B could correctly claim a lower recidivism rate, but their “success” would be a function of their case flow procedures and not the effectiveness of their rehabilitation efforts.
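Butts’ denominator effect is easy to see with toy numbers (the counts below are invented for illustration):

```python
def recid_rate(n_dispositions: int, n_rearrested: int) -> float:
    """Recidivism rate under the 'new arrest within 12 months of the
    previous court disposition' definition."""
    return n_rearrested / n_dispositions

# Community A diverts heavily: only 20 serious cases get a court
# disposition, and 10 of those youths are rearrested.
rate_a = recid_rate(20, 10)    # 0.50

# Community B sends nearly everything to court: 90 dispositions,
# 27 rearrests.
rate_b = recid_rate(90, 27)    # 0.30

# B's "lower" rate reflects case flow, not better rehabilitation.
```

The identical formula, applied identically, still rewards Community B for a procedural choice rather than for outcomes.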
Plenty of people on the recidivism committee agree with Butts on the flaws of measuring recidivism, which is why CJCA will also hand over a white paper by Butts on some positive youth development (PYD) outcomes that should also become standard measurements for juvenile delinquency program success.
The published work of Harris and Butts will essentially be the end product of the first OJJDP grant, which was made in 2007 under Bush-era administrator J. Robert Flores. CJCA will make a final round of edits to Harris’ paper before turning it over to OJJDP; Butts’ PYD piece will follow sometime next year.
The hope of the committee is that the Obama Justice Department will want to take the next step on the project. The next phase would involve finding several states or jurisdictions – “ideally we’d find two or three that were quite different,” Stoodley said – to test the model measurements.
“We are definitely hoping OJJDP will be interested and engaged in going forward,” said Stoodley.