Juvenile Drug Courts

It’s been 10 years since the nation’s fledgling juvenile drug courts began receiving federal funds to rehabilitate teen drug abusers. But despite federal investments of more than $1 billion over the past decade and an explosion in the number of such courts to nearly 300 in 2003, little rigorous research has been conducted on their effectiveness.

In a new report, Juvenile Drug Courts and Teen Substance Abuse (Urban Institute Press, 2004), Urban Institute researchers Jeffrey A. Butts and John Roman examine the history, mission, operations and evaluation of juvenile drug courts to find out what’s known about their ability to reduce drug use and recidivism.

In the mid-1990s – when arrest rates for juvenile drug violations more than doubled – state and local jurisdictions began creating juvenile drug courts (JDCs), based largely on the anecdotal success of adult drug courts established in the late 1980s. Both adult and juvenile drug courts combine treatment with close supervision and the leverage of judicial authority to change the behavior of users.

Butts and Roman suggest that when federal drug court funds became available to juvenile courts in 1995, some jurisdictions saw an opportunity to provide increased resources to troubled youth caught up in a juvenile justice system that had become more punitive. “This was an earnest endeavor to get services to kids,” Roman says.

Since then, however, juvenile drug courts have evolved with widely varying characteristics, largely through uncontrolled innovations. “No one encourages them to think theoretically,” Butts says. “They careen back and forth between ideas.” At the same time, the programs have been scrutinized mainly by a “haphazard program of inadequate and potentially redundant evaluations” that have failed to clearly define their components, document their impact or prove their success, the researchers write. “It doesn’t take much to evaluate a program” to the satisfaction of funders, Butts says. “JDCs pretty much just have to check off a box that says, ‘We’ve been evaluated.’ ”

But in a departure from the usual researchers’ cry for more research, Butts and Roman assert that less will produce more. They argue that the current system requiring JDCs to arrange for an evaluation at the conclusion of each three- to four-year federal funding cycle produces limited data and provides little evidence on whether and how the courts work. Like many youth programs, the courts are asked to justify their continuation by producing proof of positive results. For JDCs, the renewal of their federal funding depends upon it.

“That’s when the professor from the local university is called in and presented with a shoebox full of data,” Butts says. “The easiest thing to do is to compare recidivism among youth who graduated from the drug court program versus those who dropped out or terminated from the program. That’s why, when you look at these evaluations, you see something like ‘90 percent of program graduates were arrest-free 12 months later.’ Of course they are, because you’ve taken all of the likely recidivists out of the equation.”
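The selection problem Butts describes can be seen in a toy simulation. The sketch below is not from the report; the numbers and the dropout rule are invented purely for illustration. It gives each youth a latent reoffending risk, lets higher-risk youth drop out before graduating, and then compares the “arrest-free” rate among graduates with the rate for the full cohort. The graduates-only figure looks much better even though the program has no effect at all in the model.

```python
import random

# Hypothetical illustration of selection bias in graduates-only evaluations.
# All numbers are invented; the "program" here does nothing.
random.seed(1)

N = 1000
cohort = []
for _ in range(N):
    risk = random.random()                # latent likelihood of reoffending
    graduated = risk < 0.5                # higher-risk youth tend to drop out
    rearrested = random.random() < risk   # outcome driven by the same risk
    cohort.append((graduated, rearrested))

grads = [c for c in cohort if c[0]]
grad_success = sum(1 for g in grads if not g[1]) / len(grads)
all_success = sum(1 for c in cohort if not c[1]) / len(cohort)

print(f"Arrest-free among graduates only: {grad_success:.0%}")  # inflated
print(f"Arrest-free among full cohort:    {all_success:.0%}")   # more modest
```

Because dropping out and reoffending are driven by the same underlying risk, measuring only graduates quietly removes the likeliest recidivists, which is exactly the distortion Butts points to.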

The authors describe this type of impact evaluation as based on a “black box” model, in which drug use and arrest rates are measured before, during and after the offender passes through the “black box” of drug court. Such limited evaluations create myriad problems. Most lack a basic cause-and-effect hypothesis of how involvement in drug court might affect behavior. Most are designed to be nonexperimental or weakly quasi-experimental – that is, they don’t randomly assign participants to treatment and control groups, thus limiting their ability to minimize the effects of unobserved factors (like the characteristics of participants) on outcomes. Finally, “black box” evaluations fail to break the processes of the court into components that can be observed and studied individually.

Unfortunately, as many youth workers well know, the thoroughness of evaluations is often sacrificed to accommodate time and financial constraints. In an effort to fill the void of scientifically based research, several practitioner and research groups, including the National Drug Court Institute and the RAND Corp., have compiled best-practice guidelines that list operational features considered by some as key to the success of JDCs. These may help courts that are starting from scratch, but don’t provide enough evidence upon which to build a conceptual framework for effective programs.

In their study, Butts and Roman suggest a framework to direct researchers’ attention to the important ingredients of program operations, encourage the formulation and testing of viable hypotheses, and suggest useful measurement approaches and data collection techniques.

They conclude that federal money could best be spent on targeted efforts to establish evidence-based operational principles for JDCs through the use of these or similar frameworks, and that a system of independent accreditation based on the implementation of those principles might become the basis for funding renewal.

“The danger in that,” Butts says, “is that people will buy into the concept of accreditation instead of evaluation, or that the accreditation process will become politicized.”

Steven Belenko, a senior scientist at the Treatment Research Institute at the University of Pennsylvania and an advisory committee member of the National Evaluation of Juvenile Drug Courts, says most JDCs aren’t ready for accreditation. “Many have 10 to 15 clients. Almost none have 100 clients, which is the minimum you need to do a decent evaluation,” he says. Belenko agrees that the next step is to fund a small number of JDC programs to be tested by a rigorous experimental design.
