Evidence-based practices and programs (EBPs) are all the buzz these days: The phrase seems to be popping up with increasing frequency in conference workshops, industry publications and the lexicon of professional associations and government agencies.
Based on the widely accepted view that the application of rigorous scientific research improves quality of care, researchers, policy-makers and funders are urging the youth work field to embrace a select group of EBPs. Oregon, for instance, passed a law that requires more spending on EBPs for mental health and substance abuse treatment, increasing from 25 percent of all state funding for such services last year to 75 percent by 2011.
Some service providers, however, are scratching their heads over how, when and even if they should implement such practices. They especially wonder whether EBPs laid out in manuals are flexible enough to be adapted for their programs.
“Making research findings and their implications easily accessible to practitioners is one big hurdle that, thankfully, is being addressed by several organizations, but we still have a long way to go,” Sue Steib, director of research to practice for the Child Welfare League of America (CWLA), wrote in an e-mail.
The issue arose repeatedly at CWLA’s annual national conference in Washington in March, where several practitioners and researchers hosted discussions that offered differing views on implementing EBPs.
Catching On
Although definitions vary, EBPs are generally defined as interventions that consistently show scientific evidence of improved outcomes. A growing number of lists and databases has emerged to help practitioners find such practices, including the U.S. Substance Abuse and Mental Health Services Administration’s National Registry of Effective Programs and the University of Colorado’s Blueprints for Violence Prevention.
States, meanwhile, “are actively promoting the development of new EBPs, while playing a major role in delivering established EBPs,” says the website of the National Association of State Mental Health Program Directors Research Institute. The site provides information about EBP initiatives in all but four states.
But where do these lists come from?
The rigorous studies that produce such evidence tend to share certain hallmarks: randomized assignment of subjects to a treatment group and a control group, better outcomes in the treatment group than in the control group, a manual for use in replication, and independent replication of the outcomes by at least three other researchers.
Given the rigor of those incubatory conditions, EBPs almost always emerge from well-funded academic or other research institutions – not from the youth services field. That research-to-practice gap creates a void that service providers, under pressure to implement EBPs, often fill with their own speculation and skepticism.
View From the Ground
Rosa Warder, program manager for direct services in Alameda County at United Advocates for Children of California (UACC), raised several concerns in her presentation at the CWLA conference, “Controversies in the Evidence-Based Practices Movement.” Warder said that many family organizations believe that “the EBP train has left the station,” despite the lack of consensus on the definition of EBPs or at what point they are ready to be deployed.
Warder emphasized that definitions of EBP vary according to the communities in which they are used. She also said definitions usually exclude experiential and observational evidence, and don’t address the real-world concerns of families and youth who need mental health or substance abuse treatment.
She noted that the family advocacy movement has successfully promoted treatment for youth that is family-driven, strength-based, individualized and focused on the whole child and family – elements that appear, at least on the surface, to be at odds with some of the standardized treatment protocols recommended by EBPs.
Steib, the co-presenter at a CWLA conference session titled “Using Research and Data to Understand and Improve Child Welfare Practice,” agrees. “EBP policies need to recognize that most consumers of child welfare services receive individual and family casework,” she wrote.
“While many of the more rigorously researched models … can be beneficial to them, we also need to remember that there are important casework practices that are also evidence-based but have never been tested as a consolidated model,” Steib continued. “We have to be careful … to approach EBP practice in a reasonable way that recognizes the practical constraints under which child welfare service providers work every day.”
View From the Ivory Tower
The concerns of Steib and Warder might seem a little wishy-washy to some EBP purists.
In a CWLA session called “Linking Needs to Evidence-Based Practice,” presenter Kay Hodges, a psychology professor at Eastern Michigan University, demonstrated the affinity of clinical researchers for standardized approaches.
Hodges, who developed and markets a mental health rating scale for youth called the Child and Adolescent Functional Assessment Scale, explained that the EBP movement grew out of a broader medical focus on “using evidence from diagnostic assessment technologies and clinical research to inform clinical practice.” She said that focus de-emphasizes the “typical reliance on unsystematic clinical experience.”
Hodges went on to match some of the most highly regarded EBPs with the psychosocial disorders they treat best: cognitive behavioral therapy for depression and trauma; Parent Management Training for attention deficit/hyperactivity disorder; and Multisystemic Therapy, Functional Family Therapy and Multidimensional Treatment Foster Care for conduct disorder. She then worked through how the components of those EBPs address the needs of sexually abused, depressed and delinquent youth when applied with fidelity.
Although Hodges pointed out that EBPs were “not practice parameters,” her 67 slides of “stress spirals,” reward point tracking charts and subscales of the Juvenile Inventory for Functioning seemed to leave little room for flexibility.
Common Ground
Despite the gap between researchers and practitioners, many programs successfully implement EBPs. According to research conducted by Dee Roth, co-principal investigator of the Ohio Department of Mental Health’s groundbreaking Innovation Diffusion and Adoption Research Project, the biggest factor for those considering the adoption of an EBP is “their perception of the level of risk they’re taking on.”
Roth says the study found that agency leaders going through this decision most often discuss whether there is “good science” behind the practice being considered, whether their agencies can manage the risks involved in the practice, and the opinions of peers who have implemented the same practices.
“Some of the literature says that the character of an agency that [adopts an EBP] is that it is willing to take risks,” Roth said. “What we found is that the ones willing to do this are those who think the risks are less.”
Applicability and flexibility are issues that often arise in discussions about EBPs, Steib says.
“In some applications, fidelity to an exact model is needed, but in many others there is room for adaptation,” she wrote. “The important thing is that those who are implementing a model or practice know that they are adapting, are clear about what they are doing, and are conducting process [evaluation] as well as outcome evaluation. Such adaptations build the knowledge base if they are evaluated and reported.”
No matter how much agencies chafe at the idea of implementing EBPs, they can’t dismiss the value of evaluation.
“We know that agencies in Ohio and everywhere else are doing a lot of things that no evidence has ever shown works,” Roth said. When researchers ask those agencies, “How do you even know what you’re doing works now?” the answer, she said, is, “You don’t.”
Despite some apprehension, Steib wrote, “We need to be guided by research.”