
Uh-Oh: What to Do When a Study Shows Nothing?

The White House Office of National Drug Control Policy (ONDCP) got some good news last month: Drug use is down for the fourth straight year among 12- to 17-year-olds, according to the National Survey on Drug Use and Health conducted by the U.S. Substance Abuse and Mental Health Services Administration. (See Report Roundup, October 2006.)

ONDCP points to the decline as proof of success for its $1.4 billion National Youth Anti-Drug Media Campaign, built around messages about various positive activities being “the anti-drug.” The agency hopes the downward drug use trends spark an upward trend in the campaign’s funding. President Bush has requested $120 million for 2007, up from $100 million this year.

But no one can say for sure what the campaign’s impact has been, despite two evaluations mandated by Congress.

Both of those analyses – the first by the research group Westat Inc. in 2005, and the second by the federal Government Accountability Office (GAO) in August – found no scientific basis to claim a cause-and-effect connection between the campaign and declines in youth drug use.

That has led to finger-pointing among ONDCP, Westat and GAO about how and why the evaluators came to that potentially expensive conclusion.

Some of the debate hinges on whether the campaign’s outcomes can be considered causal (meaning kids who saw a certain number of ads used drugs a certain percent less) or merely correlational (while the ads were airing, drug use by youth declined significantly). Some question whether a science-based evaluation is even capable of measuring the impact of a commercial-style advertising campaign.

There are also accusations of poor communication between agencies, bureaucratic foot-dragging and dubious decision-making regarding the design of the campaign and its evaluation mechanism.

Pressed to Innovate

“When the campaign started in 1998 … it was controversial in Congress,” noted ONDCP spokesman Tom Riley. “There was some sense of misgiving about spending so much on an advertising campaign.”

So Congress pressed ONDCP for “direct causal evidence” that the media campaign, as opposed to other societal factors, was reducing drug use, said Nancy Kingsbury, GAO’s managing director of applied research and methods.

ONDCP and its partner, the National Institute on Drug Abuse, gave Westat $42.7 million to design a mechanism to evaluate the campaign’s effects on youth drug use from 1999 to 2004.

According to the GAO report, Congress made clear that the Westat evaluation alone would determine the campaign’s effectiveness and its prospects for future funding.

“Congress wanted [the campaign] to have a very airtight causation mechanism they could point to as a measure of its success or failure,” Riley said.

Westat designed and annually administered the National Survey of Parents and Youth, which ran from 2000 to 2004.

The objective was to assess “changes in various outcomes within individuals over time in relation to their exposure (dose) of campaign messages,” according to the GAO report. Those outcomes included drug use and youth and parent attitudes about drug use – factors believed to influence drug behavior.

But was Westat’s goal realistic, given the subliminal nature of advertising campaigns? Other drug-use prevention marketers say no.

“They tried to isolate causality by looking at a cohort of kids and saying, ‘The reason you made the decisions you made was because of the advertising,’ ” said Steve Pasierb, CEO of The Partnership for a Drug-Free America. “Advertising doesn’t work that way.”

GAO’s Side

Westat’s interim evaluations were showing no evidence that the advertising had significant effects on marijuana initiation by youth, or on declines in use among existing marijuana users. And ONDCP was not providing those results to Congress, Kingsbury said.

As it became clearer that the evaluation wouldn’t show a causal link between the ads and changes in drug use, ONDCP “changed their strategy about how they were going to talk about it with the Congress,” Kingsbury said. ONDCP started pointing to “a number of surveys out there that showed declining trends in drug usage. And they offered them up as the evidence that [the campaign] had a positive effect.”

In 2004, the Senate Appropriations Committee directed GAO to review the integrity of Westat’s evaluation process.

The GAO report, “Contractor’s National Evaluation Did Not Find That the Youth Anti-Drug Media Campaign Was Effective in Reducing Youth Drug Use,” concluded that Westat’s findings were sound and recommended that Congress “consider limiting appropriations for the campaign … until ONDCP provides credible evidence” of its effectiveness in reducing youth drug use.

ONDCP’s Side

Changes have been made, and more are needed, ONDCP’s Riley said.

He said ONDCP had tried all along to use Westat’s evaluation to inform, adapt and refine the anti-drug campaign’s message. But the agency now believes that an advertising-industry-style evaluation of campaign effectiveness – the kind used by national advertisers such as Pepsi and McDonald’s – would be more appropriate.

“The more we heard from people in the advertising world … the more we started to have questions about [the Westat evaluation’s] utility,” Riley said.

According to Riley, one of the “clanging bells” that led to that conclusion was the “irrefutable evidence” that teen drug use was declining only in the exact group (13- to 17-year-olds) targeted by the ad campaign. “We kept asking both the contractors and GAO, ‘To what do you attribute these very striking drops in teen drug rates and increases in teen perceptions of harm?’ ” Riley said. “Each time they would shrug and say, ‘Well, that’s not our job to answer.’ Which is fair enough, except it is our job to answer.”

Riley said ONDCP now tests each ad with focus groups before it airs, and that data gleaned from those groups is being used to hone ONDCP’s message. So far, no outside group is evaluating the effects of those messages.

“We’re looking for an evaluation mechanism that provides that kind of feedback we can put into future advertising,” Riley said.

“We kind of thought that GAO was going to point toward those directions. They didn’t. I mean, that’s their business, but they kind of left everybody hanging.”

Kingsbury sees ONDCP’s reaction as typical of big programs that receive poor evaluations. “Their argument today is that they’ve made some more changes to try to make the campaign better,” she said.
