Anti-Gun Programs: Where’s the Bang?


Recent reports about an alleged surge in youth violence have officials in cities throughout the country looking for ways to clamp down on gun use by juveniles. That mission faces two big obstacles.

First, the reported youth crime surge might be bogus. (See Mike Males, September 2006.)

Second, despite years of intense efforts in some cities to combat youth gun violence, and a push by the federal government to knit together the successful models into a research-driven strategy, there remains scant scientific evidence about what, if anything, works.

Consider Boston, where everyone seems to be wondering what to do about an increase in youth violence. The Boston Globe reported this summer that the number of juveniles arrested for weapons violations doubled from 2004 to 2005, and overall juvenile arrests for violent crimes swelled 14 percent.

For some Bostonians, the solution lies in reviving the famous Operation Ceasefire – the country’s first major partnership between law enforcement agencies and academia (Harvard University) to focus on reducing youth violence through research, agency collaboration, strategic communication and criminal sanctions. Implemented in 1996, the project was credited with reducing youth homicides – which averaged 44 per year in the early 1990s – by more than 60 percent.

Ceasefire was so widely hailed that in 1998 the U.S. Department of Justice launched a demonstration project, the Strategic Approach to Community Safety Initiative, to test whether the approach could be replicated in 10 other cities. Three years later, based on the evaluation of that initiative, the department launched Project Safe Neighborhoods – a $1.5 billion program administered through the country’s 94 federal judicial districts that claims a continued reliance on and investment in the kind of data-driven gun violence prevention strategies first used by Ceasefire.

That’s a big show of faith in evaluation – particularly when there is no solid scientific data regarding youth gun violence to inform policy decisions.

Gaps in the Science Base

In 2004, a National Research Council (NRC) committee charged with reviewing research on firearms violence looked at more than 80 programs, including Ceasefire, and found “almost no evidence that violence-prevention programs intended to steer children away from guns have had any effects on their behavior, knowledge, or attitudes regarding firearms.”

That doesn’t mean nothing worked. But “the research on evaluating gun-prevention programs was so weak it didn’t meet a standard where you could use it to conclude that any particular program had been successful or not,” says Charles Wellford, the NRC committee’s chairman and a professor of criminology and criminal justice at the University of Maryland.

The committee concluded that “although there have been some well-designed studies,” policy questions about gun violence “cannot be answered definitively because of large gaps in the existing science base.”

Filling the Gap?

The good news: The committee said, “We do know what kind of data and research is needed to fill those gaps and, in turn, inform policy debates in a more meaningful way.” It called for:

* Better data collection on gun ownership and use; individuals’ encounters with violence; the effect of right-to-carry laws; gun market statistics; gun-safety technologies; and how policing interventions and tougher sentencing policies affect firearms violence.

* The continued development of the FBI’s National Incident-Based Reporting System (NIBRS) and the U.S. Centers for Disease Control and Prevention’s (CDC) National Violent Death Reporting System.

* Assessing the potential of longitudinal, national surveys of youth – such as Monitoring the Future and the Youth Risk Behavior Survey – to provide useful data on gun use and access.

Those surveys don’t provide such data now.

Two years and hundreds of lives later, have data collection and research on gun violence evolved to become more useful for policymakers?

“I haven’t seen anything since the report that would qualify as credible evidence that a particular program works,” Wellford says. “When you’re asking the question, ‘Does a certain intervention that seeks to prevent kids from picking up weapons, or to avoid weapons, work?’ – those approaches are so broad they don’t allow us to assess a specific program’s effect.”

Wellford also says little progress has been made on data collection by law enforcement and public health officials. By 2005, nearly 20 years after its inception, the FBI’s NIBRS had been fully implemented in only 29 states, according to the Justice Research and Statistics Association. Likewise, by 2004, the National Violent Death Reporting System was up and running in only 17 states, according to the CDC.

As for improving youth survey research to understand weapons possession and use, Wellford says, “I haven’t seen any developments at all.”

Use What We Have

To some, that doesn’t mean the data are useless. “The NRC report says that we should be collecting better data, and I agree with that completely,” says Jens Ludwig, associate professor of public policy at Georgetown University in Washington.

But to say that the current data and evidence offer little for public policy purposes “is too strong,” he says.

“Mayors and governors and Congress people and police chiefs and youth corrections officers – they have to make decisions right now, and they’re going to make those decisions. So is using this imperfect evidence better than ignoring the evidence altogether? I would say yes it is.”

In a recent paper, “Aiming for Evidence-Based Gun Policy,” Ludwig and colleague Philip Cook of Duke University used quantitative and qualitative data to make several policy recommendations that could have immediate impact. Those recommendations include an improved gun-registration system, intensive police patrols directed at illicit gun-carrying in high-violence neighborhoods and rewards for information leading to the arrest of people carrying or using guns.

How did they arrive at such conclusions? “With the usual scientific standard, we think of qualitative evidence” – such as interviews, the effects of policy changes, case studies and observation – “as being suggestive, but not definitive proof of anything,” Ludwig says. “But in a policy sense, one could look at qualitative evidence and say, ‘That’s pretty promising.’ ”

While some academicians might say that’s not good enough to change public policy, Ludwig suggests that reality dictates otherwise.

“If it was my family living in a neighborhood that was experiencing a great deal of gun-involved gang violence, I would want my tax dollars to go to something like a gang-oriented Ceasefire-style program,” he says. “Even if, as a scientist, I would say it’s not quite as solid as we want.”