Deterrence is a difficult thing to prove. It requires showing that something didn't happen. America's firearm-prohibitionists always manage to frighten voters into abrogating their Second Amendment rights by pointing to the 35,000 yearly firearm-related deaths: homicides, suicides, and accidents - all of them lumped together deceptively into one convenient, neatly packaged figure. Until now, responsible American gun owners could not consistently point out a pattern of lives saved with the use of a gun. Instead, they were forced to cite anecdotal, non-scientific evidence.
In August 1996, this equation changed irrevocably. In a landmark study, Dr. John R. Lott Jr. and David Mustard used modern data-management methods to provide the scientific evidence for the deterrence provided by firearms in the hands of ordinary citizens in America. They gathered 15 years of U.S. crime and violence data for their investigation. What follows is a discussion of two of the more scientific criticisms of the Lott-Mustard research, and Lott's expansion of that research subsequent to his earlier study.
In the April 1998 issue of Economic Inquiry, William Bartley and Dr. Mark Cohen noted:
"We find that the deterrence results [of Lott's study] are robust enough to make them difficult to dismiss…"
They were commenting on the claims made earlier by Dr. John Lott and David Mustard about the deterrent effect of ordinary American citizens carrying concealed handguns. Bartley and Cohen continued:
"In particular, we find strong support for the hypothesis that right-to-carry laws are associated with a decrease in the trend in violent crime rates."
The Bartley-Cohen findings summarized their own technically sophisticated investigation of the massive body of data collected by Lott and Mustard. Lott made his data readily available to all who wanted it. Bartley and Cohen were two researchers who took Lott up on his offer. When asked what motivated him to become involved in this project, Cohen replied:
"John [Lott] asked if I would be interested in using his data set to do some [experimental] work on handguns….[This] was an interesting data set that would be very costly to replicate…[and] the topic is of obvious policy relevance."
The data that Lott and Mustard had collected amounted to all the U.S. crime statistics from 1977 through 1992 that they could get their hands on. Lott and Mustard's analysis was the first systematic study using nationwide data about the effects of right-to-carry laws on crime.
Right-to-carry (RTC) laws, known also as "shall-issue" or nondiscretionary concealed-carry laws, require authorities to issue a license for the carrying of concealed handguns to all qualified applicants ("qualified" depends on each state), without any necessity of demonstrating "need" on the part of the applicant.
Unlike physical sciences such as chemistry, some of the social sciences do not afford researchers the luxury of repeatable experiments. Bartley and Cohen pointed out that, because of this, economists must often depend on one-time naturally occurring experiments for their research. The passage of RTC laws - and how those laws affect behavior - is one such naturally occurring experiment.
Central to the scientific method is the comparison of "treatment" groups with "control" groups. The treatment group is subjected to some procedure. The control group has similar characteristics, but is not subjected to that procedure. Between 1977 and 1992, there were ten states that adopted RTC laws. These comprise the "treatment" group. Those states not adopting such laws during the same period comprise the "control" group.
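The treatment-versus-control comparison described above can be sketched in a few lines of Python. The crime-rate figures below are purely hypothetical stand-ins, not numbers from the Lott-Mustard data set; the sketch only shows how comparing the two groups' changes nets out a common trend:

```python
# Difference-in-differences sketch with invented violent-crime rates
# (incidents per 100,000 residents). All numbers are hypothetical.

# Average rates before and after the adoption year.
treatment_before = 620.0  # states that adopted RTC laws
treatment_after  = 560.0
control_before   = 600.0  # states that did not
control_after    = 590.0

# Change within each group over the same period.
treatment_change = treatment_after - treatment_before
control_change   = control_after - control_before

# Subtracting the control group's change removes the nationwide
# trend that both groups share, isolating the treatment effect.
did_estimate = treatment_change - control_change
print(did_estimate)
```

Here the treatment states' drop of 60 is not all credited to the law: 10 of it matches the control states' trend, leaving an estimated effect of -50.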
In order to correctly attribute observed changes to a particular treatment - the effect RTC laws have on violent crime, for example - a researcher must first rule out the likelihood that the changes were the result of other factors or the result of pure chance.
Measuring the effects of RTC laws is a complex task, because so many factors operate at the same time. Is a change in the crime rate really a result of more law-abiding citizens carrying firearms, or did it change because more arrests were made by the police? Was the change simply a result of the normal fluctuation in crime cycles? And how can one measure the effects of concealed-carry laws when they differ from state to state?
This is where the role of statistical testing comes in - it provides a means of accounting for many simultaneous events. One of the more common statistical techniques used for controlling the effects of many factors at the same time is called "regression analysis". A regression is a mathematical computation that helps make sense out of all the diverse pieces of raw data a researcher collects.
Regression analysis begins with the construction of a theoretical model for the phenomenon under investigation, such as how the crime rate might be affected by RTC laws. The researcher must then decide what factors should be included in the mathematical equation describing that model.
Because different theories about behavior might legitimately suggest differences in how a researcher handles the raw data, it is not possible to determine with certainty the selection of variables to be used for any given model. The possibility therefore always exists of unintentionally excluding a factor that is significant to the outcome of the study or including a factor which is not significant. Cohen elaborated:
"Depending on one's theory of how crime occurs and how it is deterred, different variables might be important. Thus we systematically removed each variable/set of variables to see how it affects the key variable of interest (the 'shall issue' variable). This gave us thousands of different estimates of the underlying relationship between shall-issue laws and crime."
For the 1997 Lott-Mustard study, Lott ran approximately 800 regressions of his data. (Without access to a computer, Lott estimated those calculations would have taken "many lifetimes" to perform.) Not satisfied that all reasonable combinations of the data had been exhausted by Lott, Bartley and Cohen expanded the parameters of Lott's model, and ran 20,000 regressions of their own!
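The specification search Cohen describes - systematically removing each variable or set of variables - amounts to enumerating every subset of the optional controls, with each subset defining one regression to run. The variable names below are hypothetical stand-ins, but the combinatorics show how a handful of debatable controls multiplies into thousands of estimates:

```python
from itertools import combinations

# Hypothetical control variables a researcher might include or
# drop; the key "shall-issue" variable stays in every model.
optional_controls = [
    "arrest_rate", "population_density", "income",
    "unemployment", "pct_young_males", "police_per_capita",
]

specifications = []
for k in range(len(optional_controls) + 1):
    for subset in combinations(optional_controls, k):
        specifications.append(("shall_issue",) + subset)

# Every subset of n optional controls is one specification,
# so there are 2**n regressions to run in all.
print(len(specifications))
```

With just six optional controls there are already 64 specifications; a dozen or so debatable variables and data groupings easily pushes the count into the thousands that Bartley and Cohen ran.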
While Bartley and Cohen's analysis failed to corroborate Lott's additional finding of criminals substituting property crimes for violent crimes, it fully supported Lott's conclusions about the deterrence of violent crime.
Two other researchers who took Lott up on his offer are Dan Black and Daniel Nagin. In re-examining the findings of Lott and Mustard, they used a different model relating RTC laws to crime and a different set of variables. But unlike Bartley and Cohen, Black and Nagin didn't use all the data in their analysis and very carefully hand-picked the data that they did use. Their conclusions, published in the January 1998 issue of the Journal of Legal Studies, were entirely different from those of Lott and Mustard.
Black and Nagin's paper was a blatant attempt to prove that RTC laws harm the public. In the face of overwhelming proof to the contrary, they came up empty-handed in finding support for the premise that harm resulted from armed law-abiding citizens. Nevertheless, that didn't prevent them from advising against premising public policy on Lott's findings.
In their first example of hand-picking data, Black and Nagin "corrected" for missing arrest rates in the Lott and Mustard data set by limiting their analysis to include only counties having populations greater than 100,000. Their justification for doing so was that larger counties had far fewer gaps in their data on arrest rates than smaller ones did.
However, Lott and Mustard had already addressed this in their study and concluded that this factor was not relevant to the findings. Where Lott and Mustard used data from more than 3,000 counties, Black and Nagin discarded almost 90% of that total, limiting their sample to only 393 counties. Even so, Black and Nagin stated that their findings were "reasonably similar to those reported [by] Lott and Mustard".
Among the criticisms Black and Nagin lodged against the Lott-Mustard model was that it assumed the impact on crime to be constant over time and assumed the impact of RTC laws to be the same across all 10 states that passed such laws between 1977 and 1992. This was simply untrue. Lott's rebuttal to the Black and Nagin charges appeared as the next article in the Journal of Legal Studies, wherein he remarked:
"The most surprising aspect of this whole exchange is that Black and Nagin's claims of what is or is not included in our article are so easily verified by the reader."
However, Black and Nagin's charge provided them a pretext for dropping data from the entire state of Florida in their analysis. Seizing upon the observation of "widely varying estimates" of crime rates between some states, they claimed that this was tantamount to "classic evidence" that the Lott-Mustard model was "misspecified".
They further charged that large variations in state-specific estimates of the impact of RTC laws were cause for "concern that the Lott and Mustard results could be driven by a single state…", and that one such state was Florida. In fact, Black and Nagin claimed that virtually all of the benefits attributable to RTC laws resulted from the Florida data alone and that dropping the Florida data caused the deterrent effects to vanish.
If, in fact, the contribution to the reduction in violent crime from one state really was so great that leaving it out of the analysis negated Lott's conclusion - as Black and Nagin charged - that would indeed constitute a potentially fatal flaw in Lott's methodology.
Lott himself recognized the consequences of this possibility, and acknowledged that "there is some justification for concern that results are being driven by a few unusual observations".
As soon as criticism about the Florida data began to surface, Lott ran an additional 1,000 regressions of his data, this time excluding Florida's contribution. Even doing that, Lott's results were not the same as what Black and Nagin had claimed to find.
What Lott found instead was that in 992 of those 1,000 regressions, the deterrent effects from RTC laws remained virtually unchanged. In the remaining eight regressions, the deterrent effects melted into statistical "insignificance" - a proportion no greater than chance alone would predict.
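The kind of robustness check at issue here - re-running the estimate with one state's data excluded - can be sketched as a leave-one-out loop. The per-state effect estimates below are invented for illustration, and the simple average stands in for a full re-estimation of the model:

```python
# Hypothetical per-state estimates of the change in violent crime
# (percent) attributed to an RTC law; invented for illustration.
state_effects = {
    "FL": -8.0, "GA": -4.5, "ID": -3.0, "ME": -2.5, "MS": -5.0,
    "MT": -3.5, "OR": -4.0, "PA": -6.0, "VA": -4.5, "WV": -3.0,
}

def pooled_effect(effects):
    """Average effect across states (a stand-in for re-estimating
    the full regression model on the reduced sample)."""
    return sum(effects.values()) / len(effects)

baseline = pooled_effect(state_effects)

# Drop each state in turn and check whether the deterrent effect
# (a negative pooled estimate) survives its exclusion.
for state in state_effects:
    rest = {s: e for s, e in state_effects.items() if s != state}
    print(f"without {state}: {pooled_effect(rest):+.2f}")
```

If the pooled estimate stays negative no matter which state is dropped, no single state is driving the result - which is the pattern Lott reported after excluding Florida.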
It is noteworthy that Bartley and Cohen, in their own analysis of the Lott-Mustard data, "paid particular attention to the concerns raised by Black and Nagin." In spite of that, the deterrence results they found were "robust enough to make them difficult to dismiss as unfounded or contrived," as Black and Nagin had attempted to do.
In concluding their attack on Lott and his research, Black and Nagin dropped a quiet bombshell:
"We find no statistically significant evidence that RTC laws have an impact on any of the crime rates."
In fact, try as they might - no matter how they twisted Lott's data - they could not demonstrate that harm resulted from RTC laws.
But in his reply to Black and Nagin, Lott pointed out the real significance of their claim: between their model and his original model of concealed firearms in America, the spectrum of findings runs from a deterrent effect on violent crime to no effect at all. And, as Lott further pointed out:
"…even if this were the state of the current debate, it represents a big change in the bounds of the debate, where many academics have argued that more guns lead to more violence."
The statistical shenanigans of Black and Nagin afforded them the opportunity to claim that "[Lott and Mustard's] results cannot be used responsibly to formulate public policy." Once published, that claim can - and will - be cited by other researchers, politicians, and journalists, intentionally or unknowingly, to reflect adversely on Lott's legitimate conclusions. And then - as Black and Nagin knew so well - it would be repeated as gospel, ad nauseam, without critical analysis of its basis.
But perhaps the firearm-prohibitionists have finally outsmarted themselves. Black and Nagin's pronouncement amounts to nothing short of the death-knell for the anti-gun lobby's fear-mongering "Dodge City" prophecy. For, by their own admission, there are no bloodbaths washing the streets of America from armed law-abiding citizens!
One question remains unanswered: Why are Black and Nagin so opposed to citizens having the means to defend themselves? Are they and their ilk so irrationally fearful of guns that they just want them out of civilian hands, period? Surely they have no reason to fear firearms in the hands of the law-abiding - they said exactly that, themselves.
If, in their own words, such laws produced no change in crime - one way or the other - then why all the opposition to letting ordinary law-abiding citizens carry concealed firearms? Don't concealed-carry licenses put into place the machinery for government to draw up master "lists" of potential "troublemakers"?
That fact was amply demonstrated by New York City's former mayor David Dinkins, who sent S.W.A.T. teams after owners of previously legal and then banned firearms. One would think that incentive enough for the firearm-prohibitionists. Could it be they fear, even more, that the greater the number of gun-owners, the greater will be the realization of the lies told to them about firearms? And the greater the difficulty in achieving their "final solution" - total civilian disarmament in America?
Dr. Joanne D. Eisen is engaged in the private practice of Family Dentistry. She is President, Association of Dentists for Accuracy in Scientific Media (ADASM), a national organization of dentists concerned with preserving the integrity of the professional dental literature against the politicization that has corrupted America's medical literature.
Dr. Paul Gallant is engaged in the private practice of Family Optometry, Wesley Hills, NY. He is Chairman, Committee for Law-Abiding Gun-Owners, Rockland (LAGR), a 2nd Amendment grassroots group, based in Rockland County, NY.
The authors may be reached at LAGR, P.O. Box 354, Thiells, NY 10984-0354.
Reproduced with permission from Guns & Ammo Magazine, December 1998.
©1998 by Petersen Publishing Co.; All rights reserved.