How many times each year do civilians use firearms defensively? The answers provided to this seemingly simple question have been confusing. Consider the findings from two of the most widely cited studies in the field: McDowall et al. (1998), using data from the 1992 and 1994 waves of the National Crime Victimization Survey (NCVS), found roughly 116,000 defensive gun uses per year, and Kleck and Gertz (1995), using data from the 1993 National Self-Defense Survey (NSDS), found around 2.5 million defensive gun uses each year.

Many other surveys provide information on the prevalence of defensive gun use. Using the original National Crime Survey, McDowall and Wiersema (1994) estimate 64,615 annual incidents from 1987 to 1990. At least 19 other surveys have resulted in estimated numbers of defensive gun uses that are similar (i.e., statistically indistinguishable) to the results found by Kleck and Gertz. No other surveys have found numbers consistent with the NCVS (other gun use surveys are reviewed in Kleck and Gertz, 1995, and Kleck, 2001a).

To characterize the wide gap in the estimated prevalence rate, it is sufficient to consider the estimates derived from the NSDS and recent waves of the NCVS. These two estimates differ by a factor of nearly 22. While strikingly large, the difference in the estimated prevalence rate should, in fact, come as no surprise. As revealed in Table 5-1, the two surveys are markedly different, covering different populations, interviewing respondents by different methods, using different recall periods, and asking different questions.
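As a quick sanity check, the factor-of-22 figure follows directly from the two point estimates reported in Table 5-1; a minimal sketch of the arithmetic:

```python
# Annual defensive gun use point estimates reported in Table 5-1.
ncvs_estimate = 116_398    # NCVS, 1993-1994 data from the redesigned survey
nsds_estimate = 2_549_862  # NSDS, 1993

# The two estimates differ by a factor of nearly 22.
print(round(nsds_estimate / ncvs_estimate, 1))  # → 21.9
```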

The NCVS is an ongoing annual survey conducted by the federal government (i.e., the Census Bureau on behalf of the Department of Justice) that relies on a complex rotating panel design to survey a representative sample of nearly 100,000 noninstitutionalized individuals (age 12 and over) from 50,000 households. To elicit defensive gun use incidents, the survey first assesses whether the respondent has been the victim of particular classes of crime—rape, assault, burglary, personal and household larceny, or car theft—during the past six months, and then asks several follow-up questions about self-defense. In particular, victims are asked:

    Was there anything you did or tried to do about the incident while it was going on?

    Did you do anything (else) with the idea of protecting yourself or your property while the incident was going on?

Responses to these follow-up probes are coded into a number of categories, including whether the respondent attacked or threatened the offender with a gun.

The NSDS was a one-shot cross-sectional phone survey conducted by a private polling firm, Research Network, of a representative sample of nearly 5,000 adults (age 18 and over). The survey, which focused on firearms use, first assessed whether the respondent used a gun defensively during the past five years, and then asked details about the incident. In particular, respondents were first asked:

TABLE 5-1 Comparing Sampling Design of the NCVS and NSDS

Coverage
  NCVS: Noninstitutionalized U.S. population, age 12 and over, each year since 1973; defensive gun use questions asked of victims
  NSDS: U.S. population, age 18 and over, with phones, 1993; DGU questions asked of all respondents

Sample design
  NCVS: Rotating panel design; stratified, multistage cluster sample of housing units; telephone and personal contacts
  NSDS: One-shot cross-section; stratified by region (South and West oversampled); random digit dialing

Sample size
  NCVS: Approximately 50,000 households and 100,000 individuals
  NSDS: 4,997 individuals

Response rate
  NCVS: Approximately 95% of eligible housing units
  NSDS: 61% of eligible numbers answered by human beings

Sponsorship
  NCVS: U.S. Census Bureau for U.S. Bureau of Justice Statistics
  NSDS: Research Network

Estimated defensive gun use
  NCVS: 116,398 annual incidents, using 1993-1994 data from the redesigned survey
  NSDS: 2,549,862 annual incidents

SOURCE: McDowall et al. (2000: Table 1). Used with kind permission of Springer Science and Business Media.

    Within the past five years, have you yourself or another member of your household used a handgun, even if it was not fired, for self-protection or for the protection of property at home, work, or elsewhere? Please do not include military service, police work, or work as a security guard.

If the answer was yes, they were then asked:

    Did this incident [any of these incidents] happen in the past 12 months?

The discrepancies in the prevalence estimates of defensive gun use can and should be better understood. Remarkably little scientific research has been conducted to evaluate the validity of DGU estimates, yet the possible explanations are relatively easy to categorize and study. The two surveys are either (1) measuring something different, (2) affected by response problems in different ways, or (3) both. Statistical variability, usually reflected by the standard error or confidence interval of the parameter, also plays some role but cannot explain these order-of-magnitude differences.
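The limited role of sampling variability can be shown with a back-of-the-envelope confidence interval for an NSDS-style prevalence estimate. The respondent fraction and adult population below are hypothetical, chosen only to illustrate the scale of sampling error in a survey of this size:

```python
import math

# Hypothetical illustration, not the actual NSDS tabulations: assume a
# fraction p_hat of n telephone respondents report a past-12-month
# defensive gun use, scaled to an assumed adult population.
n = 4_997             # NSDS sample size (Table 5-1)
p_hat = 0.013         # hypothetical reporting fraction (~1.3%)
adults = 190_000_000  # assumed U.S. adult population, circa 1993

se = math.sqrt(p_hat * (1 - p_hat) / n)  # binomial standard error
lo = (p_hat - 1.96 * se) * adults        # 95% interval, lower bound
hi = (p_hat + 1.96 * se) * adults        # 95% interval, upper bound

print(f"{lo / 1e6:.1f} to {hi / 1e6:.1f} million annual uses")
# Even the lower bound remains in the millions, so sampling error alone
# cannot close the gap with an estimate near 0.1 million.
```

Under these illustrative assumptions the interval is roughly 1.9 to 3.1 million annual uses: wide in absolute terms, yet nowhere near wide enough to reconcile a millions-scale estimate with a hundred-thousands-scale one.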


Perhaps the most obvious explanation for the wide variation in the range of DGU estimates is that the surveys measure different variables. In the NSDS, for example, all respondents are asked the gun use questions. In contrast, the NCVS inquires only about use among persons who claim to be victims of rape, assault, burglary, personal and household larceny, and car theft. The NCVS excludes preemptive uses of firearms, uses that occur in crimes not screened for in the survey (e.g., commercial robbery, trespassing, and arson), and uses for crimes not revealed by respondents.1

McDowall et al. (2000) found some evidence that these differences in coverage play an important role. In an experimental survey that overrepresents firearms owners, 3,006 respondents were asked both sets of questions about defensive gun use, with random variation in which questions came first in the interview. By holding the survey sampling procedures constant (e.g., consistent confidentiality concerns and recall periods), the authors focus on the effects of questionnaire content. Overall, in this experiment, the NCVS survey items yielded about one-third as many reports of defensive gun use as questionnaires that ask all respondents about defensive uses.

The McDowall et al. (2000) crossover experiment is informative and is exactly the type of methodological research that will begin to explain the sharp divergence in gun use estimates and how best to measure defensive gun use. There remains, however, much work to be done. The sample used in this survey is not representative, and the methods shed light on only one of the many competing hypotheses. Furthermore, this limited evidence is difficult to interpret. Even with a consistent sampling design, inaccurate reporting may still play an important role. For example, estimates from an NCVS type of question would be biased if victims were reluctant to report unsuccessful defensive gun use. Likewise, the estimates found using the NSDS-type survey would be biased if respondents report defensive gun uses based on mistaken perceptions of harmless encounters.

1. It is well known, for example, that incidents of rape and domestic violence are substantially underreported in the NCVS (National Research Council, 2003).

Even if we accept the notion of fully accurate reporting, or at least consistent inaccuracies across the surveys, details on the cause of these differences are especially important. If these discrepancies result from incomplete reporting of victimization among the classes considered (e.g., rape and domestic violence) in the NCVS, then one must address the measurement error questions again. Certainly, we are interested in the behavior of all victims, not just those who self-report. If instead the differences occur because the NSDS-type question includes preemptive uses, then the relevant debate might focus on which variable is of interest.

In any case, much of the confusion surrounding the debate seems to center on what is meant by defensive gun use. Self-defense is an ambiguous term that involves both objective components about ownership and use and subjective features about intent (National Research Council, 1993).2 Whether one is a defender (of oneself or others) or a perpetrator, for example, may depend on perspective. Some reports of defensive gun use may involve illegal carrying and possession (Kleck and Gertz, 1995; Kleck, 2001b), and some uses against supposed criminals may legally amount to aggravated assault (Duncan, 2000a, 2000b; McDowall et al., 2000; Hemenway et al., 2000; Hemenway and Azrael, 2000). Likewise, protecting oneself against possible or perceived harm may be different from protecting oneself while being victimized.

Given this ambiguity, perhaps one of the more important and difficult problems is to develop a common language for understanding defensive and offensive gun use. Uniform concepts and a common language will serve to facilitate future survey work, guide scholarly discussions, and enhance understanding of the complex ways in which firearms are related to crime, violence, and injury. More generally, a commonly understood language can also influence the development of firearms policy and violence policy.

2. This lack of a clear definition may also contribute to inaccurate response. If scholars who think about these issues have yet to come up with a clear definition for the behavior of interest, it may be unreasonable to rely on the accuracy of respondents who, in some cases, may not understand or interpret the question as intended.

Although defining and measuring different types of gun use (both offensive and defensive) is not a simple matter, a typology similar to the one developed by Kleck may be a useful starting point (1997: Figure 7.1). Figure 5-1 provides a rough summary of the development of a violent or criminal encounter. Firearms and other weapons may be involved at different points in the development of a crime, from threats to realized crimes and injury. At each stage of a potentially threatening encounter, one may be interested in learning about the basic circumstances, about firearms use and other actions, about the intent of the respondent, and about outcomes. The relatively subjective nature of threats, which may or may not develop into criminal events, may justify placing these uses in a separate category (Kleck, 2001b:236). More generally, it would seem useful to distinguish between the more objective and subjective features of firearms use. Eliciting and interpreting relatively objective questions about whether and how one uses a gun may be relatively simple and lead to consensus on these basic matters. Eliciting and interpreting relatively subjective questions on intent may be much more complex and less amenable to consensus conclusions.3

FIGURE 5-1 Stages and outcome of potential criminal encounters. [Diagram not reproduced; its labels distinguish criminal from noncriminal events and event outcomes such as completed crime with no injury and no attack.]
SOURCE: Adapted from Kleck (1997: Figure 7.1).

Ultimately, researchers may conclude that it is impossible to effectively measure many aspects of defensive gun use. As noted above, counting crimes averted before the threat stage, and measuring deterrence more generally, may be impossible. Successful deterrence, after all, may yield no overt event to count. Imagine, for example, measuring defensive gun use for a person who routinely carries a handgun in a visible holster. How many times has this person “used a handgun, even if it was not fired, for self-protection” (i.e., the NSDS definition of defensive gun use)? In this regard, much of the debate on the number of defensive gun uses may stem from an ill-defined question, rather than measurement error per se.

Response Problems in Firearms Use Surveys

Questions about the quality of self-reports of firearms use are inevitable. Response problems occur to some degree in nearly all surveys but are arguably more severe in surveys of firearm-related activities, in which some individuals may be reluctant to admit that they use a firearm and others may brag about or exaggerate such behavior.4 If some sampled individuals give incorrect answers (inaccurate response) and others fail to answer the survey at all (nonresponse), investigators may draw incorrect conclusions from the data provided by a survey.

3. A number of scholars have made explicit recommendations for collecting detailed narratives on the nature of the event. See, for example, recommendations made by Cook and Ludwig (1998), Smith (1997), and Kleck (2000). Hemenway and Azrael (2000) and Hemenway et al. (2000) collected and analyzed detailed narratives on gun use incidents that reveal that they are often complex and difficult to categorize.

4. These same measurement problems were discussed in a report by the National Research Council (2001) that explored the data problems associated with monitoring illicit drug consumption.