National Violent Death Reporting System


In 2002, Congress appropriated funds to the CDC to begin creating the NVDRS. This system builds on earlier pilot work sponsored by private foundations and coordinated through the Harvard School of Public Health's Injury Control Research Center. The NVDRS aims to create a comprehensive individual-level data set in each state that links data from medical examiners and coroners, police departments, death certificates, and crime labs on each death resulting from violence (homicide, suicide, unintentional firearm-related deaths, and undetermined causes). A set of uniform data elements has been proposed that would allow minimum plus desirable variables to be collected using standardized definitions and codes. The NVDRS is designed to provide detailed characteristics of the circumstances surrounding firearm-related deaths, including detailed descriptions of the firearms used. Because similar characteristics would be collected on nonfirearm-related violent incidents, a more complete picture of all violent incidents would be available for analysis than from any existing ongoing data collection effort. The prototype that CDC is implementing in the first six states (Maryland, Massachusetts, New Jersey, Oregon, South Carolina, and Virginia) is being carried out with an initial investment of $2.25 million. Expansion to the remaining states is estimated to cost approximately $20 million (http://www.aast.org/nvdrs).

The NIBRS and the NVDRS are emerging data sources designed to provide more information on the circumstances involved in violent events. The NIBRS would provide details on violent crimes; the NVDRS would provide details on violent deaths. Whether and to what extent these data, if fully implemented, could be effectively used to answer some of the complex firearms policy questions is an open question. Maintaining consistency of definitions and data protocols across many different administrative data sources is a highly complex undertaking that is nearly certain to result in reporting errors. Even if the data are reliable and accurate, the NIBRS and the NVDRS, like the UCR, are administrative data that by their nature provide information on events rather than people. Neither system alone will provide information on the use of firearms in the general population, how firearms are acquired, or how they are used in noncriminal and nonfatal instances.

4This description is adapted from that provided by the Justice Research and Statistics Association (http://www.jrsa.org). Other useful information sources on NIBRS, including downloadable data sets and codebooks, are the FBI's Uniform Crime Reporting program (http://www.fbi.gov/ucr/nibrs) and the National Consortium for Justice Information and Statistics (http://www.search.org/nibrs).

Data on Firearms Ownership, Use, and Markets

Almost every empirical question about firearms and violence requires periodic, scientifically acceptable measures of firearms acquisition, availability, and use. The difficulty of measuring the extent of firearms possession, the ways in which firearms are acquired, and the myriad uses of firearms comes up in every chapter of the report.

Several types of ownership data are used in the literature: (1) surveys to measure acquisition, availability, and use; (2) administrative data or other convenience samples providing information on possession and use among particular populations (e.g., arrestees) or associated with particular events (e.g., crime); or (3) proxies that indicate firearms possession and use.

Surveys

Surveys would seem to be the most direct approach to measuring firearms possession, availability, and perhaps use. The General Social Survey (GSS) is the primary source of information for tracking U.S. household firearms ownership over time since the early 1970s. The GSS is an ongoing, nationally representative set of sample surveys on a broad range of social issues conducted by the National Opinion Research Center (NORC). A total of 23 national surveys have been conducted since the inception of the GSS in 1972 (annually until 1993, biennially since 1994, with samples of approximately 1,500 subjects). As an omnibus survey, the GSS covers many topics, but no topic area is treated in great depth. Because the GSS is designed to provide information on trends in attitudes and opinions, many questions are repeated from year to year. Pertinent to firearms research, the GSS includes questions on whether guns (handguns, rifles, shotguns) are owned by the respondent or other household members. Surveys prior to 1995 included an item on the prevalence of being threatened by or shot at with a gun, but these questions have been omitted in recent years.5

5NORC incorporates methodological experiments into each year of the GSS data collection, involving item wording, context effects, use of different types of response scales, and other assessments of validity and reliability (see http://www.norc.uchicago.edu/projects/gensoc1.asp).

The GSS surveys provide basic information on household ownership in the United States and the nine census regions, but not much else specific to firearms policy. They cannot be used to infer ownership at finer geographic levels.6 They do not inquire into the number of guns owned, the reasons for owning them, or how they are used in practice. As a household survey, the GSS sampling frame omits transients and others without a stable residence who may be at high risk for firearm violence. The data offer no direct indication of illicit firearms transfers.

Many other surveys of varying quality have been used to reveal possession or use of firearms. The NCVS, for example, has been used to study what victims of crime report about the weapons used in the crimes against them and to provide rough estimates of the characteristics of offenders using those weapons (Bureau of Justice Statistics, 1994). In 1994, the National Institute of Justice funded the Police Foundation to conduct a nationally representative telephone survey on private ownership and the use of firearms by adults in the United States. The study covered topics such as the size, composition, and ownership of the nation's private gun inventory; methods of and reasons for firearms acquisition, storage, and carrying of guns; and defensive use of firearms against criminal attackers. The study oversampled racial minorities and gun-owning households. The data provide greater detail about patterns of firearms ownership than the GSS, and they provide an estimate of the use of firearms for defense against perceived threats. Chapter 5 reviews other surveys used to elicit information on defensive gun use, such as the National Self-Defense Survey.

While surveys of firearms acquisition, possession, and use are of varying quality and scope, they all share common methodological and survey sampling-related problems. The most fundamental of these is the potential for response errors to survey questionnaires. Critics argue that asking people whether they own a firearm, what kind it is, and how it is used may lead to invalid responses because ownership is a controversial matter for one or more reasons: some people may own a firearm illegally, some may own it legally but worry that they may use it illegally, and some may react to the intense public controversy about firearm ownership by becoming less (or even more) likely to admit to ownership (Blackman, 2003).7 Because only one member of the household is selected to respond, even well-intentioned respondents may not know about household possession or use. In addition, critics of survey approaches have raised concerns about how survey data might be used to establish what would be close to a national registry of firearm possessors.

6Area identifiers permit use of the GSS survey data to assess household ownership prevalence across a representative sample of U.S. metropolitan areas and nonmetropolitan counties, although access to the area-identified data requires special permission from NORC.

7While in most surveys respondents are provided confidentiality, the concern is still expressed that violations of that confidentiality, directly or through data mining, could lead to the identification of specific respondents in a way that might allow the identification of firearms owners.

The committee is not aware of any research assessing the magnitude or impact of response errors in surveys of firearms ownership and use. Similar concerns have been expressed about other sensitive behaviors for which research evidence on misreporting may be relevant. Surveys on victimization, such as the NCVS, and on the prevalence of drug use, such as Monitoring the Future and the National Household Survey of Drug Abuse, have undergone continuing and careful research efforts to identify the sources of response error and to correct for them (see National Research Council, 2001, 2003; Harrison and Hughes, 1997). The large literature assessing the magnitude of misreporting in self-reported drug use surveys, for example, reveals consistent evidence that some respondents misreport their drug use behavior and that misreporting depends on the social desirability of the drug (see National Research Council, 2001, and Harrison and Hughes, 1997, for reviews of this literature).8 Moreover, validity rates can be affected by the data collection methodology. Surveys that can effectively ensure confidentiality and anonymity and that are conducted in noncoercive settings are thought to have relatively low misreporting rates. Despite this large body of research, very little information exists on the magnitude of or trends in invalid reporting in illicit drug use surveys (National Research Council, 2001).
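The mechanical effect of such misreporting on an estimated prevalence can be sketched with a simple sensitivity calculation. The rates below are hypothetical illustrations, not figures from any survey discussed here; the sketch assumes that some fraction of true owners deny ownership and that false positives are negligible.

```python
def true_prevalence(observed_rate, denial_rate):
    """Back out true ownership prevalence given the observed survey rate,
    assuming a known denial rate among owners and no false positives:
    observed = true * (1 - denial_rate), so true = observed / (1 - denial_rate).
    """
    if not 0 <= denial_rate < 1:
        raise ValueError("denial_rate must be in [0, 1)")
    return observed_rate / (1 - denial_rate)

# Hypothetical: a survey measures 35 percent household ownership.
# If 10 percent of owners deny ownership, the true rate is about 38.9 percent.
print(round(true_prevalence(0.35, 0.10), 3))  # 0.389
```

The point of the sketch is that the correction depends entirely on the denial rate, which is exactly the quantity for which, as the committee notes, no direct evidence exists for firearms surveys.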

While there is some information on reporting errors in surveys on other sensitive topics, the relevance of this literature for understanding invalid reporting of firearms ownership and use is uncertain. In many ways, the controversy over firearms appears exceptional. There is, as noted in the introduction, hardly a more contentious issue, with the public highly polarized over the legal and research foundations for competing policy options. Furthermore, the durable nature of firearms may arguably lead some respondents to provide invalid reports because of fears about future events (e.g., a ban on certain types of guns) even if they have no concerns about the legality of past events.

Nonresponse creates a similar problem. Response rates in the GSS are between 75 and 80 percent (Smith, 1995), less than 65 percent in the Police Foundation survey, and even lower in some of the defensive gun use surveys described in Chapter 5. Nonresponse makes it difficult to draw precise inferences about ownership rates and use because the data are uninformative about nonrespondents. With nonresponse rates of 25 percent or more, the existing surveys alone cannot reveal the rates of ownership or use. Prevalence rates can be identified only if one makes sufficiently strong assumptions about the behavior of nonrespondents. Generally, nonresponse is assumed to be random, thus implying that prevalence among nonrespondents is the same as prevalence among respondents. The committee is not aware of empirical evidence that supports the view that nonresponse is random. Indeed, studies of nonresponse in surveys of drug consumption provide limited empirical evidence to the contrary (see National Research Council, 2001). These studies find differences between respondents and nonrespondents in terms of both drug use and other observed covariates (Caspar, 1992; Gfroerer et al., 1997).

8These studies have been conducted largely on samples of persons who have much higher rates of drug use than the general population (e.g., arrestees). A few studies have attempted to evaluate misreporting in broad-based representative samples, but these lack direct evidence and instead make strong, unverifiable assumptions to infer validity rates (National Research Council, 2001).
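The identification problem that nonresponse creates can be made concrete with worst-case bounds: with response rate r and respondent prevalence p, the population prevalence must lie between r*p (if no nonrespondent owns a gun) and r*p + (1 - r) (if every nonrespondent does). The numbers below are hypothetical illustrations chosen to match the 25 percent nonresponse figure mentioned above.

```python
def prevalence_bounds(respondent_rate, response_rate):
    """Worst-case bounds on population prevalence when nothing at all
    is assumed about nonrespondents.
    """
    lower = respondent_rate * response_rate          # nonrespondents all non-owners
    upper = lower + (1 - response_rate)              # nonrespondents all owners
    return lower, upper

# Hypothetical: 35 percent of respondents report a household gun,
# and 75 percent of sampled households respond.
low, high = prevalence_bounds(0.35, 0.75)
print(round(low, 4), round(high, 4))  # 0.2625 0.5125
```

The width of the interval, 1 - r, is exactly the nonresponse rate, which is why the assumption of random nonresponse (collapsing the interval to a single point) does so much work in published prevalence estimates.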

Concerns about response errors in self-reported surveys of firearms possession and use require much more systematic research before surveys can be judged to provide accurate data to address critical issues in the study of firearms and violence. The substantial resources that have been devoted to addressing measurement issues in the collection of other sensitive data will almost certainly be useful, yet the issues surrounding firearms may be unique. The committee thinks that new research will extend and strengthen what is currently known about response errors on sensitive topics generally. Without systematic research on these specific matters, scientists can only speculate.