TABLE 8-1 Continued (Description of Program and Evaluation columns)


Description of Program: Hospital-based 4-hour program. While children are being lectured on trauma resuscitation, a gunshot victim (teenage actor) is brought in, and children are asked to help resuscitate, but "patient" dies. Children are then directed to counselors to discuss their emotions and told that the situation was not real but a realistic rendering of what happens in emergency rooms every day.

Evaluation: Health Partners Research Foundation (1999), in randomized treatment and control groups 2 weeks before and after the program, found that levels of discomfort with aggression increased after the program. No changes in behavior around firearms were found in this evaluation.

Description of Program: The main purpose of this 9-week program is employment training and GED preparation. One section of one day is spent on gun violence prevention; students are shown a video depicting a violent scene of a juvenile shot in a drug dispute. After the video, children share personal experiences and think up behaviors that can prevent violent outcomes.

Evaluation: No evaluation of effectiveness as of 2002.

Description of Program: Peers meet with youth who have been suspended from school for carrying weapons or engaging in destructive behavior. Peers also visit adolescents recovering from violent injuries to convince them not to retaliate.

Evaluation: The National Council on Crime and Delinquency (2001) conducted a randomized prospective study of the program assessing attitudes and behavior toward guns and truancy rates following completion of the program, but results are not yet available.

 (locked, loaded, etc.), whereas the distal behavior goal might be to reduce

the rare acts of gun violence involving children. If the program is designed

to educate young children about firearms, then a proximal behavior goal

would be avoidance of a nearby gun, and a distal behavior goal would be

the reduction of child gun accidents.

TABLE 8-1 Continued

Program: Hands Without Guns
Developer, Sponsor and/or Publisher: Office of Justice Programs, Education Fund to End Handgun Violence, Joshua Horwitz. Based in Washington, DC, but implemented in several U.S. cities
Type of Program: Peer-based education and outreach
Target Age or Grade: Middle school and high school students

Program: Child Development-Community Policing (CD-CP) Program
Developer, Sponsor and/or Publisher: A collaborative effort by the New Haven, CT, Department of Police Services and the Child Study Center at the Yale University School of Medicine
Type of Program: Interrelated training and consultation focusing on sharing knowledge and developing ongoing collegial relationships between police and mental health workers
Target Age or Grade: Police officers and mental health professionals

The outcome data may come from a number of sources—self-report, proxy report (e.g., peers, teachers, parents), direct observation, school records, and criminal records. Most of the programs described in this chapter assess children's knowledge or attitudes about firearms, and most used self-report and questionnaires to assess change in knowledge or attitudes.

Description of Program: Public health and education campaign aimed at providing a forum for youth, encouraging them to develop their own constructive responses to gun violence.

Evaluation: Internal evaluation of the program (1999) reports that pre- and post-campaign surveys with a sample of 400 Washington, DC, students show that kids who could identify the program were less likely to carry guns than those who had never heard of the program.

Description of Program: Police supervisors spend 3 full days in training activities to become familiar with developmental concepts, patterns of psychological disturbance, methods of clinical intervention, and settings for treatment. Mental health clinicians spend time with police officers in squad cars, at police stations, and on the street, learning directly from officers about their day-to-day activities.

Evaluation: No evaluation of effectiveness as of 2002.

A review of the literature reveals only one standardized measure of

children’s attitudes toward firearms and violence: the Attitudes Toward

Guns and Violence Questionnaire (AGVQ), developed by Shapiro and his

colleagues (1997) at the Applewood Centers in Cleveland, Ohio. The

AGVQ demonstrates satisfactory internal consistency (Cronbach’s alpha

= .94) and concurrent validity, with 23 items relating to violence, guns, or

conflict behavior answered on a 3-point Likert-type scale (disagree, not

sure, agree). A factor analysis of the AGVQ revealed four factors associated

with participants owning or wanting to own a gun: (1) aggressive

response to shame: the belief that shame resulting from being insulted can

be undone only through aggression; (2) comfort with aggression: general

beliefs, values, and feelings about aggression and violence; (3) excitement:

feelings of being excited and stimulated by guns; and (4) power/safety:

feeling the need to carry a gun to be powerful and safe on the streets.

Shapiro and his colleagues (1998), administering the AGVQ to 1,619

children and adolescents, found that the measure was useful for predicting

gun ownership. Validity coefficients were lower for girls in elementary

school.
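Cronbach's alpha, the internal-consistency statistic reported for the AGVQ, is straightforward to compute from an item-response matrix. The sketch below uses an invented set of 3-point responses (not AGVQ data) purely to show the mechanics:

```python
# Sketch: computing Cronbach's alpha for Likert-type items.
# The response matrix below is invented for illustration; it is NOT AGVQ data.

def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item scores."""
    k = len(scores[0])                      # number of items
    def variance(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Four hypothetical respondents answering five items scored
# disagree = 0, not sure = 1, agree = 2.
responses = [
    [0, 0, 1, 0, 0],
    [2, 2, 2, 1, 2],
    [1, 1, 1, 1, 0],
    [2, 1, 2, 2, 2],
]
print(round(cronbach_alpha(responses), 2))  # → 0.94 for this toy matrix
```

The k/(k − 1) correction and the ratio of summed item variances to total-score variance are the standard formula; with real questionnaire data one would normally use a statistics package rather than hand-rolled code.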

Measuring behavior in the presence of firearms is more difficult and

rarely done as part of the evaluation of firearm violence programs.

When behavior is measured, one of two sources of information is typically

obtained:

• Community-wide or school-wide measures of the consequences of

gun-carrying or gun violence—for example, school suspensions, mortality

and morbidity rates, arrest rates for firearm-related offenses, suicide attempts

using firearms. The behaviors that firearm violence programs are

typically designed to modify or prevent are often rare events (e.g., accidental

firearm deaths), so from a program evaluation point of view it is difficult

to assess the effectiveness of a program designed to keep something of low

frequency from actually happening. This is because data must be collected

from a large number of individuals and often over a long period of time to

obtain adequate numbers for analysis.

• Program participants’ description of their experiences around firearms

through focus groups, class discussions, or questionnaires. Younger

children may be asked if they have ever seen or touched a gun, and adolescents

may be asked if they carry a gun or if they would use a gun in certain

situations. While this information may be of interest, self-reports are subject

to biases that may lead to underreporting, particularly when children and

adolescents are asked about socially sensitive behaviors (Moskowitz, 1989).
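The rare-event problem noted in the first bullet can be made concrete with a back-of-the-envelope calculation; the baseline rate below is a hypothetical figure chosen only for illustration:

```python
# Back-of-the-envelope sketch of why rare outcomes demand large samples.
# The baseline rate is a hypothetical number, not an actual mortality statistic.

baseline_rate = 5 / 100_000        # assumed events per child per year
expected_events_needed = 20        # a modest count for meaningful analysis

person_years = expected_events_needed / baseline_rate
print(f"{person_years:,.0f} person-years")   # → 400,000 person-years
```

Even before considering a comparison group or detecting a *change* in the rate (which requires far more data still), simply observing enough events forces an evaluation to follow very many children, for a long time, or both.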

The most direct outcome measure of behavior is an unobtrusive observation

of children and adolescents when they encounter a gun. None of

the firearms safety programs we discuss has actually utilized this method

of evaluation, however, usually because of policy regulations at schools

prohibiting even disabled firearms on campus. Nonetheless, direct observation

may be the most accurate method of discerning what a child or

adolescent would do when confronted with a firearm. Researchers who

have directly observed children’s behaviors around firearms following an

intervention have found high rates of gun play (see Hardy et al., 1996;

Hardy, 2002b).

The best evaluation of a firearm violence prevention program should

assess its impact on knowledge, attitudes, and behavior from a variety of

sources, particularly since these variables are not highly correlated. Inconsistencies

between children’s knowledge and behavior following participation

in more general violence prevention programs are well documented

(Arcus, 1995). Moreover, Wilson-Brewer and colleagues (1991) found in a

survey of 51 programs that fewer than half claimed to reduce actual violence

levels. Those that did claim to do so had limited empirical data to

support their claims.

The correlation between children’s knowledge about guns and the likelihood

that they will handle a gun is less well studied. However, a recent

study by Hardy (2002b) suggests that the two outcomes following a firearm

violence prevention program are unrelated. In this study, 70 children ages 4

to 7 were observed in a structured play setting in which they had access to

a semiautomatic pistol. Observers coded several behaviors, including gun

safety statements (“Don’t touch that!”) and gun touching. Assuming that

children who say “Don’t touch that gun!” to another child have some

knowledge that guns are dangerous (or for some other reason should not be

touched), one might expect that these children would themselves not touch

the guns. Nonetheless, 15 of the 24 children who made such comments in

the study subsequently touched the gun themselves during the 10-minute

interval.

Another way Hardy (2002b) assessed the correlation between firearms

safety knowledge and behavior was to examine the relationship between a

child’s belief that a gun is real and his or her behavior around that gun.

Again, however, the evidence suggests no significant relationship. Specifically,

the children who correctly identified the real gun as such were no less

likely to play with the gun (n = 19) than were children who believed the gun

was a toy (n = 16). These findings were later replicated in a study with

children ages 9 to 15 (Hardy, 2002a).

Study Design

Once the appropriate outcome measures are identified and operationally

defined, program developers must decide on the design of the evaluation. Serious evaluations have the goal of excluding alternative explanations

for the result; the goal is to ensure that any changes noted in the

targeted knowledge, attitudes, and behaviors are due to the program and

are not due to extraneous variables and events—environmental changes,

developmental changes, practice effects, etc.

There are several steps that program developers can take so as to

exclude such alternative explanations. First, depending on whether the

program is individual-based, school-based, or community-based, developers

should identify the target population; for example, a school-based

prevention program may be developed for grade schools, or a media-based campaign may be developed for rural communities. Next, the evaluation

should be based on a sample of individuals, schools, or communities

that are representative of the target population; otherwise the obtained

results may depend in some unknown way on the sample and may not be

generalizable to the population. For example, if the sample includes only

grade schools with highly motivated teachers, then the results may not be

generalizable to all grade schools. The key point is that the sample should

be representative of an identified population; in the above example, the

population is more accurately identified as grade schools with highly

motivated teachers.

A second step that program developers can take to exclude alternative

explanations is to assess the targeted knowledge, attitudes, and behaviors in

a control or comparison group not exposed to the program. Ideally, the

comparison group should differ from the treatment group only in the subsequent

exposure to the program. Developers can compare baseline data

concerning the knowledge, attitudes, or behaviors targeted for change to

check that the groups do not differ in systematic ways prior to the intervention.

Of course the comparison group and experimental group may differ in

unmeasured ways. The ideal way to exclude alternative explanations, including

explanations due to unmeasured differences between groups, is by

random assignment of individuals or schools or communities to the experimental

and comparison conditions. (See Weisburd and Petrosino, forthcoming;

Flay, 2002; and Boruch et al., 2004, for discussions of the advantages

of randomization in the field of criminology, for school-based

prevention programs, and for place-based trials, respectively.) Randomized

trials exclude alternative explanations for the estimated differences between

the groups because, on average, randomization produces groups that differ

only in terms of the prevention intervention. That is, the randomized trials

produce defensible evidence because alternative explanations for outcome

are spread evenly across the treatment and comparison groups. Even when

we randomize to experimental and comparison conditions, it is useful to

collect and compare baseline data concerning the knowledge, attitudes, or

behavior(s) targeted for change to check that the groups do not, by chance,

differ in systematic ways prior to the intervention.
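As a minimal sketch of the random-assignment-plus-baseline-check logic described above (the school names and baseline scores are invented):

```python
# Sketch: randomly assigning schools to treatment and comparison conditions,
# then comparing baseline means. All names and scores here are hypothetical.
import random

schools = {
    "School A": 52.1, "School B": 47.8, "School C": 50.3, "School D": 49.0,
    "School E": 53.4, "School F": 46.2, "School G": 51.7, "School H": 48.5,
}  # hypothetical baseline attitude scores

names = sorted(schools)
random.seed(8)                      # fixed seed so the split is reproducible
random.shuffle(names)
half = len(names) // 2
treatment, comparison = names[:half], names[half:]

def mean(group):
    return sum(schools[s] for s in group) / len(group)

# With only 8 units, chance imbalance is quite possible; comparing baseline
# means (and, in practice, testing the difference) guards against it.
print(f"treatment mean:  {mean(treatment):.2f}")
print(f"comparison mean: {mean(comparison):.2f}")
```

Randomization removes systematic differences only on average across many units; with a handful of schools, a chance imbalance at baseline remains likely, which is exactly why the final baseline comparison recommended in the text matters.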