REPORTING ON VIOLENCE
HOW TO EVALUATE A STUDY
Research that is most valid and useful to a journalist should be published in a
peer-reviewed journal and should use currently accepted statistical methods.
...adapted from Seeing Through Statistics, by Dr. Jessica Utts,
professor of statistics at the University of California, Davis. Wadsworth Publishing
Determine whether the research was an observational study, an experiment, a sample
survey, a combination or just based on anecdotes. It is important to note that because of
the nature of violence, most violence research is observational, not experimental. That
means that researchers who do observational studies cannot directly attribute a cause to
an effect. All they can do is link factors to effects. For example, until 1996, when
scientists could specifically demonstrate changes on a cellular level, scientific research
could only link smoking to lung cancer, heart disease and stroke, albeit with great
confidence.
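The point about linking factors rather than attributing causes can be shown in miniature. Below is a toy simulation (all numbers invented, not from any study) in which a hidden third factor drives two variables up together; the two are strongly correlated even though, in the model, neither causes the other:

```python
import random

random.seed(1)

# Toy model: a hidden factor raises both of two observed variables.
# Neither observed variable causes the other in this model.
hidden = [random.random() for _ in range(5_000)]
factor_a = [h + random.gauss(0, 0.2) for h in hidden]
factor_b = [h + random.gauss(0, 0.2) for h in hidden]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# A strong observed correlation, despite no direct causal link in the model.
print(round(corr(factor_a, factor_b), 2))
```

An observational study sees only factor_a and factor_b; without an experiment, the data alone cannot distinguish "A causes B" from "something else drives both."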
- Observational study:
- Resembles an experiment except that the manipulation occurs naturally rather than being
imposed by the experimenter. For example, researchers could note the birth weight of
infants born to mothers who smoked, but they can't experimentally manipulate mothers to
smoke. Similarly, researchers can note that an increase in homicides among youth has
paralleled the increase in the manufacture and availability of handguns known as Saturday
night specials, but they can't experimentally give youth in one city Saturday night
specials and take them away from youth in another city to show a direct cause and effect.
- Experiment:
- Measures the effect of manipulating a variable in some way. For example, receiving a
drug or medical treatment, going through a training program, eating a special diet, etc.
One of the few experiments to be done in evaluating prevention programs for juvenile
delinquents was one by the Oregon Social Learning Center, which is described in the
Prevention Approaches section of this book (see page 135).
- Sample survey:
- A subgroup of a large population is questioned on a set of topics. The results are used
as if they were representative of the larger population, which they will be if the sample
was chosen correctly. Used in political and opinion polls. The National Crime
Victimization Survey, administered by the Bureau of Justice Statistics of the U.S.
Department of Justice and detailed in the Violence Data Resources section of this book, is
a good example of a survey used to gather information about crime in the United States.
- Anecdotes:
- They may or may not be representative of an outcome. They are just descriptions of the
actions of one or a small number of individuals.
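The claim above, that a correctly chosen sample represents the larger population while a poorly chosen one may not, can be illustrated with a toy simulation (all numbers hypothetical):

```python
import random

random.seed(42)

# Hypothetical population of 100,000 people, 30% of whom share some
# characteristic (the "true" rate a survey is trying to estimate).
population = [1] * 30_000 + [0] * 70_000
random.shuffle(population)

# A correctly drawn random sample of 1,000 tracks the true rate closely.
random_sample = random.sample(population, 1_000)
print(sum(random_sample) / len(random_sample))  # close to 0.30

# A badly chosen sample, e.g. taking the first 1,000 names from a list
# on which people with the characteristic cluster at the top:
biased_sample = sorted(population, reverse=True)[:1_000]
print(sum(biased_sample) / len(biased_sample))  # 1.0, wildly unrepresentative
```

The random sample is representative by construction; the second sample is just as large but tells you nothing about the population.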
To familiarize yourself with the research, consider the seven critical components:
- Component 1:
- The source of the research and funding. For example, if it is a firearms study, is it
being funded by the Centers for Disease Control and Prevention or the National Rifle
Association?
- Component 2:
- The researchers who had contact with the participants. Participants will often give
answers or behave in ways to try to please the researchers. For example, if law
enforcement officials administer the National Crime Victimization Survey, people are less
likely to be honest about certain risk factors, such as whether they use illegal drugs.
- Component 3:
- The individuals or objects studied and how they were selected. For example, if only men
in prisons are interviewed about alcohol use and family violence, then the study is biased
because those interviewed are the people who are more likely to be convicted, including
those to whom juries are not sympathetic or those who live in counties or cities where
police departments do not enforce a pro-arrest policy for family violence.
- Component 4:
- The exact nature of the measurements made or questions asked. For example, if you wanted
to do a survey on attitudes about family violence, how would you define family violence or
abuse? Is it when a man hits a woman? Does screaming at her and threatening physical harm,
even though he says he would not hit her, constitute abuse? Is spanking a child with a
hand abuse? Is spanking a child with a belt abuse? In addition, the wording and the
ordering of the questions influence answers. For example, a question about "street
people" and violent incidents would probably elicit a different response than a
question about violence and "families who have no home."
- Component 5:
- The setting in which the measurements were taken. A study can be easily biased by
timing, if, for example, opinions about the "three-strikes" law were sought
after a highly publicized kidnapping or murder. Or, if interviewers ask a woman questions
about abuse when her husband is in the same room, she is likely to answer differently than
if alone with the understanding that she will not be identified.
- Component 6:
- The extraneous differences in groups being compared. For example, in 28 studies that
examined the drinking patterns of groups of people convicted of assault, researchers found
that 24 percent to 84 percent of the offenders and 24 percent to 40 percent of the victims
had been drinking. Were there differences among those groups to account for the range in
reported drinking?
- Component 7:
- The magnitude of any claimed effects or differences. To judge whether the results of a
study have any practical importance, you have to know how large the effects were. For
example, in the 28 studies that examined the drinking patterns of groups of people
convicted for assault, the range of reported drinking was huge - from 24 percent to 84
percent. More information must be known about the studies to determine whether the studies
have practical importance, including knowing how many people were in the groups.
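One standard way to weigh Component 7 (not described in the text above) is an approximate confidence interval for the difference between two proportions; the interval's width shows how much group size matters. All counts below are hypothetical:

```python
import math

def diff_ci(x1, n1, x2, n2, z=1.96):
    """Approximate 95% confidence interval for the difference between two
    proportions (x1 of n1 vs. x2 of n2), assuming simple random samples."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

# Hypothetical: 80% of 500 offenders vs. 40% of 500 victims had been drinking.
print(diff_ci(400, 500, 200, 500))  # a narrow interval around 0.40

# The same percentages drawn from groups of only 10 people each:
print(diff_ci(8, 10, 4, 10))        # a much wider, far less informative interval
```

The point the text makes numerically: identical percentages carry very different practical weight depending on how many people were in the groups.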
- Determine if the information is complete.
- If necessary, see if you can either find the original source of the report or contact
the authors for missing information.
- Ask if the results make sense in the larger scope of things.
- If they are counter to previously accepted knowledge, see if you can learn anything
about possible explanations from the authors.
- Ask yourself if you can think of an alternative explanation for the results and check it
out with the researchers as well as experts in the field who have reviewed the research.
- Determine if the results are meaningful enough to encourage you to change your
lifestyle, attitudes or beliefs on the basis of the research.
WATCH OUT FOR:
- Observational studies that draw a cause-and-effect conclusion:
- Only experiments can establish cause and effect. Observational studies establish
correlations; that is, they can report an observed relationship between
variables. But a correlation does not mean that a researcher can say that a particular
event actually caused the response. For example, researchers can report an increase in
homicides at the same time they observe more Saturday night specials available in a
community, but they cannot say the increase in the number of guns caused the increase in
homicides. However, the researchers may state that they observe a strong (or weak)
relationship between the two variables, homicides and gun availability. Remember that most
violence research is, by its very nature, observational, not experimental, and so direct
cause and effect will seldom, if ever, be established.
- Nonrepresentative samples:
- Are the people or objects representative of the larger group for which conclusions are
to be drawn? For example, in a study of family violence, if researchers choose only
families in East Palo Alto, then the sample would not be valid for the population at large
in San Mateo County, because the income level and the ethnic makeup are not representative
of the whole county.
- Samples that are too small:
- In this study of family violence, interviewing members of one family from each city in
San Mateo County would not be enough to represent the population at large. But how many
are enough? That depends on the range within the variables identified as significant
within the population. The more diverse the variables are within each group, the larger
the sample size needs to be to detect differences among the groups.
- Volunteer or convenience samples:
- If a magazine or television station runs a survey and asks readers or viewers to
respond, the results only reflect the opinions of those who decide to respond. The
responding group is not representative of any larger group. Neither are convenience or
haphazard samples, such as person-on-the-street interviews.
- Nonexpert opinions:
- Opinions about research from people who are not familiar with the research or educated
in the statistical methods used in the research. For example, the opinion of an attorney
who has not done statistical research or been educated in its methods is useful and
appropriate if the comment concerns the policy implications of a particular study but not
the validity of the study itself.
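On the "how many are enough" question raised under sample size: one standard back-of-envelope formula for a simple random sample is n = z^2 * p * (1 - p) / E^2, where E is the desired margin of error. It is a sketch only; it ignores the within-group diversity issues discussed above, which push the required size higher:

```python
import math

def needed_n(margin, p=0.5, z=1.96):
    """Sample size needed to estimate a proportion to within +/- margin
    at roughly 95% confidence, assuming a simple random sample.
    p=0.5 is the conservative (worst-case) assumption."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(needed_n(0.03))  # about 1,068, the size of many national polls
print(needed_n(0.05))  # about 385; a looser margin needs far fewer people
```

Note what the formula does not include: the population size barely matters, but the diversity of the groups being compared does, which is why the text warns that more varied populations demand larger samples.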