PUBLISHER'S NOTE: This excellent study of "faulty forensics" is authored by Jessica Brand, Legal Director of The Fair Punishment Project at Harvard. I became aware of the Fair Punishment Project when writing on the wrongful prosecution of Rodricus Crawford by the State of Louisiana. (A report published by the Project informed me that Caddo Parish, where Rodricus had the misfortune of facing prosecution, was responsible for a disproportionate number of death sentences.) Kudos to Jessica Brand for this perceptive, contemporary study of flawed forensics.
Harold Levy: Publisher; The Charles Smith Blog.
-----------------------------------------------------------
PASSAGE OF THE DAY: "Meanwhile, at least one judge has recognized the danger of forensic expert testimony. In a 2016 concurrence, Judge Catherine Easterly of the D.C. Court of Appeals lambasted expert testimony about toolmark matching: “As matters currently stand, a certainty statement regarding toolmark pattern matching has the same probative value as the vision of a psychic: it reflects nothing more than the individual’s foundationless faith in what he believes to be true. This is not evidence on which we can in good conscience rely, particularly in criminal cases … [T]he District of Columbia courts must bar the admission of these certainty statements, whether or not the government has a policy that prohibits their elicitation. We cannot be complicit in their use.”
------------------------------------------------------------
COMMENTARY: "'Faulty Forensics: Explained,' by Jessica Brand, published by 'In Justice Today' on May 4, 2018. In Justice Today is devoted to producing compelling, original journalism and commentary on the subject of criminal justice reform. Jessica Brand is Legal Director of The Fair Punishment Project. "In our Explainer
series, Fair Punishment Project lawyers help unpack some of the most
complicated issues in the criminal justice system. We break down the
problems behind the headlines — like bail, civil asset forfeiture, or
the Brady doctrine — so that
everyone can understand them. Wherever possible, we try to utilize the
stories of those affected by the criminal justice system to show how
these laws and principles should work, and how they often fail. We will
update our Explainers quarterly to keep them current."
The entire story can be found at:
GIST: "In
1992, three homemade bombs exploded in seemingly random locations
around Colorado. When police later learned that sometime after the bombs
went off, Jimmy Genrich had requested a copy of The Anarchist Cookbook
from a bookstore, he became their top suspect. In a search of his
house, they found no gunpowder or bomb-making materials, just some
common household tools — pliers and wire cutters. They then sent those
tools to their lab to see if they made markings or toolmarks similar to
those found on the bombs. At
trial, forensic examiner John O’Neil matched the tools to all three
bombs and, incredibly, to an earlier bomb from 1989 that analysts
believed the same person had made — a bomb Genrich could not have made
because he had an ironclad alibi. No research existed showing that tools
such as wire cutters or pliers could leave unique markings, nor did
studies show that examiners such as O’Neil could accurately match
markings left by a known tool to those found in crime scene evidence.
And yet O’Neil told the jury it was no problem, and that the marks
“matched … to the exclusion of any other tool” in the world. Based on
little other evidence, the jury convicted Genrich. Twenty-five
years later, the Innocence Project is challenging Genrich’s conviction
and the scientific basis of this type of toolmark testimony, calling it
“indefensible.” [Meehan Crist and Tim Requarth / The Nation] There are hundreds of cases like this, where faulty forensic testimony
has led to a wrongful conviction. And yet as scientists have questioned
the reliability and validity of “pattern-matching” evidence — such as
fingerprints, bite marks, and hair — prosecutors are digging in their
heels and continuing to rely on it. In this explainer, we explore the
state of pattern-matching evidence in criminal trials.
What is pattern-matching evidence?
In
a pattern-matching, or “feature-comparison,” field of study, an
examiner evaluates characteristics visible on evidence found at the
crime scene — e.g., a fingerprint, a marking on a fired bullet
(“toolmark”), handwriting on a note — and compares those features to a
sample collected from a suspect. If the characteristics, or patterns,
look the same, the examiner declares a match. [Jennifer Friedman & Jessica Brand / Santa Clara Law Review]
Typical
pattern-matching fields include the analysis of latent fingerprints,
microscopic hair, footwear impressions, bite marks, firearms and toolmarks, and
handwriting. [“A Path Forward” / National Academy of Sciences]
Examiners in almost every pattern-matching field follow a method of
analysis called “ACE-V” (Analyze a sample, Compare, Evaluate — Verify). [Jamie Walvisch / Phys.org]
Here are two common types of pattern-matching evidence:
Fingerprints:
Fingerprint analysts try to match a print found at the crime scene (a
“latent” print) to a suspect’s print. They look at features on the
latent print — the way ridges start, stop, and flow, for example — and
note those they believe are “significant.” Analysts then compare those
features to ones identified on the suspect print and determine whether
there is sufficient similarity between the two. (Notably, some analysts
will deviate from this method and look at the latent print alongside the
suspect’s print before deciding which characteristics are important.) [President’s Council of Advisors on Science and Technology]
Firearms:
Firearm examiners try to determine if shell casings or bullets found at
a crime scene are fired from a particular gun. They examine the
collected bullets through a microscope, mark down characteristics, and
compare these to characteristics on bullets test-fired from a known gun.
If there is sufficient similarity, they declare a match. [“A Path Forward” / National Academy of Sciences]
What’s wrong with pattern-matching evidence?
Experts have identified a number of reasons pattern-matching evidence is deeply flawed. Here are just a few:
These conclusions are based on widely held, but unproven, assumptions.
The idea that handwriting, fingerprints, shoeprints, hair, and even the markings left by a particular gun are unique is fundamental to forensic science. The finding of a conclusive match between two fingerprints, for example, is known as “individualization.” [Kelly Servick / Science Mag]
However, despite this common assumption, examiners in these pattern-matching fields have no credible evidence that hair, bullet markings, or partial fingerprints are actually unique. In February 2018, The Nation published a comprehensive investigation of forensic pattern-matching analysis (referenced earlier in this explainer, in relation to Jimmy Genrich). The investigation revealed “a startling lack of scientific support for forensic pattern-matching techniques.” Disturbingly, the authors also described
“a legal system that failed to separate nonsense from science in capital
cases; and consensus among prosecutors all the way up to the attorney
general that scientifically dubious forensic techniques should not only
be protected, but expanded.” [Meehan Crist and Tim Requarth / The Nation] Similarly,
no studies show that one person’s bite mark is unique and therefore
different from everyone else’s bite mark in the world. [Radley Balko / Washington Post] No studies show that all markings left on bullets by guns are unique. [Stephen Cooper / HuffPost]
And no studies show that one person’s fingerprints — unless perhaps a completely perfect, fully rolled print — are completely different from everyone else’s fingerprints. It’s just assumed. [Sarah Knapton / The Telegraph]
Examiners often don’t actually know whether certain features they rely upon to declare a “match” are unique or even rare.
On
any given Air Jordan sneaker, there are a certain number of shared
characteristics: a swoosh mark, a tread pattern in the soles. That may
also be true of handwriting. Many of us were taught to write cursive by
tracing over letters, after all, so it stands to reason that some of us
may write in similar ways. But examiners do not know how rare certain
features are, like a high arch in a cursive “r” or crossing one’s
sevens. They therefore can’t tell you how important, or discriminating,
it is when they see shared characteristics between handwriting samples.
The same may be true of characteristics on fingerprints, marks left by
teeth, and the like. [Jonathan Jones / Frontline]
There are no objective standards to guide how examiners reach their conclusions.
How many characteristics must be shared before an examiner can definitively declare “a match”? It is entirely up to the discretion of the individual examiner, who usually chalks the decision up to “training and experience.” Think Goldilocks: once the examiner settles on the number of shared features that feels “just right,” a match can be declared. “In some ways, the process is no more complicated than a child’s picture-matching game,” wrote the authors of one recent article. [Liliana Segura & Jordan Smith / The Intercept] This is true for every pattern-matching field — it’s almost entirely subjective. [“A Path Forward” / National Academy of Sciences]
Unsurprisingly, this can lead to inconsistent and incompatible conclusions.
In
Davenport, Iowa, police searching a murder crime scene found a
fingerprint on a blood-soaked cigarette box. That print formed the
evidence against 29-year-old Chad Enderle. At trial, prosecutors pointed
to seven points of similarity between the crime scene print and
Enderle’s print to declare a match. But was that enough? Several experts
hired by the newspaper covering the case said they could not draw any
conclusions about whether the print matched Enderle. The defense lawyer, however,
didn’t call an expert and the jury convicted Enderle. [Susan Du, Stephanie Haines, Gideon Resnick & Tori Simkovic / The Quad-City Times]
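The arbitrariness is easy to see in miniature. Below is a minimal sketch in Python (hypothetical and illustrative only; the feature counts and cutoffs are invented, not drawn from any real examination protocol) of how the same evidence can yield opposite conclusions depending solely on the examiner's chosen threshold:

def declare_match(shared_features, threshold):
    # Declare a "match" whenever the number of shared features meets
    # an examiner-chosen cutoff. Nothing in the method itself fixes
    # what that cutoff should be.
    return "match" if shared_features >= threshold else "inconclusive"

shared = 7  # e.g., the seven points of similarity in the Enderle case
for threshold in (6, 8, 12):  # three equally "defensible" cutoffs
    print(f"threshold={threshold}: {declare_match(shared, threshold)}")

# Prints: match, inconclusive, inconclusive. The same evidence yields
# opposite conclusions depending only on the chosen cutoff.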
Why faulty forensics persist
Despite
countless errors like these, experts continue to use these flawed
methods and prosecutors still rely on their results. Here’s why:
Experts are often overconfident in their abilities to declare a match.
These fields have not established an “error rate” — an
estimate of how often examiners erroneously declare a “match,” or how
often they find something inconclusive or a non-match when the items are
from the same source. Even if your hair or fingerprints are truly “unique,” that matters little if experts cannot accurately match them. [Brandon L. Garrett / The Baffler] Analysts
nonetheless give very confident-sounding conclusions — and juries often
believe them wholesale. “To a reasonable degree of scientific
certainty” — that’s what analysts usually say when they declare a match,
and it sounds good. But it actually has no real meaning. As John Oliver
explained on his HBO show: “It’s one of those terms like basic or trill
that has no commonly understood definition.” [John Oliver / Last Week Tonight] Yet, in trial after trial, jurors find these questionable conclusions extremely persuasive. [Radley Balko / Washington Post] Why
did jurors wrongfully convict Santae Tribble of murdering a Washington,
D.C., taxi driver, despite his rock-solid alibi supported by witness
testimony? “The main evidence was the hair in the stocking cap,” a juror
told reporters. “That’s what the jury based everything on.” [Henry Gass / Christian Science Monitor] But
it was someone else’s hair. Twenty-eight years later, after Tribble had
served his entire sentence, DNA evidence excluded him as the source of
the hair. Incredibly, DNA analysis established that one of the crime
scene hairs, initially identified by an examiner as a human hair,
belonged to a dog. [Spencer S. Hsu / Washington Post]
Labs are not independent — and that can lead to biased decision-making.
Crime labs are often embedded in police departments, with the head of the lab reporting to the head of the police department. [“A Path Forward” / National Academy of Sciences] In some places, prosecutors write lab workers’ performance reviews. [Radley Balko / HuffPost]
This gives lab workers an incentive to produce results favorable to the
government. Research has also shown that lab technicians can be
influenced by details of the case and what they expect to find, a
phenomenon known as “cognitive bias.” [Sue Russell / Pacific Standard] Lab workers may also have a financial motive. According to a 2013 study,
many crime labs across the country received money for each conviction
they helped obtain. At the time, statutes in Florida and North Carolina
provided remuneration only “upon conviction”; Alabama, Arizona,
California, Missouri, Wisconsin, Tennessee, New Mexico, Kentucky, New
Jersey, and Virginia had similar fee-based systems. [Jordan Michael Smith / Business Insider] In
North Carolina, a state-run crime lab produced a training manual that
instructed analysts to consider defendants and their attorneys as
enemies and warned of “defense whores” — experts hired by defense
attorneys. [Radley Balko / Reason]
Courts are complicit.
Despite these flaws, judges regularly allow prosecutors to admit forensic
evidence. In place of evidentiary hearings, many take “judicial notice” of a
field’s reliability, accepting as fact that the field is accurate
without requiring the government to prove it. As Radley Balko from the Washington Post writes: “Judges continue to allow practitioners of these other fields to testify even after the scientific community has discredited them, and even after
DNA testing has exonerated people who were convicted, because
practitioners from those fields told jurors that the defendant and only
the defendant could have committed the crime.” [Radley Balko / Washington Post]
In
Blair County, Pennsylvania, in 2017, Judge Jolene G. Kopriva ruled that
prosecutors could present bite mark testimony in a murder trial.
Kopriva didn’t even hold an evidentiary hearing to examine whether bite mark
analysis is reliable, notwithstanding mounting criticism of the field.
Why? Because courts have always admitted it. [Kay Stephens / Altoona Mirror]
Getting it wrong
Not
surprisingly, flawed evidence leads to flawed outcomes. According to
the Innocence Project, faulty forensic testimony has contributed to 46
percent of all wrongful convictions in cases with subsequent DNA
exonerations. [Innocence Project]
Similarly, UVA Law Professor Brandon Garrett examined legal documents
and trial transcripts for the first 250 DNA exonerees, and discovered
that more than half had cases tainted by “invalid, unreliable,
concealed, or erroneous forensic evidence.” [Beth Schwartzapfel / Newsweek]
Hair analysis
In
2015, the FBI admitted that its own examiners presented flawed
microscopic hair comparison testimony in over 95 percent of cases over a
two-decade span. Thirty-three people had received the death penalty in
those cases, and nine were executed. [Pema Levy / Mother Jones]
Kirk Odom, for example, was wrongfully imprisoned for 22 years because
of hair evidence. Convicted of a 1981 rape and robbery, he served his
entire term in prison before DNA evidence exonerated him in 2012. [Spencer S. Hsu / Washington Post] In
1985, in Springfield, Massachusetts, testimony from a hair matching
“expert” put George Perrot in prison — where he stayed for 30
years — for a rape he did not commit. The 78-year-old victim said Perrot
was not the assailant, because, unlike the rapist, he had a beard.
Nonetheless, the prosecution moved forward on the basis of a single hair
found at the scene that the examiner claimed could only match Perrot.
Three decades later, a court reversed the conviction after finding no
scientific basis for a claim that a specific person is the only possible
source of a hair. Prosecutors have dropped the charges. [Danny McDonald / Boston Globe] In
1982, police in Nampa, Idaho, charged Charles Fain with the rape and
murder of a 9-year-old girl. The government claimed Fain’s hair matched
hair discovered at the crime scene. A jury convicted him and sentenced
him to death. DNA testing later exonerated him, and, in 2001, after he’d
spent two decades in prison, a judge overturned his conviction. [Raymond Bonner / New York Times]
Bite mark analysis
In 1999, 26 members of the American Board of Forensic Odontology
participated in an informal proficiency test regarding their work on
bite marks. They were given seven sets of dental molds and asked to
match them to four bite marks from real cases. They reached erroneous
results 63 percent of the time. [60 Minutes] One bite mark study has shown that forensic dentists can’t even determine if a bite mark is caused by human teeth. [Pema Levy / Mother Jones] That
didn’t keep bite mark “expert” Michael West from testifying in trial
after trial. In 1994, West testified that the bite mark pattern found on
an 84-year-old victim’s body matched Eddie Lee Howard’s teeth. Based
largely on West’s testimony, the jury convicted Howard and sentenced him
to death. Experts have since called bite mark testimony “scientifically unreliable.”
And sure enough, 14 years later, DNA testing on the knife believed to
be the murder weapon excluded Howard as a contributor. Yet the state
continues to argue that Howard’s conviction should be upheld on the
basis of West’s testimony. [Radley Balko / Washington Post] West, who was suspended from the American Board of Forensic Odontology in 1994 and effectively forced to resign from it in 2006, is at least partially responsible for several other wrongful convictions as well. [Radley Balko / Washington Post] West
himself has even discredited his own testimony, now stating that he “no
longer believe[s] in bite mark analysis. I don’t think it should be
used in court.” [Innocence Project]
Fingerprint analysis
An FBI study has found that fingerprint examiners’ error rate, or false match rate, could be as high as 1 in 306 cases, while another study indicates examiners get it wrong as often as 1 in every 18 cases. [Jordan Smith / The Intercept]
A third study of 169 fingerprint examiners found a 7.5 percent false
negative rate (where examiners erroneously found prints came from two
different people), and a 0.1 percent false positive rate. [Kelly Servick / Science Mag]
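To put these figures on a common scale (a back-of-the-envelope conversion, not a number reported by the studies themselves), the estimated false positive rates span more than an order of magnitude:

$$\frac{1}{306} \approx 0.33\%, \qquad \frac{1}{18} \approx 5.6\%, \qquad 0.1\% = \frac{1}{1000}$$

In other words, depending on which study one credits, examiners falsely declare a match anywhere from roughly once in a thousand comparisons to more than once in twenty.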
In
2004, police accused American attorney Brandon Mayfield of the
notorious Madrid train bombing after experts claimed his fingerprint
matched one found on a bag of detonators. Eventually, four experts
agreed with this finding. Authorities arrested and detained him for two
weeks until they realized their mistake and were forced to release
him. [Steve Pokin / Springfield News-Leader]
In
Boston, Stephan Cowans was convicted, in part on fingerprint evidence,
in the 1997 shooting of a police officer. But seven years later, DNA
evidence exonerated him and an examiner stated that the match was
faulty. [Innocence Project] A
2012 review of the St. Paul, Minnesota, crime lab found that over 40
percent of fingerprint cases had “seriously deficient work.” And “[d]ue
to the complete lack of annotation of actions taken during the original
examination process, it is difficult to determine the examination
processes, including what work was attempted or accomplished.” [Madeleine Baran / MPR News]
Firearm analysis
According to one study, firearm examiners may have a false positive rate as high as 2.2 percent, meaning analysts may erroneously declare a match as frequently as 1 in 46 times. This is a far cry from the “near perfect” accuracy that examiners often claim. [President’s Council of Advisors on Science and Technology] In 1993, a jury convicted Patrick Pursley of murder on the basis of firearms testimony. The experts declared that casings and bullets found on the scene matched a gun linked to Pursley “to the exclusion of all other firearms.” Years later, an expert for the state agreed that the examiner should never have made such a definitive statement. Instead, he should have stated that Pursley’s gun “couldn’t be eliminated.” In addition, the defense’s experts found that Pursley’s gun was not the source of the crime scene evidence. Digital imaging supported the defense. [Waiting for Justice / Northwestern Law Bluhm Legal Clinic] In 2017, a court granted Pursley a new trial. [Georgette Braun / Rockford Register Star]
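The “1 in 46” figure is simply the reciprocal of the reported rate (a quick arithmetic check, not an additional finding from the study):

$$2.2\% = 0.022, \qquad \frac{1}{0.022} \approx 45.5 \approx 46$$

An examiner operating at that rate would thus be expected to declare a false match roughly once in every 46 identifications.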
Rethinking faulty forensics
Scientists
from across the country are calling for the justice system to rethink
its willingness to admit pattern-matching evidence. In 2009, the National Research Council of the National Academy of Sciences released a groundbreaking report concluding that forensic science methods “typically lack mandatory and enforceable standards, founded on rigorous research and testing, certification requirements, and accreditation programs.” [Peter Neufeld / New York Times] In 2016, the President’s Council of Advisors on Science and Technology (PCAST),
a group of pre-eminent scientists, issued a scathing report on
pattern-matching evidence. The report concluded that most of these fields lacked “scientific validity” — i.e., research showing examiners could accurately and reliably do their jobs. [Jordan Smith / The Intercept] Until better research proved their accuracy, the Council stated, such methods had no place in the American courtroom. Regarding bite mark analysis, the report found the error rate so high that resources shouldn’t be wasted trying to show the technique could ever be used accurately. [Radley Balko / Washington Post] After
the PCAST report came out, then-Attorney General Loretta Lynch, citing
no studies, stated emphatically that “when used properly, forensic
science evidence helps juries identify the guilty and clear the
innocent.” [Jordan Smith / The Intercept]
“We appreciate [PCAST’s] contribution to the field of scientific
inquiry,” Lynch said, “[but] the department will not be adopting the
recommendations related to the admissibility of forensic science
evidence.” [Radley Balko / Washington Post] The National District Attorneys Association (NDAA) called the PCAST report “scientifically irresponsible.” [Jessica Pishko / The Nation]
“Adopting any of their recommendations would have a devastating effect
on the ability of law enforcement, prosecutors and the defense bar to
fully investigate their cases, exclude innocent suspects, implicate the
guilty, and achieve true justice at trial,” the association noted. [Rebecca McCray / Take Part] The NDAA also wrote that
PCAST “clearly and obviously disregard[ed] large bodies of scientific
evidence … and rel[ied], at times, on unreliable and discredited
research.” But when PCAST sent out a subsequent request for additional
studies, neither the NDAA nor the Department of Justice identified any. [PCAST Addendum] This
problem is getting worse under the current administration. Attorney
General Jeff Sessions has disbanded the National Commission on Forensic
Science, which was formed to improve both the study and use of forensic
science and had issued over 40 consensus recommendation documents toward
that end. [Suzanne Bell / Slate]
He then developed a DOJ Task Force on Crime Reduction and Public
Safety, tasked with “support[ing] law enforcement” and “restor[ing]
public safety.” [Pema Levy / Mother Jones] But there are also new attempts to rein in the use of disproven forensic methods. In Texas, the
Forensic Science Commission has called for a ban on bite mark evidence. “I
think pretty much everybody agrees that there is no scientific basis for
a statistical probability associated with a bite mark,” said Dr. Henry
Kessler, chair of the subcommittee on bite mark analysis. [Meagan Flynn / Houston Press] A
bill before the Virginia General Assembly, now carried over until 2019,
would provide individuals convicted on now-discredited forensic science
a legal avenue to contest their convictions. The bill is modeled after
similar legislation enacted in Texas and California. The Virginia
Commonwealth’s Attorneys Association opposes the legislation, arguing:
“It allows all sorts of opportunities to ‘game’ the system.” [Frank Green / Richmond Times-Dispatch] Meanwhile, at least one judge has recognized
the danger of forensic expert testimony. In a 2016 concurrence, Judge
Catherine Easterly of the D.C. Court of Appeals lambasted expert
testimony about toolmark matching: “As matters currently stand, a
certainty statement regarding toolmark pattern matching has the same
probative value as the vision of a psychic: it reflects nothing more
than the individual’s foundationless faith in what he believes to be
true. This is not evidence on which we can in good conscience rely,
particularly in criminal cases … [T]he District of Columbia courts must
bar the admission of these certainty statements, whether or not the
government has a policy that prohibits their elicitation. We cannot be
complicit in their use.” [Spencer S. Hsu / Washington Post]
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/c