PASSAGE OF THE DAY: "Scientists have to be vigilant about combating unconscious bias by conducting double-blind studies and subjecting their work to peer review and statistical analysis. To gain acceptance in the scientific community, studies must also be reproducible. To be legitimate, a scientific test should have a calculable margin for error. None of this is true in the pattern-matching fields of forensics. So in response, defenders of these disciplines have shifted: These fields aren’t really science. They’re “soft sciences,” similar to fields such as psychiatry or economics. They might not undergo the rigors of the scientific method, the argument goes, but they still have evidentiary value. This is the line that Rosenstein and his boss, Attorney General Jeff Sessions, have taken at the Justice Department in brushing aside scientists’ criticism. The Obama administration created the National Commission on Forensic Science so that scientists could assess the reliability and validity of some of these areas of forensics. One of Sessions’s first acts as attorney general was to allow the commission’s charter to expire without renewal. In his talk last year, Rosenstein announced a new program that would evaluate forensic fields, but it would be within the Justice Department, it would not include any “hard” scientists, and it would be led by a career prosecutor with a history of opposing efforts to bring transparency, accountability and scientific accuracy to forensics. Here’s Rosenstein’s argument from his talk on Tuesday."
-----------------------------------------------------------
COMMENTARY: "Rod Rosenstein still doesn't get the problem with forensics," by Radley Balko, published on his blog 'The Watch' at The Washington Post, on August 9, 2018.
GIST: "Deputy Attorney General Rod J. Rosenstein gave a speech on Tuesday to the National Symposium on Forensic Science in Washington. This isn’t his first such speech: He gave a similar talk in February to the American Academy of Forensic Sciences conference and another about this time last year to the International Association for Identification. I critiqued that last speech
here at The Watch. In the year since, nothing much has changed. Despite
a stream of crime lab scandals, the doubt cast on forensics by DNA
exonerations and blistering critiques of entire fields of forensics from
the scientific community, Rosenstein insists that we should stop
insisting that “forensic science” meet the standards of “science,” and
that we should trust the Justice Department to fix these problems
internally, without input from independent scientific bodies. For decades, police and prosecutors have pushed the fields of forensics known as pattern matching as a science. They
got away with it because the scientific community largely steered clear
of the criminal-justice system. But in the 1990s, DNA testing — a
field that was developed and honed in the scientific community —
became common. DNA tests started to show that some of the people that
forensics experts had declared guilty were, in fact, innocent. In the
years since, the scientific community has become increasingly vocal
about, well, the lack of science in forensic science, particularly in pattern-matching disciplines. In
most pattern-matching fields, an analyst looks at two pieces of
evidence — fingerprints, bite marks, the ballistics marks on bullets,
footprints, tire tracks, hair fibers, clothing fibers, or “tool marks”
from a screwdriver, hammer, pry bar or other object — and determines
whether they’re a match. In others, like blood-spatter analysis,
experts don’t even attempt to match two pieces of evidence. They simply
draw conclusions based on assumptions about how blood moves through the
air. These are entirely subjective fields. And that’s the heart of the
problem. Even objective fields of science are plagued by confirmation
bias. Scientists have to be vigilant about combating unconscious bias by
conducting double-blind studies and subjecting their work to peer
review and statistical analysis. To gain acceptance in the scientific
community, studies must also be reproducible. To be legitimate, a
scientific test should have a calculable margin for error. None
of this is true in the pattern-matching fields of forensics. So in
response, defenders of these disciplines have shifted: These fields
aren’t really science. They’re “soft sciences,” similar to fields such
as psychiatry or economics. They might not undergo the rigors of the
scientific method, the argument goes, but they still have evidentiary
value. This
is the line that Rosenstein and his boss, Attorney General Jeff
Sessions, have taken at the Justice Department in brushing aside
scientists’ criticism. The Obama administration created the National Commission on Forensic Science so that scientists could assess the reliability and validity of some of these areas of forensics. One of Sessions’s first acts as attorney general was to allow the commission’s charter to expire without renewal. In his talk last year,
Rosenstein announced a new program that would evaluate forensic fields,
but it would be within the Justice Department, it would not include any
“hard” scientists, and it would be led by a career prosecutor
with a history of opposing efforts to bring transparency,
accountability and scientific accuracy to forensics. Here’s Rosenstein’s
argument from his talk on Tuesday. "Most
of you work on the front lines of the criminal justice system, where
forensic science has been under attack in recent years. Some critics
would like to see forensic evidence excluded from state and federal
courtrooms. You regularly face Frye and Daubert motions that challenge the admission of routine forensic methods. Many
of the challenged methods involve the comparison of evidence patterns
like fingerprints, shell casings, and shoe marks to known sources.
Critics argue that the methods have not undergone the right type or
amount of validation, or that they involve too much human interpretation
and judgment to be accepted as “scientific” methods. Those
arguments are based on the false premise that a scientific method must
be instrument-based, automated, and quantitative, excluding human
interpretation and judgment. Such critiques contributed to a recent
proposal to amend Federal Rule of Evidence 702 for cases involving
forensic evidence. The effort stems from an erroneously narrow view of
the nature of science and its application to forensic evidence. Federal
Rule of Evidence 702 uses the phrase “scientific, technical, or other
specialized knowledge,” which makes clear that it is designed to permit
testimony that calls on skills and judgment beyond the knowledge of
laypersons, and not merely of scientists who work in laboratories. Forensic
science is not only quantitative or automated. It need not be entirely
free from human assumptions, choices, and judgments. That is not just
true of forensic science. It is also the case in other applied expert
fields like medicine, computer science, and engineering." Often
when pattern-matching analysts testify, they go to great lengths to
describe how careful and precise they are at collecting and preserving
evidence. They talk about all the precautions and steps they take before
performing their analysis. It can sound impressive — and it’s all
entirely beside the point. You can be the most careful, precise and
cautious expert witness on the planet when it comes to preparing
evidence for analysis, but if your actual analysis is no more than
“eyeballing it,” your method of analysis still isn’t science. Rosenstein’s
speech on Tuesday has a similar effect. It’s all true, it all sounds
impressive … and it all misses the point entirely. That the federal
rules of evidence allow for expert testimony that “is not only
quantitative or automated” is precisely the problem. That’s how the system got into trouble. Rosenstein then went on to describe what the Justice Department is
doing to improve forensic testimony, such as closer monitoring and
evaluation of the testimony of FBI experts, and instituting uniform
language that experts should use to quantify their level of certainty.
Both initiatives, he said, are “designed to maintain the consistency and
quality of our lab reports and testimonial presentations to ensure that
they meet the highest scientific and ethical standards.” Again,
both of these initiatives sound impressive. But if the testimony of
pattern-matching experts is being evaluated by other pattern-matching
experts, by federal law enforcement agents who buy into pattern-matching
analysis, or really by anyone who stands to benefit from a
less-skeptical outlook on forensics, you aren’t really changing
anything. I’ve used this analogy many times, but it fits: If you were to
assemble a commission to evaluate the scientific validity of tarot card
reading, you wouldn’t populate that commission with other tarot card
readers. Yet this is one of the most common critiques law enforcement
officials make of the various scientific bodies that have issued
warnings about forensics — that they lack any members who actually
practice the fields of forensics being criticized.
There’s
a similar issue with uniformity of language. Yes, if there were a
standard set of phrases all forensic analysts used to express their
level of certainty about a piece of evidence, that would be preferable
to not having such a system. But if the analysis itself
is based on little more than each expert’s subjective judgment — if
there’s no measurable, quantifiable, reproducible explanation for why a
hair sample is “consistent with” a suspect rather than “a match” to the
suspect — then everything boils down to the credibility of that expert. None
of this is to say that all pattern-matching fields are useless. Some —
like bite-mark matching — have little to no value at all and should be
prohibited from courtrooms. Other fields could be useful in excluding
possible suspects but are less reliable at identifying one suspect to
the exclusion of all others, such as hair fiber analysis. And a few,
like fingerprint analysis, could still be useful for that sort of
identification, though even here analysts often overstate their certainty. So how should
we assess which fields of forensics are legitimate and which aren’t?
Since Rosenstein and other advocates object to the term “scientific” —
though note that in the very same speech, Rosenstein can’t help using
the term to describe the Justice Department’s reforms — let’s set that
debate aside. If we’re going to allow forensic expert witnesses to
“match” two or more pieces of evidence in order to implicate a suspect,
what is it that we want that testimony to be? If it isn’t that it be
scientific, or that it adhere to Justice Department standards, or that
it be within the guidelines of some obscure forensic governing body,
what is it? I think there are two things we’re looking for. First, we want these analysts to be right.
If an expert says the evidence implicates a suspect, we want that
suspect actually to be guilty. If a fingerprint analyst says a print
found at the crime scene matches a suspect, we want that suspect to at
least have been at the crime scene. Second, we want expert testimony to be reliable.
In too many areas of pattern-matching forensics, you’ll often have two
reputable, certified experts offer diametrically opposing testimony
about the same piece of evidence. If two well-regarded experts can look
at the same piece of evidence and come to opposite conclusions, there
isn’t enough certainty about that particular field to include it in a
court of law. (Of course, if two experts contradict one another at
trial, that also invokes the first rule — one of them must be wrong.) At
this point, jurors are no longer assessing the facts; they’re assessing
which expert they find more credible. And when we assess experts’
credibility, we tend to look at all sorts of factors that have little to
do with the facts, such as their clothes, their mannerisms and the
attorney questioning them. In fact, witnesses who offer their opinions
with resolute yet baseless certainty will often seem more credible to
jurors than experts who couch their opinions in the careful language of a
scientist. So here’s a proposal: For each
field of pattern-matching forensics, we need an independent body to
administer a proficiency test that measures accuracy, reliability or
both. In the field of ballistics, for example, it wouldn’t be difficult
to ask analysts to match a given number of bullets to a given number of
guns. If they don’t meet a minimum level of accuracy, they’d be barred
from testifying in court. (Given the stakes, that minimum standard
should probably be close to 100 percent.) You could do the same for many
other fields: If you’re giving testimony about footprint matches that
sends people to prison, it doesn’t seem overly onerous to ask you to
first prove that you know how to match footprints. For
some fields — such as bite-mark or blood-spatter analysis, or tool
marks on human skin — an accuracy test would be difficult: We can’t
really bite people or slash them to see how their blood splashes against
the wall. For these fields, we could instead measure the field’s
reliability as a whole, using photos from previous cases. If these
fields are legitimate, there should be widespread consensus among
practitioners on what conclusions — if any — can be drawn from the
evidence. For blood-spatter evidence,
for example, there should be wide agreement about whether a photo of
blood spatter on a wall indicates there was a struggle, whether a point
of origin can be deduced from the spatter, and if so, what that point
of origin is. Again, if there’s no consensus (and again, the bar here
should be pretty high), then we should reconsider whether we want to
allow experts from these fields to testify in court at all. For
bite-mark analysis, there should be strong consensus over whether a
mark in a photo really is a bite, if it’s a human bite, and if it can be
matched to a sample bite plate from a possible suspect. In fact, a few
years ago, the leading certifying organization for bite-mark analysts administered just such a test.
The results were disconcerting. Of the 100 photos, there were just
eight on which 90 percent or more of the test-takers agreed on whether
the photo depicted a human bite that could be analyzed as a possible
match. And the study showed that the more experience analysts had, the less
agreement between them about whether the marks were even human bites.
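The study's consensus criterion is easy to make concrete: for each photo, tally the analysts' yes/no calls and check whether a supermajority landed on the same answer. Here is a minimal sketch of that tally, using entirely hypothetical response data (the actual study's data and code are not reproduced here):

```python
# Illustrative sketch only -- not the study's data or methodology.
# Given each analyst's yes/no call on whether a photo shows an
# analyzable human bite, find the photos where at least 90 percent
# of analysts gave the same answer (in either direction).

def consensus_photos(calls, threshold=0.90):
    """calls: list of per-photo lists of boolean analyst responses.
    Returns indices of photos reaching the agreement threshold."""
    consensus = []
    for i, votes in enumerate(calls):
        yes_share = sum(votes) / len(votes)
        # Agreement counts whether the supermajority said yes OR no.
        if yes_share >= threshold or (1 - yes_share) >= threshold:
            consensus.append(i)
    return consensus

# Hypothetical data: 3 photos, 10 analysts each.
calls = [
    [True] * 10,               # unanimous: consensus
    [True] * 9 + [False],      # 90 percent agree: consensus
    [True] * 6 + [False] * 4,  # split: no consensus
]
print(consensus_photos(calls))  # prints [0, 1]
```

By this measure, the study found only 8 of 100 photos cleared the 90 percent bar, which is the basis for Balko's skepticism about the field's reliability.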
The study didn’t even get to the point of actually asking the analysts
to match the marks to possible suspects, but it seems safe to presume
that there would be even less consensus there. (Despite this study, of
the two courts in the United States that have heard challenges to
bite-mark evidence since it came out, both sided with prosecutors and
allowed the evidence to be admitted.) At the
end of the day, the most important thing about expert testimony is that
it be correct — that jurors aren’t misled, and innocent people aren’t
getting implicated. So let’s test just how right these experts are. If
Rosenstein is right about the current state of forensics science, the
analysts will pass with flying colors, or we’ll at least find that
there’s widespread consensus on the subjective but critical questions in
these fields. But I suspect this won’t be the case."
The entire commentary can be read at:
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/
Harold Levy: Publisher; The Charles Smith Blog;