POST: "Investigative Report Details Flaws in Forensic "Science," published by The Equal Justice Initiative (EJI) on February 7, 218. The EJI is an organization "committed to ending mass incarceration and excessive punishment in the United States, to challenging racial and economic injustice, and to protecting basic human rights for the most vulnerable people in American society."
GIST: "Investigative reporters for the Nation found a troubling lack of scientific support for forensic pattern-matching techniques like toolmark and bitemark analysis. They concluded that the legal system has failed to keep unreliable, unscientific evidence out of the courtroom, even in capital cases, and prosecutors are working to preserve and even expand their ability to present such evidence.
Forensic "Science" Is Not Really Science: In
contrast with forensic DNA analysis, which relies on scientific
principles like the known variations in the human genome,
"pattern-matching" disciplines that compare bite marks, hairs, shoe
prints, tire tracks, or fingerprints involve an enormous amount of
subjective judgment in determining what counts as a match. The Nation
specifically investigated handheld toolmark analysis, in which
examiners compare marks left by knives, bolt cutters, bayonets,
scissors, screwdrivers, pipe wrenches, pliers, or wire-strippers. Toolmark
analysis emerged out of a national push in the early 20th century to
professionalize police investigative techniques, The Nation reports. Law
enforcement borrowed terms from science, establishing crime
'laboratories' staffed by forensic 'scientists' who announced 'theories'
couched in their own specialized jargon. But forensic 'science' focused
on inventing clever ways to solve cases and win convictions; it was
never about forming theories and testing them according to basic
scientific standards. By adopting the trappings of science, the forensic
disciplines co-opted its authority while abandoning its methods. In 2009, an independent survey by
the National Academy of Sciences found that "[m]uch forensic
evidence—including, for example, bitemarks and firearm and toolmark
identifications—is introduced in criminal trials without any meaningful
scientific validation, determination of error rates, or reliability
testing to explain the limits of the discipline." The report found
no scientific basis for forensic examiners' claims of certainty in
court, no professional guidelines for testimony, no standard
accreditation or certifications for labs, and little research on
variability, reliability, or human bias. What little research has
been done eviscerates forensic examiners' claims of infallibility.
Bite-mark examiners claimed a coincidental match would occur less than
one in 10 quadrillion times, but when actually tested, the most
experienced examiners were wrong about one in six times. The FBI
Firearms-Toolmarks Unit chief claimed a qualified examiner will rarely
if ever make a misidentification, but in 2008, the Detroit Police
Department's crime lab was shut down when auditors found that its
examiners made one error in every 10 cases. And after the head of the
FBI’s fingerprint laboratory claimed its error rate was one in 11
million, tests of fingerprint examiners showed error rates as high as
one in 24. In 2013, the Justice Department established the
National Commission on Forensic Science. Composed of forensic
practitioners, scientists, and attorneys, the commission recommended a new code of
professional conduct for laboratories and advised that examiners stop
using the misleading phrase "to a reasonable degree of scientific
certainty" in testimony. The President's scientific advisory
council, PCAST, concluded in a follow-up report three years later that
"lack of rigor in the assessment of the scientific validity of forensic
evidence" is "a real and significant weakness in the judicial system." The council
identified a lack of scientific support for firearm and toolmark
analysis, noting that only one appropriately designed study on error
rates existed, and it showed a false-match rate of one in 100. The Nation's review found a similarly shocking lack of empirical data on error rates in handheld toolmark examination. Courts Have Not Kept Unreliable, Unscientific Evidence Out of the Courtroom: Judges
decide what evidence can be presented in court. Most of them lack
scientific training and the ability to assess the scientific validity of
a forensic technique, and the legal standard in place for decades did
not require the prosecution to prove that a technique is reliable. Once a
technique has been allowed into court, subsequent judges continue to
allow it by citing precedent—which forensic examiners also cite to claim
their techniques are reliable. In this circular way, legal rulings—which never really vetted the science to begin with—substitute for scientific proof. In
1993, the Supreme Court's decision in Daubert v. Merrell Dow Pharmaceuticals directed judges
to admit only scientific evidence supported by testable claims and required
proponents of the evidence to provide measures of how often examiners make mistakes.
Federal courts and most states have adopted this standard, but it has
little impact because most judges still rely on precedent. As Harry T.
Edwards, chief judge of the D.C. Circuit Court of Appeals, explained: "Judges
believe that because we said it before, it must be right, and because
these practitioners have been around for a long time, it must be right.
In other words, history is the proof." Recently, a few judges have acknowledged the problem. In 2016, D.C. Court of Appeals Judge Catherine Easterly wrote in a robbery case involving a firearm: "As
matters currently stand, a certainty statement regarding toolmark
pattern matching has the same probative value as the vision of a
psychic: it reflects nothing more than the individual's foundationless
faith in what he believes to be true." Prosecutors Preserve and Even Expand Reliance on Forensic Evidence: Most
prosecutors resist reform because it could weaken one of their most
powerful tools, threaten current cases, and call past convictions into
question. The Justice Department knew for years that its hair
comparison examiners made mistakes, but even after a whistle-blower
forced a review of 2,900 cases, the department kept the review secret. In
2015, it finally conceded
that its examiners gave flawed testimony in 96 percent of cases,
including 33 of 35 death penalty cases reviewed. Nine of those
defendants had already been executed. Because forensic analysts
identify as part of the prosecution team, unconscious cognitive bias can
infect results, such as examiners unconsciously seeking evidence that
confirms their colleagues' view of the case. The NAS report
stressed the need for reform to be independent of prosecutorial
agencies, but Attorney General Jeff Sessions allowed the National
Commission on Forensic Science to expire, suspended the ongoing review
of standards for examiner testimony, and put DOJ back in charge of
forensic science "reform." The accelerating use of digital
forensics — already, artificial intelligence predicts criminal
"hotspots" and software purports to analyze tiny amounts of DNA in
complex mixtures from multiple people — underscores the urgency of
addressing the problems in forensics that lead to wrongful convictions.
Experts warn that these issues, including secrecy, pro-prosecution bias,
inflated claims that lack empirical support, and courts' failure to
keep out unreliable evidence, will be compounded as forensics becomes
more automated.
The entire post can be found at:
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/c