PUBLISHER'S NOTE: Note the 'Glossary of Bias' below: Very helpful. HL;
-----------------------------------------------
PASSAGE OF THE DAY: "Dror’s previous studies on bias in forensics caused grumbling, but nothing like the reaction to the 2021 paper. This time, he used a survey to see whether bias could affect decision-making among medical examiners. He concluded that nonmedical evidence such as the race of the decedent or their relation to the caregiver—details that most medical examiners routinely consider—was actually a source of bias. Eighty-five of the country’s most prominent pathologists demanded its retraction. The National Association of Medical Examiners (NAME) alleged ethical misconduct and demanded that Dror’s employer, UCL, stop his research. The editor of the Journal of Forensic Sciences wrote that he hadn’t seen so many arguments in the journal’s 65-year history, or so much anger. After decades challenging forensic experts, Dror had gotten into a fight that threatened his career." Read on: HL;
-------------------------------------------------
STORY: "The bias hunter: Itiel Dror is determined to reveal the role of bias in forensics, even if it sparks outrage," by journalist Douglas Starr, published by 'Science' on May 12, 2022.
GIST: "In February 2021, cognitive psychologist Itiel Dror set off a firestorm in the forensics community. In a paper, he suggested forensic pathologists were more likely to pronounce a child’s death a murder versus an accident if the victim was Black and brought to the hospital by the mother’s boyfriend than if they were white and brought in by the grandmother. It was the latest of Dror’s many experiments suggesting forensic scientists are subconsciously influenced by cognitive biases—biases that can put innocent people in jail.
Dror, a researcher at University College London (UCL), has spent decades using real-world cases and data to show how experts in fields as diverse as hospital care and aviation can reverse themselves when presented with the same evidence in different contexts. But his most public work has involved forensic science, a field reckoning with a history of unscientific methods. In 2009, the National Research Council published a groundbreaking report concluding that most forensic sciences—including the analysis of bullets, hair, bite marks, and even fingerprints—are based more on tradition than on quantifiable science. Since then, hundreds of studies and legal cases have revealed flaws in forensic sciences.
Dror’s work forms a connective tissue among them. He has shown that most problems with forensics do not originate with “bad apple” technicians who have infiltrated crime labs. Rather they come from the same kind of subconscious bias that affects everyone’s daily decisions—the shortcuts and generalizations our brains rely on to process reality. “We don’t actually see the environment,” Dror says. “We perceive stimuli from the environment that our brain represents to us,” shaped by feelings and past experience.
“In the span of a decade, cognitive bias went from being almost totally unheard of in forensics to common knowledge in the lab,” Brandon Garrett, a professor at the Duke University School of Law, wrote in his book Autopsy of a Crime Lab: Exposing the Flaws in Forensics. “We can especially thank Itiel Dror for helping bring about the sea change.”
Dror now travels the world testifying in trials, taking part in commissions, and offering training to police departments, forensic laboratories, judges, militaries, corporations, government agencies, and hospitals. National agencies, forensic labs, and police forces have adopted his approach to shielding experts from information that could bias them.
“I don’t know anybody else who’s doing everything that Itiel is doing,” says Bridget Mary McCormack, chief justice of the Michigan Supreme Court, who worked with Dror on a U.S. Department of Justice task force and collaborated on studies with him. “His work is monumentally important to figuring out how we can do better. To my mind it’s critical to the future of the rule of law.”
Dror’s previous studies on bias in forensics caused grumbling, but nothing like the reaction to the 2021 paper. This time, he used a survey to see whether bias could affect decision-making among medical examiners. He concluded that nonmedical evidence such as the race of the decedent or their relation to the caregiver—details that most medical examiners routinely consider—was actually a source of bias.
Eighty-five of the country’s most prominent pathologists demanded its retraction. The National Association of Medical Examiners (NAME) alleged ethical misconduct and demanded that Dror’s employer, UCL, stop his research. The editor of the Journal of Forensic Sciences wrote that he hadn’t seen so many arguments in the journal’s 65-year history, or so much anger. After decades challenging forensic experts, Dror had gotten into a fight that threatened his career.
DROR APPEARS to be a mild-mannered man, with salt-and-pepper hair and wire-rimmed glasses; but that impression disappears the moment he begins talking. He gains momentum like a runaway train, detailing his latest study, making a quick detour to bring up an example, slipping in a funny anecdote, then circling around to put a cap on his original point. He speaks with a mixture of accents and intonations from his upbringing in Israel, his graduate work in the United States, and his professional life in the United Kingdom.
Two diamond studs in his left ear hint at a nonconformist streak. As the child of academic parents who took frequent sabbaticals, Dror attended five elementary schools on three continents. “I’d be the new kid who didn’t know the language very well,” he says. “I didn’t have time to assimilate or conform. It was very difficult, but it gave me a lot of independence of thought.”
Dror hated reading and the discipline of school. But things turned around at age 19, after he broke his back during paratrooper training in the Israeli army—an incident he describes as a “destructively creative” shake-up to his system. Confined to a body cast for 7 months, he took to reading the books he had ignored during high school. He home-tested and got As in the courses he had previously failed. He studied philosophy at Tel Aviv University, took a year off to work on a kibbutz, and later did a doctorate in psychology at Harvard University, studying mental imagery and decision-making.
One of his projects examined how U.S. Air Force pilots use mental imagery to recognize enemy jets traveling at high speeds. The work caught the attention of David Charlton, a prominent British fingerprint examiner who had started to have doubts about his field.
“I often wondered if when making fingerprint comparisons my eyes were the same from one day to the next,” Charlton says. “And then I came across this paper suggesting that the perception of aircraft pilots could change, depending on stresses or circumstances. And I wondered if it applies to fingerprints as well.”
Fingerprints don’t lie. … But it’s also true that fingerprints don’t speak. It’s the human examiner who makes the judgment, and humans are fallible.
- Itiel Dror, University College London
He had reason for concern. The United Kingdom had been shaken by the scandal of Shirley McKie, a Scottish police constable who was charged with perjury after investigators claimed to find her thumbprint at a murder scene in 1997. McKie was cleared when two American experts testified that the thumbprint could not have been hers. The Americans had their own scandal in 2004, when the FBI detained an American lawyer, Brandon Mayfield, as a suspect in a terrorist bombing of a Madrid train station. Among 20 near-matches in their fingerprint database, agents focused on Mayfield, who had converted to Islam and provided legal defense to a Portland, Oregon, resident with Taliban connections. When Spanish authorities found the real bomber, Mayfield sued the U.S. government, which agreed to a $2 million settlement.
Those cases deepened Charlton’s doubts about his own objectivity. He contacted Dror, who suggested they do some research together. They found five fingerprint experts who knew about the Mayfield case but had not seen the fingerprints. Dror and Charlton sent each expert a pair of prints from one of the expert’s own previous cases, which they had personally verified as “matched,” but told them the prints came from the notorious case of the FBI’s mismatch of Mayfield’s prints with the terrorist’s.
Four of the five experts contradicted their previous decision: Three now concluded the pair was a mismatch, and one felt he needed more information. They seemed to have been influenced by the passage of time and extraneous information.
“It was so simple and elegant,” Peter Neufeld, co-founder of the Innocence Project, says of the study. “And when people in the forensic community read it, they got it.”
In a follow-up study, Dror and Charlton gave six experts sets of prints they had previously examined along with biasing information—that the suspect had either confessed or had an alibi. Four of the six experts changed their past findings.
The results turned some of Charlton’s colleagues against him. “A lot of people wondered if I was trying to destroy the profession,” he says. Angry letters poured in to Fingerprint Whorld, the professional journal of which Charlton was editor. The chair of the Fingerprint Society wrote that any fingerprint examiner who could be swayed by images or stories “is so immature he/she should seek employment at Disneyland.”
Charlton was so upset by these reactions that he considered abandoning his career. “Don’t worry, this is normal,” he remembers Dror telling him. “It’s part of the human condition. Now let’s do more research and see how we can improve things.”
Kerry Robinson was exonerated after serving 18 years for a conviction based in part on contested DNA analysis. GEORGIA INNOCENCE PROJECT
Dror looked at other biasing factors in fingerprint analysis, some of which were shockingly innocuous. When police retrieve a print from a crime scene, they consult an FBI computer database containing millions of fingerprints and receive several possible matches, in order of the most likely possibilities. Dror found that experts were likely to pick “matches” near the top of the list even after he had scrambled their order, perhaps because of the subconscious tendency to overly trust computer technology.
“People would say to me fingerprints don’t lie,” Dror says. “And I would say yes, but it’s also true that fingerprints don’t speak. It’s the human examiner who makes the judgment, and humans are fallible.”
Dror and his colleagues are quick to point out that bias does not always equal prejudice, but it can foster injustice. Studies have shown, for example, that Black schoolchildren get punished more readily than white children for the same misbehavior, because many teachers subconsciously assume Black children will continue to misbehave. And in forensic science, bias can subconsciously influence experts to interpret data in a way that incriminates a suspect.
If something as seemingly infallible as fingerprints could be biased, what could be next? Dror set his sights on DNA. When the authors of the National Research Council study criticized forensic sciences, they made an exception for DNA analysis, a method developed in the lab that was statistically verifiable and scientifically sound.
But as DNA analysis has gotten more sensitive and sophisticated, it has also come to rely more on human interpretation. For example, when investigators find a mixture of several people’s DNA at a crime scene, it’s up to the analyst to tease apart the contributors. It’s a complicated and subtle process, one that Dror found can be influenced by context. Consider the case of Kerry Robinson in Georgia, who was accused in 2002 of taking part in a gang rape. The state based its case on the plea bargain testimony of Tyrone White, whom investigators had identified as the main perpetrator and who bore Robinson a grudge. The state’s two DNA experts found that Robinson’s DNA “could not be excluded” from the mixture of DNA found at the crime scene, and the jury found him guilty.
Greg Hampikian, a genetics professor then at the Georgia Innocence Project, sent DNA data from the case to Dror, who shared it with 17 DNA analysts unfamiliar with the case. Only one agreed with Georgia’s analysts; the other 16 either excluded Robinson’s DNA or said they could not come up with a result. Dror’s conclusion: Even DNA analysis, the “gold standard” of forensic science, was subject to human bias. The state did not release Robinson until 2020, when Hampikian submitted other exonerating information. Robinson had already served 18 years of his 20-year sentence.
Over the years Dror and other researchers have found bias just about everywhere they’ve looked—in toxicologists, forensic anthropologists, arson investigators, and others who must make judgments about often ambiguous crime scene evidence. Yet juries find forensic evidence compelling, Dror and others have found.
Many examiners feel “impervious to bias,” says Saul Kassin, a psychologist at John Jay College of Criminal Justice, “as if they’re not human like the rest of us.” In 2017, Kassin and Dror asked more than 400 forensic scientists from 21 countries about their perceptions of bias. They found that whereas nearly three-quarters of the examiners saw bias as a general problem, just over half (52%) saw it as a concern in their own specialty, and only 26% felt that bias might affect them personally.
A GLOSSARY OF BIAS
Itiel Dror and his collaborators have coined various terms to describe how bias sneaks into forensic analysis—and how experts perceive and react to their biases.
TARGET-DRIVEN BIAS Subconsciously working backward from a suspect to crime scene evidence, and thus fitting the evidence to the suspect—akin to shooting an arrow at a target and drawing a bull’s-eye around where it hits
CONFIRMATION BIAS Focusing on one suspect and highlighting the evidence that supports their guilt, while ignoring or dismissing evidence to the contrary
BIAS CASCADE When bias spills from one part of the investigation to another, such as when the same person who collects evidence from a crime scene later does the laboratory analysis and is influenced by the emotional impact of the crime scene
BIAS SNOWBALL A kind of echo chamber effect in which bias gets amplified because those who become biased then bias others, and so on
BIAS BLIND SPOT The belief that although other experts are subject to bias, you certainly are not
EXPERT IMMUNITY The belief that being an expert makes a person objective and unaffected by bias
ILLUSION OF CONTROL The belief that when an expert is aware of bias, they can overcome it by a sheer act of will
BAD APPLES The belief that bias is a matter of incompetence or bad character
TECHNOLOGICAL PROTECTION The belief that the use of technology, such as computerized fingerprint matching or artificial intelligence, guards against bias
Dror says the best approach to fighting bias is to shield experts from extraneous information, similar to the “blinding” in scientific experiments. He calls the process Linear Sequential Unmasking, in which the analyst only sees the evidence that’s directly relevant to their task.
Some authorities have endorsed the approach.
The United Kingdom’s Forensic Science Regulator recommends it as “the most powerful means of safeguarding against the introduction of contextual bias.” The FBI adopted the process following the Mayfield case: Because humans tend to see similarities between objects viewed side by side, agents now document the features of a crime scene fingerprint on its own before comparing it to a suspect’s prints.
After consultation with Dror, police in the Netherlands began to blind fingerprint examiners to details of a crime investigation that might influence their analysis, such as the condition of the body or the urgency of the case, says John Riemen, the police force’s lead biometrics specialist. The approach ensures “you’re looking at fingerprints, and not at your biases,” he says.
IT WAS AN ATTEMPT to win medical examiners over to this approach that landed Dror in hot water. In 2019, he got a message from Daniel Atherton, a pathologist at the University of Alabama, Birmingham, who wanted him to look at some data he had collected.
Atherton had sent a survey to 713 pathologists across the country positing one of two scenarios in which a toddler with a skull fracture and brain hemorrhage was brought to an emergency room and died shortly thereafter.
In one scenario, the child was white and was brought in by the grandmother. In the other, the child was Black and brought in by the mother’s boyfriend. The survey asked participants to decide whether the manner of death was undetermined, accidental, or homicide.
Dror analyzed the results and found that of the 133 people who answered the survey, 32 concluded the death was a homicide. And a disproportionate number of those—23—had received the scenario with the Black child and the boyfriend. Participants reading the “Black condition” were five times more likely to conclude homicide than accident, whereas participants in the “White condition” ruled accident more than twice as frequently as homicide.
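The "five times more likely" comparison is a simple ratio of rulings within each scenario condition. As a rough sketch of that arithmetic (the accident-ruling cell counts below are hypothetical, chosen only to mirror the pattern the article reports, not the paper's actual data):

```python
# Illustrative only: hypothetical 2x2 cell counts consistent with the
# article's summary (23 of 32 homicide rulings fell in the "Black
# condition"); the accident counts are assumed for demonstration.

def ruling_ratio(homicide: int, accident: int) -> float:
    """Ratio of homicide rulings to accident rulings within one condition."""
    return homicide / accident

black_condition = {"homicide": 23, "accident": 5}   # hypothetical accident count
white_condition = {"homicide": 9, "accident": 20}   # hypothetical accident count

b = ruling_ratio(black_condition["homicide"], black_condition["accident"])
w = ruling_ratio(white_condition["homicide"], white_condition["accident"])

# Black condition: homicide ruled ~5x as often as accident.
print(f"Black condition, homicide:accident = {b:.1f}")
# White condition: accident ruled more than 2x as often as homicide.
print(f"White condition, accident:homicide = {1 / w:.1f}")
```

With these assumed counts the ratios come out near the reported figures (about 4.6 and 2.2); the published paper reports the actual distribution across the undetermined, accident, and homicide categories.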
“Their decisions were noticeably affected by medically irrelevant contextual information,” Dror, Atherton, and their colleagues wrote in their paper, published in the Journal of Forensic Sciences.
The paper also included a survey of 10 years of Nevada death certificates showing an apparent correlation between Black deaths and findings of homicide versus accident—influenced, perhaps, by cultural biases. “I just wanted to get that information out there to begin a discussion,” Dror says of the study.
He got more of a discussion than he expected. The journal was swamped with angry letters from medical examiners. One derided the study as “rank pseudoscience.” Another, signed by the president of NAME along with 84 other pathologists, excoriated the study as “fatally flawed” and “an abject failure of the peer review process,” and demanded its retraction. (Michael Peat, editor of the journal, declined to retract the article, saying it had been peer reviewed before publication and rereviewed by a respected biostatistician following the complaints.)
Many pathologists pointed out that the experimental design linked two unrelated variables—the race of the child and their relationship to the caretaker. They were further inflamed by Dror’s labeling the scenarios “Black condition” and “White condition,” when they had reason to suspect that the caretaker, not the race, was the relevant variable. Statistics show a boyfriend of any race is far more likely to harm a child in his care than a grandmother.
“To introduce race … appears to be an effort to label the survey responders, and their colleagues by proxy, as racist,” said the letter from the 85 practitioners. “Had this survey been done with the races reversed … White cases were more likely to be called homicide and Black cases more likely to be called accident.” They contended that Dror was using inflammatory language to get headlines. And they noted that other factors could have played a role in the pathologists’ decisions, such as their level of experience, local crime statistics, and office policies, none of which Dror had considered.
Stephen Soumerai, an expert in research design at Harvard Medical School, agrees that linking a known risk factor for homicide (caregiver relationship) to a nonwhite race is problematic. And the survey of Nevada death certificates failed to investigate other possible explanations beyond race, he says. “The hypothesis is reasonable and important, but the research does not adhere to basic principles of research design,” he says.
Dror admits he would have been wise to use neutral terms to designate the two experimental groups. But he doesn’t concede that the study is flawed. “It is a first study to examine and establish that there is bias in forensic pathology,” he says. Dror agrees that statistics do show an unrelated caretaker is more likely to harm a child than a grandmother. But such generalizations should not affect how examiners diagnose individual cases.
Judy Melinek, CEO of PathologyExpert, Inc., who practices forensic pathology in Wellington, New Zealand, agrees. “I’ve seen too many cases where innocent caregivers were prosecuted for accidental child deaths because forensic pathologists made assumptions based on larger trends.”
Brandon Mayfield was detained as a bombing suspect based on a flawed FBI fingerprint analysis. AP PHOTO/DON RYAN
Dror says antibias strategies are especially important to medical examiners because many work hand in hand with police who might influence them.
One solution is to have a laboratory’s case manager safeguard details about an investigation and unveil them to a medical examiner only as needed to determine the manner of death—similar to the Linear Sequential Unmasking used by the Dutch police, among others. “It’s all about looking at the right evidence in the right sequence,” Dror says.
That approach represents a “clueless” understanding of how medical examiners work—one that cognitive psychologists have held for years, says William Oliver, a retired professor of pathology at East Carolina University’s Brody School of Medicine and a former board member of NAME. Unlike other forensic examiners, who match patterns from a particular type of evidence, medical examiners must gather all the information about the case that they can to make a correct diagnosis, he says.
They determine both cause of death—the injury or illness that killed a person—and the manner of death, which describes how the death came about. If a dead man is found sitting in his car with the engine idling, the garage door closed, and high levels of carbon monoxide in his blood, the autopsy would likely conclude that the cause of death was carbon monoxide poisoning. But the manner of death would remain “undetermined” unless investigators found signs pointing to suicide, such as a note, recent job loss or divorce, or statements from friends that he had been depressed.
“Manner is not a scientific determination, and it is not meant to be,” Oliver says. Aggregate statistics—like the rates at which grandmothers and unrelated caretakers harm children—are crucial to making that judgment, he says.
The acrimony around Dror’s paper snowballed. On 19 March 2021, Brian Peterson, a member of NAME and chief medical examiner for Milwaukee County in Wisconsin, filed a formal ethics complaint with the association against the four pathologists who collaborated with Dror. Their paper would “do incalculable damage to our profession,” exposing every medical examiner to withering cross-examination at trials, he wrote.
“I was shocked at the reaction,” says Joye Carter, a forensic pathology consultant and co-author on the study, who was named in the complaint. “We’re supposed to be fact finders, but people got whipped up into this ridiculous attitude that they were being persecuted.”
Both the Innocence Project and the Legal Defense Fund came to the pathologists’ defense, and NAME dismissed Peterson’s complaint in May 2021. (Carter, a prominent pathologist who was the first Black chief medical examiner in the United States, resigned from NAME. “There’s no way I can be part of a group like this,” she says.)
But Dror faced a separate attack from NAME’s leadership. In an 8 March 2021 letter to UCL, NAME’s then-President James Gill and Executive Vice President Mary Ann Sens accused him of intentional ethics violations, including misleading participants by not telling them the study was about race and bias.
The letter triggered a hearing at UCL’s ethics board, which Dror says could have ended in his dismissal.
He argued that disclosing the nature of the study would have biased the results, and at one point he became so emotional that he had to leave the room to regain his composure.
Ultimately, the board found in his favor, ruling that “the allegation is mistaken.”
The question of bias in autopsies rocketed to the headlines after Minneapolis police officer Derek Chauvin killed George Floyd on 25 May 2020.
During the trial in April 2021, the local medical examiner for Hennepin County in Minnesota testified that the manner of death was “homicide,” as did other pathologists.
But an expert hired by Chauvin’s defense team, former Maryland Chief Medical Examiner David Fowler, testified that Floyd had so many underlying health challenges that the manner of death was “undetermined.”
Chauvin was found guilty, but Fowler’s testimony outraged other pathologists and physicians, who saw in his conclusions a pro-police bias.
More than 400 of them signed a petition to Maryland Attorney General Brian Frosh demanding an investigation into all the death-in-police-custody cases during Fowler’s 17 years in office.
Frosh recruited seven international experts to design the study, including Dror. And despite all the blowback Dror has received for trespassing in the field of forensic pathology, he agreed to participate.
“If my work results even in one person not getting wrongly convicted, or one guilty person not going free, then it’s worth all the grief I’ve been getting,” he says. “And maybe not just one person. Hopefully this is going to change the domain.”
The entire story can be read at:
https://www.science.org/content/article/forensic-experts-biased-scientists-claims-spark-outrage
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;