PASSAGE OF THE DAY: "The opening assessment concurs with the long-held view that all 10 of a person’s fingerprints display distinctive ridge patterns – the loops, arches and whorls of the human fingerprint – and that such features provide valuable tools for identifying specific people. Yet, it concludes, there are insufficient scientific data on the potential pool of all human fingerprints to prove that a set of fingerprints constitutes a unique identifier of a single person, nor are there enough data to determine how many people might display similar features. “Uncertainty in this area means there is an inadequate scientific foundation for determining how many features, or what types, are needed in order for an examiner to draw definitive conclusions about whether a latent print was made by a given individual and hence… examiners should not draw such conclusions,” the report says."
STORY: "Fingerprint Source Identity Lacks Scientific Basis for Legal Certainty," by Anne Q. Hoy, published by AAAS News on September 15, 2017.
SUB-HEADING: "More Research into Validity of Fingerprint Comparisons Needed, Forensic Report Says."
PHOTO CAPTION: "The conclusions of latent fingerprint examiners are often drawn from experience but they lack scientific data to confirm their findings, an AAAS working group forensic science report says."
GIST: "Courtroom testimony and reports stating, or even implying, that fingerprints collected from a crime scene belong to a single person are indefensible and lack scientific foundation, a new AAAS working group report on the quality of latent fingerprint analysis says. “We have concluded that latent print examiners should avoid claiming that they can associate a latent print with a single source and should particularly avoid claiming or implying that they can do so infallibly, with 100% accuracy,” states the report. For decades, juries across the United States have been asked to weigh the validity and reliability of evidence relating to latent fingerprints, the samples collected from crime scenes that fingerprint examiners later compare with those known to belong to identified sources. Forensic examiners have long proclaimed high levels of certainty that latent prints, based on their analysis, originated from an “identified” person, statements that multiple reports have called “scientifically indefensible.” Studies by the National Research Council in 2009, a National Institute of Standards and Technology working group on latent fingerprint analysis in 2012, and, most recently, the President’s Council of Advisors on Science and Technology in 2016 reached similar conclusions. Such assertions have led to false arrests and convictions. While most examiners no longer claim “100% accuracy” for a fingerprint analysis, the moderating terms now used in court testimony and reports continue to state that examiners can “identify” or are “practically certain of” the source of a latent print, the report says. Empirical tests are necessary to measure the accuracy and establish the validity of latent fingerprint examinations, the AAAS report states, as did the earlier PCAST report. “In reality, there is not, at present, an adequate scientific basis for either claim,” the AAAS report says.
“There is no basis for estimating the number of individuals who might be the source of a particular latent print. Hence, a latent print examiner has no more basis for concluding that the pool of possible sources is probably limited to a single person than for concluding it is certainly limited to a single person.” The “Forensic Science Assessments: A Quality and Gap Analysis of Latent Fingerprint Analysis” report makes clear that while latent fingerprint examiners can successfully rule out most of the population from being the source of a latent fingerprint based on observed features, insufficient data exist to determine how unique fingerprint features really are, thus making it scientifically baseless to claim that an analysis has enabled examiners to narrow the pool of sources to a single person. Latent fingerprints are concealed impressions of one or more fingers left by an unknown person, made visible by law enforcement officials using techniques such as lasers or powders, often as part of crime investigations. Crime scene investigators lift the impressions from the surface. Fingerprint examiners later analyze the multiple characteristics of a print by comparing it against known fingerprints in an effort to find similarities and render judgments about common traits. The report provides an extensive review of the literature on the forensic science of latent fingerprint analysis. It evaluates where such analysis is well grounded in science, identifies practices that lack scientific foundation and recommends areas in need of further research. It was written by a working group that included an academic statistician, a biometric engineer, a psychologist who focuses on judgment and decision making, and a forensic scientist. It is the result of a project led by the AAAS Scientific Responsibility, Human Rights and Law Program.
“This evaluation is intended to point out where forensic practice is well founded in science and where it is not and to produce a research agenda to serve as the basis for arriving at forensic methods that will inspire greater confidence in our criminal justice system,” says the report, the second of two AAAS reports on the state of forensic science, the first focusing on the quality of fire investigations in the United States. Drawing on the conclusions of existing literature about scientifically appropriate statements examiners should make in testimony and reports, the report says examiners should convey the high level of scientific uncertainty that underlies the analysis they are presenting in court and make clear the findings are subjective and not grounded in evidence. The AAAS report, like PCAST’s, states that absent estimates of the accuracy rates of examiners, testimony by examiners that two fingerprints are indistinguishable is scientifically meaningless and should be deemed insufficient evidence in court. “Consequently, there is no scientific basis on which latent print examiners might form expectations as to whether a particular set of features is likely or unlikely to be repeated. Any expectations latent print examiners may have on this matter rest on speculation and guesswork, rather than empirical evidence,” states the report. Beyond the appropriate reach of examiners’ findings and testimony, the report explores five other areas that have raised questions about the validity and reliability of the science of fingerprint analysis, a discipline that has been used by the legal system for more than 100 years but more rigorously reviewed only recently. The report covers examinations of scientific studies on the variability of human fingerprints; the range of differences of a distinct fingerprint lifted at different points in time, from different digits or hands, or under a variety of conditions; as well as the accuracy of automated fingerprint systems. 
The report also delves into existing literature that traces how subjective judgments by latent fingerprint examiners can influence their findings, particularly when they are exposed in advance of their analysis to information about an underlying criminal investigation or shown an existing fingerprint of a suspect. Studies on the accuracy rates of fingerprint examiners also are probed. In reviewing how different levels of training and varying perceptive abilities of examiners impact their work, the assessment also factors in how examiners handle evidence of different quality, such as a slightly smudged fingerprint, and the impact of workplace procedures, including the levels of investigative information examiners are exposed to and how such information can influence judgments. The report proposes further research to determine whether the validity of latent fingerprint analysis is strong enough to be used as evidence. Also warranted are studies on how police officers, judges, lawyers and jurors evaluate and understand fingerprint evidence, the report says. The opening assessment concurs with the long-held view that all 10 of a person’s fingerprints display distinctive ridge patterns – the loops, arches and whorls of the human fingerprint – and that such features provide valuable tools for identifying specific people. Yet, it concludes, there are insufficient scientific data on the potential pool of all human fingerprints to prove that a set of fingerprints constitutes a unique identifier of a single person, nor are there enough data to determine how many people might display similar features. “Uncertainty in this area means there is an inadequate scientific foundation for determining how many features, or what types, are needed in order for an examiner to draw definitive conclusions about whether a latent print was made by a given individual and hence… examiners should not draw such conclusions,” the report says. 
The report points out that research evaluating the accuracy of fingerprint analysts has found that well-trained examiners can effectively compare a fingerprint lifted from a crime scene against a known fingerprint. Yet, such study results may be skewed because, in many instances, fingerprint examiners were aware they were being tested, the report notes. In calling for additional research to ensure that latent print examinations are valid, the AAAS and the PCAST reports propose that such studies be conducted without the knowledge of examiners. The testing approach, known as “test-blind,” keeps fingerprint examiners walled off from police reports, rap sheets and other material that can unconsciously influence an examiner’s perceptions before forming an opinion about the fingerprints being studied. The FBI Laboratory, for instance, has adopted such workflow procedures, known as context management procedures, to limit the amount of information examiners are provided at different stages of their analysis to reduce the influence such information can have on conclusions drawn. The report endorses having known-source fingerprints carefully slipped into an examiner’s normal flow of casework, without the examiner’s knowledge, to test accuracy rates in real work settings. Such an approach presents challenges, the report concedes, requiring, for instance, cooperation from the police to produce and enter into the normal workflow systems simulated, or false, prints for examiners to study. Another way to get around cognitive bias, the report states, is to improve the ability of automated fingerprint systems. Already, automated systems play an important role in helping law enforcement officials quickly cull through thousands of fingerprints to identify those with features “most similar” to latent fingerprints under review, the report notes.
The systems, however, are unable to match a fingerprint lifted from a crime scene to one gathered earlier by authorities from a known source, nor can they determine whether a comparison of two prints – one from a crime scene, the other from police records – is valid, the report says. Still, the report points to the promise of automated systems, saying they could become effective in determining when a crime scene print matches a known print and in weighing the legal strength of a fingerprint analysis indicating that a pair of prints originated from the same person. “It is possible that automated fingerprint identification systems could evolve over time,” the report says.
The entire story can be found at:
https://www.aaas.org/news/fingerprint-source-identity-lacks-scientific-basis-legal-certainty
See CSI DDS (Forensics in Focus) post by Dr. Mike Bowers, under the heading "The worlds largest science org (sic) blasts forensic latent fingerprints for unsupportable opinions": The AAAS is relatively new to reviewing forensics, but it has taken on pattern-matchers with the full force of its multidisciplinary membership. Its candor is a refreshing look at the work of latent print matching experts. The AAAS has no law enforcement affiliations, unlike the National Institute of Justice, which is the primary funding source for forensic investigators. Simply put, they don’t have political affiliations or the NIJ’s prosecutorial culture. Here’s a piece of the AAAS September 2017 look at latents. “Drawing on the conclusions of existing literature about scientifically appropriate statements examiners should make in testimony and reports, the report says examiners should convey the high level of scientific uncertainty that underlies the analysis they are presenting in court and make clear the findings are subjective and not grounded in evidence.” This doesn’t read like the recent blurb from USDOJ DAG Rod J. Rosenstein’s take on police forensics. His statement carries no mention of its flimsy connections, in some cases, with scientific proofs. They have too many skeletons and cases regarding half-baked opinions used by prosecutors to admit much else. Hence my lede’s use of “denying the obvious.”
https://csidds.com/2018/03/03/the-worlds-largest-science-org-blasts-forensic-latent-fingerprints-for-unsupportable-opinions/
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/c