Sunday, October 17, 2010

FINGERPRINT ANALYSIS; SCIENCE OR PSEUDO-SCIENCE? GRITS FOR BREAKFAST WADES IN;

"In a 1995 IAI-approved proficiency test, 22% of the test takers identified the wrong person one or more times. In addition, 36% could not identify prints that test givers said they should have been able to match, which means that a guilty person could have gone free in a real case.

Ken Smith, chairman of IAI certification, suggests the error rate may not fairly represent the profession because the test takers were anonymous and there was no way to determine their credentials.

But fingerprint examiner David Grieve, editor of the Journal of Forensic Science, said in an article that the test results were alarming and noted that reaction in the forensic science community "ranged from shock to disbelief."

That's just one study, Siegel emphasized, and more research is needed in the area, but it's certainly a stunning result."

GRITS FOR BREAKFAST; Grits for Breakfast says it "looks at the Texas criminal justice system, with a little politics and whatever else suits the author's (Jeff Blackburn) fancy thrown in. All opinions are my own. The facts belong to everybody." Its motto: "Welcome to Texas justice: You might beat the rap, but you won't beat the ride."

----------------------------------------------------------------------------------

"There was an astonishing moment yesterday at a breakout session on fingerprint examination at the Texas Forensic Science Seminar, at which Department of Public Safety fingerprint examiner Bryan Strong (who seemed like a really nice guy so I hate to pick on him) was describing how his division implemented the ACEV method of fingerprint examination in ways that may violate the state's and prosecutors' obligations under Brady v. Maryland," the Grits for Breakfast post, which ran on October 9, 2010 under the heading, "Brady violations by DPS fingerprint examiners? Is fingerprint examination even science?," begins.

"ACEV stands for Analysis, Comparison, Evaluation and Verification,"
the post continues.

"That's bureaucrat-speak for looking at the fingerprints visually and subjectively deciding if they're the same based on "training and experience" (as opposed to any sort of objective standard), then having a second examiner look at them to "verify" the results. There is no minimum number of similarities or comparison points required to declare two fingerprints a "match," though many other countries have established such standards. (Notably, at DPS if an examiner finds fewer than 11 points of comparison, two people must verify the conclusion.)

Anyway, Mr. Strong described what happens when the first examiner finds a match but the verifying analyst doesn't agree. In such instances, he said, they notified their supervisor and all of them conferred to make a decision. A defense attorney in the crowd asked what seemed to me an obvious question: When two examiners originally disagreed but a supervisor resolved the issue in favor of a match, was that disagreement recorded in the final report? No, replied Strong, only the conclusion. At this, the audience began to murmur and fidget. Somebody from the back cried out, "Have you ever heard of Brady v. Maryland?," which is the US Supreme Court case requiring the state to turn over all exculpatory evidence to the defense before trial. No he had not, replied a credulous Strong, a statement which elicited an audible gasp from the crowd.

So essentially, if two examiners who looked at the prints come to different conclusions but a supervisor resolves the question against the interests of the defendant, according to this presentation, that information is not routinely disclosed to defense counsel. On its face that's a straightforward violation of Brady. Who knows how many times that scenario has occurred over the years!

A representative from the Texas Attorney General's office then asked Mr. Strong if his division had access to legal counsel at DPS, and he said he believed they did. She told him politely (if somewhat obliquely) that it appeared there were some legal issues surrounding the division's work that he wasn't aware of (problems that likely emanated higher up the chain of command than Strong's level, she gently added) and offered her agency's assistance to retrain folks at the fingerprint examination division on the subject!

There was quite a bit of discussion of fingerprints at the event, some of the most interesting by Dr. Jay Siegel, a forensic scientist who was on the 17-member National Academy of Sciences panel which published a damning report last year calling into question the scientific basis of "pattern evidence," where visual comparisons are the basis for connecting evidence to a defendant. Fingerprints are far and away the most common and important type of pattern evidence. Siegel says 60% of fingerprint analysts don't actually work in crime labs - they're sworn police officers working in their own departments.

The most high-profile case of a mismatch was that of Brandon Mayfield, an Oregon attorney falsely accused of the Madrid train bombing after four examiners mistakenly identified him using the ACEV method. (See the USDOJ Office of Inspector General's report [pdf] on that case.)

Siegel cited eye-popping data from a study published in the 1990s in which the umbrella organization for fingerprint examiners, the International Association of Identification, performed proficiency testing on their own members and came up with a 19% false-positive rate! Though I can't find the study online, I did find this apparent reference to it in an article from the Los Angeles Times:

In a 1995 IAI-approved proficiency test, 22% of the test takers identified the wrong person one or more times. In addition, 36% could not identify prints that test givers said they should have been able to match, which means that a guilty person could have gone free in a real case.

Ken Smith, chairman of IAI certification, suggests the error rate may not fairly represent the profession because the test takers were anonymous and there was no way to determine their credentials.

But fingerprint examiner David Grieve, editor of the Journal of Forensic Science, said in an article that the test results were alarming and noted that reaction in the forensic science community "ranged from shock to disbelief."

That's just one study, Siegel emphasized, and more research is needed in the area, but it's certainly a stunning result. The day before, Dr. Joe Bono, President of the American Academy of Forensic Sciences, had said a technique with a 30-40% error rate (the Horizontal Gaze Nystagmus test for intoxication used in DWI enforcement) didn't qualify as "science." I asked him if a technique resulting in a 19% false-positive rate constituted science, but he demurred, saying he hadn't seen the fingerprint study. Siegel said proficiency testing of all forensic examiners, including those dealing with fingerprints, should be "blind," i.e., given to the examiner as part of their routine work so they didn't know they were being tested.

A good argument for the "blind" testing approach may be found in the work of cognitive neuroscientist Itiel Dror, whom Dr. Siegel cited and whose work has previously been discussed on this blog. Dror conducted a study in which five fingerprint examiners were given pairs of prints which they'd earlier personally matched during their own careers. This time, however, they were told the prints were from the Brandon Mayfield case, which had already been well-publicized. The result: Three of them reversed their conclusion, saying the prints they'd previously "matched" did not come from the same person. A fourth said the results were inconclusive. Only one of the five stuck to his guns and said the prints came from the same person!

This makes me wonder if the ACEV approach itself is fundamentally flawed by bias. If knowledge of another examiner's conclusions can so easily taint results, the second analyst shouldn't even know they're being called on to "verify" someone else's match; that framing implies someone has already reached a conclusion. It seems to me it'd be much cleaner to give the prints to the second examiner without telling them what the first examiner found.
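The procedural change suggested above can be sketched in code. This is a purely hypothetical illustration, not anything DPS or any lab actually uses: all class, field, and examiner names here are invented. The point is simply that the review queue hands the second examiner only the prints, while every examiner's conclusion, including any disagreement, stays in the case record.

```python
# Hypothetical sketch of a "blind" verification workflow. The second
# examiner receives only the prints, with no indication that a "match"
# was already declared, and all conclusions are preserved in the record.
from dataclasses import dataclass, field

@dataclass
class Comparison:
    case_id: str
    latent_print: str      # stand-in for actual print image data
    candidate_print: str
    conclusions: dict = field(default_factory=dict)  # examiner -> verdict

class BlindVerificationQueue:
    def __init__(self):
        self._pending = []

    def submit(self, comparison, examiner, verdict):
        """Record the first examiner's verdict, then queue the pair
        for independent review with that verdict hidden."""
        comparison.conclusions[examiner] = verdict
        self._pending.append(comparison)

    def next_for_review(self):
        """Hand the reviewer only the prints -- no prior conclusions."""
        comparison = self._pending.pop(0)
        return (comparison.case_id,
                comparison.latent_print,
                comparison.candidate_print)

    @staticmethod
    def record_review(comparison, examiner, verdict):
        """Store the reviewer's verdict; return True if examiners disagree.
        A disagreement is exactly the kind of information the post argues
        must be disclosed to the defense."""
        comparison.conclusions[examiner] = verdict
        return len(set(comparison.conclusions.values())) > 1

# Usage
queue = BlindVerificationQueue()
c = Comparison("2010-1234", "latent-A", "suspect-A")
queue.submit(c, "examiner_1", "match")
case_id, latent, candidate = queue.next_for_review()  # examiner 2 sees only prints
disagree = BlindVerificationQueue.record_review(c, "examiner_2", "inconclusive")
print(disagree)  # True: the disagreement survives in the case record
```

The design choice is that concealment applies only to what the reviewer is shown, never to what is stored; the full record of both conclusions is what would be available for disclosure.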

Siegel concluded that there's no scientific proof that fingerprint examination (or, for that matter, other pattern evidence) can "individualize" results to the exclusion of other possible sources. Indeed, searching around this morning for the IAI study, I ran across these comments from Dr. Siegel in another forum last year where he persuasively argued that individualization isn't even necessary:

Not only do I believe that a conclusion of absolute inclusion is not scientifically justified, but that it is not necessary. I don’t believe that the concept of “individualization” has any place in science and is not provable in the real world. Why is it necessary to offer a nonsupportable, unprovable conclusion in court, where there is so much at stake and juries are so easily misled? When a forensic scientist analyzes evidence that is brought to the lab by a criminal investigator, there is a reason why the focus is on that person. (This can set up a situation that invites bias on the part of the examiner, but that is another issue). The fact that there are many similarities between a latent print lifted at a crime scene and a print from a suspect, with no unexplainable differences, should be testimony enough. This would presumably be one piece of corroborating evidence in a net of evidence being offered by the prosecutor. There is no need to “gild the lily” by adding the conclusion that there is no other fingerprint in the world that could be the source for the print from the crime scene. Offering unsupportable conclusions in court reinforces the idea that forensic science really isn’t science; it is a tool of the prosecution.

That position makes a lot of sense to me. The conclusion of a fingerprint "match" by an "expert" is an incredibly damning piece of evidence when really all they're saying - that anyone can prove - is that there are similarities between the two prints. There were also similarities between Brandon Mayfield's print and the Madrid train bomber's, but that didn't mean Mayfield did it."


---------------------------------------------------------------------------------

The post can be found at:

http://gritsforbreakfast.blogspot.com/2010/10/brady-violations-by-dps-fingerprint.html

---------------------------------------------------------------------------------

PUBLISHER'S NOTE: The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be accessed at:

http://www.thestar.com/topic/charlessmith

For a breakdown of some of the cases, issues and controversies this Blog is currently following, please turn to:

http://www.blogger.com/post-edit.g?blogID=120008354894645705&postID=8369513443994476774

Harold Levy: Publisher; The Charles Smith Blog; hlevy15@gmail.com;