Tuesday, January 8, 2019

Back in action: On-Going: Flawed Forensics: UCLA Law Dean Jennifer Mnookin explains the uncertain future of forensic science, in The Verge (reporter Angela Chen) - and concludes that many of the techniques are faulty, but still admissible in court... "Bite marks are, in my opinion, one of the most distressing forms of evidence that continues to be used right now. Not only do we not have good evidence to support the validity of bite mark identification, we actually have very good evidence to support that it’s not valid. In some of the other techniques, we really don’t know if the validity is proven, but with bite marks there are numerous studies showing that bite mark identification experts have a distressingly high error rate and can’t even accurately identify whether a mark left on skin is a bite mark or not, much less whether it belongs to a particular person. So the fact that the courts haven’t consistently and loudly said that bite mark identification should not be permitted is both distressing and, I think, surprising."


PASSAGE OF THE DAY: "The reality, says UCLA School of Law dean Jennifer Mnookin, is that many of these so-called pattern evidence techniques used in forensic science are faulty and not supported by evidence. In fact, when it comes to wrongful conviction cases (where new DNA evidence proves that someone was innocent), bad forensic science is the second most frequent contributing factor, behind only eyewitness testimony. There are real, and harmful, consequences to forensic science in the courtroom."

----------------------------------------------------------

INTERVIEW: "The dean of UCLA Law explains the uncertain future of forensic science: Many of the techniques are faulty, but still admissible in court,"  by Angela Chen, published by The Verge on December 20, 2018. (Thanks to Dr. Michael Bowers for bringing this story to our attention on his most-informative Blog 'CSI DDS'; Forensics and Law in Focus."

GIST: "Shows like Law and Order and CSI have taught a generation of Americans that blood spatters and handwriting analysis are crucial for catching criminals. The reality, says UCLA School of Law dean Jennifer Mnookin, is that many of these so-called pattern evidence techniques used in forensic science are faulty and not supported by evidence. In fact, when it comes to wrongful conviction cases (where new DNA evidence proves that someone was innocent), bad forensic science is the second most frequent contributing factor, behind only eyewitness testimony. There are real, and harmful, consequences to forensic science in the courtroom. The Verge spoke to Mnookin, who recently wrote a paper on the uncertain future of forensic science, about how forensic science is different from laboratory research, which techniques might be credible, and why she’s not optimistic that the system is going to change anytime soon. This interview has been lightly edited for clarity. 

Many people don’t realize that forensic science developed separately from laboratory science and is much less credible. How did that happen?
A lot of these traditional kinds of forensic science began outside of the university context and outside of any research framework. They each have an interesting history behind them. For example, let’s take handwriting identification evidence. Many of the early handwriting experts actually had previously been either bank tellers who were in the habit of looking at people’s handwriting to decide whether to honor checks, or clerks. For a long time, the profession of being a clerk and writing things down was a sort of honorable middle-class profession and the advent of the typewriter changed that and made it much less necessary, and some of these clerks went on to assert expertise in handwriting and some other techniques. A lot of the early developments came from police-adjacent policies that wanted to figure out how to prove things better. There’s nothing wrong with any of these origin stories, except that they don’t develop in ways that invite careful scrutiny and scientific study. When you have a scientific crime detection lab adjacent to law enforcement trying to figure out how can we better solve cases, you’re not necessarily looking at how we can test these new techniques and make sure they’re valid. If you have early handwriting examiners advertising their authority and hoping that lawyers will come to them for help, there may not be any situation where anyone’s doing careful scrutiny of whether they can really do what they claim to do. It’s not that anybody was trying to commit fraud or do something wrong, but these techniques did develop in ways that didn’t lead to them being tested carefully because the judges in these early cases didn’t require it. They just say, “you claim to be an expert? Sure.”

I’m sure that in the realm of forensic science, there are some forms of pattern evidence that are more credible and less credible. What are some examples of that?
The Texas Forensic Science Commission has put a moratorium on bite mark evidence and others have expressed doubts, but there has not yet been a trial court that has excluded it from evidence on the grounds of it being insufficiently reliable, and that’s shocking. On the other hand, fingerprint evidence has been used since the early 20th century and there was remarkably little serious study of its accuracy or error rates. That’s begun to change in the wake of the 2009 National Academy of Sciences report on forensic science. There has started to be meaningful evidence. It’s not as substantial as I wish it were, but it exists now in meaningful quantity and a number of studies are well-done. There’s pretty clear evidence that fingerprint experts are more accurate than lay people or novices. There is a craft knowledge. There have been some accuracy and error rate studies showing that, while fingerprint experts do make mistakes, those error rates appear not to be too high in many circumstances. I think fingerprint evidence carefully expressed and limited does have enough validity that it deserves to be a brick in the evidentiary wall. I’m not sure it’s enough to support a conviction without any other evidence.

What are the consequences of all this? I was surprised at the stat that forensic science is the second most frequently found contributing factor in certain wrongful conviction cases. Do we have numbers or a way to quantify what harm has been done?
It’s incredibly hard to get accurate numbers about the wrongful conviction rate. It’s a heck of a lot higher than zero, but we don’t have any way of assessing it across all cases. That makes it very challenging to answer the question of how often forensic science evidence introduced in court is mistaken or erroneous because we don’t know how many mistakes we’re making overall.
 
"People have been sounding the alarm about faulty forensic science for years. Some hope that there will be widespread change, but you’re less optimistic. Why is that?
I’m not wildly optimistic. In the time since the National Academy of Sciences report was issued, we really have seen some important forms of engagement and some modest forms of change. It would be a mistake not to recognize and even celebrate that. There’s a new degree of engagement by forensic practitioners, even parts of the law enforcement community, by scholars, and by some judges to take these questions seriously. At the same time, a lot of the changes seem pretty modest and there’s ways in which many judges are still exhibiting somewhat ostrich-like behaviors about forensic science and don’t seem interested in or willing to confront the hard questions that insufficiently validated forms of evidence raise. Plus, we have no institutional space that has both authority and broad stakeholder engagement. I don’t believe there’s a lot of reason to think that we’re going to have a lot of force for change. This administration’s Justice Department has been less interested in thinking about these questions than the Obama administration, and frankly, the Obama administration wasn’t as interested in taking these questions seriously as I wish they would have been.

All this is related to my next question. Change is hard in general, but what are some specific factors that are keeping the courts from changing? 
There are several factors. One is the power of precedent in legal decision-making. You have these techniques and some have been around for a long time, and there’s a bunch of judicial opinions that say they’re admissible and legitimate. They may not be well-reasoned. They may not be based on a thoughtful examination of the underlying validity of the science, but there they are. So you have busy trial court judges making admissibility decisions about techniques that have been around for a long time and the easy thing to do, no question, is to preserve the status quo. Given that we have a system that emphasizes precedent, that’s an even easier thing for judges to do. Many judges have been reluctant to even hold hearings about the question of adequate reliability, or some who permit such hearings end up shrugging and saying, “it could go either way, but we’ve used it for a long time so it’s good enough.” It probably doesn’t help matters that more judges with criminal law backgrounds come from the prosecuting side than the defense side and these techniques feel like they’re in the realm of common sense. That’s the judicial side. On the forensic science side, many don’t have any science background. They come to law enforcement and don’t necessarily have a college degree, either. Now many forensic departments do require an undergrad science degree, but it’s very rare to have PhD-level science training, and many forensic scientists are not themselves scientific researchers, so they’re not well-positioned to research their own discipline or think about it from a research perspective. That doesn’t mean they’re not professionals trying to do a good job, but they’re not well-situated to be engaged in the exercise of establishing validity or to deeply understand what that requires. There’s begun to be some spaces within universities looking at these questions, but still not a lot. So we continue to have a sort of guild mentality with forensic science, judges who have institutional incentives not to look deeply, and prosecutors who often tend to have more resources than the defense attorneys. That’s not an encouraging recipe for change."

The entire story can be read at:
https://www.theverge.com/2018/12/20/18149946/jennifer-mnookin-forensic-science-crime-law-politics-ucla-dean

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.