Ten years ago, the National Academy of Sciences (NAS) published a groundbreaking study on the use of forensics in criminal trials. The study found that, in the “pattern matching” fields of forensics in particular, expert witnesses had been vastly overstating the significance and certainty of their analyses. For some fields, such as bite-mark analysis, the study found no scientific research at all to support the central claims of practitioners. Since then, other panels populated with scientists have come to similar conclusions, including the President’s Council of Advisors on Science and Technology and the Texas Forensic Science Commission. In 2013, Congress and the Obama administration responded to these reports by creating the National Commission on Forensic Science, a panel of lawyers and scientists charged with coming up with standards and protocols in these fields. The Trump administration then allowed the commission’s charter to expire in April 2017. In covering these issues, I have found that there are lots of people willing to talk about the problems with forensics in the courtroom. But solutions are harder to come by — especially solutions that would be politically feasible, fundable, and fit the current framework of our judicial-legal system. So I decided to seek solutions from those who work in the areas of law, science and forensics. I sent an inquiry to 33 people, 14 of whom were willing to email answers to a set of six questions. All could be called critics of the way forensics are used in criminal cases today. You can read their full biographies here.
The questions produced some interesting results. There was much agreement on general problems: The courts have done a poor job of keeping junk science and dubious expertise out of criminal trials. The pattern-matching fields of forensics — in which an analyst compares a piece of evidence from a crime scene to a piece of evidence thought to implicate a suspect — are largely subjective, lack structure and standards, and are hobbled by cognitive bias. And the legal system is too reluctant to revisit and correct old cases affected by these problems. When asked about the root causes of these problems, however, there was some disagreement. Some panelists have what could only be characterized as a fatalistic outlook: Our court system is incompatible with sound science, and we can only hope to minimize the damage. Some blamed our adversarial system, which they say isn’t always conducive to sound science. Others advocated for more adversarialism. There was also plenty of blame to go around — ill-informed judges, overworked and badly informed defense lawyers, overly eager prosecutors, and under-educated jurors. The respondents offered a wide array of ideas and some disagreement about the path forward. But a few proposals found support from multiple respondents. While there was disagreement over whether the United States should move toward court-appointed experts (as opposed to allowing the prosecution and defense to pick their own experts), most agreed that if we’re going to continue with the current system, defense lawyers should be given the same amount of money to hire experts as the prosecution, or at least enough money to hire their own competent experts. Several of the respondents pointed out that judges have a much better record of screening out bad expertise in civil cases, where all parties tend to be well-funded.
Many expressed support for the junk-science writ — a law that provides an opening to appeal for those convicted with expert testimony that was later discredited by the scientific community. Texas and California have both passed such a law. Lawmakers in Virginia recently rejected the idea. Most panelists agreed that making forensic labs independent — moving them from under the auspices of law enforcement — would be a huge improvement, and would go a long way toward reducing cognitive bias. But there was disagreement about how easy this would be to accomplish. Some of the respondents thought it would take little effort, while others anticipated a lot of resistance from police and prosecutors. A few panelists cited the Houston Forensic Science Center as the ideal model of a science-driven, truly independent crime lab. Many also suggested the idea of a “case handler,” an independent go-between who works to ensure that analysts get the relevant information they need to do their jobs, but works to block out additional information that could bias their results. I asked the panelists six questions. We’ll post their answers to the first question today, and to the other questions in future posts.

Question 1:


Under current law, judges have been designated as the “gatekeepers” of science in the courtroom. If one side wants to challenge the scientific validity of expert testimony, the lawyers for that side can request a hearing. If the hearing is granted, the judge may then hear evidence for and against the expert testimony, and then decide whether the testimony is reliable and, therefore, admissible.
Critics say judges perform poorly at this task, and there is some persuasive evidence to back up that criticism. This makes some sense, given that most judges are trained in law, not scientific analysis. When making these determinations, judges tend to look at what other courts have done, instead of the current state of the science.
But in a system like ours, the question becomes: If not judges, then who? Who should determine what expertise a jury will and won’t be allowed to hear at trial? Moreover, forensics has been plagued not only by scientifically dubious areas of specialization; even within the legitimate specialties, individual witnesses have offered testimony that is unsupported by science. Some have suggested that a national committee of scientists assess the validity of an entire field like bite-mark analysis, or a common diagnosis such as Shaken Baby Syndrome. But it wouldn’t be practical for such a committee to assess every challenge to an expert during criminal trials across the country.
Obviously we need some way of assessing the reliability of scientific and expert testimony. What would the ideal system look like?
Chris Fabricant, Innocence Project
Forensic sciences should be regulated the same way medicine or consumer products are. The 2009 NAS report on forensic sciences recommended a new regulatory body called the National Institute of Forensic Science. Life and liberty are at issue, yet anything goes in criminal courts, especially and paradoxically in capital litigation.
Sandra Guerra Thompson, University of Houston Law Center, Houston Forensic Science Center
Judges are not entirely to blame for the admission of faulty forensic evidence. Prosecutors and defense attorneys also share the blame. Historically, both judges and advocates were not aware of the problems in many police crime labs, and the weaknesses of various types of forensic evidence. Today there is far greater awareness that issues exist, but judges and lawyers still struggle to comprehend the intricacies of what is a multi-faceted problem.
Judges cannot decide on their own to exclude unreliable evidence. It is up to defense counsel to object to evidence and request a hearing where counsel can proffer testimony explaining the problems with the evidence, and to persuade the judge of the unreliability of a forensic test result.
For several reasons, defense counsel have not adequately challenged forensic evidence. Too often, attorneys may not have the scientific competence to recognize the weakness in the evidence. The endemic underfunding of indigent defense exacerbates this problem. Appointed counsel may lack investigative resources, especially funding for forensic experts to guide them, review the lab reports, conduct independent testing, and testify in court.
Even if the courts provide the resources for defense experts, finding qualified experts to assist the defense is another enormous challenge. Another problem is that too many laboratories provide barebones information and non-standardized terminology in lab reports, making it nearly impossible for defense experts to evaluate the testing process.
John Lentini, fire/arson expert
Ideally, prosecutors would not proffer invalid opinions rendered by unqualified “experts.” But that’s unlikely to happen as long as prosecutors think the opinion testimony will support their case.
Ideally, judges would take their gate-keeping duty seriously, as often happens in civil cases. That is unlikely to happen because judges have continuing relationships with prosecutors and they don’t want to annoy the prosecutors by excluding their expert, no matter how egregious the opinion might be. They don’t want to be called “soft on crime.” Judges also don’t worry too much about being overturned because the Supreme Court in the Joiner decision made “abuse of discretion” the standard for review of a judge’s decision to allow or exclude testimony. Only a tiny minority of such decisions are overturned, so they are almost never appealed.
So what we are left with are the tools of the adversarial system. Challenges to experts should be routine in any contested forensic science case either through an evidentiary hearing or, preferably, by deposition. Depositions are allowed in almost all civil cases, but only a handful of states allow depositions in criminal cases. In those states, conviction rates are no lower than they are in states where expert depositions are not allowed.
Of course, in order to properly challenge an adverse expert, defense attorneys must have the funds to hire their own expert. I have noticed that in recent years, it has become less difficult to obtain such funding, and defense attorneys, particularly public defenders, understand that the assistance of a competent expert in a complicated forensic science field, e.g. fire investigation, is an essential component of effective assistance of counsel.
Simon Cole, University of California at Irvine Department of Criminology, Law & Society, National Registry of Exonerations
I think the question overlooks the most obvious explanation for this phenomenon: outcome orientation. Most judges want the outcome of these admissibility hearings to be that the government gets to use evidence that will help them convict the defendant. I don’t think the following of precedent, for example, comes from some philosophical attachment to precedent, but rather that precedent is a convenient means to the end that most judges want: that the prosecution’s evidence can be used.
I can think of only two alternatives to the current system. One is some sort of scientific tribunal, like a “science court,” e.g., the National Commission on Forensic Science might have been able to play the role of issuing guidance about specific disciplines. Or the National Academies or some other scientific institution could do it. Such a scientific institution approach could work, but courts would need to be willing to defer to the authority of scientific institutions on matters of science.
The second would be the disciplines themselves. The disciplines themselves, or perhaps the American Academy of Forensic Sciences could more actively self-regulate forensic science, rendering legal or scientific regulation less necessary.
In my opinion, as well as that of many other scholars, the focus should be less on the all-or-nothing admissibility decision (does the expert get to testify before the jury?) and more on the content of the expert’s proposed testimony. It might be okay for many experts to testify were they not systematically overstating the probative value of the evidence every time. Judges could focus more on controlling the content of the testimony than on draconian all-or-nothing admissibility decisions. Almost every expert has something of value to say. It’s just that many experts also overstate the value of what they have to say.
Jules Epstein, Temple University Beasley School of Law, National Commission on Forensic Science
You are asking two overlapping questions: who should decide what is generally accepted, and who should decide whether a particular expert in a particular case with a particular opinion/conclusion should be admitted.
When it comes to what’s generally accepted, I am convinced that the litigation model isn’t working — it rarely has parties with equal resources, and when the process is adversarial, the positions become polarized. We’d be better off turning to scientists — the OSAC model; or scientific workgroups; or the NAS — each is better suited to assess scientific reliability/validity of a discipline. At the micro level — a particular expert in a particular case with a particular finding — a judge may be fine once the overall discipline’s parameters have been set.
Barbara Spellman, University of Virginia School of Law
Judges decide whether scientific evidence is reliable and relevant, but “credibility” is left to the jury. Reliability is what is usually in question — an evaluation of whether the science is generally accepted, has been peer reviewed, can be tested, and other factors as outlined in the Supreme Court’s ruling in Daubert v. Merrell Dow Pharmaceuticals [the landmark 1993 case that laid out the rules for assessing expert testimony].
Relevance becomes, well, relevant, when the proposed topic of the scientific testimony seems as though it is something that is already known and understood by jurors. For example, the Federal Rules of Evidence allow expert testimony when the expert’s knowledge “will assist the trier of fact.” But, for example, memory experts were often not allowed to testify because judges believed that jurors already knew how memory works. Today, memory experts are often allowed because surveys, experiments and exonerations have shown that scientists know some important, relevant things about the workings of memory that the general public does not.
Roger Koppl, Forensic and National Security Sciences Institute, Syracuse University
The gate-keeping function of judges is rooted in the theory that scientific knowledge is completely different from other forms of human knowledge. Philosophers of science, however, have been unable to agree upon any standard separating “science” from “not science.” What happens if we view scientific and technical knowledge as equal to other knowledge? Then we may see the benefit of listening to both sides of the story.
There should be a real and substantive defense right to expertise. Then each side can make its case for what the forensic evidence shows. This “battle of experts” would be no “race to the bottom.” On the contrary! The two sides in our adversarial criminal-justice system have diametrically opposed interests. Thus, any relevant fact or argument will help one side and hurt the other side. Thus, “the battle of the experts” tends to flush out all the relevant facts and arguments, leaving the judge and jury with a more complete and balanced picture than one side alone would give them. And the contending experts are given an incentive to be clear, explicit, and helpful to the judge or jury.
When monopoly experts testify — that is, a court-appointed expert beholden to neither side — they may cave to the temptation to be a powerful and mysterious wizard. “It’s all very complicated, but I the powerful wizard can see the hidden truth!” But when you are competing with another such wizard, you have to actually explain things and show the judge or jury why your opinion makes sense. Competition turns wizards into teachers. If we had a real and substantive defense right to expertise, we could relieve judges of much of their gate-keeping functions. Instead of asking them to decide what expertise is real and what is fake, we could simply apply the same standards judges use to determine the admissibility of other forms of evidence. Is it relevant? Is it prejudicial? And so on.
Brandon Garrett, Duke University School of Law
Judges are necessary gatekeepers at trial. But they have not acted like science matters in criminal cases. As Chris Fabricant and I found, it is rare for state judges even in the jurisdictions with the more modern Rule 702 (of the Federal Rules of Evidence) to even discuss reliability of forensic evidence, much less address unreliable evidence. The reliability rule is a myth, and the effort to educate judges has been painstaking and slow.
Keith Findley, Center for Integrity in Forensic Sciences, University of Wisconsin Law School
There is no one solution to the problem. The scientific communities — both within the forensic disciplines and more broadly within the academic research sciences — must validate and improve the reliability of forensic evidence. Judges, as gatekeepers, must be part of the solution, both to weed out the bad “science” that slips through, and to apply institutional pressures on the forensic disciplines to incentivize them to do the basic research and create the requisite standards and protocols needed to strengthen the scientific bona fides of their disciplines. The lawyers who litigate before those judges must also step up their game.
A national committee of scientists can certainly play a role, and the National Academy of Sciences has shown how that can work. Both in its pathbreaking 2009 report on forensic science in general, and in earlier investigations of specific disciplines, the NAS has brought real scientific standards to bear on the forensic sciences, and has repeatedly shown how the forensic sciences tend to come up short. More studies are desperately needed across the forensic disciplines. We could use such a study on Shaken Baby Syndrome/Abusive Head Trauma, for example. But the courts must start taking such studies seriously before we will see real change.
We also need a new national commission on forensic science. Congress has balked at creating a forensics panel, and the Obama Justice Department’s National Commission on Forensic Science, which was starting to do some of this important work, was abruptly dismantled by the Trump administration.
That said, the issues that arise in the courts are often too varied and too case-specific for any one national commission or committee of scientists to resolve all case-level challenges. Hence, judicial gatekeeping remains important.
While it’s true that most judges lack the scientific training to be real experts at distinguishing valid and reliable science from the unreliable or invalid stuff, they don’t have to go it alone. They can be educated. Indeed, the experience in civil litigation — where judges often seriously scrutinize and frequently exclude scientific evidence proffered by civil plaintiffs — suggests that it is not a lack of capacity to understand the science that has led to the utter failure to screen out bad forensic evidence in criminal cases.
Most of the fundamental challenges to the traditional forensic evidence are not that complicated. Once explained, the problems are pretty obvious and easy to comprehend. Even in more complex areas — like medical determinations of child abuse in so-called Shaken Baby Syndrome or Abusive Head Trauma cases — we now have many examples of cases in which courts have taken the time to hear extensive complex medical evidence and have revealed that they are quite capable of understanding the flaws with the underlying medical hypotheses. The real challenge here is to get courts to overcome the inertia of precedent, and to have the courage to reject forensic science evidence that in some cases will be the central piece in the prosecution of people charged with serious crimes. The courts will need a lot of support to get over those hurdles. That means vigorous litigation by knowledgeable lawyers challenging flawed forensics, and the prospect of improved forensic evidence that can take the place of the flawed evidence.
The problem is a bit of a chicken-and-egg conundrum. Courts are reluctant to reject flawed evidence until something better is available to replace it. But the forensic science community has historically lacked incentives to create something better, because the evidence they have been producing has been accepted by the courts, and has served to convict the people whom prosecutors (for whom the laboratories typically work) are prosecuting. We need to break the cycle — we must create institutions and research opportunities to improve forensic science evidence upstream of the courtroom, and courts must create the incentives to produce a better forensic science product by rejecting expert testimony that is flawed.
Michael Risinger, Last Resort Exoneration Project, Seton Hall School of Law
An ideal system would take seriously the mandate of Kumho Tire v. Carmichael [which applied the Supreme Court’s ruling in Daubert to nonscientific expertise] to judge reliability not globally by forensic discipline, but specifically in regard to the expert claim being made and applied in the case at hand. The logic of this seems inescapable if one wants to determine the reliability of what is actually being offered in these cases. Unfortunately, this approach also is neither quick nor easy, which is why many judges prefer to make global determinations driven by precedent. Such determinations do not require them to spend time and effort learning about each expert task or application in the cases that come before them. I believe the so-called Frye general acceptance approach [the approach taken by most courts in the country before Daubert] hangs on in many jurisdictions because it lends itself to such a global, precedent-driven approach.
The second aspect of an ideal system would involve well-prepared adversaries who could find, marshal, and explain the import of various kinds of information bearing on the reliability of conclusions in the case at hand. This would include most especially research directed toward establishing the false positive and false negative error rates of the case-specific claim of reliability under ideal conditions. It would also include other research establishing the impact of context bias or other conditions present in the case under consideration which might undermine reliable performance in the individual case. Unfortunately, most specific tasks in many areas have not been the subject of formal research, and most lawyers are not good at formulating the actual task being undertaken by the expert, much less finding and marshaling the evidence for reliability, pro and con, to the extent there is any.
Finally, I see no way of outsourcing these fact-sensitive issues of reliability from judges and lawyers to some other standing body of “experts on expertise” in individual cases, even if I trusted the system to generate a properly neutral membership for such body. So unless through proper selection and training we get much more conscientious and better informed judges (on average), and much more sophisticated defense lawyers, we are stuck. The latter variable concerning defense lawyers might be the most important. In my opinion, federal judges have been much better at determining the reliability of expert testimony in civil cases, where both sides tend to have well-financed legal representation.
An ideal system would involve changes on multiple levels, and would require the cooperation of municipal, state and federal agencies in implementing reforms. While the courts decide what evidence gets admitted in testimony, it is often county agencies that decide which experts get hired to work in the forensic labs that analyze the evidence in the first place. Most criminal complaints don’t even make it to the courtroom, because defendants accept a plea bargain offered by the prosecution. They may be pressured to take such a deal because they’re being told that the forensic evidence is incontrovertible. Yet there is no way for the defendant to know if there have been mistakes in the way that forensic evidence has been analyzed. Several recent scandals involving years of incompetence and illegal activity among lab analysts have forced judges to throw out thousands of convictions and plea deals. Rebuilding trust in our forensic systems has to start in the lab.
So, on the municipal level, forensic laboratories need to be accredited by scientific agencies and required to participate in quality assurance/quality control testing on blind samples, with the results made public; they need to be funded appropriately to attract and retain certified analysts; and they need to be completely independent of law enforcement or judicial agencies. Local public defender offices need to be funded at the same level as prosecutors are, and they must be given equal access to all forensic evidence.
State governments and the federal government have their own role in an ideal forensic judicial system. To begin with, they need to implement both grants and fines as incentives to encourage a per-capita minimum investment in death investigation systems and forensic labs. And, yes, we need overarching legislative action to address the mess created by the Frye [v. United States] and Daubert decisions that, as noted, made judges the gate-keepers of scientific testimony. I believe that there should be a federally funded agency run by forensic scientists (without any input or influence from prosecutors, lawyers or law enforcement) that writes guidelines for expert testimony qualifications in designated scientific fields. Take the assessment of scientific merit away from judges, and let scientists determine whether an expert is qualified and whether their opinion is supported by the peer-reviewed literature or not. Then change the laws to allow defendants to appeal their convictions if scientific advances indicate that they were wrongfully convicted based on faulty scientific testimony.
Roderick Kennedy, retired judge, New Mexico Court of Appeals
Joe Cecil and Daniel Rubinfeld wrote an article in last fall’s Daedalus that talks about various ways to help judges make these decisions, including special masters to determine the parameters of science within which the parties’ experts will operate and give opinions that are congruent with what is predetermined to be the proven theory and acceptable practice.
Another possibility is to have judges get an adviser on science in much the same role as a law clerk, who is independent of the parties, but researches the science to lay groundwork for the judicial determination of admissibility.
There is no other provision in the law for determining admissibility of evidence, save for the judge doing it. Determining reliability by 40-year-old precedent and then using Kumho’s “you can take established things for granted” language is a horrible thing that still goes on. Some of it is judicial sloth. Some of it is fear of getting into things that aren’t known. And some is a matter of bias. With all this in the mix, it can be hard to separate and address these problems.
[Britain], Europe and Australia all now have top-down regulation of forensics. In [Britain], compliance with those regulations is a foundational requirement for admissibility. The U.S. has punted even on watered-down regulation. We’ve kept all oversight of expert testimony in federal cases within the [Justice Department], and allowed prosecutors to be nontransparent in how they use science. We didn’t look for a validated path to standards, one in which a regulator proposes standards, and the disciplines affected by those regulations are allowed to comment. Instead, we let the disciplines propose their own regulations to the Organization of Scientific Area Committees (OSAC) for Forensic Science. Even the very first proposed standard was rejected by NIST for being too lax. This bottom-up approach will take far too long, and it won’t approach the sort of validation that we’ve known is needed in forensic science since at least 2009.
We need empirical, objective validation by persons not involved in day-to-day forensic practice. We also need external validation, black-box proficiency testing of both programs and individual analysts to establish baseline likelihoods of how much predictive ability these experts and procedures actually carry.
[You can read the Daedalus article here]
Frederic Whitehurst, FBI crime-lab whistleblower, Forensic Justice Project
Let’s say we decide that judges will no longer make these decisions. Who will? Will academic scientists be willing to enter into the combat of the courtrooms — to leave the comfort of academia to opine on issues of science as it’s practiced in the real world? Will they take the time to understand not only the words of the law, but the cultural meaning of those words? Will they be willing to enter into the vulgarity of courtrooms, where we make determinations about life and death and freedom? I don’t think so, and they have proven it by not engaging with the forensics world until recently.
Today’s judges are individuals who were at one time practicing lawyers, who learned the law but never questioned the science of the forensic “science” put before them. They’ve just always accepted that what forensic scientists were saying to them was correct. Now, despite Daubert and its progeny, can they really be faulted for not believing that they’ve been hoodwinked their entire political careers? We can find fault, but that is the easy way out. We all should be the deciders. Our forensic labs are moving quickly toward publishing protocols, quality control measures, and validation studies online. So far, much of the criminal defense bar has ignored that information, comfortable in their assessment that these “scientific” issues aren’t worth addressing. As bad as the record of judges may be, they are what we have, because very few others have stepped up to the plate.
An ideal system is where we are headed: governments recognizing the problems with their crime labs, and moving toward requiring that all crime lab protocols be published on the internet. Even the [Justice Department] has demanded publication from any crime labs under its control. We should be putting these protocols in the full light of day, then encouraging anyone with an interest to join in the discussion. If crime lab managers balk, we should stop paying their salaries with tax dollars. After 30 years, I can say that when these protocols are exposed to public view, you’ll see them change for the better.
Itiel Dror, University College London, Cognitive Consultants International
I would say the following:
A. Is the area in question scientific?
B. Is the expert indeed an expert in the area in question?
C. Did the expert in this area do their job properly (e.g., follow scientific protocols and best practices)?
Of course, if ‘A’ is a ‘no’, end of story; but even if ‘A’ is a ‘yes’, then ‘B’ and/or ‘C’ can be a ‘no’. Furthermore, the answer to each of the three questions above (A, B, and C) is not a yes/no dichotomy, but a continuum with many shades of gray.
Solution (or, at least, a way forward): Jurors (as well as judges) need to be educated about the areas in question: their scientific basis and accuracy, as well as potential problems, such as bias in interpretation. Hence, before each court case, the relevant evidence domain used in the case (e.g., DNA mixtures, fingerprinting, etc.) would be presented to the jurors. That is, there should be a library of such background video information for each domain. These videos could be shown to the jurors before the trial begins. Who will make those videos? Experts in the relevant domain, with input from prosecutors, defense lawyers, the Innocence Project, judges, experts in bias, etc. — it should be a professional expert group composed of all the stakeholders.
The entire commentary can be read at: