PUBLISHER'S NOTE: This is a very lengthy article but well worth the read in its entirety at the link below. For now, here is but a taste. (That's true. I really mean it.)
Harold Levy: Publisher: The Charles Smith Blog.
-----------------------------------------------------------
PASSAGE OF THE DAY: "Nonetheless, the veneer of modernity that AI gives them is bringing these systems into settings the polygraph has not been able to penetrate: border crossings, private job interviews, loan screenings, and insurance fraud claims. Corporations and governments are beginning to rely on them to make decisions about the trustworthiness of customers, employees, citizens, immigrants, and international visitors. But what if lying is just too complex for any machine to reliably identify, no matter how advanced its algorithm is?"
--------------------------------------------------------
STORY: "Lie detectors have always been suspect. AI has made the problem worse," by writer, reporter, and researcher Jake Bittle, published by MIT Technology Review on March 13, 2020.
SUB-HEADING: "An in-depth investigation into artificial-intelligence-based attempts to recognize deception."
GIST: "Before the polygraph pronounced him guilty, Emmanuel Mervilus worked for a cooking oil company at the port of Newark, New Jersey. He was making $12 an hour moving boxes, but it wasn’t enough. His brother and sister were too young to work, and his mother was fighting an expensive battle against cancer. His boss at the port, though, had told him he was next in line for promotion to a technician position, which would come with a raise to $25 an hour.

Mervilus was still waiting for this promotion on October 19, 2006, when he and a friend stopped at a Dunkin’ Donuts in nearby Elizabeth, New Jersey. A few minutes later, as they walked down the street, two police officers approached them and accused them of having robbed a man at knifepoint a few minutes earlier, outside a nearby train station. The victim had identified Mervilus and his friend from a distance.

Desperate to prove his innocence, Mervilus offered to take a polygraph test. The police agreed, but in the days right before the test, Mervilus’s mother died. He was distraught and anxious when the police strapped him up to the device. He failed the test, asked to take it again, and was refused.

After Mervilus maintained his plea of innocence, his case went to trial. The lieutenant who had administered the polygraph testified in court that the device was a reliable “truth indicator.” He had never in his career, he said, seen a case where “someone was showing signs of deception, and [it later] came out that they were truthful.” A jury convicted Mervilus—swayed, an appeals court later found, by misplaced faith in the polygraph. The judge sentenced him to 11 years in prison.

The belief that deception can be detected by analyzing the human body has become entrenched in modern life. Despite numerous studies questioning the validity of the polygraph, more than 2.5 million screenings are conducted with the device each year, and polygraph tests are a $2 billion industry.
US federal government agencies including the Department of Justice, the Department of Defense, and the CIA all use the device when screening potential employees. According to 2007 figures from the Department of Justice, more than three-quarters of all urban police and sheriff’s departments also used lie detectors to screen hires. But polygraph machines are still too slow and cumbersome to use at border crossings, in airports, or on large groups of people. As a result, a new generation of lie detectors based on artificial intelligence has emerged in the past decade. Their proponents claim they are both faster and more accurate than polygraphs.

In reality, the psychological work that undergirds these new AI systems is even flimsier than the research underlying the polygraph. There is scant evidence that the results they produce can be trusted. Nonetheless, the veneer of modernity that AI gives them is bringing these systems into settings the polygraph has not been able to penetrate: border crossings, private job interviews, loan screenings, and insurance fraud claims. Corporations and governments are beginning to rely on them to make decisions about the trustworthiness of customers, employees, citizens, immigrants, and international visitors. But what if lying is just too complex for any machine to reliably identify, no matter how advanced its algorithm is?"

...

"“In a court, you have to give over material evidence, like your hair and your blood,” says Wilde. “But you also have a right to remain silent, a right not to speak against yourself.” Mervilus opted to take the polygraph test on the assumption that, like a DNA test, it would show he was innocent. And although the device got it wrong, it wasn’t the machine itself that sent him to prison. It was the jury’s belief that the test results were more credible than the facts of the case. The foundational premise of AI lie detection is that lies are there to be seen with the right tools.
Psychologists still don’t know how valid that claim is, but in the meantime, the belief in its validity may be enough to disqualify deserving applicants for jobs and loans, and to prevent innocent people from crossing national borders. The promise of a window into the inner lives of others is too tempting to pass up, even if nobody can be sure how clear that window is. “It’s the promise of mind-reading,” says Wilde. “You can see that it’s bogus, but that’s what they’re selling.”"
----------------------------------------------------------------
The entire story can be read at:
----------------------------------------------------------------
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;
-----------------------------------------------------------------
FINAL WORD: (Applicable to all of our wrongful conviction cases): "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."
Lawyer Radha Natarajan:
Executive Director: New England Innocence Project;
------------------------------------------------------------------