PASSAGE OF THE DAY: "Creating hyper-realistic suspect profiles resembling innocent people would be especially harmful to Black and Latino people, with Black people being five times more likely to be stopped by police without cause than white people. People of color are also more likely to be stopped, searched, and suspected of a crime, even when no crime has occurred. “If these AI-generated forensic sketches are ever released to the public, they can reinforce stereotypes and racial biases and can hamper an investigation by directing attention to people who look like the sketch instead of the actual perpetrator,” Lynch said, adding that mistaken eyewitness identifications contributed to 69 percent of wrongful convictions that were later overturned by DNA evidence in the US. Overall, false or misleading forensics—including police sketches—have contributed to almost 25 percent of all wrongful convictions across the US."
--------------------------------------------------------------------
PASSAGE TWO OF THE DAY: "Fortunato and Reynaud’s AI tool isn’t the first software to create controversy with generated images of suspects. In October 2022, the Edmonton Police Service (EPS) shared a computer-generated image of a suspect that was created with DNA phenotyping, leading to backlash from privacy and criminal justice experts, and the department deleting the image from its website and social media. Again, the lack of accuracy in the dissemination of a seemingly realistic photo put innocent people at risk. “I prioritized the investigation – which in this case involved the pursuit of justice for the victim, herself a member of a racialized community – over the potential harm to the Black community. This was not an acceptable trade-off and I apologize for this,” wrote Enyinnah Okere, the chief operating officer of EPS, in a press release following the backlash. Last year, a report by the Center on Privacy & Technology found that AI facial recognition tools often lead to bias and error in forensic cases. The report stated that facial recognition is an unreliable source of identity evidence and that the algorithm and human steps in a face recognition search may compound each other’s mistakes. “Since faces contain inherently biasing information such as demographics, expressions, and assumed behavioral traits, it may be impossible to remove the risk of bias and mistake,” the report said. “I think that as this technology matures, we should start developing norms of things that these models can and cannot be used for. So for me, this forensics sketch artist is very clearly something that we should not be using generative technology for,” Luccioni said. “And so no matter how well we know the biases that are in the models, there are just certain applications that it shouldn't be used for.”
----------------------------------------------------------------
STORY: "Developers Created AI to Generate Police Sketches. Experts Are Horrified," by Reporter Chloe Xiang, published by 'Vice" (Motherboard) one February 7, 2023.
SUB-HEADING: "Police forensics is already plagued by human biases. Experts say AI will make it even worse."
The entire story can be read at:
https://www.vice.com/en/article/qjk745/ai-police-sketches
GIST: "Two developers have used OpenAI’s DALL-E 2 image generation model to create a forensic sketch program that can create “hyper-realistic” police sketches of a suspect based on user inputs.
The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program's purpose is to cut down the time it usually takes to draw a suspect of a crime, which is “around two to three hours,” according to a presentation uploaded to the internet.
“We haven’t released the product yet, so we don’t have any active users at the moment,” Fortunato and Reynaud told Motherboard in a joint email. “At this stage, we are still trying to validate if this project would be viable to use in a real world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.”
AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions.
“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, told Motherboard. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”
The program asks users to provide information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eyes, and jaw descriptions or through the open description feature, in which users can type any description they have of the suspect. Then, users can click “generate profile,” which sends the descriptions to DALL-E 2 and produces an AI-generated portrait.
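To make that workflow concrete, here is a minimal sketch, under stated assumptions, of how a fielded witness description might be flattened into a single text prompt and sent to an image-generation API. The field values, the build_prompt helper, and the use of OpenAI's current Python client are illustrative assumptions; this is not Fortunato and Reynaud's actual code.

# Minimal illustrative sketch (assumptions noted above), Python with the openai package.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def build_prompt(fields: dict) -> str:
    # Flatten the fielded witness description into one prose prompt.
    parts = [f"{name}: {value}" for name, value in fields.items() if value]
    return "Forensic-style portrait of a person. " + "; ".join(parts)

# Hypothetical template values mirroring the fields the article lists.
description = {
    "gender": "male",
    "skin color": "light brown",
    "age": "around 40",
    "hair": "short, dark",
    "eyes": "brown",
    "jaw": "square",
}

response = client.images.generate(
    model="dall-e-2",                  # DALL-E 2, the model named in the article
    prompt=build_prompt(description),
    n=1,
    size="1024x1024",
)
print(response.data[0].url)            # URL of the generated portrait

Note that every facial detail the witness does not specify is filled in by the model's own statistical priors rather than by the witness's memory, which is precisely the bias-amplification risk the experts quoted in this piece describe.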
“Research has shown that humans remember faces holistically, not feature-by-feature. A sketch process that relies on individual feature descriptions like this AI program can result in a face that’s strikingly different from the perpetrator’s,” Lynch said. “Unfortunately, once the witness sees the composite, that image may replace, in their minds, their hazy memory of the actual suspect. This is only exacerbated by an AI-generated image that looks more ‘real’ than a hand-drawn sketch.”
Creating hyper-realistic suspect profiles resembling innocent people would be especially harmful to Black and Latino people, with Black people being five times more likely to be stopped by police without cause than white people. People of color are also more likely to be stopped, searched, and suspected of a crime, even when no crime has occurred.
“If these AI-generated forensic sketches are ever released to the public, they can reinforce stereotypes and racial biases and can hamper an investigation by directing attention to people who look like the sketch instead of the actual perpetrator,” Lynch said, adding that mistaken eyewitness identifications contributed to 69 percent of wrongful convictions that were later overturned by DNA evidence in the US. Overall, false or misleading forensics—including police sketches—have contributed to almost 25 percent of all wrongful convictions across the US.
The addition of DALL-E 2 to the already unreliable process of witness descriptions worsens the issue. Sasha Luccioni, a Research Scientist at Hugging Face who tweeted about the police sketch program, told Motherboard that DALL-E 2 contains many biases—for example, it was known to display mostly white men when asked to generate an image of a CEO. Luccioni said that though these examples repeatedly crop up, researchers still haven’t been able to pinpoint the exact source of the model’s biases and are thus unable to take the right measures to correct them. OpenAI continually develops methods to mitigate bias in its AI's output.
“Typically, it is marginalized groups that are already even more marginalized by these technologies because of the existing biases in the datasets, because of the lack of oversight, because there are a lot of representations of people of color on the internet that are already very racist, and very unfair. It's like a kind of compounding factor,” Luccioni added. Like other AI experts, she describes the process as a feedback loop in which AI models contain, produce, and perpetuate bias as the images they generate continue to be used.
Fortunato and Reynaud said that their program runs with the assumption that police descriptions are trustworthy and that “police officers should be the ones responsible for ensuring that a fair and honest sketch is shared.”
“Any inconsistencies created by it should be either manually or automatically (by requesting changes) corrected, and the resulting drawing is the work of the artist itself, assisted by EagleAI and the witness,” the developers said. “The final goal of this product is to generate the most realistic drawing of a suspect, and any errors should be corrected. Furthermore, the model will most likely not produce the ideal result in just one attempt, thus requiring iterations to achieve the best result possible.”
The developers themselves admit that there are no metrics to measure the accuracy of the generated image. In a criminal case, inaccuracies may not be corrected until the suspect is found or has already spent time in jail.
And just as when police share the names and photos of suspects on social media, the sharing of an inaccurate image before then may also place suspicion on already over-criminalized populations. Critics also point out that the developers’ assumption of police neutrality ignores well-documented evidence that cops routinely lie while presenting evidence and testifying in criminal cases.
Fortunato and Reynaud’s AI tool isn’t the first software to create controversy with generated images of suspects. In October 2022, the Edmonton Police Service (EPS) shared a computer-generated image of a suspect that was created with DNA phenotyping, leading to backlash from privacy and criminal justice experts, and the department deleting the image from its website and social media.
Again, the lack of accuracy in the dissemination of a seemingly realistic photo put innocent people at risk. “I prioritized the investigation – which in this case involved the pursuit of justice for the victim, herself a member of a racialized community – over the potential harm to the Black community. This was not an acceptable trade-off and I apologize for this,” wrote Enyinnah Okere, the chief operating officer of EPS, in a press release following the backlash.
Last year, a report by the Center on Privacy & Technology found that AI facial recognition tools often lead to bias and error in forensic cases. The report stated that facial recognition is an unreliable source of identity evidence and that the algorithm and human steps in a face recognition search may compound each other’s mistakes.
“Since faces contain inherently biasing information such as demographics, expressions, and assumed behavioral traits, it may be impossible to remove the risk of bias and mistake,” the report said.
“I think that as this technology matures, we should start developing norms of things that these models can and cannot be used for. So for me, this forensics sketch artist is very clearly something that we should not be using generative technology for,” Luccioni said. “And so no matter how well we know the biases that are in the models, there are just certain applications that it shouldn't be used for.”
OpenAI declined to comment on the record about the use of its technology in Fortunato and Reynaud’s project."
-------------------------------------------------------------------
PUBLISHER'S NOTE: I am monitoring this case/issue/resource. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;
SEE BREAKDOWN OF SOME OF THE ON-GOING INTERNATIONAL CASES (OUTSIDE OF THE CONTINENTAL USA) THAT I AM FOLLOWING ON THIS BLOG, AT THE LINK BELOW: HL:
https://www.blogger.com/blog/post/edit/120008354894645705/4704913685758792985
FINAL WORD: (Applicable to all of our wrongful conviction cases): "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."
Lawyer Radha Natarajan:
Executive Director: New England Innocence Project;
—————————————————————————————————
FINAL, FINAL WORD: "Since its inception, the Innocence Project has pushed the criminal legal system to confront and correct the laws and policies that cause and contribute to wrongful convictions. They never shied away from the hard cases — the ones involving eyewitness identifications, confessions, and bite marks. Instead, in the course of presenting scientific evidence of innocence, they've exposed the unreliability of evidence that was, for centuries, deemed untouchable." So true!
Christina Swarns: Executive Director: The Innocence Project;
------------------------------------------------------------------