Friday, June 19, 2020

Technology Series: Part One: George Floyd Aftermath: Facial recognition; "Images of a white Minneapolis police officer kneeling on the neck of an unarmed black man, George Floyd, who then died, have sparked protests worldwide and exposed deep grievances over strained race relations. The uproar has also triggered calls to address racial bias in technology as artificial intelligence is being widely adopted to automate decisions, from healthcare to recruitment, despite concerns that it could unfairly target ethnic minorities."

BACKGROUND: TECHNOLOGY: In the last several years I have been spending considerably more time than usual on applications of rapidly developing technology in the criminal justice process that could affect the quality of the administration of justice - for better, or, most often, for worse. First, of course, predictive policing (AKA PredPol) made its appearance, at its most extreme promising the ability to identify a criminal act before it occurred. At its most modest, it offered police a better sense of where certain crimes were occurring in the community being policed - knowledge that the seasoned beat officer had intuited through everyday police work years earlier. PredPol has lost some of its lustre as police departments discovered that the expense of acquiring and using the technology was not justified. Then we entered a period in which algorithms became popular with judges for use at bail hearings and at sentencing. In my eyes, these judges were just passing the buck to the machine when they could have, and should have, made their decisions based on information they received in open court - not from algorithms that were noxious for their secrecy, because the manufacturers did not want to reveal their trade secrets, even in a courtroom where an accused person's liberty and reputation were on the hook. The use of these algorithms at bail and sentencing has come under attack in many jurisdictions for discriminating against minorities, and they are hopefully on the way out. Lastly, facial recognition technology has become a concern to this Blog because of its proven ability to sweep up huge numbers of people and lead to wrongful arrests and prosecutions. May we never forget that a huge, extremely well-funded, powerful, often politically connected industry is pushing for-profit use of all these technologies in the criminal justice system - and will, hopefully, in the post-George Floyd aftermath, be more concerned with the welfare of the community than with its bottom line. HL.
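(For readers who want to see what critics of these bail and sentencing algorithms are actually measuring, here is a minimal sketch in Python - with invented numbers and a hypothetical risk tool, not any vendor's real system - of the basic disparity check: among people who did not go on to reoffend, how often did the tool nonetheless label each group "high risk"?)

```python
# Hypothetical audit of a bail/sentencing risk score (invented data, not any
# real tool or real defendants): compare how often each group is wrongly
# flagged "high risk" among people who did not in fact reoffend.

def wrongful_flag_rate(cases, group):
    """False-positive rate of the 'high risk' label for one demographic group."""
    did_not_reoffend = [c for c in cases if c["group"] == group and not c["reoffended"]]
    wrongly_flagged = [c for c in did_not_reoffend if c["high_risk"]]
    return len(wrongly_flagged) / len(did_not_reoffend) if did_not_reoffend else 0.0

cases = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
]

for g in ("A", "B"):
    print(g, wrongful_flag_rate(cases, g))
# Prints A 0.25 and B 0.5: group B bears twice the burden of wrongful
# "high risk" labels - the pattern that jurisdictions have objected to.
```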
-------------------------------------------------------------------
PASSAGE OF THE DAY: "From New York to Minneapolis, police across the United States have access to facial recognition technology, which can be used to search officers' body camera footage, surveillance camera images and social media to find specific individuals. But the algorithms used in facial recognition are trained on data sets, such as photos, which often underrepresent minorities, said Martin Tisne, head of Luminate, a philanthropic organisation focusing on digital rights issues. This means that software used to identify a person of interest to law enforcement can struggle to recognise ethnic minority faces, which privacy advocates fear could lead to harassment of innocent people. Major firms have tried to address criticisms by training their algorithms on more diverse data-sets but studies still reveal widespread bias. As big brands have taken to social media to condemn racism in the wake of Floyd's death, campaigners have challenged them to back up their words with action. "

-----------------------------------------------------------------------

STORY: "US protests fuel calls for ban on racially biased facial recognition tools," by reporters Avi Asher-Schapiro and Umberto Bacchi, Thomson Reuters Foundation, June 8, 2020.
GIST: "Law enforcement agencies should be banned from using racially biased surveillance technology that fuels discrimination and injustice, digital and human rights groups said on Thursday, amid protests over police brutality against black Americans.
Some facial recognition systems misidentify ethnic minorities 10 to 100 times more often than white people, according to US government research, raising fears of unjust arrests.
“We need to make sure technologies like facial surveillance stay out of our communities,” said Kade Crockford, Director of the Technology for Liberty Program at the American Civil Liberties Union (ACLU) of Massachusetts.
Images of a white Minneapolis police officer kneeling on the neck of an unarmed black man, George Floyd, who then died, have sparked protests worldwide and exposed deep grievances over strained race relations.
The ACLU is campaigning for authorities to follow the lead of cities like San Francisco and Oakland that have banned facial recognition, which is also being used by customs officials at travel checkpoints.
"People are marching in record numbers to demand justice for black communities long subject to police violence," Crockford told the Thomson Reuters Foundation.
"In response, government agencies are mounting increasingly aggressive attacks on freedom of speech and association, including by deploying dystopian surveillance technologies".
Last Friday, the US Customs and Border Patrol agency flew a surveillance drone normally used for border patrols over Minneapolis, the city at the hub of the protests.
The CBP said the drone "was preparing to provide live video to aid in situational awareness at the request of our federal law enforcement partners" but was diverted back when authorities realised it was no longer needed.
Police:
From New York to Minneapolis, police across the United States have access to facial recognition technology, which can be used to search officers' body camera footage, surveillance camera images and social media to find specific individuals.
But the algorithms used in facial recognition are trained on data sets, such as photos, which often underrepresent minorities, said Martin Tisne, head of Luminate, a philanthropic organisation focusing on digital rights issues.
This means that software used to identify a person of interest to law enforcement can struggle to recognise ethnic minority faces, which privacy advocates fear could lead to harassment of innocent people.
Major firms have tried to address criticisms by training their algorithms on more diverse data-sets but studies still reveal widespread bias.
As big brands have taken to social media to condemn racism in the wake of Floyd's death, campaigners have challenged them to back up their words with action.
The ACLU hit out at Amazon for expressing "solidarity with the Black community" on Twitter while selling governments access to Rekognition, a powerful image ID software unveiled in 2016 by the company's cloud-computing division.
"Cool tweet. Will you commit to stop selling face recognition surveillance technology that supercharges police abuse?" the ACLU asked Amazon on Twitter.
A 2018 ACLU study found that Rekognition confused African American members of the US Congress with police mugshots of other people.
Amazon did not immediately reply to a request for comment.
The company said in September that it was working on proposed regulations around the fledgling technology and that all Rekognition users must follow the law.
Sarah Chander of European digital rights group EDRi said Artificial Intelligence should also be banned in predictive policing, where algorithms help decide which neighbourhoods police patrol and what kinds of crimes they prioritise.
"Governments across the world need to step up and protect communities. This means drawing red lines at certain uses of technology," she said in emailed comments.
Tisne of Luminate said he hoped the protests would push companies and governments to do more to address tech bias, including ensuring data used to train algorithms was inclusive, and that algorithms were properly tested before release.
Tech firms should also be more transparent about how their algorithms work, possibly opening up data and source codes for software, he added."
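The per-group misidentification figures cited in the story (US government research finding error rates 10 to 100 times higher for ethnic minorities) come from exactly the kind of pre-release testing Tisne calls for. Here is a minimal sketch in Python - with invented data and a hypothetical face matcher, not any vendor's actual API - of how such an audit compares false-match rates across demographic groups:

```python
# Hypothetical face-recognition audit (invented data, no real system): among
# probe images that are NOT the person on the watchlist, how often does the
# matcher wrongly report a match for each demographic group?
from collections import defaultdict

def false_match_rate_by_group(results):
    """results: iterable of (group, matcher_said_match, truly_same_person)."""
    false_matches, comparisons = defaultdict(int), defaultdict(int)
    for group, predicted_match, truly_same in results:
        if not truly_same:                 # ground truth: different people
            comparisons[group] += 1
            if predicted_match:            # the system matched them anyway
                false_matches[group] += 1
    return {g: false_matches[g] / comparisons[g] for g in comparisons}

audit = [  # (group, system said "match", actually the same person?)
    ("group_a", False, False), ("group_a", False, False),
    ("group_a", True,  False), ("group_a", False, False),
    ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  False),
]
print(false_match_rate_by_group(audit))
# {'group_a': 0.25, 'group_b': 0.75} - the kind of gap that, at scale,
# translates into wrongful stops and arrests for the over-matched group.
```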
The entire story can be read at:

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher: The Charles Smith Blog;
-----------------------------------------------------------------
FINAL WORD:  (Applicable to all of our wrongful conviction cases):  "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."
Lawyer Radha Natarajan:
Executive Director: New England Innocence Project;
------------------------------------------------------------------