Monday, June 22, 2020

Technology Series: Part Four: Facial Recognition: Are big tech companies like Amazon, IBM and Microsoft really doing their best to curb police use of their facial recognition products? Perhaps not. Reporter Kate Kaye makes a pretty good case in 'Fast Company' that big tech's new face recognition bans don’t go far enough..."The parade of announcements from giant tech companies is an attempt “to virtue signal as a company,” says Rashida Richardson, director of policy research at the AI Now Institute. Over the past several years, police departments’ increasing use of facial recognition has sparked criticism due to the technology’s inaccuracy. Several research studies, including one by the government, have shown that facial recognition algorithms fail to detect black and brown faces accurately. Cities across the country have banned its use by police departments and government agencies. “Limiting the scope of these [announcements] even to law enforcement is insufficient,” says Safiya Noble, associate professor at UCLA’s Department of Information Studies and author of Algorithms of Oppression. “We need a full-on recall of all of these technologies.”


BACKGROUND: TECHNOLOGY: In the last several years I have been spending considerably more time than usual on applications of rapidly developing technology in the criminal justice process that could affect the quality of the administration of justice - for better or, most often, for worse. First, of course, predictive policing (AKA PredPol) made its appearance, at its most extreme promising the ability to identify a criminal act before it occurred. At its most modest level, it offered police a better sense of where certain crimes were occurring in the community being policed - knowledge that the seasoned beat officer had intuited through everyday police work years earlier. PredPol has lost some of its lustre as police departments discovered that the expense of acquiring and using the technology was not justified. Then we entered a period in which algorithms became popular with judges for use at bail hearings and at sentencing. In my eyes, these judges were just passing the buck to the machine when they could have, and should have, made their decisions based on information they received in open court - not from algorithms notorious for their secrecy, because the manufacturers did not want to reveal their trade secrets - even in a courtroom where an accused person's liberty and reputation were at stake. These bail and sentencing algorithms have come under attack in many jurisdictions for discriminating against minorities and are, one hopes, on the way out. Lastly, facial recognition technology has become a concern to this Blog because of its proven ability to sweep up huge numbers of people and lead to wrongful arrests and prosecutions. May we never forget that a huge, extremely well-funded, powerful and often politically connected industry is pushing, for profit, the use of all of these technologies in the criminal justice system - and, hopefully, in the post-George Floyd aftermath, will be more concerned with the welfare of the community than with its bottom line. HL.
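To make the bail-and-sentencing concern above concrete, here is a minimal, purely illustrative sketch - every name, weight and number is hypothetical, drawn from no real tool - of how a risk score built on historical arrest records can reproduce the disparities baked into those records:

# Illustrative only: a toy "risk score" of the kind criticized above.
# Every name, weight, and number here is hypothetical, from no real tool.

def risk_score(prior_arrests: int, age: int) -> float:
    """Toy score: more recorded prior arrests and younger age -> higher score."""
    return 2.0 * prior_arrests + 0.5 * max(0, 30 - age)

# Two hypothetical defendants with identical underlying conduct.
# Defendant A lives in a heavily patrolled neighbourhood, so the same
# behaviour produced more *recorded* arrests than Defendant B's.
defendant_a = {"prior_arrests": 4, "age": 22}  # over-policed area
defendant_b = {"prior_arrests": 1, "age": 22}  # lightly policed area

print(risk_score(**defendant_a))  # 12.0 -> labelled "higher risk"
print(risk_score(**defendant_b))  # 6.0  -> labelled "lower risk"

The point is not the arithmetic but the input: a score can be perfectly faithful to recorded arrests while still encoding who was watched most closely.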

---------------------------------

PASSAGE OF THE DAY: "Despite their limitations on facial recognition use by law enforcement, neither IBM, Amazon, nor Microsoft said they would stop the use of the other highly scrutinized predictive policing and surveillance tech they offer. Predictive policing systems in particular have been criticized for using historical data that contains inaccurate or racially-biased documentation of law enforcement incidents."

----------------------------------

STORY: "Why Big Tech's new face recognition bans don't go far enough," by reporter Kate Kaye, published by  'Fast Company' on June 13, 2020.

SUB-HEADING: "While these tech giants may have stepped back  from facial recognition, their bans don't  encompass other technology they supply for police or square with their past lobbying and legislative efforts."

GIST: Advocates against flawed facial recognition systems have pushed for limits or bans on the use of these controversial technologies by law enforcement for at least four years. Now, amid a global reckoning around racial injustice spurred by the killing of George Floyd by Minneapolis police, IBM, Amazon, and Microsoft declared decisions to end or pause sales of their facial recognition products to law enforcement.

The companies’ choice to step away from facial recognition received muted praise from some high-profile activists who’ve fought against facial recognition use for law enforcement and private surveillance. But other advocates for ethical and equitable tech approaches are skeptical of what they say looks more like pandering than meaningful action.

The parade of announcements from giant tech companies is an attempt “to virtue signal as a company,” says Rashida Richardson, director of policy research at the AI Now Institute.

Over the past several years, police departments’ increasing use of facial recognition has sparked criticism due to the technology’s inaccuracy. Several research studies, including one by the government, have shown that facial recognition algorithms fail to detect black and brown faces accurately. Cities across the country have banned its use by police departments and government agencies.
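To see what "fail to detect black and brown faces accurately" means in evaluation terms: the studies cited here measured error rates separately for each demographic group rather than in aggregate. A minimal sketch of that kind of disaggregated measurement, using entirely hypothetical data and no real vendor's API, follows:

# Hypothetical evaluation records: (group, system_said_match, truly_a_match).
# Illustrative data only - not results from any actual system or study.
results = [
    ("group_a", True, True), ("group_a", True, False),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True, True), ("group_b", False, False),
    ("group_b", False, False), ("group_b", False, False),
]

def false_match_rate(records):
    """Share of truly non-matching pairs the system wrongly declared a match."""
    non_matches = [r for r in records if not r[2]]
    false_matches = [r for r in non_matches if r[1]]
    return len(false_matches) / len(non_matches) if non_matches else 0.0

for group in ("group_a", "group_b"):
    subset = [r for r in results if r[0] == group]
    print(group, false_match_rate(subset))
# group_a 0.33..., group_b 0.0 - a single aggregate accuracy number
# would hide exactly this kind of disparity between groups.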

“Limiting the scope of these [announcements] even to law enforcement is insufficient,” says Safiya Noble, associate professor at UCLA’s Department of Information Studies and author of Algorithms of Oppression. “We need a full-on recall of all of these technologies.”

AN OPPORTUNISTIC MOVE

IBM came first. The company sent a letter on June 8 addressed to Congressional Black Caucus members and sponsors of the Justice in Policing Act, introduced the same day. IBM CEO Arvind Krishna recognized the “horrible and tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor,” and stated that the company “no longer offers general purpose IBM facial recognition or analysis software.”

The thing is, it appears IBM already stopped making its facial analysis and detection technology available in September 2019. The IBM announcement is “not bad because it’s better than doing nothing, but that said I think it’s completely promotional and opportunistic,” says Richardson. IBM’s letter got a more welcome reception from MIT researcher Joy Buolamwini and her organization Algorithmic Justice League.

The group said in an email to Fast Company that it “commends this decision as a first move forward towards company-side responsibility to promote equitable and accountable AI.” In 2018, Buolamwini and her colleague Dr. Timnit Gebru published seminal research that revealed accuracy disparities for people of color and women in earlier versions of facial recognition software from IBM, Microsoft, and Chinese company Face++. Amazon, which makes a facial recognition product called Rekognition, swiftly followed on June 10, announcing a “one-year moratorium on police use of Amazon’s facial recognition technology.”

At least one law enforcement agency using Rekognition—Oregon’s Washington County Sheriff’s Office—has said it will stop doing so. Amazon declined to comment for this story and did not provide any details about how it will enact and enforce the moratorium. As criticism of police practices reaches a crescendo, Amazon’s two-paragraph statement made no mention of police abuse or racial injustice.

“This pause is the bare minimum when it comes to addressing the ways facial recognition has enabled harms and violence against Black people,” data equity group Data for Black Lives said in a statement sent to Fast Company.

The next day, Microsoft emerged with its own statement. Microsoft President Brad Smith told The Washington Post the firm “decided that we will not sell facial recognition technology to police departments in the United States until we have a national law in place grounded in human rights that will govern this technology.”

It was an about-face from an earlier stance. In January, Smith told Seattle’s NPR affiliate KUOW the company did not want a moratorium on facial recognition because “the only way to continue developing it actually is to have more people using it.”

NO END FOR PREDICTIVE POLICING OR OTHER SURVEILLANCE TECH

Despite their limitations on facial recognition use by law enforcement, neither IBM, Amazon, nor Microsoft said they would stop the use of the other highly scrutinized predictive policing and surveillance tech they offer. Predictive policing systems in particular have been criticized for using historical data that contains inaccurate or racially-biased documentation of law enforcement incidents.
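The criticism of predictive policing in the paragraph above rests on a feedback loop: patrols are sent where incidents were recorded, and patrol presence generates more recorded incidents in the same place. A minimal, hypothetical simulation of that loop - borrowing no real system's data or interface - looks like this:

import random

random.seed(0)  # deterministic for the illustration

# Two hypothetical districts with the *same* true incident rate; district_1
# simply starts with more incidents on record because it was patrolled more.
true_rate = {"district_1": 0.1, "district_2": 0.1}
recorded = {"district_1": 10, "district_2": 5}

for _ in range(20):
    # "Predictive" step: send the patrol wherever the record count is highest.
    target = max(recorded, key=recorded.get)
    # Patrol presence multiplies the chance an incident gets *recorded* there.
    if random.random() < 5 * true_rate[target]:
        recorded[target] += 1

print(recorded)  # district_1 keeps accumulating records; district_2 stays flat

Under these toy assumptions the loop never revisits district_2, so its record count - and hence its predicted "risk" - stays frozen regardless of what actually happens there.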

Microsoft’s statement was “a dodge,” says Liz O’Sullivan, technology director at the Surveillance Technology Oversight Project. Not only does Microsoft appear not to sell facial recognition to police in the U.S. in the first place, she said, but it also has a $10 billion contract with the Pentagon, which could lead to the implementation of its augmented reality headsets and object detection for military use.

As for IBM, the company said nothing about ending sales of its predictive analytics tools to law enforcement. The company has provided predictive and “near-instant intelligence” to police clients including Rochester, New York, Manchester, New Hampshire, and Edmonton, Canada. IBM did not respond to requests for comment regarding its predictive policing technologies.

Meanwhile, Amazon’s year off from selling facial recognition to police does not limit its law enforcement partners’ use of video surveillance footage from its Ring connected doorbell system. The company feeds video footage into a data hub accessible to hundreds of law enforcement agencies, who use it as part of a warrantless community policing program. For now, Ring does not enable facial recognition.

O’Sullivan says Amazon’s moratorium is a partial victory because the company has actually sold facial recognition to law enforcement. “The reason I think this is a victory is we have a company who has a vested interest in having great relationships with local police departments take a stand and revoke access to something that otherwise they would profit from.”

A PUSH FOR WATERED-DOWN FEDERAL LEGISLATION

Both Amazon and Microsoft say they want federal legislation governing facial recognition, and both have attempted to influence rules for the technology at the state and local level.

In its statement, Amazon said it hoped its moratorium would “give Congress enough time to implement appropriate rules, and we stand ready to help if requested.” The company has already begun attempts to influence federal regulation. In September, its CEO, Jeff Bezos, said the firm’s public policy team was developing a federal facial recognition legislation proposal.

But some actions taken behind closed doors show Amazon does not actually want strict rules against facial recognition use. As recently as December, the company lobbied against a proposed ban in Portland, Oregon, which could prevent the use of facial recognition by government agencies, law enforcement, and private entities. Portland officials said the company hoped to stop or at least water down the legislation.

Microsoft also pushed against an ACLU-backed moratorium on government facial recognition use in its home state of Washington. Instead, the company supported a new law with weaker restrictions on the technology. The law was cosponsored by Senator Joseph Nguyen, who is also a senior program manager at Microsoft.

O’Sullivan says Microsoft wants a federal law that preempts tougher regulations such as the California Consumer Privacy Act and facial recognition bans in cities such as Oakland and San Francisco. “It’s a big part of why they’re backing away from this product now,” she says.

Going forward, to prevent technologies embedded with “racialized logic,” AI Now’s Richardson says all three firms should evaluate their hiring processes and incorporate nonwhite communities and employees in product conception.

“We don’t know about what’s in the R and D pipeline,” she says. “I’m sure there’s 10 other technologies we don’t know about that will come out in the next couple of years that use the same data or are embedded with the same problems.”

The entire story can be read at:

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award" - and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;
-----------------------------------------------------------------
FINAL WORD:  (Applicable to all of our wrongful conviction cases):  "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."
Lawyer Radha Natarajan:
Executive Director: New England Innocence Project;
------------------------------------------------------------------