QUOTE OF THE DAY: "Facial recognition software has been used to identify suspects caught by a home camera in the act of snatching the item. Still, questions about the technology have persuaded some cities against purchasing it for their police departments. The western Massachusetts city of Springfield decided against it after weighing the technology’s potential to deliver racially flawed results, effectively anticipating the NIST findings. “I’m a black woman and I’m dark,” Springfield councilor Tracye Whitfield told Police Commissioner Cheryl Clapprood, who is white. “I cannot approve something that’s going to target me more than it will target you.” Clapprood responded that cities could build in safeguards to prevent racial bias or abuse of civil liberties. “The facial recognition technology does not come along and drop a net from the sky and carry you off to prison,” she said. But the tentative lesson of the NIST study is that unless technology customers use due diligence when they purchase the software—and are cautious about the matches it turns up—the odds of falling prey to “demographic errors”—in the researchers’ phrase—are high."
---------------------------------------------------------------
PASSAGE OF THE DAY: "Researchers evaluated 189 face recognition algorithms supplied by 99 developers, which the study said represented a “majority of the industry,” and applied them to 18 million images of more than eight million people, using databases provided by the State Department, the Department of Homeland Security (DHS) and the FBI. They found a startling number of “false positives”—incorrect matches between individual faces—for Asian and African Americans compared to whites. The factor of error ranged enormously across the algorithms, from 10 to 100. “Using the higher quality application photos, false positive rates are highest in West and East African and East Asian people, and lowest in Eastern European individuals,” the study said, noting that there were fewer false positives for Asian faces in software developed by China. “We found false positives to be higher in women than men, and this is consistent across algorithms and data sets. This effect is smaller than that due to race. We found elevated false positives in the elderly and in children; the effects were larger in the oldest and youngest, and smallest in middle-aged adults.” In an equally significant finding, when a single image was matched against a number of faces—a technique used by police and customs officials to check whether an individual was located in a database containing known criminals or terrorists—there were higher rates of false positives for African-American females."
-------------------------------------------------------------
STORY: "Facial Recognition Software Misreads African-American Faces: Study, by The Crime Report staff, reported on December 2, 2019.
GIST: "Facial recognition software, an increasingly popular high-tech
crime-fighting tool used by police departments around the U.S.,
consistently misidentifies African-American, Native American and Asian
faces, according to a new federal study.

The study by the National Institute of Standards and Technology
(NIST), an agency of the Department of Commerce, avoided any
recommendation to abandon the software, cautiously noting that some of
the algorithms used by some developers were relatively more accurate
than others.

But it issued a stern warning to customers of the software—most of
whom are in law enforcement—to be “aware of these differences and use
them to make decisions and to improve future performance.”

“Different algorithms perform differently,” emphasized a summary accompanying the report.

Researchers evaluated 189 face recognition algorithms supplied by 99
developers, which the study said represented a “majority of the
industry,” and applied them to 18 million images of more than eight
million people, using databases provided by the State Department, the
Department of Homeland Security (DHS) and the FBI.

They found a startling number of “false positives”—incorrect matches
between individual faces—for Asian and African Americans compared to
whites. The factor of error ranged enormously across the algorithms,
from 10 to 100.
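
That factor of 10 to 100 describes how much more often photos of two different people are wrongly accepted as a match for one demographic group than for another. As a rough illustration only, the short Python sketch below tallies such per-group false positive rates; the match threshold, similarity scores and group labels are invented for the example and are not drawn from the NIST evaluation.

```python
# Illustrative sketch only, not NIST's test harness: tally one-to-one
# "false positives" per demographic group. A false positive is a pair of
# photos of two *different* people whose similarity score clears the
# match threshold anyway.

THRESHOLD = 0.80  # hypothetical decision threshold

# (similarity score, same person?, group) -- entirely made-up comparisons
comparisons = [
    (0.91, True,  "Group A"),
    (0.83, False, "Group A"),  # impostor pair accepted: a false positive
    (0.42, False, "Group A"),
    (0.88, True,  "Group B"),
    (0.35, False, "Group B"),
    (0.30, False, "Group B"),
]

def false_positive_rate(rows):
    """Share of different-person pairs wrongly accepted as matches."""
    impostor_scores = [score for score, same_person, _ in rows if not same_person]
    if not impostor_scores:
        return 0.0
    false_accepts = sum(score >= THRESHOLD for score in impostor_scores)
    return false_accepts / len(impostor_scores)

for group in sorted({g for _, _, g in comparisons}):
    rows = [row for row in comparisons if row[2] == group]
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

Because a false positive can only come from a different-person pair whose score clears the threshold, even a modest shift in how one group’s scores are distributed can multiply its false accepts relative to another group’s.

“Using the higher quality application photos, false positive rates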
are highest in West and East African and East Asian people, and lowest
in Eastern European individuals,” the study said, noting that there were
fewer false positives for Asian faces in software developed by China.

“We found false positives to be higher in women than men, and this is
consistent across algorithms and data sets. This effect is smaller than
that due to race. We found elevated false positives in the elderly and
in children; the effects were larger in the oldest and youngest, and
smallest in middle-aged adults.”

In an equally significant finding, when a single image was matched
against a number of faces—a technique used by police and customs
officials to check whether an individual was located in a database
containing known criminals or terrorists—there were higher rates of
false positives for African-American females.
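
In a one-to-many search of that kind, a single probe photo is scored against every face in a gallery and the highest-scoring identities come back as candidates, so a biased scoring function gets many chances in every search to put the wrong person on the list. The Python sketch below is meant only to illustrate the mechanics; the cosine-similarity scoring, the tiny made-up embedding vectors, the threshold and the record names are assumptions for the example, not the software NIST tested.

```python
# Illustrative sketch only: one-to-many face identification, where one
# probe embedding is scored against every identity in a gallery and the
# top-scoring candidates above a threshold are returned for human review.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 3-dimensional "embeddings"; real systems use far larger vectors.
gallery = {
    "record_001": [0.10, 0.90, 0.20],
    "record_002": [0.80, 0.10, 0.40],
    "record_003": [0.20, 0.80, 0.30],
}

def identify(probe, gallery, threshold=0.95, top_k=2):
    """Return up to top_k gallery identities whose score clears the threshold."""
    scored = sorted(
        ((cosine_similarity(probe, emb), name) for name, emb in gallery.items()),
        reverse=True,
    )
    return [(name, round(score, 3)) for score, name in scored[:top_k] if score >= threshold]

probe = [0.15, 0.85, 0.25]  # embedding extracted from the image being searched
print(identify(probe, gallery))  # candidate matches passed on for human review
```

In practice, as the police officials quoted below stress, such a candidate list is treated as an investigative lead to be reviewed by people rather than as an identification in itself.

Facial recognition is also now widely used in surveillance systems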
deployed in public areas, aimed at detecting individuals already linked
in FBI or DHS databases to terror groups, or at spotting wanted criminals or
missing persons in a crowd.

Researchers later said the wide variation in errors confirmed the
fears of critics that the technology was riddled with “algorithmic
bias.”

The study was a “sobering reminder that facial recognition technology
has consequential technical limitations alongside posing threats to
civil rights and liberties,” MIT Media Lab researcher Joy Buolamwini told The Washington Post.

Doubts about the technology have already put police on the defensive. “We never make an arrest based solely on facial recognition,” Capt. Chuck Cohen of the Indiana State Police told The New York Times last spring.

But Cohen, who also runs the Indiana Intelligence Fusion Center, said
facial recognition was a valuable accessory to solving crimes. He cited
the case of an attempted murder, in which a video taken by a friend of
the victim recorded an argument with a suspected assailant. Facial
recognition software was used to identify the suspect.

“Local law enforcement had a very good video image of the person, but
couldn’t identify him,” said Cohen. “This was not the only piece of
evidence, but it was a lead.”

Around the country, the technology is being used to investigate
crimes ranging from murder to shoplifting. Ironically, it is often
paired with home surveillance technology now widely available for
private purchase.

In the “Amazon” era, when rising online purchases have replaced shopping at brick-and-mortar stores, the theft of packages left at doorsteps is an increasing problem. Facial recognition software has been used to identify suspects caught by a home camera in the act of snatching the item.

Still, questions about the technology have persuaded some cities against purchasing it for their police departments.

The western Massachusetts city of Springfield decided against it
after weighing the technology’s potential to deliver racially flawed
results, effectively anticipating the NIST findings.

“I’m a black woman and I’m dark,” Springfield councilor Tracye Whitfield told Police Commissioner Cheryl Clapprood, who is white. “I cannot approve something that’s going to target me more than it will target you.”

Clapprood responded that cities could build in safeguards to prevent racial bias or abuse of civil liberties. “The facial recognition technology does not come along and drop a net from the sky and carry you off to prison,” she said.

But the tentative lesson of the NIST study is that unless technology
customers use due diligence when they purchase the software—and are
cautious about the matches it turns up—the odds of falling prey to
“demographic errors”—in the researchers’ phrase—are high."

The entire NIST study can be accessed here. For a summary of the findings, click here.
The entire story can be read at:
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;
---------------------------------------------------------------
FINAL WORD: (Applicable to all of our wrongful conviction cases): "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."
Lawyer Radha Natarajan:
https://www.providencejournal.com/news/20191210/da-drops-murder-charge-against-taunton-man-who-served-35-years-for-1979-slaying
----------------------------------------------------------------