PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society, or whether they are likely to become a threat or a terrorist. Other government uses of AI are also being questioned, such as attempts at setting bail amounts and sentences for criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues, and limits on the data can bias their decisions on the basis of gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As Trump's executive order demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well."
Harold Levy: Publisher: The Charles Smith Blog.
---------------------------------------------------------------------
QUOTE OF THE DAY: "Michael Bryant on Thursday called on the Toronto Police Services Board to place a moratorium on the use of the technology, “because it renders all of us walking ID cards.” He called the use of the technology “carding by algorithm, and notoriously unreliable,” and, in a written statement, likened it to police “fingerprinting and DNA swabbing everybody at Yonge and Bloor during rush hour” and running the results through databases. Continued use of the technology leaves the board open to lawsuits and, at the very least, requires formal oversight, Bryant said."
----------------------------------------------------------------------
PASSAGE OF THE DAY: "As reported earlier this week by the Star, Toronto police have been using facial recognition technology driven by artificial intelligence for more than a year. Police say it’s an efficient tool that has led to arrests in major crimes, including homicides. But it also comes with criticisms that the technology is an invasion of privacy and overreach by police and state. San Francisco, a tech-centric city, recently banned the tool. A London, U.K., policing ethics panel this month concluded that the technology should not be used if police can’t prove it works equally well with people of all ethnic and racial backgrounds and women. The panel, set up to advise London city hall, noted there are “important ethical issues to be addressed” but concluded that does not mean the technology should not be used at all, reported The Guardian."
----------------------------------------------------------------------
STORY: "Toronto police should drop facial recognition technology or risk lawsuits, civil liberties association tells board," by reporters Jim Rankin and Wendy Gillis, published by The Toronto Star on May 30, 2019.
GIST:"Toronto
police should stop using facial recognition technology or face the
prospect of class-action lawsuits, says the head of the Canadian Civil
Liberties Association. Michael Bryant on Thursday called on the
Toronto Police Services Board to place a moratorium on the use of the
technology, “because it renders all of us walking ID cards.” He
called the use of the technology “carding by algorithm, and notoriously
unreliable,” and, in a written statement, likened it to police
“fingerprinting and DNA swabbing everybody at Yonge and Bloor during
rush hour” and running the results through databases. Continued
use of the technology leaves the board open to lawsuits and, at the very
least, requires formal oversight, Bryant said. Toronto police
Deputy Chief James Ramer told the board the use of the technology is
nothing like the controversial practice of carding. “It’s not
indiscriminate, it’s not random,” he said. “It’s very specific.” The technology saves victims of crimes from having to go through police mugshot databases, Ramer said. On
Thursday, the police board passed a motion to receive Chief Mark
Saunders’ report on the use of the technology and the deputations from
Bryant and others. As reported earlier this week by the Star,
Toronto police have been using facial recognition technology driven by
artificial intelligence for more than a year. Police say it’s an
efficient tool that has led to arrests in major crimes, including
homicides. But
it also comes with criticisms that the technology is an invasion of
privacy and overreach by police and state. San Francisco, a tech-centric
city, recently banned the tool. A London, U.K., policing ethics
panel this month concluded that the technology should not be used if
police can’t prove it works equally well with people of all ethnic and
racial backgrounds and women. The panel, set up to advise London
city hall, noted there are “important ethical issues to be addressed”
but concluded that does not mean the technology should not be used at
all, reported The Guardian. Research
has shown that differences in race and gender can lead the technology
to return false positives. Some systems kick out higher false-positive
rates for Black women, compared to white men. Toronto police ran
1,516 facial recognition searches using about 5,000 still and video
images between March and December of last year, according to Saunders’
report to the board. They were cross-checked against the
service’s mugshot database of 1.5 million individuals, resulting in
matches in about 60 per cent of the searches. Of those, 80 per cent of
the matches resulted in identifying criminal offenders. There is
no count available for how many led to arrests, since the technology
identifies potential matches that must be further investigated further
using other police methods. Toronto police said they have no plans
to extend matches beyond the mugshot database, and that real-time
facial recognition, such as searching faces in crowds, is not being
used. The technology was used in the investigation into the Gay Village serial murders to help determine the identity of one of the victims."
The entire story can be read at: