Tuesday, May 21, 2019

Technology series (Part Eight): Algorithms: Ethical considerations over the use of artificial intelligence are also being raised in the UK - including concern over a computer tool used by police to predict which people are likely to reoffend.


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use AI to determine whether someone seeking refuge in the US would become a positive and contributing member of society, or whether they are likely to become a threat or a terrorist. Other government uses of AI are also being questioned, such as attempts at setting bail amounts and sentences for criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues, and limited data can bias their decisions on the basis of gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As Trump's executive order demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.

Harold Levy: Publisher: The Charles Smith Blog.

----------------------------------------------------------

STORY: "Ethics committee raises alarm over 'predictive policing' tool," by reporter Sarah Marsh, published by The Guardian on April 20, 2019.

SUB-HEADING: "Algorithm that predicts who will reoffend may give rise to ethical concerns such as bias."

GIST: "A computer tool used by police to predict which people are likely to reoffend has come under scrutiny from one force’s ethics committee, who said there were a lot of “unanswered questions” and concerns about potential bias. Amid mounting financial pressure, at least a dozen police forces are using or considering predictive analytics, despite warnings from campaigners that use of algorithms and “predictive policing” models risks locking discrimination into the criminal justice system.
West Midlands police are at the forefront, leading on a £4.5m project funded by the Home Office called National Data Analytics Solution (NDAS). The long-term aim of the project is to analyse data from force databases, social services, the NHS and schools to calculate where officers can be most effectively used. An initial trial combined data on crimes, custody, gangs and criminal records to identify 200 offenders “who were getting others into a life on the wrong side of the law”.
A report by West Midlands police’s ethics committee, however, raised concerns about the project. It said there were a lot of “unanswered questions giving rise to the potential for ethical concerns”.
The committee noted that no privacy impact assessments had been made available, and there was almost no analysis of how it impacted rights. The new tool will use data such as that linked to stop and search, and the ethics committee noted this would also include information on people who were stopped with nothing found, which could entail “elements of police bias”.
Hannah Couchman, the advocacy and policy officer at the human rights organisation Liberty, said: “The proposed program would rely on data loaded with bias and demonstrates exactly why we are deeply concerned about predictive policing entrenching historic discrimination into ongoing policing strategies. It is welcome that the ethics committee has raised concerns about these issues, but not all forces have similar oversight and the key question here should be whether these biased programs have any place in policing at all. It is hard to see how these proposals could be reformed to address these fundamental issues.”
Tom McNeil, the strategic adviser to the West Midlands police and crime commissioner, said: “The robust advice and feedback of the ethics committee shows it is doing what it was designed to do. The committee is there to independently scrutinise and challenge West Midlands police and make recommendations to the police and crime commissioner and chief constable.” He added: “This is an important area of work; that is why it is right that it is properly scrutinised and those details are made public.”
The ethics committee recommended that more information be provided about the benefits of the model. “The language used in the report has the potential to cause unconscious bias. The committee recommends the lab looks at the language used in the report, including the reference to propensity for certain ethnic minorities to be more likely to commit high-harm offences, given the statistical analysis showed ethnicity was not a reliable predictor,” it said.
In February, a report by Liberty raised concern that predictive programs encouraged racial profiling and discrimination, and threatened privacy and freedom of expression. Couchman said that when decisions were made on the basis of arrest data, this was “already imbued with discrimination and bias from the way people policed in the past” and that was “entrenched by algorithms”.
She added: “One of the key risks with that is that it adds a technological veneer to biased policing practices. People think computer programs are neutral but they are just entrenching the pre-existing biases that the police have always shown.” Using freedom of information data, Liberty discovered that at least 14 forces in the UK are either using algorithm programs for policing, have previously done so or have conducted research and trials into them."

The entire story can be read at:
https://www.theguardian.com/uk-news/2019/apr/20/predictive-policing-tool-could-entrench-bias-ethics-committee-warns
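
PUBLISHER'S ADDENDUM: To make Couchman's point about algorithms "entrenching" bias concrete, here is a minimal, purely illustrative sketch - it is not a description of the NDAS system or any real force's tool, and every number, group and variable name in it is invented. It shows how a risk score learned from historical arrest records can rate one group as far riskier than another even when the two groups offend at identical rates, simply because one group was stopped and searched more often in the past.

import numpy as np

# Purely illustrative synthetic data; invented numbers, not any real police system.
rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical true offending rates (10% each).
group = rng.integers(0, 2, size=n)
truly_offended = rng.random(n) < 0.10

# Historical policing stops group 0 twice as often as group 1, so group 0's
# offences are far more likely to be recorded as arrests.
stop_rate = np.where(group == 0, 0.60, 0.30)
stopped = rng.random(n) < stop_rate
arrested = truly_offended & stopped   # the only "label" a model would ever see

# A naive risk score: the observed arrest rate for each group.
for g in (0, 1):
    observed = arrested[group == g].mean()
    actual = truly_offended[group == g].mean()
    print(f"group {g}: observed arrest rate {observed:.3f}, true offending rate {actual:.3f}")

# Result: group 0 scores roughly twice as "risky" as group 1, even though both
# offend at the same true rate - the historic stop-and-search bias has been
# carried straight into an apparently neutral number.

Any model trained on that "arrested" column would learn the same distortion; the sketch merely makes visible the mechanism Liberty describes when it warns that biased arrest data is "entrenched by algorithms".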