PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the very people they were supposed to benefit.

As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use AI to determine whether someone seeking refuge in the US would become a positive and contributing member of society, or whether they were likely to become a threat or a terrorist. Other government uses of AI are also being questioned, such as attempts to set bail amounts and criminal sentences, predictive policing, and the hiring of government workers. All of these attempts have been shown to be prone to technical issues, and limits on the data can bias their decisions on the basis of gender, race or cultural background. Other AI technologies, like automated surveillance, facial recognition and mass data collection, are raising concerns about privacy, security, accuracy and fairness in a democratic society. As Trump's executive order demonstrates, there is massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether intentional or not, have the potential to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well."

The purpose of this 'technology' series is to highlight the dangers of artificial intelligence, and to help readers make their own assessments as to whether these innovations will do more harm than good.
----------------------------------------------------------------
PASSAGE OF THE DAY: "Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals. Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont., have implemented or piloted predictive policing: automated decision-making based on data that predicts where a crime will occur or who will commit it. The federal immigration and refugee system relies on algorithmically driven decisions to help determine factors such as whether a marriage is genuine or whether someone should be designated as a “risk”, according to a Citizen Lab study, which found the practice threatens to violate human rights law. AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and of outsourcing lethal decisions to machines, and to call for an international agreement on their deployment. “When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute."
------------------------------------------------------------
STORY: "Canada lacks laws to tackle problems posed by artificial intelligence: Experts," by Associated Press reporter Chris Reynolds, published by Global News on May 19, 2019.