Wednesday, May 22, 2019

Technology Series (Part Nine): Global News reports that "Canada lacks laws to tackle problems posed by artificial intelligence: Experts" (Associated Press reporter Chris Reynolds): "'We need the government, we need the regulation in Canada,' said Mahdi Amri, who heads AI services at Deloitte Canada. The absence of an AI-specific legal framework undermines trust in the technology and, potentially, accountability among its providers, according to a report he co-authored. 'Basically there's this idea that the machines will make all the decisions and the humans will have nothing to say, and we'll be ruled by some obscure black box somewhere,' Amri said. Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals."


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the very people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use AI to determine whether someone seeking refuge in the US would become a positive and contributing member of society, or whether they are likely to become a threat or a terrorist. Other government uses of AI are also being questioned, such as attempts at setting bail amounts and criminal sentences, predictive policing, and hiring government workers. All of these attempts have been shown to be prone to technical issues, and limits on the data can bias their decisions on the basis of gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As Trump's executive order demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence, and to help readers make their own assessments as to whether these innovations will do more harm than good.

----------------------------------------------------------------

PASSAGE OF THE DAY: "Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals. Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont., have implemented or piloted predictive policing: automated decision-making based on data that predicts where a crime will occur or who will commit it. The federal immigration and refugee system relies on algorithmically driven decisions to help determine factors such as whether a marriage is genuine or someone should be designated as a “risk”, according to a Citizen Lab study, which found the practice threatens to violate human rights law. AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment. “When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute."

----------------------------------------------------------------

STORY: "Canada lacks laws to tackle problems posed by artificial intelligence: Experts," by Associated Press reporter Chris Reynolds, published by Global News on May 19, 2019.

GIST: The role of artificial intelligence in Netflix’s movie suggestions and Alexa’s voice commands is commonly understood, but less known is the shadowy role AI now plays in law enforcement, immigration assessment, military programs and other areas. Despite its status as a machine-learning innovation hub, Canada has yet to develop a regulatory regime to deal with the issues of discrimination and accountability to which AI systems are prone, prompting calls for regulation, including from business leaders. “We need the government, we need the regulation in Canada,” said Mahdi Amri, who heads AI services at Deloitte Canada. The absence of an AI-specific legal framework undermines trust in the technology and, potentially, accountability among its providers, according to a report he co-authored. “Basically there’s this idea that the machines will make all the decisions and the humans will have nothing to say, and we’ll be ruled by some obscure black box somewhere,” Amri said.

Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals. Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont., have implemented or piloted predictive policing: automated decision-making based on data that predicts where a crime will occur or who will commit it. The federal immigration and refugee system relies on algorithmically driven decisions to help determine factors such as whether a marriage is genuine or someone should be designated as a “risk”, according to a Citizen Lab study, which found the practice threatens to violate human rights law. AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment.

“When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute. She pointed to “horror cases,” including a predictive policing strategy in Chicago where the majority of people on a list of potential perpetrators were black men who had no arrests or shooting incidents to their name, “the same demographic that was targeted by over-policing and discriminatory police practices.” Richardson says it is time to move from lofty guidelines to legal reform. A recent AI Now Institute report states that federal governments should “oversee, audit, and monitor” the use of AI in fields like criminal justice, health care and education, as “internal governance structures at most technology companies are failing to ensure accountability for AI systems.” Oversight should be divided among agencies or groups of experts instead of hoisted entirely onto a single AI regulatory body, given the unique challenges and regulations specific to each industry, the report says.

In health care, AI is poised to upend the way doctors practice medicine, as machine-learning systems can now analyze vast sets of anonymized patient data and images to identify health problems ranging from osteoporosis to lesions and signs of blindness. Carolina Bessega, co-founder and chief scientific officer of Montreal-based Stradigi AI, says the regulatory void discourages businesses from using AI, holding back innovation and efficiency, particularly in hospitals and clinics, where the implications can be life or death. “Right now it’s like a grey area, and everybody’s afraid making the decision of, ‘Okay, let’s use artificial intelligence to improve diagnosis, or let’s use artificial intelligence to help recommend a treatment for a patient,’” Bessega said. She is calling for “very strong” regulations around treatment and diagnosis, and for a professional, not a software program, to bear responsibility for any final decisions.

Critics say Canada lags behind the U.S. and the EU on exploring AI regulation. None has implemented a comprehensive legal framework, but Congress and the EU Commission have produced extensive reports on the issue. “Critically, there is no legal framework in Canada to guide the use of these technologies or their intersection with foundational rights related to due process, administrative fairness, human rights, and justice system transparency,” states a March briefing by Citizen Lab, the Law Commission of Ontario and other bodies. Divergent international standards, trade secrecy and algorithms’ constant “fluidity” pose obstacles to smooth regulation, says Miriam Buiten, junior professor of law and economics at the University of Mannheim.

Canada was among the first states to develop an official AI research plan, unveiling a $125-million strategy in 2017, but its focus was largely scientific and commercial. In December, Prime Minister Trudeau and French President Emmanuel Macron announced a joint task force to guide AI policy development with an eye to human rights. Minister of Innovation, Science and Economic Development Navdeep Bains told The Canadian Press in April that a report was forthcoming “in the coming months.” Asked whether the government is open to legislation around AI transparency and accountability, he said: “I think we need to take a step back to determine what are the core guiding principles. We’ll be coming forward with those principles to establish our ability to move forward with regards to programming, with regards to legislative changes, and it’s not only going to be simply my department, it’s a whole government approach.”

The Treasury Board of Canada has already laid out a 119-word set of principles on responsible AI use that stress transparency and proper training. The Department of Innovation, Science and Economic Development highlighted the Personal Information Protection and Electronic Documents Act, privacy legislation that applies broadly to commercial activities and allows a privacy commissioner to probe complaints. “While AI may present some novel elements, it and other disruptive technologies are subject to existing laws and regulations that cover competition, intellectual property, privacy and security,” a department spokesperson said in an email. As of April 1, 2020, government departments seeking to deploy an automated decision system must first conduct an “algorithmic impact assessment” and post the results online.

The entire story can be read at:

https://globalnews.ca/news/5293400/canada-ai-laws/