Wednesday, May 22, 2019

Technology Series: (Part Nine): Global News reports that "Canada lacks laws to tackle problems posed by artificial intelligence: Experts." (Associated Press reporter Chris Reynolds)...“We need the government, we need the regulation in Canada,” said Mahdi Amri, who heads AI services at Deloitte Canada. The absence of an AI-specific legal framework undermines trust in the technology and, potentially, accountability among its providers, according to a report he co-authored. “Basically there’s this idea that the machines will make all the decisions and the humans will have nothing to say, and we’ll be ruled by some obscure black box somewhere,” Amri said. Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals."


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good, can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well. The purpose of this 'technology' series, is to highlight the dangers of artificial intelligence -  and to help readers make their own assessments as to whether these innovations will do more harm than good."

----------------------------------------------------------------

PASSAGE OF THE DAY: "Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals. Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont. have implemented or piloted predictive policing _ automated decision-making based on data that predicts where a crime will occur or who will commit it. The federal immigration and refugee system relies on algorithmically-driven decisions to help determine factors such as whether a marriage is genuine or someone should be designated as a “risk”, according to a Citizen Lab study, which found the practice threatens to violate human rights law. AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment. “When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute."

------------------------------------------------------------

STORY: "Canada lacks laws to tackle problems posed by artificial intelligence: Experts," by Associated press reporter Chris Reynolds,  published by Global News on May 19, 2019.

GIST: The role of artificial intelligence in Netflix’s movie suggestions and Alexa’s voice commands is commonly understood, but less known is the shadowy role AI now plays in law enforcement, immigration assessment, military programs and other areas. Despite its status as a machine-learning innovation hub, Canada has yet to develop a regulatory regime to deal with issues of discrimination and accountability to which AI systems are prone, prompting calls for regulation — including from business leaders. “We need the government, we need the regulation in Canada,” said Mahdi Amri, who heads AI services at Deloitte Canada. The absence of an AI-specific legal framework undermines trust in the technology and, potentially, accountability among its providers, according to a report he co-authored. “Basically there’s this idea that the machines will make all the decisions and the humans will have nothing to say, and we’ll be ruled by some obscure black box somewhere,” Amri said. Robot overlords remain firmly in the realm of science fiction, but AI is increasingly involved in decisions that have serious consequences for individuals. Since 2015, police departments in Vancouver, Edmonton, Saskatoon and London, Ont. have implemented or piloted predictive policing — automated decision-making based on data that predicts where a crime will occur or who will commit it. The federal immigration and refugee system relies on algorithmically-driven decisions to help determine factors such as whether a marriage is genuine or someone should be designated as a “risk”, according to a Citizen Lab study, which found the practice threatens to violate human rights law. AI testing and deployment in Canada’s military prompted Canadian AI pioneers Geoffrey Hinton and Yoshua Bengio to warn about the dangers of robotic weapons and outsourcing lethal decisions to machines, and to call for an international agreement on their deployment. “When you’re using any type of black box system, you don’t even know the standards that are embedded in the system or the types of data that may be used by the system that could be at risk of perpetuating bias,” said Rashida Richardson, director of policy research at New York University’s AI Now Institute. She pointed to “horror cases,” including a predictive policing strategy in Chicago where the majority of people on a list of potential perpetrators were black men who had no arrests or shooting incidents to their name, “the same demographic that was targeted by over-policing and discriminatory police practices.” Richardson says it’s time to move from lofty guidelines to legal reform. A recent AI Now Institute report states federal governments should “oversee, audit, and monitor” the use of AI in fields like criminal justice, health care and education, as “internal governance structures at most technology companies are failing to ensure accountability for AI systems.” Oversight should be divided up among agencies or groups of experts instead of hoisting it all onto a single AI regulatory body, given the unique challenges and regulations specific to each industry, the report says. In health care, AI is poised to upend the way doctors practice medicine as machine-learning systems can now analyze vast sets of anonymized patient data and images to identify health problems ranging from osteoporosis to lesions and signs of blindness.
Carolina Bessega, co-founder and chief scientific officer of Montreal-based Stradigi AI, says the regulatory void discourages businesses from using AI, holding back innovation and efficiency — particularly in hospitals and clinics, where the implications can be life or death. “Right now it’s like a grey area, and everybody’s afraid making the decision of, ‘Okay, let’s use artificial intelligence to improve diagnosis, or let’s use artificial intelligence to help recommend a treatment for a patient,'” Bessega said. She is calling for “very strong” regulations around treatment and diagnosis and for a professional to bear responsibility for any final decisions, not a software program. Critics say Canada lags behind the U.S. and the EU on exploring AI regulation. None has implemented a comprehensive legal framework, but Congress and the EU Commission have produced extensive reports on the issue. “Critically, there is no legal framework in Canada to guide the use of these technologies or their intersection with foundational rights related to due process, administrative fairness, human rights, and justice system transparency,” states a March briefing by Citizen Lab, the Law Commission of Ontario and other bodies. Divergent international standards, trade secrecy and algorithms’ constant “fluidity” pose obstacles to smooth regulation, says Miriam Buiten, junior professor of law and economics at the University of Mannheim. Canada was among the first states to develop an official AI research plan, unveiling a $125-million strategy in 2017. But its focus was largely scientific and commercial. In December, Prime Minister Trudeau and French President Emmanuel Macron announced a joint task force to guide AI policy development with an eye to human rights. Minister of Innovation, Science and Economic Development Navdeep Bains told The Canadian Press in April a report was forthcoming “in the coming months.” Asked whether the government is open to legislation around AI transparency and accountability, he said: “I think we need to take a step back to determine what are the core guiding principles. “We’ll be coming forward with those principles to establish our ability to move forward with regards to programming, with regards to legislative changes — and it’s not only going to be simply my department, it’s a whole government approach.” The Treasury Board of Canada has already laid out a 119-word set of principles on responsible AI use that stress transparency and proper training. The Department of Innovation, Science and Economic Development highlighted the Personal Information Protection and Electronic Documents Act, privacy legislation that applies broadly to commercial activities and allows a privacy commissioner to probe complaints. “While AI may present some novel elements, it and other disruptive technologies are subject to existing laws and regulations that cover competition, intellectual property, privacy and security,” a department spokesperson said in an email. As of April 1, 2020, government departments seeking to deploy an automated decision system must first conduct an “algorithmic impact assessment” and post the results online.

The entire story can be read at:

https://globalnews.ca/news/5293400/canada-ai-laws/
 

Tuesday, May 21, 2019

Arson 'Science': Cameron Todd Willingham: Stuart Miller reviews Ed Zwick's 2018 film 'Trial by Fire' in the New York Daily News. (An important movie telling the tragic story of a glaringly innocent man who was executed by the State of Texas. HL)..."Ed Zwick’s new film recounts the notorious true story of how events spiraled after this Texas tragedy. Grievous errors by arson investigators led to Willingham getting charged with arson and murder. Then his defense lawyer’s apathy and incompetence coupled with the prosecutor’s misconduct — not to mention a class-driven society that is quick to blame an outsider — landed Willingham on death row. “He was from a certain section of society that is deprived of a voice,” says British actor Jack O'Connell, who portrays Willingham. “This is about class and poverty as a dividing line,” Zwick says. Just because Willingham was a hard-drinking, uneducated womanizer does not mean he should be deprived of justice, Zwick says."


PASSAGE ONE OF THE DAY: "Evidence of the mistakes and misconduct came to light while Willingham was on death row, but it was not enough to save him because Gov. Rick Perry was unwilling to accept the possibility that law enforcement was wrong. “There is quite overwhelming evidence that proves his innocence but he seems to have had no worth to them,” British actor Jack O'Connell, who plays Willingham, says."

-------------------------------------------------------------------

PASSAGE TWO OF THE DAY: "In the years after Willingham's death, journalists and the New York-based Innocence Project continued pushing to reveal the truth. “The great tragedy is that the techniques used by the arson investigators highlighted in ‘Trial By Fire’ really had been discredited ten years before Todd was executed,” says Innocence Project co-founder Barry Scheck. “One of the great things they do in the movie is they really lay out visually and clearly what the investigators looked at and got wrong, like the scarring on the floor and ‘crazed’ glass.” “This was absolutely the worst,” adds John Lentini, who wrote the leading scientific book on arson and pioneered efforts to debunk the sort of mistakes made in cases like this one. When Zwick read a New Yorker article about Willingham's case in 2009, he “was in an inchoate rage and knew immediately I wanted to make a film.”

---------------------------------------------------------------------

PASSAGE THREE OF THE DAY: "Things have changed in the Lone Star State: Scheck says the Texas Forensic Science Commission is now one of the best in the country. “The Willingham case really brought these issues of junk forensic science to the forefront and in that context Todd did not die in vain,” Scheck says, although O'Connell adds that “it's just a shame it takes a catastrophe like that to effect change.”

----------------------------------------------------------------------



MOVIE REVIEW: "'Trial by Fire’ and its real life tragedy," an Ed Zwick Film, reviewed by Stuart Miller in the New York Daily News on May 19, 2019.

FROM THE TRIAL BY FIRE WEB PAGE: "The tragic and controversial story of Cameron Todd Willingham, who was executed in Texas for killing his three children after scientific evidence and expert testimony that bolstered his claims of innocence were suppressed."
https://www.google.com/search?client=firefox-b-1-d&channel=tus&q=%22trial+by+Fire%22


GIST: "As “Trial By Fire” opens, an inferno consumes the house of Cameron Todd Willingham. He races out, briefly tries to get back inside to save his children, then gives up hope. He is, at that moment, a doomed man. Inside were Willingham’s three toddlers, who all died in the blaze. Ed Zwick’s new film recounts the notorious true story of how events spiraled after this Texas tragedy. Grievous errors by arson investigators led to Willingham getting charged with arson and murder. Then his defense lawyer’s apathy and incompetence coupled with the prosecutor’s misconduct — not to mention a class-driven society that is quick to blame an outsider — landed Willingham on death row. “He was from a certain section of society that is deprived of a voice,” says British actor Jack O'Connell, who portrays Willingham. “This is about class and poverty as a dividing line,” Zwick says. Just because Willingham was a hard-drinking, uneducated womanizer, does not mean he should be deprived of justice, Zwick says. “One of the things that magnetized me to the story is that he was immediately pegged by everybody as the 'other,' because of his looks, lack of education and disreputable behavior.” Evidence of the mistakes and misconduct came to light while Willingham was on death row, but it was not enough to save him because Gov. Rick Perry was unwilling to accept the possibility that law enforcement was wrong. “There is quite overwhelming evidence that proves his innocence but he seems to have had no worth to them,” O'Connell says. In the years after Willingham's death, journalists and the New York-based Innocence Project continued pushing to reveal the truth. “The great tragedy is that the techniques used by the arson investigators highlighted in ‘Trial By Fire’ really had been discredited ten years before Todd was executed,” says Innocence Project co-founder Barry Scheck. “One of the great things they do in the movie is they really lay out visually and clearly what the investigators looked at and got wrong, like the scarring on the floor and ‘crazed’ glass.” “This was absolutely the worst,” adds John Lentini, who wrote the leading scientific book on arson and pioneered efforts to debunk the sort of mistakes made in cases like this one. When Zwick read a New Yorker article about Willingham's case in 2009, he “was in an inchoate rage and knew immediately I wanted to make a film.” But while most of his films, like “Glory,” “Blood Diamond” and “Defiance,” were profitable, getting financing took nearly a decade. Zwick aimed to focus the movie on the characters and their emotional journeys. “This is not an institutional lecture. I tend to tell stories of personal relationships in the context of bigger stories. It's that juxtapoisiton that interests me.” He first cast Laura Dern as the writer who takes up Willingham's cause in his final years of appeals, keeping his hopes and the case alive, at least temporarily. Then he added O'Connell for his “willingness to expose the darker colors.” O'Connell worked with dialect coach Tim Monich to get the right regional Texas accent. While Zwick briefly shows footage of Perry at the end to hold him accountable for as he says, “using the death penalty as a political tool.” Things have changed in the Lonestar state, Scheck says the Texas Forensic Science Commission is now one of the best in the country. 
“The Willingham case really brought these issues of junk forensic science to the forefront and in that context Todd did not die in vain,” Scheck says, although O'Connell adds that “it's just a shame it takes a catastrophe like that to effect change.” Not everyone is moving fast enough, including New York, says Adele Bernhard, an attorney who represented one of three men who were wrongfully convicted of arson and murder in a Park Slope fire that killed a mother and five children. William Vasquez and Amaury Villalobos spent 33 years behind bars for torching 695 Sackett St. in February 1980. The third man, Raymond Mora, died in prison in 1989. The woman who owned the townhouse told police the trio set the fire because she had an ongoing drug beef with one of them. When Vasquez and Villalobos were paroled in 2012, Villalobos approached Bernhard, a law professor and director of the Post-Conviction Innocence Clinic, to work on his case. The homeowner, Hannah Quick, admitted she lied about the men’s involvement. With Quick’s confession and the evidence of arson used by a fire marshal at the time of their 1981 conviction long since scientifically refuted, the men were exonerated in 2015. They were later awarded $31 million."


The entire review can be read at:
https://www.nydailynews.com/news/national/ny-trial-by-fire-movie-20190519-7d4etsudabetleddebudnzggte-story.html
 
 
 

Technology Series: (Part Eight): Algorithms: Ethical considerations over use of artificial intelligence are also being raised in the UK - including concern over a computer tool used by police to predict which people are likely to reoffend.


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the  application of artificial intelligence technology to policing, public safety, and the criminal justice process,  not just in North America, but in countries all over the world, including China. Although I accept that properly applied science  can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good, can eventually be used against the people they were supposed to  benefit. As reporter Sieeka Khan  writes in Science Times:  "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well. The purpose of this 'technology' series, is to highlight the dangers of artificial intelligence -  and to help readers make their own assessments as to  whether these innovations will do more harm than good."

Harold Levy: Publisher: The Charles Smith Blog.

----------------------------------------------------------

STORY: "Ethics committee raises alarm over 'predictive policing' tool," by reporter Sarah Marsh, published by The Guardian on April 20, 2019.

SUB-HEADING:  "Algorithm that predicts who will reoffend may give rise to ethical concerns such as bias."

GIST: "A computer tool used by police to predict which people are likely to reoffend has come under scrutiny from one force’s ethics committee, who said there were a lot of “unanswered questions” and concerns about potential bias. Amid mounting financial pressure, at least a dozen police forces are using or considering predictive analytics, despite warnings from campaigners that use of algorithms and “predictive policing” models risks locking discrimination into the criminal justice system.
West Midlands police are at the forefront, leading on a £4.5m project funded by the Home Office called National Data Analytics Solution (NDAS). The long-term aim of the project is to analyse data from force databases, social services, the NHS and schools to calculate where officers can be most effectively used. An initial trial combined data on crimes, custody, gangs and criminal records to identify 200 offenders “who were getting others into a life on the wrong side of the law”.
A report by West Midlands police’s ethics committee, however, raised concerns about the project. They said there were a lot of “unanswered questions giving rise to the potential for ethical concerns”.
The committee noted that no privacy impact assessments had been made available, and there was almost no analysis of how it impacted rights. The new tool will use data such as that linked to stop and search, and the ethics committee noted this would also include information on people who were stopped with nothing found, which could entail “elements of police bias”. Hannah Couchman, the advocacy and policy officer at the human rights organisation Liberty, said: “The proposed program would rely on data loaded with bias and demonstrates exactly why we are deeply concerned about predictive policing entrenching historic discrimination into ongoing policing strategies. “It is welcome that the ethics committee has raised concerns about these issues, but not all forces have similar oversight and the key question here should be whether these biased programs have any place in policing at all. It is hard to see how these proposals could be reformed to address these fundamental issues.” Tom McNeil, the strategic adviser to the West Midlands police and crime commissioner, said: “The robust advice and feedback of the ethics committee shows it is doing what it was designed to do. The committee is there to independently scrutinise and challenge West Midlands police and make recommendations to the police and crime commissioner and chief constable.” He added: “This is an important area of work, that is why it is right that it is properly scrutinised and those details are made public.” The ethics committee recommended more information be provided about the benefits of the model. “The language use in the report has the potential to cause unconscious bias. The committee recommends the lab looks at the language used in the report, including the reference to propensity for certain ethnic minorities to be more likely to commit high-harm offences, given the statistical analysis showed ethnicity was not a reliable predictor,” it said. In February, a report by Liberty raised concern that predictive programs encouraged racial profiling and discrimination, and threatened privacy and freedom of expression. Couchman said that when decisions were made on the basis of arrest data, this was “already imbued with discrimination and bias from the way people policed in the past” and that was “entrenched by algorithms”.
She added: “One of the key risks with that is that it adds a technological veneer to biased policing practices. People think computer programs are neutral but they are just entrenching the pre-existing biases that the police have always shown.” Using freedom of information data, Liberty discovered that at least 14 forces in the UK are either using algorithm programs for policing, have previously done so or have conducted research and trials into them."

The entire story can be read at:
https://www.theguardian.com/uk-news/2019/apr/20/predictive-policing-tool-could-entrench-bias-ethics-committee-warns

Monday, May 20, 2019

Criminalizing Reproduction: (Attacks on Science, Medicine and the Right To Choose.)...The Toronto Star says Canadians have good reason to worry "that 300 (yes, 300) anti-abortion bills have been introduced so far this year, alone, in 36 states south of our border" - And that "Because if history has made one thing clear, it’s that women’s rights are fragile and constantly under threat of being extinguished for political purposes. When they are undermined in any country — or any court — that emboldens those who would try to curb them elsewhere. Canadian women, one in three of whom will get an abortion in her lifetime, can’t rest assured that their rights are safe because abortion is legal here."


PUBLISHER'S NOTE:

I have taken on the theme of criminalizing reproduction - a natural theme for a blog concerned with flawed science in its myriad forms and its flawed devotees (like Charles Smith), as I am utterly opposed to the current movement in the United States and some other countries - thankfully not Canada any more - towards imprisoning women and their physicians on the basis of sham science (or any other basis). Control over their reproductive lives is far too important to women in America, or anywhere else, if they are to participate equally in the economic and social life of their nations without fear of losing their freedom at the hands of political opportunists and fanatics. I will continue to follow relevant cases such as those of Purvi Patel and Bei Bei Shuai - and the mounting wave of legislative attacks aimed at chipping away at Roe v. Wade and ultimately dismantling it.

Harold Levy: Publisher: The Charles Smith Blog.

-----------------------------------------------------------

PASSAGE OF THE DAY: "After all, what is happening in the United States, a country that should be a leader on women’s rights, is shocking. On Wednesday, the Alabama legislature passed the most draconian law to date banning abortions at any stage, without exception for incest or rape. Further, it calls for doctors who perform them to be jailed for up to 99 years. When Alabama Gov. Kay Ivey signed the bill, she said it “stands as a powerful testament to Alabamians’ deeply held belief that every life is precious, that every life is a sacred gift from God.” That’s quite a statement from the governor of the U.S. state that has the highest per capita death penalty rate in the country. What it really stands as a testament to, as California Senator and presidential candidate Kamala Harris tweeted, is Alabama’s goal “to criminalize women for their health care decisions.”


-------------------------------------------------------------

EDITORIAL: "Women’s rights under threat," published by The Toronto Star on May 20, 2109.

GIST: "When Judge Brett Kavanaugh was under consideration for a position on the U.S. Supreme Court last fall, one thing became clear: Republican senators wanted to appoint him to appease the party’s right-wing base, whose members were confident he would tilt the court’s majority to overturn Roe v. Wade, the landmark 1973 ruling that established a woman’s constitutional protection to terminate a pregnancy. Now, in an effort to get Roe v. Wade overturned, they are working with Republican-controlled state governments to introduce anti-abortion bills designed to get before the Supreme Court. So why should Canadians worry? Because if history has made one thing clear, it’s that women’s rights are fragile and constantly under threat of being extinguished for political purposes. When they are undermined in any country — or any court — that emboldens those who would try to curb them elsewhere. Canadian women, one in three of whom will get an abortion in her lifetime, can’t rest assured that their rights are safe because abortion is legal here. It was legal in Poland until that country’s government fulfilled a backroom deal with the Catholic Church and banned abortion in 1993. That law, which allows for exceptions for serious threats to the health of the mother or the fetus and for pregnancy resulting from rape or incest, is now under attack — not from those who want to make abortion legal again, but from those who want to remove any grounds for it. While abortion is still legal in the United States, it is already increasingly difficult to obtain in many states. So Canadians would be wise to look south of the border to ensure there is no “backsliding” here, as Prime Minister Justin Trudeau put it. After all, what is happening in the United States, a country that should be a leader on women’s rights, is shocking. On Wednesday, the Alabama legislature passed the most draconian law to date banning abortions at any stage, without exception for incest or rape. Further, it calls for doctors who perform them to be jailed for up to 99 years. When Alabama Gov. Kay Ivey signed the bill, she said it “stands as a powerful testament to Alabamians’ deeply held belief that every life is precious, that every life is a sacred gift from God.” That’s quite a statement from the governor of the U.S. state that has the highest per capita death penalty rate in the country. What it really stands as a testament to, as California Senator and presidential candidate Kamala Harris tweeted, is Alabama’s goal “to criminalize women for their health care decisions.” While there are efforts to control women’s bodies in Canada, they are more subtle. The fact is, though abortion has been legal in this country since 1988, access to it is still uneven. That is especially true for lower income women. For example, the $450 abortion pill Mifegymiso — which was finally approved for use in Canada in 2015 — is still not universally covered under all provincial and territorial health care plans. It’s not even available at pharmacies in many parts of the country. Some provinces have also erected requirements that make it hard for women to obtain timely surgical abortions. And last week 12 Conservative MPs and three Ontario Progressive Conservative MPPs attended anti-abortion rallies on Parliament Hill and at Queen’s Park. “We pledge to fight to make abortion unthinkable in our lifetime,” MPP Sam Oosterhoff promised.
At the same time, Conservative Leader Andrew Scheer has said unequivocally that if his party wins October’s federal election, he won’t re-open the abortion debate in Canada. Still, no one should be complacent about women’s rights on this front — or any other."

The entire editorial can be read at:
https://www.thestar.com/opinion/editorials/2019/05/20/womens-rights-under-threat.html

Technology Series: (Part Seven): Software privacy and porn: Software used by police in pornography prosecutions comes under attack by defence lawyers, leading to withdrawal of charges; ProPublica story by reporter Jack Gillum explains why technology and privacy don't always mix in court. "Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them. But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment."


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the  application of artificial intelligence technology to policing, public safety, and the criminal justice process,  not just in North America, but in countries all over the world, including China. Although I accept that properly applied science  can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good, can eventually be used against the people they were supposed to  benefit. As reporter Sieeka Khan  writes in Science Times:  "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well. The purpose of this 'technology' series, is to highlight the dangers of artificial intelligence -  and to help readers make their own assessments as to  whether these innovations will do more harm than good."

Harold Levy: Publisher: The Charles Smith Blog.

----------------------------------------------------------

PASSAGE OF THE DAY:  “The sharing of child-sex-abuse images is a serious crime, and law enforcement should be investigating it. But the government needs to understand how the tools work, if they could violate the law and if they are accurate,” said Sarah St.Vincent, a Human Rights Watch researcher who examined the practice. “These defendants are not very popular, but a dangerous precedent is a dangerous precedent that affects everyone. And if the government drops cases or some charges to avoid scrutiny of the software, that could prevent victims from getting justice consistently,” she said. “The government is effectively asserting sweeping surveillance powers but is then hiding from the courts what the software did and how it worked.”

------------------------------------------------------------

 PASSAGE TWO OF THE DAY: "The government’s reluctance to share technology with defense attorneys isn’t limited to child pornography cases. Prosecutors have let defendants monitored with cellphone trackers known as Stingrays go free rather than fully reveal the technology. The secrecy surrounding cell tracking was once so pervasive in Baltimore that Maryland’s highest court rebuked the practice as “detrimental.” As was first reported by Reuters in 2013, the U.S. Drug Enforcement Administration relied in investigations on information gathered through domestic wiretaps, a phone-records database and National Security Agency intercepts, while training agents to hide those sources from the public record. Courts and police are increasingly using software to make decisions in the criminal justice system about bail, sentencing, and probability-matching for DNA and other forensic tests,” said Jennifer Granick, a surveillance and cybersecurity lawyer with the American Civil Liberties Union’s Speech, Privacy and Technology Project who has studied the issue. “If the defense isn’t able to examine these techniques, then we have to just take the government’s word for it — on these complicated, sensitive and non-black-and-white decisions. And that’s just too dangerous.”

------------------------------------------------------------

STORY: "Prosecutors Dropping Child Porn Charges After Software Tools Are Questioned," by reporter Jack Gillum, published by Pro Publica on April 3, 2019. ( Jack Gillum is a senior reporter at ProPublica covering technology, specializing in how algorithms, big data and social media platforms affect people’s daily lives and civil rights.)

SUB-HEADING: "More than a dozen cases were dismissed after defense attorneys asked to examine, or raised doubts about, computer programs that track illegal images to internet addresses."

GIST: (This is just a portion of a lengthy story. The rest is well worth reading at the link below. HL) "Using specialized software, investigators traced explicit child pornography to Todd Hartman’s internet address. A dozen police officers raided his Los Angeles-area apartment, seized his computer and arrested him for files including a video of a man ejaculating on a 7-year-old girl. But after his lawyer contended that the software tool inappropriately accessed Hartman’s private files, and asked to examine how it worked, prosecutors dismissed the case. Near Phoenix, police with a similar detection program tracked underage porn photos, including a 4-year-old with her legs spread, to Tom Tolworthy’s home computer. He was indicted in state court on 10 counts of committing a “dangerous crime against children,” each of which carried a decade in prison if convicted. Yet when investigators checked Tolworthy’s hard drive, the images weren’t there. Even though investigators said different offensive files surfaced on another computer that he owned, the case was tossed. At a time when at least half a million laptops, tablets, phones and other devices are viewing or sharing child pornography on the internet every month, software that tracks images to specific internet connections has become a vital tool for prosecutors. Increasingly, though, it’s backfiring. Drawing upon thousands of pages of court filings as well as interviews with lawyers and experts, ProPublica found more than a dozen cases since 2011 that were dismissed either because of challenges to the software’s findings, or the refusal by the government or the maker to share the computer programs with defense attorneys, or both. Tami Loehrs, a forensics expert who often testifies in child pornography cases, said she is aware of more than 60 cases in which the defense strategy has focused on the software. Defense attorneys have long complained that the government’s secrecy claims may hamstring suspects seeking to prove that the software wrongly identified them. But the growing success of their counterattack is also raising concerns that, by questioning the software used by investigators, some who trade in child pornography can avoid punishment."



Sunday, May 19, 2019

Technology Series: (Part Six): Algorithms: (Accountability): Trying to figure out whether they are doing what they are supposed to do? 'Curbed NY' reporter Diana Budds suggests (based on New York City's experience) that this is not an easy task: "In May 2018, Mayor Bill de Blasio announced the formation of the Automated Decision Systems Task Force, a cross-disciplinary group of city officials and experts in artificial intelligence (AI), ethics, privacy, and law. Established by Local Law 49, the goal of the ADS Task Force is to develop a process for reviewing algorithms the city uses—such as those for determining public school assignments, predicting which buildings should be inspected, and fighting tenant harassment—through the lens of equity, fairness, and accountability. But nearly one year later, little progress has been made, casting doubt that the task force will fulfill its mandate: issuing a report of policy recommendations by fall 2019."


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the  application of artificial intelligence technology to policing, public safety, and the criminal justice process,  not just in North America, but in countries all over the world, including China. Although I accept that properly applied science  can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good, can eventually be used against the people they were supposed to  benefit. As reporter Sieeka Khan  writes in Science Times:  "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well. The purpose of this 'technology' series, is to highlight the dangers of artificial intelligence -  and to help readers make their own assessments as to  whether these innovations will do more harm than good."

Harold Levy: Publisher: The Charles Smith Blog.

----------------------------------------------------------

PASSAGE OF THE DAY: "Automated decision systems have been in use in city government for many years. Because of their opaque nature (they’re often off-the-shelf products from private companies) and the fact that there’s little knowledge of what systems are actually in use, there has been little governmental oversight and accountability. Meanwhile, many of these systems are biased and flawed. The risk assessment algorithm used by Broward County, Florida, to predict future criminals was the subject of a ProPublica expose on racially biased software. After an algorithm in use by the Arkansas Department of Public Health began dramatically reducing benefits for Medicaid recipients, the state was sued. A judge ordered the state to stop using the automated system for determining home health care hours. And in the 1970s, a flawed algorithm informing FDNY station closures left broad swathes of the city susceptible to fire, disproportionately affecting predominantly low-income black and Latino neighborhoods. Matching algorithms used by NYC public schools have favored white students and disadvantaged students of color. Local Law 49 was praised as a significant step toward achieving equity and fairness in New York City. But there were clear challenges from the very beginning: The law was broad, sweeping, and ambitious. It requires a level of transparency that many agencies—like the NYPD, which frequently does not disclose information publicly, citing interference with public safety—and the tech companies that develop these products are not accustomed to.?"

-----------------------------------------------------------

STORY: "New York City's AI task force stalls," by Diana Budds, published by  'Curbed NY' on April 16, 2019. Diana Budds is a New York–based writer interested in stories about how design reflects and affects culture: everything from the hidden inequality of streetscapes to how algorithms are reshaping our world.

SUB-HEADING: "Nearly one year after its founding, the Automated Decision Systems Task Force hasn’t even agreed on the definition of an automated decision system.

GIST: "In May 2018, Mayor Bill de Blasio announced the formation of the Automated Decision Systems Task Force, a cross-disciplinary group of city officials and experts in artificial intelligence (AI), ethics, privacy, and law. Established by Local Law 49, the goal of the ADS Task Force is to develop a process for reviewing algorithms the city uses—such as those for determining public school assignments, predicting which buildings should be inspected, and fighting tenant harassment—through the lens of equity, fairness, and accountability. But nearly one year later, little progress has been made, casting doubt that the task force will fulfill its mandate: issuing a report of policy recommendations by fall 2019. “My major concern is the task force has been on a trajectory of nothing. A lot of time has been wasted,” says Rashida Richardson, director of policy research at AI Now, a research institute at NYU that focuses on the social implications of artificial intelligence. (AI Now co-founder Meredith Whittaker is a member of the task force.) “Squandering almost a year’s worth of time makes me concerned about the value and robustness of the final product.” Automated decision systems have been in use in city government for many years. Because of their opaque nature (they’re often off-the-shelf products from private companies) and the fact that there’s little knowledge of what systems are actually in use, there has been little governmental oversight and accountability.
Meanwhile, many of these systems are biased and flawed. The risk assessment algorithm used by Broward County, Florida, to predict future criminals was the subject of a ProPublica expose on racially biased software. After an algorithm in use by the Arkansas Department of Public Health began dramatically reducing benefits for Medicaid recipients, the state was sued. A judge ordered the state to stop using the automated system for determining home health care hours. And in the 1970s, a flawed algorithm informing FDNY station closures left broad swathes of the city susceptible to fire, disproportionately affecting predominantly low-income black and Latino neighborhoods. Matching algorithms used by NYC public schools have favored white students and disadvantaged students of color. Local Law 49 was praised as a significant step toward achieving equity and fairness in New York City. But there were clear challenges from the very beginning: The law was broad, sweeping, and ambitious. It requires a level of transparency that many agencies—like the NYPD, which frequently does not disclose information publicly, citing interference with public safety—and the tech companies that develop these products are not accustomed to. At an April 4 hearing before the City Council Committee on Technology, task force co-chair Jeff Thamkittikasem, director of the Mayor’s Office of Operations, testified that the group has not reached consensus about what constitutes an automated decision system, despite meeting about 20 times over the past year. “The task force has spent time looking at what falls under an agency ADS; it’s taken more time than we thought it would,” Thamkittikasem said, adding that because the law’s definition of ADS is broad, members flagged a vast array of computer models along the spectrum, including sophisticated machine learning models, as well as “calculators and advanced Excel spreadsheets.” Thamkittikasem also told the council that the task force does not know what automated decision systems are in use, does not plan to create or disclose a list of systems the city uses, and has not held any public meetings.
At the hearing, members of the task force, along with data experts and privacy advocates expressed frustration with the lack of progress and reluctance to disclose what automated systems are in use.
In prepared remarks, Janet Haven, executive director of Data & Society, a New York-based research group focused on the social and cultural issues surrounding AI and data-centric technology, said, “We have seen little evidence that the task force is living up to its potential. New York has a tremendous opportunity to lead the country in defining these new public safeguards, but time is growing short to deliver on the promise of this body.” During his testimony to City Council, Albert Fox Cahn, a privacy advocate who departed the group in December, voiced alarm about mismanagement and the disempowering of the task force. One issue was the use of the Jain Family Foundation, a non-profit research institute that the city hired (they worked pro bono) to help provide project management and research support. It was never an official member of the task force, yet its scope increased as time went on from providing background research to authoring proposed language and policy documents for the task force to ratify. “Increasingly, the foundation was writing a first draft of the task force’s report,” Cahn told the City Council during the hearing. “The foundation’s role drew complaints from numerous task force members, so it was eventually phased out, but it’s a telling example of how the role of task force members themselves was circumscribed as part of this process.” The Jain Family Foundation’s work included attempts to define an ADS. They presented options for the group to vote on, but since the definitions did not reflect the views of the task force, they did not reach consensus. The Jain Family Foundation stopped its work in December. “Everything about the task force report was ambiguous and up to the task force to decide, except the definition of an automated decision system,” Cahn later told Curbed. “That was the one clear thing presented by the City Council [in the Local Law] and it was unfortunate that the task force hasn’t operated from the baseline understanding as defined by the Council … I believe it was the Mayor’s Office that raised fears that [it] was an overly expansive definition. During the hearing [the task force chairs] talked about not wanting every Excel document scrutinized. Something important to understand in this discussion is some of the most powerful and sweeping tools can be run on relatively simple platforms.” Task force members Julia Stoyanovich, a data science, computer science, and engineering professor at NYU, and Solon Barocas, a Cornell professor focusing on the ethics of machine learning, submitted joint testimony to the City Council that expressed particular concern over the lack of information made available to them, stressing the importance of knowing about actual systems in use. Without real-life data sets and case studies, the recommendations would be generic and ineffective for New York City’s needs, and could have been completed using existing academic research. “A report based on hypothetical examples, rather than on actual NYC systems, will remain abstract and inapplicable in practice,” they wrote.
“The task force cannot issue actionable and credible recommendations without some knowledge of the systems to which they are intended to apply … The apparent lack of commitment to transparency on the part of task force leadership casts doubt on the City’s intentions to seriously consider or enact the report’s recommendations—recommendations largely about transparency.”
City officials are also growing impatient. In a March 26 letter to Thamkittikasem, Comptroller Scott Stringer emphasized the importance of algorithmic accountability and expressed disappointment in the task force’s work to date, particularly that disclosure of automated decision systems has not occurred. He requested a list of all algorithms that inform public services or placement in a public facility—like school selection, homeless shelter placement, bail determinations, domestic violence interventions, and child protective services—by May 26, as well as information about how each is used and how they were developed. “Algorithms should be subject to the same scrutiny with which we treat any regulation, standard, rule, or protocol. It is essential that they are highly vetted, transparent, accurate and do not generate injurious, unintended consequences,” Stringer wrote. “Without such oversight, misguided or outright inaccurate algorithms can fester and lead to increasingly problematic outcomes for city residents, employees, and contractors.” This lack of progress to date reflects the overall difficulty of regulating technology, a field that’s coming under increased scrutiny at federal, state, and local levels. This month, the House and Senate introduced the Algorithmic Accountability Act, which, if passed, would require the FTC to create rules for assessing the impact of automated decision systems. HUD recently sued Facebook for housing discrimination in its ads, the New York Civil Liberties Union is suing ICE for its immigrant risk assessment algorithm, and a Connecticut judge recently ruled that tenant screening companies that use algorithmic risk assessments must comply with fair housing rules. Five months after New York City announced the ADS Task Force, Vermont announced a statewide Artificial Intelligence Task Force, which had similar directives as New York City’s: to make recommendations on oversight and regulation of algorithmic systems in use. It’s held multiple public meetings and is due to release its report in June, showing that with determination and proper support from government institutions, this type of work, while difficult and uncharted, is possible in a timely manner. To help improve transparency, AI Now compiled a list of all the automated decision systems it knows the city uses, which is far from an exhaustive list. The ADS Task Force is due to host its first public forum on April 30 at New York Law School."




The entire story can be read at:
https://ny.curbed.com/2019/4/16/18335495/new-york-city-automated-decision-system-task-force-ai

 

Saturday, May 18, 2019

Technology Series: (Part Five): 'One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority:' A disturbing article by New York Times reporter Paul Mozur which details how ethnic profiling software developed for use in China can be "easily put" in the hands of other governments..."The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism."


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the  application of artificial intelligence technology to policing, public safety, and the criminal justice process,  not just in North America, but in countries all over the world, including China. Although I accept that properly applied science  can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good, can eventually be used against the people they were supposed to  benefit. As reporter Sieeka Khan  writes in Science Times:  "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well. The purpose of this 'technology' series, is to highlight the dangers of artificial intelligence -  and to help readers make their own assessments as to  whether these innovations will do more harm than good."

Harold Levy: Publisher: The Charles Smith Blog.

----------------------------------------------------------

PASSAGE OF THE DAY: "The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps. Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority."

-----------------------------------------------------------

STORY: "One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority," by reporter Paul Mozur, published by The New York Times on April 14, 2019. (Paul Mozur is a technology reporter based in Shanghai. Along with writing about Asia's biggest tech companies, he covers cybersecurity, emerging internet cultures, censorship and the intersection of geopolitics and technology in Asia. A Mandarin speaker, he was a reporter for The Wall Street Journal in China and Taiwan prior to joining The New York Times in 2014.)

GIST: "The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as many as a million of them in detention camps. Now, documents and interviews show that the authorities are also using a vast, secret system of advanced facial recognition technology to track and control the Uighurs, a largely Muslim minority. It is the first known example of a government intentionally using artificial intelligence for racial profiling, experts said. The facial recognition technology, which is integrated into China’s rapidly expanding networks of surveillance cameras, looks exclusively for Uighurs based on their appearance and keeps records of their comings and goings for search and review. The practice makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism. The technology and its use to keep tabs on China’s 11 million Uighurs were described by five people with direct knowledge of the systems, who requested anonymity because they feared retribution. The New York Times also reviewed databases used by the police, government procurement documents and advertising materials distributed by the A.I. companies that make the systems. Chinese authorities already maintain a vast surveillance net, including tracking people’s DNA, in the western region of Xinjiang, which many Uighurs call home. But the scope of the new systems, previously unreported, extends that monitoring into many other corners of the country. "............................."Yitu and its rivals have ambitions to expand overseas. Such a push could easily put ethnic profiling software in the hands of other governments, said Jonathan Frankle, an A.I. researcher at the Massachusetts Institute of Technology. “I don’t think it’s overblown to treat this as an existential threat to democracy,” Mr. Frankle said. “Once a country adopts a model in this heavy authoritarian mode, it’s using data to enforce thought and rules in a much more deep-seated fashion than might have been achievable 70 years ago in the Soviet Union. To that extent, this is an urgent crisis we are slowly sleepwalking our way into.”

The entire story can be read at:

https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html


Friday, May 17, 2019

Bulletin: (Criminalizing reproduction): "The Missouri House has passed a bill banning abortions after a fetal heartbeat is detected—about eight weeks into the pregnancy, before many women know they are pregnant—and it includes no exceptions for rape or incest..." The passage of the bill was the culmination of long years of effort by the anti-abortion movement in the state, and Republican lawmakers voted for it overwhelmingly. One of them, Representative Holly Rehder, a Republican from southeast Missouri, implied in her speech that rape and incest were not reasons for exceptions. “To stand on this floor and say, ‘How could someone look at a child of rape or incest and care for them?’” she said. “I can say how we can do that. We can do that with the love of God.”



PUBLISHER'S NOTE:

I have taken on the issue of criminalizing reproduction - a natural theme for a Blog concerned with flawed science in its myriad forms and its flawed devotees (like Charles Smith) - as I am utterly opposed to the current movement in the United States and some other countries - thankfully no longer Canada - towards imprisoning women and their physicians on the basis of sham science (or any other basis). Control over their reproductive lives is far too important to women in America, and anywhere else, if they are to participate equally in the economic and social life of their nations without fear of losing their freedom at the hands of political opportunists and fanatics. I will continue to follow relevant cases, such as those of Purvi Patel and Bei Bei Shuai, and the mounting wave of legislative attacks aimed at chipping away at Roe v. Wade and ultimately dismantling it.

Harold Levy: Publisher: The Charles Smith Blog.


------------------------------------------------------------

GIST: "Missouri lawmakers passed a bill Friday to ban abortions after a fetal heartbeat is detected, the latest in a flurry of anti-abortion measures across the country intended to mount direct challenges to federal protections for the procedure. The Missouri House passed H.B. 126 in a 110-to-44 vote after hours of heated debate, including impassioned speeches by both Democratic and Republican legislators and angry shouts of “when you lie, people die” from those who opposed the bill. Those protesters were eventually removed by the police. The measure, known as the Missouri Stands for the Unborn Act, now moves to the desk of Gov. Mike Parson, a Republican, who is expected to sign it. The bill, which bans abortions at around eight weeks of pregnancy, often before a woman even knows she is pregnant, included no exceptions for rape or incest. The passage of the bill was the culmination of long years of effort by the anti-abortion movement in the state, and Republican lawmakers voted for it overwhelmingly. One of them, Representative Holly Rehder, a Republican from southeast Missouri, implied in her speech that rape and incest were not reasons for exceptions. “To stand on this floor and say, ‘How could someone look at a child of rape or incest and care for them?’” she said. “I can say how we can do that. We can do that with the love of God.”"

Read this story at this link: New York Times
-------------------------------------------------------