Wednesday, August 8, 2018

Technology series: (Part 1): An excellent beginning for this series: ACLU podcast tells us 'How to Fight an Algorithm.' ..."Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public. What does all this mean for our civil liberties and how do we exercise oversight of an algorithm? Here to talk through this brave new world with us is Meredith Whittaker. She is co-founder and executive director of AI Now, a research institute that studies the social implications of artificial intelligence."


PUBLISHER'S NOTE: Artificial intelligence, once the stuff of science fiction, has become all too real in our modern society - especially in the American criminal justice system. As Lee Rowland puts it: "Today, artificial intelligence. It's everywhere — in our homes, in our cars, our offices, and of course online. So maybe it should come as no surprise that government decisions are also being outsourced to computer code. In one Pennsylvania county, for example, child and family services uses digital tools to assess the likelihood that a child is at risk of abuse. Los Angeles contracts with the data giant Palantir to engage in predictive policing, in which algorithms identify residents who might commit future crimes. Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public." The algorithm is finding its way deeper and deeper into the nation's courtrooms, on decisions that used to belong exclusively to judges, such as bail and even the sentence to be imposed. I am pleased to see that a dialogue has begun on the effect that the increasing use of these algorithms in our criminal justice systems is having on our society and on the quality of decision-making inside courtrooms. Do they, in fact, work as promised? If so, does the 'good' justify the 'bad'? As Lee Rowland asks about this brave new world, "What does all this mean for our civil liberties and how do we exercise oversight of an algorithm?" In view of the importance of these issues - and the increasing use of artificial intelligence by countries for surveillance of their citizens - it's time for yet another technology series on The Charles Smith Blog focusing on the impact of science on society and criminal justice. Up to now I have been identifying the appearance of these technologies. Now at last I can report on the realization that some of them may be two-edged swords - and on growing pushback.

Harold Levy: Publisher; The Charles Smith Blog:

------------------------------------------------------------

PASSAGE ONE OF THE DAY:  "LEE: Could you give us a specific example that people might have heard of that demonstrates how the government is using AI? MEREDITH: Well, we have seen in the past the Baltimore Police Department scanning Instagram photos from a Freddie Gray protest, feeding them through facial recognition, using that to identify people with outstanding warrants or tickets. Right? So we do see the will on the part of a lot of these law enforcement officers and agencies to use the technology in ways that would be problematic and oppressive for civil liberties, to say the least."

------------------------------------------------------------------------

PASSAGE TWO OF THE DAY: "MEREDITH: [00:05:31] Absolutely. I mean, I think we go back to how AI is constructed, right. It requires a lot of data. It requires data that, given the nature of space-time, was created in the past. And this data necessarily reflects the patterns of discrimination, of oppression, the patterns of life as it is. AI systems often not only replicate, but in some senses can amplify and mask, existing patterns of oppression and discrimination. So there is a system that has been proposed in Pennsylvania. This is a risk assessment system that will give a score as to whether a defendant is at risk of reoffending. LEE: This is a criminal defendant? This is in a criminal context? MEREDITH: Yeah, this is in a criminal context. Now, one of the inputs it uses to make this decision is arrests. Has this person been arrested in the past? But if you look at Pennsylvania, Pennsylvania has the second highest racial disparity in arrests in the U.S. So one white man is arrested for every nine black men and every three Latino men. So your risk score, and with it your chances of being kept behind bars, will increase simply because you're a black man. And we should also note that the record of arrests is taken from a state in which stop and frisk has been deployed. So you're looking here at the sort of methodology of data creation literally being practices of unconstitutional policing that are then fed into these systems, which claim to give objective results but are actually replicating these same patterns of arguably unconstitutional discrimination."

--------------------------------------------------------------------- 

PASSAGE THREE OF THE DAY: "LEE: You know, traditionally, if you went through a criminal justice case and you went to sentencing or a bail hearing, you would have real humans testifying against you. Now you're suggesting that some of those decisions are being replaced by algorithms that the people administering them barely understand. So can you explain to me what happens in a courtroom if somebody says, "Well hey, this algorithm is junk because you fed in racially biased arrest statistics and so it spits out that I'm likely to be dangerous because I'm a black guy, and that's flawed." Are there ways for people to make that argument, to challenge the underlying code in these algorithms in the same way that you would, say, traditionally get to confront a human witness against you? MEREDITH: At this point, not many, and this is something we're working on at AI Now. We have developed a policy framework called the Algorithmic Impact Assessment framework that is looking at simply giving people more access to information on where these systems are, when they may have made a decision that affected my life, and allowing some form of pushback to debate that decision. But at this point the ability to contest an automated decision is not part of most criminal trials."

-------------------------------------------------------------

PASSAGE FOUR OF THE DAY: "MEREDITH: Yeah. And this is so common that it actually has a term. We call it automation bias. And it's just the tendency to be more credulous when a seemingly objective veneer of scientific authority, like a computer, gives you an answer. I think there's also...we need to look at the way our current trust in technical solutions kind of adds to this tendency. I think it's telling that a lot of these systems have been created without documenting the data they used and certainly without releasing this data to the public. There are few to no monitoring regimes that actually look at the impact of these systems and it's often very hard to get records around this. So we can look at the way in which a kind of automation bias is baked into the way even the developers are thinking about these systems."

---------------------------------------------------------------

PODCAST: "How to Fight an Algorithm (ep. 7), on the podcast "at liberty" published by The American Civil Liberties Union (ACLU).

GIST: "I'm Lee Rowland. And from the ACLU, this is At Liberty, the show where we discuss today's biggest civil rights and civil liberties topics.
Today, artificial intelligence. It's everywhere — in our homes, in our cars, our offices, and of course online. So maybe it should come as no surprise that government decisions are also being outsourced to computer code. In one Pennsylvania county, for example, child and family services uses digital tools to assess the likelihood that a child is at risk of abuse. Los Angeles contracts with the data giant Palantir to engage in predictive policing, in which algorithms identify residents who might commit future crimes. Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public. What does all this mean for our civil liberties and how do we exercise oversight of an algorithm? Here to talk through this brave new world with us is Meredith Whittaker. She is co-founder and executive director of AI Now, a research institute that studies the social implications of artificial intelligence. Meredith is a distinguished research scientist at New York University, the founder of Google's Open Research Group, and an expert on digital privacy and security issues. Meredith, thank you so much for being with us today."

--------------------------------------
MEREDITH WHITTAKER
 It's my pleasure. Thank you so much for speaking with me.
LEE
  Of course. So let's start big. Maybe with just some definitional terms. What do we mean when we talk about AI, artificial intelligence?
MEREDITH
  This is a great and fundamental question and I want to put everyone who's asking it at ease by letting you know that even the most technically adept people struggle with this. AI is not a clearly fixed term and it often means slightly different things to different people. This is in part because this is a field that's evolved over many years and because it's also a very hyped marketing term at this point. So the term AI is being used to sell everything, some of which resembles what we might think of as traditional AI, some of which certainly does not. So it might be helpful to sort of go back to the history and begin around the beginning. So it was in 1956 that a group of men met at Dartmouth. This small group of guys was determined to build intelligent machines over the course of the summer. They thought this was possible and they didn't succeed. That's a spoiler alert there. But they did ignite a field of AI research and this field has developed in fits and starts with, you know, moments of great hope and moments of disappointment over the last 62 years. And you know I think it's important to note, in the context of some of the current debates going on, that this was largely bankrolled and shaped by the U.S. military. So from this common root a lot of different sub-branches, all clustered under the umbrella AI, emerged. This produced kind of an ecology of AI techniques, ranging from machine vision to natural language processing to machine learning to deep neural nets and well beyond. But they all share a common characteristic that's important in the context of today's conversation. They all learn about the world by being fed large amounts of data. And they all make predictions and determinations based on what's in such data. So if you saw the movie Her you might remember that the AI system built a model of the owner's personality by reading his e-mail. That's basically how it works.
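To make concrete what "being fed large amounts of data" means in practice, here is a minimal, hypothetical sketch - not from the podcast - of the pattern Meredith describes: a model is fitted to historical examples and then used to score a new case. The data, the features, and the choice of scikit-learn's LogisticRegression are illustrative assumptions only.

```python
# Minimal sketch of the pattern described above: a model is "fed" historical
# examples, then makes predictions about new cases. Data and features invented.
from sklearn.linear_model import LogisticRegression

# Each row is a past case described by two numeric features; each label is the
# outcome the system is told to predict. Whatever patterns (or biases) exist
# in this history are exactly what the model learns.
X_history = [[2, 0], [5, 1], [1, 0], [7, 1], [0, 0], [6, 1]]
y_history = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_history, y_history)   # "learning about the world" from past data

new_case = [[4, 1]]
print(model.predict(new_case))    # the prediction is driven entirely by that history
```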

---------------------------------------------------------------
LEE
How does AI appear in our daily lives and who's using it?
MEREDITH
  So, AI appears in many, many different forms. Right? You may walk through an airport. You may be profiled by an AI risk assessment system at TSA. You may apply for a loan online and there's sort of an AI backend that is running your credit record and some other data, determining if you're credit-worthy. You may apply for insurance and AI is determining whether you are eligible for a certain plan or not. AI is being used in hiring, it's being used to process worker data to detect which workers are potentially dissatisfied, and it can monitor their performance, and on and on and on. It's actually probably much easier to think about areas where AI isn't - which I can't think of right now - than it would be to name every sector in which AI and automated decision systems are being deployed.

---------------------------------------------------------------
LEE
Could you give us a specific example that people might have heard of that demonstrates how the government is using AI?
MEREDITH
Well, we have seen in the past the Baltimore Police Department scanning Instagram photos from a Freddie Gray protest, feeding them through facial recognition, using that to identify people with outstanding warrants or tickets. Right? So we do see the will on the part of a lot of these law enforcement officers and agencies to use the technology in ways that would be problematic and oppressive for civil liberties, to say the least.

---------------------------------------------------------------
LEE
  Are there particular groups of people who are more vulnerable to this kind of algorithmic decision making?
MEREDITH
  Absolutely. I mean, I think we go back to how AI is constructed, right. It requires a lot of data. It requires data that, given the nature of space-time, was created in the past. And this data necessarily reflects the patterns of discrimination, of oppression, the patterns of life as it is. AI systems often not only replicate, but in some senses can amplify and mask, existing patterns of oppression and discrimination. So there is a system that has been proposed in Pennsylvania. This is a risk assessment system that will give a score as to whether a defendant is at risk of reoffending.
LEE
This is a criminal defendant? This is in a criminal context?
MEREDITH
Yeah, this is in a criminal context. Now, one of the inputs it uses to make this decision is arrests. Has this person been arrested in the past? But if you look at Pennsylvania, Pennsylvania has the second highest racial disparity in arrests in the U.S. So one white man is arrested for every nine black men and every three Latino men. So your risk score, and with it your chances of being kept behind bars, will increase simply because you're a black man. And we should also note that the record of arrests is taken from a state in which stop and frisk has been deployed. So you're looking here at the sort of methodology of data creation literally being practices of unconstitutional policing that are then fed into these systems, which claim to give objective results but are actually replicating these same patterns of arguably unconstitutional discrimination.
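The mechanism Meredith describes - prior arrests as an input, so that a disparity in policing becomes a disparity in "risk" - can be sketched with a toy example. This is not the actual Pennsylvania tool, whose formula is not given in the podcast; the inputs and weights below are invented purely to show how the arithmetic works.

```python
# Hypothetical toy risk score (NOT the Pennsylvania system, whose formula is
# not described in the podcast). It shows the mechanism discussed above: when
# prior arrests are an input, a group that is arrested more often for the same
# behavior gets higher "risk" scores automatically.

def toy_risk_score(prior_arrests: int, age: int) -> float:
    """Invented scoring rule: more prior arrests and younger age raise the score."""
    return 0.5 * prior_arrests + 0.1 * max(0, 30 - age)

# Two people with identical underlying behavior, policed at very different
# rates (e.g., one lives where stop-and-frisk was heavily used).
lightly_policed = toy_risk_score(prior_arrests=1, age=25)
heavily_policed = toy_risk_score(prior_arrests=9, age=25)

print(lightly_policed, heavily_policed)  # 1.0 vs. 5.0 - the arrest disparity
                                         # reappears as a "risk" disparity
```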

---------------------------------------------------------------
LEE
  I think I've heard a phrase that describes this when talking to coders. It's "garbage in, garbage out." Is that the right thing to use here?
MEREDITH
That is exactly right. I would say it's garbage in, and often much harder to detect and much harder to contest garbage out. Right? These systems often resist due process. There are very few mechanisms for pushing back against an automated decision, and in a lot of cases the people who are actually tasked with using the system - the people on the ground, from a beat officer to a social worker who's given an iPad and told to run this algorithm to get a score - have no real understanding of how the system works. And in many cases they have no ability to override the determination of the system.
LEE
That's fascinating.
MEREDITH
  So garbage in, complicated garbage out.
LEE
And do you get the sense that because it's spit out by a computer, it sometimes comes with a veneer of neutrality or objectivity?
MEREDITH
Absolutely.

-------------------------------------------------------------
LEE
  And yet it's a product of a very flawed human system. Right? In an algorithm or code that's been designed with the flaws baked into it.
MEREDITH
  Absolutely.
LEE
But people I think have a natural inclination to believe that a computer code result is somehow objective.
MEREDITH
 Yeah. And this is so common that it actually has a term. We call it automation bias. And it's just the tendency to be more credulous when a seemingly objective veneer of scientific authority, like a computer, gives you an answer. I think there's also...we need to look at the way our current trust in technical solutions kind of adds to this tendency. I think it's telling that a lot of these systems have been created without documenting the data they used and certainly without releasing this data to the public. There are few to no monitoring regimes that actually look at the impact of these systems and it's often very hard to get records around this. So we can look at the way in which a kind of automation bias is baked into the way even the developers are thinking about these systems.

--------------------------------------------------------------------
LEE
You know, traditionally, if you went through a criminal justice case and you went to sentencing or a bail hearing, you would have real humans testifying against you. Now you're suggesting that some of those decisions are being replaced by algorithms that the people administering them barely understand. So can you explain to me what happens in a courtroom if somebody says, "Well hey, this algorithm is junk because you fed in racially biased arrest statistics and so it spits out that I'm likely to be dangerous because I'm a black guy, and that's flawed."
Are there ways for people to make that argument, to challenge the underlying code in these algorithms in the same way that you would, say, traditionally get to confront a human witness against you?
MEREDITH
  At this point, not many, and this is something we're working on at AI Now. We have developed a policy framework called the Algorithmic Impact Assessment framework that is looking at simply giving people more access to information on where these systems are, when they may have made a decision that affected my life, and allowing some form of pushback to debate that decision. But at this point the ability to contest an automated decision is not part of most criminal trials.
 I think, you know, an example from Arkansas in the sort of healthcare space, not the criminal justice space, would help illustrate this. So in 2016, Arkansas implemented a health care algorithm that was being used to allocate health benefits to Medicaid patients. And not only did it make some really fundamental mistakes, it was implemented in a way that left no room for override by the Medicaid worker who was in charge of administering it.
LEE
And how do we know that this system made mistakes?
MEREDITH
We know that it made mistakes because legal aid in Arkansas took a case from somebody who was impacted by the system whose benefits had been cut. You know we're looking at kind of home care patients, right. These are people who often need help getting out of bed in the morning, need help eating, need help in getting put back into bed. Right. You cut your home care benefits from 12 hours a week to something much smaller and you are really endangering that person's ability to live. So this is not trivial. And the reason we know this is that people raise complaints. Legal Aid began getting calls. They decide to take this case. At significant expense and time, they were able to get people to review the algorithm and it was only during the course of litigation that the fundamental flaws in the algorithm and the software implementation of the system were uncovered.
LEE
And that was uncovered only after the system had been in use for a while?
MEREDITH
Yeah. Exactly. And only through a sort of drawn-out and expensive litigation process.
So it's almost certain there are many more systems like this, maybe less egregiously harmful, that are in use that we don't know about because they haven't been disclosed or that some people may know about but that we don't have the resources or the time or the expertise to sort of push on a lawsuit or push for explanations.
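The design flaw Meredith points to in the Arkansas example - a computed benefit applied directly, with no path for the caseworker to override it or for the affected person to appeal - can also be sketched in a few lines. The real system's formula and code are not shown in the podcast and only came to light in litigation; everything below is an invented illustration of that missing-override pattern.

```python
# Invented sketch of the "no room for override" design described above
# (not the real Arkansas system, whose formula is not given in the podcast).

def allocate_home_care_hours(assessment_score: int) -> int:
    """Toy rule: map an assessment score straight to weekly care hours."""
    return max(4, 16 - assessment_score)   # purely illustrative formula

def apply_benefit(patient_id: str, assessment_score: int) -> dict:
    hours = allocate_home_care_hours(assessment_score)
    # Note what is absent: no caseworker_override argument, no appeal hook,
    # no record of the inputs. The number goes straight into the care plan.
    return {"patient": patient_id, "weekly_hours": hours}

print(apply_benefit("patient-001", assessment_score=10))
# {'patient': 'patient-001', 'weekly_hours': 6}
```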
LEE
It sounds like it took a lot of resources just to uncover you know how that particular code was flawed. Do you think that in practice local governments have the expertise or the know-how to assess those systems before they're put in place?
MEREDITH
  You know, I think this will vary. I think we have seen a starvation of social services and government services in the U.S. over many years. I think that's a solvable problem if we look to fund those types of experts. I mean, New York City is the first local government in the nation to pass legislation that is looking to implement algorithmic transparency and accountability.
LEE
 What does that mean? What does algorithmic transparency and accountability mean in practice?
MEREDITH
In this case it means that the algorithms that are used in government would be disclosed to the public, that there would be some form of deliberation around the use of such technologies, and that agencies that were using these technologies would be required to account for their use and hopefully to produce an impact assessment or some other analysis of what the effect of the use of such a system would be on the populations it was being deployed among.  You know, I caveat all of these. This is not in the law right now. The law as passed constituted a task force that I am a member of that is going to write a report that will be filed with the mayor's office late 2019. This report will then feed into a lawmaking process that will specify what the substance of algorithmic accountability actually looks like in New York.

------------------------------------------------------------
LEE
  So it sounds like these are the early days for a groundswell at least at the local government level. The systems you're describing have immense implications for people's daily lives, their freedom, their families. It sounds like it's hard to really exaggerate the degree to which AI decision-making could affect us all. Is this an inevitable march towards our Minority Report dystopian future, or are there ways that the public can have a meaningful say, can find out about these systems, can object to them before they're controlling our lives?
MEREDITH
  I mean, I don't believe in inevitability, because I would just stop working if I did. You know, a couple of years ago when these systems were being put in place, we didn't have this conversation. We now do. I think there is a lot of increased public awareness that has led to things like the New York City bill. It's a start, but it's a start that happened because council member Vacca at the time was getting a lot of calls asking questions about these systems in response to, you know, news articles and awareness-raising generally. And he couldn't answer these questions. And this sort of turned into this process.  So I think there's a lot of opportunity for people simply to ask, "Hey where are these systems being used? How are they impacting me? Hey, tech companies who are selling these systems, who are you selling them to? What are they doing? Why is it that you know my due process rights may be perturbed by a system that identifies me as like someone else but doesn't actually reflect me as a person, my record, my history?" So these are all questions that don't require a technical degree, that don't require that you're well versed in the latest technical jargon, and they're all really fundamental questions that I think we have to simply be requiring both governments and tech companies to answer clearly.

-----------------------------------------------------------
LEE
  I'm so glad you mentioned the tech companies because I definitely want to talk about what we've been seeing in Silicon Valley. There's been some real meaningful pushback lately from tech workers concerned about their companies' contracts with the government. It started at Google in a protest that I believe you were at least somewhat involved in - you can tell us - where employees organized against a Pentagon contract that would use AI to analyze drone footage. Project Maven, I believe it's called. And since then we've now seen similar protests popping up at Amazon, Microsoft, and Salesforce, also against the use of AI in government contracts. So can you tell us more about what's happening in the tech sector right now and why it's happening right now?
MEREDITH
  Yeah. Certainly, I was very involved in the Maven protests in my role as the leader of open research at Google, and I'm deeply heartened to see the rising concern and the willingness to kind of act on that across the industry. You know, I think part of what happened is that there was an increasing dissonance between the promise of tech as a great democratizer, and the reality of these kinds of contracts, right.  There are many people who for many reasons are deeply concerned about the idea of autonomous weapons. There are many people who are watching what's happening with the Trump administration, and watching the human rights abuses on the border, and watching the rise of authoritarianism frankly and recognizing that this is the moment where you make a choice, right. This is the moment where you say, I'm either going to go with the flow or I'm going to answer the question: What would you have done in this situation? And one of the things that's heartening to me is that old fashioned worker organizing works. That a lot of people suddenly realize that oh you know what, we don't actually have to align completely with our employer's interests and since we are the people who are building this technology, since our skills are necessary to do this, we should also be able to make a choice in what we're doing and what we're not doing.  So I think there's a long way to go but frankly if you'd asked me, six months ago, four months ago, if it were even possible for a workers' movement in Silicon Valley, I would have been skeptical. So you know I'm hopeful simply seeing this quick emergence with such clear language and such clear demands, even if there is a long road to go to ensure that the people building the tech actually have a meaningful ethical decision in what they're building and where it gets applied.

--------------------------------------------------------------------
LEE
Are engineers our best hope in seeking an ethical framework for the use of A.I?
MEREDITH
I think this is one lever among many. And I think we can't simply count on engineers holding the line against you know these massively powerful interests. We really have to think about what other organizing outside of tech companies would look like, what solidarity there would look like, what legal mechanisms do we have. You know I think this is a coalition effort if there ever were one, simply because this is technology that is not only affecting one group or another, right.
  Engineers certainly need to join the fight, but we need voices from all of the affected communities and beyond, making it clear that they too want a say in how this technology affects their lives, and saying you know it's not inevitable.

-----------------------------------------------------------------------
LEE
So Meredith what would you say to a member of the public who might be listening who says I want to help make this not inevitable?
MEREDITH
  Given the breadth of AI's reach right now into almost every sector, you know, start at home. Where are you seeing it in your field? Where are you seeing it in your discipline? Where are you seeing it in your neighborhood? Just begin asking those questions and be confident that whoever you are, you don't need a CS degree, you don't need fancy training in AI to be able to ask straightforward questions about who is this being used by? Who owns it? Who owns my data? Who made the decision to deploy this? Certainly call your local representatives, show up at city council meetings, ask questions if that's something you have time to do. If not, support people who do. And, you know, again given the way that AI is being deployed, it's not going to be hard to look around you and, if you do some digging, figure out one place that it's probably already operationalised in a way that you weren't aware of.

-------------------------------------------------------------
LEE
  Meredith, thank you so much for talking to us today and for helping to make a dystopian future less inevitable.
MEREDITH
Yeah. My pleasure. Thanks for doing this.
LEE
From the ACLU, this has been At Liberty. Thanks for listening and be sure to subscribe anywhere you get your podcasts.

The transcript of the entire podcast can be  found at the link below:

https://www.aclu.org/podcast/how-fight-algorithm-ep-7?ms_aff=NAT&initms_aff=NAT&ms=180804_privacyandtechnology_warrantlesssurveillance_newsletter_recruit_gradesAD&initms=180804_privacyandtechnology_warrantlesssurveillance_newsletter_recruit_gradesAD&ms_chan=eml&initms_chan=eml


 PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.