Wednesday, June 29, 2022

Technology: (Mis)uses thereof in the criminal justice system: Tim Cushing, an awesome observer of the (mis)uses of technology in criminal justice systems, explains in 'Techdirt' why an artificial intelligence (AI) tool used in some U.S. states to initiate child welfare investigations has come under close scrutiny..."There’s plenty of human work to be done, but there never seems to be enough humans to do it. When things need to be processed in bulk, we turn it over to hardware and software. It isn’t better. It isn’t smarter. It’s just faster. We can’t ask humans to process massive amounts of data because they just can’t do it well enough or fast enough. But they can write software that can perform tasks like this, allowing humans to do the other things they do best… like make judgment calls and deal with other humans. Unfortunately, even AI can become mostly human, and not in the sentient, “turn everyone into paperclips” way it’s so often portrayed in science fiction. Instead, it becomes an inadvertent conduit of human bias that can produce the same results as biased humans, only at a much faster pace while being whitewashed with the assumption that ones and zeroes are incapable of being bigoted. But that’s the way AI works, even when deployed with the best of intentions. Unfortunately, taking innately human jobs and subjecting them to automation tends to make societal problems worse than they already are. Take, for example, a pilot program that debuted in Pennsylvania before spreading to other states. Child welfare officials decided software should do some of the hard thinking about the safety of children. But when the data went in, the usual garbage came out. According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children. Fortunately, humans were still involved, which means not everything the AI spit out was treated as child welfare gospel. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time. But if the balance shifted towards more reliance on the algorithm, the results would be even worse."




PASSAGE OF THE DAY: "But Oregon officials have decided to ditch this following the AP investigation published in April (as well as a nudge from Senator Ron Wyden). Oregon’s Department of Human Services announced to staff via email last month that after “extensive analysis” the agency’s hotline workers would stop using the algorithm at the end of June to reduce disparities concerning which families are investigated for child abuse and neglect by child protective services. “We are committed to continuous quality improvement and equity,” Lacey Andresen, the agency’s deputy director, said in the May 19 email."


------------------------------------------------------------------


COMMENTARY: "Oregon State officials dump AI tool used to initiate welfare investigations," by Tim Cushing, Techdirt's awesome commentator on the (mis)uses of technology, published on June 17, 2022.


GIST:  "There’s plenty of human work to be done, but there never seems to be enough humans to do it. When things need to be processed in bulk, we turn it over to hardware and software. It isn’t better. It isn’t smarter. It’s just faster.


We can’t ask humans to process massive amounts of data because they just can’t do it well enough or fast enough. But they can write software that can perform tasks like this, allowing humans to do the other things they do best… like make judgment calls and deal with other humans.


Unfortunately, even AI can become mostly human, and not in the sentient, “turn everyone into paperclips” way it’s so often portrayed in science fiction. 


Instead, it becomes an inadvertent conduit of human bias that can produce the same results as biased humans, only at a much faster pace while being whitewashed with the assumption that ones and zeroes are incapable of being bigoted.


But that’s the way AI works, even when deployed with the best of intentions.


Unfortunately, taking innately human jobs and subjecting them to automation tends to make societal problems worse than they already are.


Take, for example, a pilot program that debuted in Pennsylvania before spreading to other states. Child welfare officials decided software should do some of the hard thinking about the safety of children. But when the data went in, the usual garbage came out.


According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a “mandatory” neglect investigation, when compared with white children.


Fortunately, humans were still involved, which means not everything the AI spit out was treated as child welfare gospel.


The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.


But if the balance shifted towards more reliance on the algorithm, the results would be even worse.


If the tool had acted on its own to screen in a comparable rate of calls, it would have recommended that two-thirds of Black children be investigated, compared with about half of all other children reported, according to another study published last month and co-authored by a researcher who audited the county’s algorithm.


There are other backstops that minimize the potential damage caused by this tool, which the county relies on to handle thousands of neglect decisions a year. 


Workers are told not to use algorithmic output alone to instigate investigations. 


As noted above, workers are welcome to disagree with the automated determinations. 


And this is only used to handle cases of potential neglect or substandard living conditions, rather than cases involving more direct harm like physical or sexual abuse.


Allegheny County isn’t an anomaly. More locales are utilizing algorithms to make child welfare decisions. 


The state of Oregon’s tool is based on the one used in Pennsylvania, but with a few helpful alterations.


Oregon’s Safety at Screening Tool was inspired by the influential Allegheny Family Screening Tool, which is named for the county surrounding Pittsburgh, and is aimed at predicting the risk that children face of winding up in foster care or being investigated in the future. It was first implemented in 2018. Social workers view the numerical risk scores the algorithm generates – the higher the number, the greater the risk – as they decide if a different social worker should go out to investigate the family.


But Oregon officials tweaked their original algorithm to only draw from internal child welfare data in calculating a family’s risk, and tried to deliberately address racial bias in its design with a “fairness correction.”


But Oregon officials have decided to ditch this following the AP investigation published in April (as well as a nudge from Senator Ron Wyden).


Oregon’s Department of Human Services announced to staff via email last month that after “extensive analysis” the agency’s hotline workers would stop using the algorithm at the end of June to reduce disparities concerning which families are investigated for child abuse and neglect by child protective services.


“We are committed to continuous quality improvement and equity,” Lacey Andresen, the agency’s deputy director, said in the May 19 email.


There’s no evidence Oregon’s tool resulted in disproportionate targeting of minorities, but the state obviously feels it’s better to get out ahead of the problem, rather than dig out of a hole later. It appears, at least from this report, the immensely important job of ensuring children’s safety will still be handled mostly by humans. 


And yes, humans are more prone to bias than software, but at least their bias isn’t hidden behind a wall of inscrutable code and is far less efficient than the slowest biased AI."
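
A note on the mechanics described above: the screening tools at issue boil each referral down to a numerical risk score (the higher the number, the greater the predicted risk) and compare it to a cut-off, and Oregon’s version reportedly layered a “fairness correction” on top. The short Python sketch below is purely illustrative; the field names, thresholds and group offsets are invented, not drawn from the actual Allegheny or Oregon tools. It is meant only to show, in principle, how a score-plus-threshold recommendation and a group-specific threshold adjustment can work.

# Purely illustrative sketch: invented names, thresholds and offsets; NOT the
# actual Allegheny Family Screening Tool or Oregon's Safety at Screening Tool.
from dataclasses import dataclass

@dataclass
class Referral:
    risk_score: float  # model output; a higher number means greater predicted risk
    group: str         # demographic group, used only by the fairness correction

BASE_THRESHOLD = 0.7                      # hypothetical screen-in cut-off
GROUP_OFFSETS = {"group_a": 0.0,          # hypothetical per-group adjustments,
                 "group_b": 0.05}         # meant to even out screen-in rates

def recommend_investigation(r: Referral, fairness_correction: bool = True) -> bool:
    """Return True if the tool would recommend screening the referral in."""
    threshold = BASE_THRESHOLD
    if fairness_correction:
        threshold += GROUP_OFFSETS.get(r.group, 0.0)
    return r.risk_score >= threshold

# The hotline worker sees the score and the recommendation, but, as the
# commentary stresses, remains free to disagree and make a different call.
if __name__ == "__main__":
    print(recommend_investigation(Referral(risk_score=0.72, group="group_b")))

The point, as Cushing notes, is that nothing in code like this looks “bigoted” on its face; the bias, if any, lives in the data behind the score and in where the thresholds are set.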


The entire commentary can be read at:

https://www.techdirt.com/user/capitalisliontamer/

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher: The Charles Smith Blog;



SEE BREAKDOWN OF  SOME OF THE ON-GOING INTERNATIONAL CASES (OUTSIDE OF THE CONTINENTAL USA) THAT I AM FOLLOWING ON THIS BLOG,  AT THE LINK BELOW:  HL:




FINAL WORD:  (Applicable to all of our wrongful conviction cases):  "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."
Lawyer Radha Natarajan:
Executive Director: New England Innocence Project;
—————————————————————————————————
FINAL, FINAL WORD: "Since its inception, the Innocence Project has pushed the criminal legal system to confront and correct the laws and policies that cause and contribute to wrongful convictions.   They never shied away from the hard cases — the ones involving eyewitness identifications, confessions, and bite marks. Instead, in the course of presenting scientific evidence of innocence, they've exposed the unreliability of evidence that was, for centuries, deemed untouchable." So true!
Christina Swarns: Executive Director: The Innocence Project;