PASSAGE OF THE DAY: "In my view, there are a few things we can do. First, I’d like to see a non-industry body develop a system of auditing algorithmic bias, which would allow governments and businesses to restrict the use of algorithms that don’t meet a particular bias metric. With that in place, I’d like to see governments identify situations where the biases of those algorithms must be made available to the public. Law enforcement, consumer finance, and health care seem like good candidates from the get-go. Second, I believe governments and corporations should be required to disclose when an algorithm influenced the outcome of a decision. For example, if police arrest someone based in part on the evaluation of a predictive policing system, the suspect should have the right to know that. I don’t have high hopes any of that will happen, given the current political climate. But a boy can dream. It’s the only way I’ll be able to sleep at night, anyhow."
------------------------------------------------------------------
COMMENTARY: "AI Weekly: Dystopian visions of AI are distractions from present problems," by Blair Hanley Frank, published by Venture Beat on April 27, 2018. "Blair Hanley Frank is a staff writer for VentureBeat covering artificial intelligence and cloud computing. His work has previously appeared in outlets including PCWorld, Macworld, InfoWorld, Computerworld and GeekWire."
GIST: "Writing about AI for an appreciable amount of time is, in my
experience, enough to make any reasonable person concerned about the
future of humanity. But I worry the focus of that concern is too often
directed at the relatively distant future, which could lead to
unforeseen consequences in the present. Headlines from the past few months illuminate how bad things can get. Consider the cases of the self-driving Uber that killed Elaine Herzberg in Tempe, Arizona and that of the Apple engineer who was killed when his Tesla, driving on Autopilot,
plowed into a traffic barrier on the highway. You’re probably aware of
the content suggestion algorithms from Facebook and YouTube, which have
been implicated in the spread of fake news and extremist views. Then there was a story last week
about how companies and cities use Palantir’s analytics for corporate
security and predictive policing, with potentially disastrous results.
One man interviewed in the story claimed that he isn’t involved with the
Eastside 18 gang in Los Angeles but said the LAPD has him in their
database as an associate and officers have been stopping him as a
result. None of these cases involved AI that’s advanced enough for Elon Musk
to call it an existential threat to humanity. But that didn’t stop
people from getting hurt. If we’re not careful, this will happen more
frequently in the coming years. It’s easy to worry about a catastrophic future for bank tellers,
truck drivers, or some other profession that’s being told today their
jobs will go away in the future. Thanks to Westworld, Battlestar Galactica,
Iain M. Banks’ Culture novels, and other media, we can confidently
picture how artificial superintelligence could upend our lives. Don’t get me wrong — that’s all very concerning. But we can’t let our
anxiety about the distant future blind us to what’s going on right in
front of us. When AI systems fail, they can do so in ways we don’t
expect. Research and anecdotal evidence have shown us that those systems
are often biased against minorities. That bad news turns dire when
those systems become critical components of our infrastructure. Consider the story of the Southern State Parkway in New York that Robert Caro laid out in The Power Broker.
According to Caro’s interviews with Sidney M. Shapiro, urban developer
Robert Moses decided to build overpasses for the parkway that were too
short for buses to pass under, which would limit the access people of
color had to it. Those overpasses still stand. It’s a story that worries me about the future, considering that we’re
using AI to build far more powerful systems that can make predictive
decisions about all manner of things. This isn’t an idle concern: Law
enforcement agencies are using AWS’ Recognition service to do image
recognition today as part of their work. A recent study
of competing facial recognition APIs showed they were less accurate
identifying the gender of people with darker skin, especially women. There’s good news: Changing an algorithm is far easier than building a
new train line or adjusting the height of a bridge. Our codification of
bias need not be set in stone. But we’re already developing
technologies like algorithms that claim to provide predictive policing
capabilities. Companies are already testing the use of AI for investing,
money lending, and other tasks. We need to be working on this now. In my view, there are a few things we can do. First, I’d like to see a
non-industry body develop a system of auditing algorithmic bias, which
would allow governments and businesses to restrict the use of algorithms
that don’t meet a particular bias metric. With that in place, I’d like
to see governments identify situations where the biases of those
algorithms must be made available to the public. Law enforcement,
consumer finance, and health care seem like good candidates from the
get-go. Second, I believe governments and corporations should be required to
disclose when an algorithm influenced the outcome of a decision. For
example, if police arrest someone based in part on the evaluation of a
predictive policing system, the suspect should have the right to know
that.
I don’t have high hopes any of that will happen, given the current political climate. But a boy can dream. It’s the only way I’ll be able to sleep at night, anyhow."
The entire commentary can be found at:
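A technical aside on the first proposal: it turns on algorithms meeting "a particular bias metric," which the commentary does not define. As a rough sketch only, here is a minimal Python example of one common candidate, a demographic-parity gap: the difference in favorable-outcome rates between demographic groups, compared against a threshold an auditor might set. The function names, the 5 percent threshold, and the toy loan-decision data are illustrative assumptions, drawn neither from the commentary nor from any existing auditing standard.

from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    # Largest difference in favorable-outcome rate between any two groups.
    totals, favorable = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        favorable[group] += outcome  # outcome: 1 = favorable decision, 0 = unfavorable
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

def passes_audit(outcomes, groups, max_gap=0.05):
    # Hypothetical audit rule: pass only if the gap stays within the allowed threshold.
    gap, _ = demographic_parity_gap(outcomes, groups)
    return gap <= max_gap

# Toy data: eight loan decisions (1 = approved, 0 = denied) across two groups.
decisions = [1, 1, 1, 0, 0, 1, 0, 0]
applicant_groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, applicant_groups))  # (0.5, {'A': 0.75, 'B': 0.25})
print(passes_audit(decisions, applicant_groups))            # False: a 50-point gap fails

A real audit would look at richer measures, such as false positive rates, calibration, and intersectional subgroups like the darker-skinned women in the facial recognition study, but even a check this simple shows how a quantitative threshold could gate an algorithm's use in law enforcement, consumer finance, or health care.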
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/c