PUBLISHER'S NOTE: Artificial intelligence, once the stuff of science fiction, has become all too real in our modern society - especially in the American criminal justice system. As the ACLU's Lee Rowland puts it: "Today, artificial intelligence. It's everywhere — in our homes, in our cars, our offices, and of course online. So maybe it should come as no surprise that government decisions are also being outsourced to computer code. In one Pennsylvania county, for example, child and family services uses digital tools to assess the likelihood that a child is at risk of abuse. Los Angeles contracts with the data giant Palantir to engage in predictive policing, in which algorithms identify residents who might commit future crimes. Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public." The algorithm is finding its place deeper and deeper in the nation's courtrooms, shaping decisions - such as bail and even the sentence to be imposed - that used to belong exclusively to judges. I am pleased to see that a dialogue has begun on the effect that the increasing use of these algorithms in our criminal justice systems is having on our society and on the quality of decision-making inside courtrooms. As Lee Rowland asks about this brave new world, "What does all this mean for our civil liberties and how do we exercise oversight of an algorithm?" In view of the importance of these issues - and the increasing use of artificial intelligence by countries for surveillance of their citizens - it's time for yet another technology series on The Charles Smith Blog focusing on the impact of science on society and criminal justice. Up to now I have been identifying the appearance of these technologies. Now at last I can report on the realization that some of them may be double-edged swords - and on growing pushback.
Harold Levy: Publisher; The Charles Smith Blog;
------------------------------------------------------------
PASSAGE OF THE DAY:
STORY: "The UK wants to Become the World Leader in Ethical A.I. But what does that actually mean? And is it possible?, by Joelle Renstrom, published by Slate on August 1, 2018. Slate tells us that "Joelle Renstrom lives in Boston, where she teaches and writes
about all things geeky. Her blog, Could This Happen?, explores the
relationship between science and science fiction."
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith.
Information on "The Charles Smith Blog Award"- and its nomination
process - can be found at: http://smithforensic.blogspot. com/2011/05/charles-smith- blog-award-nominations.html
Please send any comments or information on other cases and issues of
interest to the readers of this blog to: hlevy15@gmail.com.
Harold Levy: Publisher; The Charles Smith Blog;
---------------------------------------------------------------------
GIST: "In 2013, an algorithm determined Eric Loomis' six-year
prison sentence in Wisconsin for attempting to flee a traffic officer and
operating a motor vehicle without the owner's consent. No one knew how the
software, Correctional Offender Management Profiling for Alternative Sanctions,
or COMPAS, worked—not even the judge who delivered the sentence. Analyses
conducted by ProPublica later found the predictive
artificial intelligence used in this case, which attempts to gauge the
likelihood of an offender committing another crime, to be racially biased: A two-year study involving 10,000 defendants
found that the A.I. routinely overestimated the likelihood of recidivism among
black defendants and underestimated it among whites. The U.S. Supreme Court declined to review Eric Loomis' case, so the
sentence stands. Increasingly, A.I. has the power to alter the course of people's lives.
It's becoming part of decisions about who gets hired, who gets fired, who goes to prison, which students schools pursue, and how doctors treat patients. It's going to affect foreign affairs, the economy (particularly through job automation), transportation, and infrastructure. Each new application
represents an economic opportunity, which is part of the reason why the rush to
develop has been dubbed the "new space race." The current
front-runners are China, with billions of dollars in investments
and an ambitious national plan for establishing global
dominance over the industry, and the U.S., where advancements come primarily
from the private sector and academia. Other countries are trying to figure out how
they can keep up, even if they can't compete with the U.S. or China in A.I.
funding and development. The U.K. has settled on another path to become a leader in the A.I.
game. At the World Economic Forum in Davos, Switzerland, in January, U.K. Prime
Minister Theresa May announced her country's goal to become a world leader in "ethical A.I." Three months later, the
U.K. unveiled its A.I. Sector Deal, a comprehensive policy that
establishes a partnership between government, academia, and industry to address
residents' and businesses' goals and concerns with respect to A.I. And there's lots to be concerned about, like technological unemployment,
a homogeneous A.I. workforce creating products that have very human biases, the
dissemination of misinformation, military applications, and a widening wealth
gap. Crime-prediction software focuses on nonwhite neighborhoods,
perpetuating profiling, resentment, and potential due process violations.
LinkedIn's search algorithms make it easier to find prospective male employees
than female ones. China's facial-recognition programs threaten privacy and suppress freedom. (China's willingness to bypass data-protection concerns
may be an advantage in the A.I. race.) Due to backlash, Google recently opted
out of a contract to work on A.I. for weapons, though it will
continue to do military work. The U.K. deal is designed to address many of these worries. If it
succeeds in balancing economic growth with concerns about privacy, trust, and
access, it could demonstrate that ethical behavior is good for business. That
in turn could influence policies in the European Union (regardless of what happens
with Brexit) and around the world. At a roundtable discussion at the British Consulate in Cambridge,
Massachusetts, in May, Matthew Gould, U.K. director general for
digital and media, said the goal is to have researchers consider ethics every
step of the way, rather than relegate it to an afterthought. That sounds
wonderful. But it is also abstract and seems destined to become overwhelming,
even Sisyphean, given the industry's size and growth. What does it mean to bake
in ethics? Is it even possible? The 21-page deal "establishes the beginning of [the]
partnership" among business, academia, and government by responding to
recommendations about "how the government and industry can work together
on skills, infrastructure and implement a longterm strategy for AI in the
UK." Its key policies revolve around research and development, skills and
digital literacy of human workers, infrastructure, and the business
environment. The four "grand challenges" noted in the deal address
the A.I. and data economy, clean growth, future mobility of goods and services,
and the needs of an aging society. The deal covers many significant aspects and
implications of A.I., all of which require attention to ethics. "A
revolution in AI technology is already emerging," reads the deal. "If
we act now, we can lead it from the front." Much of the ethical discussion comes down to the role played by data—how
A.I. uses it and how it's collected. Data sets contain information that can be
isolated by specific variables—such as age, gender, race, or education—or
organized and analyzed by A.I. in a way that provides insight and identifies
trends. They're also used in training A.I. to better perform these tasks. Data
sets gathered by private companies or researchers often contain far more
information (medical records, bank information, purchase histories), which
could go unused by A.I. in other sectors, thus impeding disease research, algorithm
programming and implementation, understanding of financial trends, and insight
into other far-reaching issues. Worse, data can also be misused. Imagine the
Cambridge Analytica scandal with far more powerful technology. Since 2010, the British government has granted open access to its public data sets
and mandated other public bodies to do the same. The U.K.
deal advocates for the use of data trusts (also called data
collaboratives)—frameworks that facilitate mutually beneficial
data-sharing between the government and other sectors. The idea is to provide access to new data, incentives to
share data, and assurance that such data are being used for the public good.
For example, a data trust could provide invaluable geographical, health,
demographic, political, and other information about migration trends to various researchers,
companies, and the government to help shape policies. Any data strategy must
comply with the recently passed EU General Data Protection Regulation (the
U.K. has its own similar Data Protection Act, which it recently
strengthened) as well as maintain consumer trust. Maintaining trust also includes taking on the problem of programmer homogeneity, which contributes to
cases such as Eric Loomis'. "A diverse group of programmers reduces the
risk of bias embedding into the algorithm and enables a fairer and higher
quality output," computer science professor Dame Wendy Hall and industry
expert Jérôme Pesenti wrote in the recommendations they passed on to
the U.K. government. "Currently, the workforce is not representative of the wider population. In the
past, gender and ethnic exclusion have been shown to
affect the equitability of results from technology processes. If UK AI cannot
improve the diversity of its workforce, the capability and credibility of the
sector will be undermined." A report from the Chartered
Institute for IT found that the vast majority of IT specialists in
the U.K. are male, able-bodied, and under 50 years old. Seventeen percent are
female, 21 percent are older than 50, and 8 percent are disabled. Digital,
media, and creative sectors have similarly disproportionate demographics.
Ultimately, according to Hall, this creates a pervasive problem: "bias in, bias out." While some U.S.
companies have managed to increase diversity, they've only made a tiny
dent: As of 2017, black workers filled only 3.1 percent of jobs in the eight largest
American tech companies. Thus, the U.K. deal aims to create a more heterogeneous workforce. The
newly established Ada Lovelace Institute will work with the
government to promote diversity, and the Alan
Turing Institute's new fellowship program will offer 1,000
government-supported A.I.-related Ph.D. placements. The U.K. will also double
the number of Tier 1 exceptional talent visas and make it
easier for visa holders to apply for long-term settlement. Diversity in
programmers and researchers leads to products and services for different
demographics, as well as algorithms that account for skin color and gender.
Investing in education for both children and adults also helps make
opportunities in A.I. available to more people. The U.K. will spend 406 million
pounds (about $533 million) to boost STEM education, train up to 8,000 computer science teachers, and create the National Centre for Computing Education. Adults
can participate in the National Retraining Scheme, which will put 36
million pounds (roughly $47 million) toward digital-skills training and 40
million pounds (about $52 million) toward construction training—a savvy
combination for workers and for national infrastructure. The U.K. is already
experiencing housing shortages, and jobs such as
bricklaying and roofing are predicted to be particularly hard hit by technological
unemployment. The deal also addresses concerns about A.I. supplanting
human workers, particularly lower-skilled ones. Gould said they're
"confident new jobs will materialize," but much debate remains about whether that will
happen, how many and what types of new positions A.I. might create, and who
will be qualified for those jobs. "New jobs don't always go to those
who've lost them," Gould acknowledged. "We're trying to avoid a haves
and have-nots situation." That wealth gap already exists. Income
inequality isn't quite as bad in the U.K. as it is in the U.S.,
but it's getting worse. Perhaps it can be narrowed, or
at least not exacerbated, by improved access. The U.K. is moving forward with 5G and fiber
networks, as well as plans to provide high-speed broadband access to
everyone. (Currently, 95 percent of U.K. residents have access.)
Internet connectivity delivered at 10 megabits per second or more will be a
legal right in the U.K. by 2020. Crafting ethical policies regarding A.I. also requires addressing the
notoriously complicated issue of liability, especially in the event of a
malfunction or an autonomous A.I. decision or action. Areiel
Wolanow, managing director of Finserv Experts, a consulting firm that
customizes A.I. for businesses, attended evidence sessions—meetings in which committees
of experts provide relevant data and perspectives to various governmental
departments—that helped guide the deal. The moral of those sessions was that
accountability is the most important aspect of A.I. regulations.
Slow progress on legal standards means laws and precedents must
perpetually play catch-up with A.I. advancements. Many laws don't have specific
provisions for A.I., especially advanced A.I. capable of making autonomous
decisions. However, "A.I. doesn't get people around existing laws,"
Wolanow said. As an example, he mentioned airlines' "dynamic pricing," which uses personal
information to individualize fares. Germany's Federal Cartel Office
investigated Lufthansa's 25–30 percent ticket price hike in 2017, just after the
shuttering of rival Air Berlin and rejected Lufthansa's explanation that its
prices are generated by algorithms, which aren't responsible for following the
law. Although the FCO ultimately didn't open a formal case, "the company
that owns the A.I. makes the decision to declare themselves accountable, even
if the A.I. makes unethical decisions autonomously," Wolanow said. In
other words, Spider-Man's first lesson applies. While A.I.—and those who use it—can't circumvent existing regulations, a
lack of guidelines raises significant challenges. Who makes the rules that
apply to technology so nascent and powerful that we don't understand all it can
do? Governments and industries set the standards for technologies used for health care, medical devices, and banking
procedures, but that hasn't happened yet with A.I.: "Even if you have
something ethical, you can't get it approved for use because there were no
standards to develop it," said Wolanow, who is in a group working to develop
those standards. This critical step dictates the pace at which companies can roll out A.I.;
it's similar to private companies twiddling their thumbs for years while the
FAA devised commercial drone regulations.
"Certification for use is more time consuming and expensive than building
A.I. solutions," Wolanow said. Without standards, IT architects who ensure
the functionality, safety, and compliance of technological systems "can't
really say an A.I. solution is safe for use." Wolanow cited Bank of America's security blockchain, which
took 18 months to advance from prototype to pilot program because it lacked a
basis for approval. To address these problems, the deal calls for the creation of an A.I.
Global Governance Commission, which recently met for its first
planning and strategy session. The commission will "provide a point of
auditability against which existing solutions can be measured," said
Wolanow. All guidance published by the commission will be testable: A.I.
developers will be able to assess and verify that their solutions follow the
guidelines, speeding up the process by which solutions can be implemented. Promoting a diverse workforce, setting standards and a procedure for
regulation, addressing the impact of technological unemployment, providing
broadband access to all residents, protecting private data, incentivizing
research and development, and fortifying the economy—the deal ticks many boxes
when it comes to the concerns about and implications of A.I. In fact, it's
enough boxes that it seems too good to be true. Can ethical practice coexist
with rampant capitalism in the world's most lucrative and dynamic industry? Kentaro Toyama, associate professor at the
University of Michigan School of Information and author of Geek Heresy: Rescuing Social Change from the Cult of
Technology, said the ethical focus "sounds great in theory
… [but] when policies start off well-intentioned, the good intentions tend to
erode under the constant efforts of lobbyists and less-than-noble
politicians." Certainly, the deal warrants skepticism. It's not difficult
to imagine companies seeking ways to exploit or circumvent the system.
"The likelihood that the ethics focus will survive is slim," Toyama
said.
Optimism about the deal seems as naïve as it is reassuring. The ethical
pieces could get jettisoned over time, and perhaps the reality won't match the
vision, but as Gould put it, the deal is a "declaration of intent," a
crucial first step. The American government isn't talking about ethical A.I.,
and in an interview in Wired, Michael Kratsios, the
president's deputy assistant for technology policy, indicated that the Trump
administration plans to intervene as little as possible in the development of
A.I. so as not to inhibit its growth. The U.S. Office of Science and Technology Policy now has only 45 employees, compared with 135 under
President Barack Obama, which slows down policy implementation and, more
worryingly, inhibits the administration's ability to understand and act on
trends in science and technology, as well as their implications. Ignoring the
ethical questions and consequences of A.I. could lead to a future most of us
would rather avoid. "We in the United States should be equally up in arms
about these issues," Toyama said, "but apart from a few voices, many
of which are appropriated by large tech companies, little is being done." Given that the U.S. has practically abdicated its moral responsibility
here, and China doesn't seem terribly interested either, it seems that we
should be rooting for the U.K. to succeed. Fusing ethics and economic growth
seems both obvious and ingenious because it doesn't matter which holds more
sway. Some people switch to solar power because it's cheaper, not because they care about the
environment, but the outcome is the same. Perhaps after it becomes evident that
combining ethics and economics amounts to a win-win, we'll see more of that. Knowing what we know now about the implications of A.I., maybe we'd make
different decisions around data protections, social media, and privacy if we
could go back in time. But since each generation of technology gives rise to
the next and irrevocably affects society, moving backward when it comes to A.I.
is next to impossible. We decide what happens and how, which includes figuring
out how to develop and use A.I. for the good of everyone and to accept
responsibility for our mistakes. The U.K. has the opportunity to lead the next
generation of tech corporations, research, and governmental policies. Here's
hoping the deal gets it right."
- The entire story can be read at the link below:
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith.
Harold Levy: Publisher; The Charles Smith Blog;
---------------------------------------------------------------------