GIST: "In May 2018, Mayor Bill de Blasio
announced the formation of the
Automated Decision Systems Task Force,
a cross-disciplinary group of city officials and experts in artificial
intelligence (AI), ethics, privacy, and law. Established by
Local Law 49,
the ADS Task Force is charged with developing a process for reviewing
algorithms the city uses—such as those for determining public school
assignments, predicting which buildings should be inspected, and
fighting tenant harassment—through the lens of equity, fairness, and
accountability. But nearly one year later, little progress has been made,
casting doubt on whether the task force will fulfill its mandate: issuing a
report of policy recommendations by fall 2019.

“My major concern is the task force has been on a trajectory of nothing. A lot of time has been wasted,” says
Rashida Richardson,
director of policy research at AI Now, a research institute at NYU that
focuses on the social implications of artificial intelligence. (AI Now
co-founder Meredith Whittaker is a member of the task force.)
“Squandering almost a year’s worth of time makes me concerned about the
value and robustness of the final product.”

Automated decision systems have been in use in city
government for many years. Because of their opaque nature (they’re often
off-the-shelf products from private companies) and the fact that
there’s little knowledge of what systems are actually in use, there has
been little governmental oversight and accountability.
Meanwhile, many of these systems are
biased and flawed. The risk assessment algorithm used by Broward County, Florida, to predict recidivism was the subject of a
ProPublica exposé on racially biased software.
After an algorithm in use by the Arkansas Department of Human Services
began dramatically reducing benefits for Medicaid recipients, the state
was sued.
A judge ordered the state to stop using the automated system for determining home health care hours. And in the 1970s,
a flawed algorithm informing FDNY station closures
left broad swathes of the city susceptible to fire, disproportionately
affecting predominantly low-income black and Latino neighborhoods.
Matching algorithms used by NYC public schools
have favored white students while disadvantaging students of color.

Local Law 49 was
praised as a significant step toward achieving equity and fairness in New York City.
But there were clear challenges from the very beginning: The law is
broad, sweeping, and ambitious. It requires a level of transparency that
many agencies—like the NYPD, which frequently does not disclose
information publicly,
citing interference with public safety—and the tech companies that develop these products are not accustomed to.

At an April 4 hearing before the City Council Committee
on Technology, task force co-chair Jeff Thamkittikasem, director of the
Mayor’s Office of Operations, testified that the group has not reached
consensus about what constitutes an automated decision system, despite
meeting about 20 times over the past year.

“The task force has spent time looking at what falls
under an agency ADS; it’s taken more time than we thought it would,”
Thamkittikasem said, adding that because the law’s definition of ADS is
broad, members flagged a vast array of computer models along the
spectrum, including sophisticated machine learning models, as well as
“calculators and advanced Excel spreadsheets.”

Thamkittikasem also told the council that the task force
does not know what automated decision systems are in use, does not plan
to create or disclose a list of systems the city uses, and has not held
any public meetings.
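The definitional dispute is not merely semantic. Even a few lines of threshold arithmetic, the kind of calculation an “advanced Excel spreadsheet” handles easily, can decide who receives a city service. The sketch below is purely hypothetical (its inputs, weights, and cutoffs are invented and describe no actual city system), but it illustrates how such a simple rule might rank buildings for inspection, one of the uses named above:

    # Hypothetical illustration only -- not any actual New York City system.
    # A decision rule simple enough to live in a spreadsheet, yet one that
    # still determines who receives a city service: the kind of tool a broad
    # reading of Local Law 49 would cover.

    def inspection_priority(open_violations: int, complaints_last_year: int,
                            building_age_years: int) -> str:
        """Rank a building for inspection from three inputs (invented weights)."""
        score = 2 * open_violations + complaints_last_year + building_age_years // 25
        if score >= 10:
            return "inspect this quarter"
        if score >= 5:
            return "inspect this year"
        return "defer"

    # Two made-up buildings: a small change in inputs flips the outcome.
    print(inspection_priority(open_violations=3, complaints_last_year=4, building_age_years=90))  # inspect this quarter
    print(inspection_priority(open_violations=1, complaints_last_year=1, building_age_years=40))  # defer

A tool this simple falls under a broad reading of the law just as readily as a sophisticated machine learning model does, which is part of why drawing the line has consumed so much of the task force’s time.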
At the hearing, members of the task force, along with
data experts and privacy advocates, expressed frustration with the lack
of progress and the reluctance to disclose what automated systems are in
use.
In
prepared remarks, Janet Haven, executive director of
Data & Society,
a New York-based research group focused on the social and cultural
issues surrounding AI and data-centric technology, said, “We have seen
little evidence that the task force is living up to its potential. New
York has a tremendous opportunity to lead the country in defining these
new public safeguards, but time is growing short to deliver on the
promise of this body.”

During his testimony to the City Council, Albert Fox Cahn, a
privacy advocate who departed the group in December, voiced alarm about
the mismanagement and disempowerment of the task force. One issue was the use
of the
Jain Family Foundation,
a non-profit research institute that the city brought on, working pro
bono, to provide project management and research support. It was
never an official member of the task force, yet its scope expanded over
time from providing background research to authoring proposed
language and policy documents for the task force to ratify.
“Increasingly, the foundation was writing a first draft
of the task force’s report,” Cahn told the City Council during the
hearing. “The foundation’s role drew complaints from numerous task force
members, so it was eventually phased out, but it’s a telling example of
how the role of task force members themselves was circumscribed as part
of this process.”

The Jain Family Foundation’s work included attempts to
define an ADS. It presented options for the group to vote on, but
because the proposed definitions did not reflect the views of the task
force, members did not reach consensus. The Jain Family Foundation
stopped its work in December.

“Everything about the task force report was ambiguous and
up to the task force to decide, except the definition of an automated
decision system,” Cahn later told Curbed. “That was the one clear thing
presented by the City Council [in the Local Law] and it was unfortunate
that the task force hasn’t operated from the baseline understanding as
defined by the Council … I believe it was the Mayor’s Office that raised
fears that [it was an overly expansive definition]. During the hearing [the
task force chairs] talked about not wanting every Excel document
scrutinized. Something important to understand in this discussion is
some of the most powerful and sweeping tools can be run on relatively
simple platforms.”

Task
force members Julia Stoyanovich, a data science, computer science, and
engineering professor at NYU, and Solon Barocas, a Cornell professor
focusing on the ethics of machine learning,
submitted joint testimony to the City Council
that expressed particular concern over the lack of information made
available to them, stressing the importance of knowing about actual
systems in use. Without real-life data sets and case studies, the
recommendations would be generic and ineffective for New York City’s
needs, and could have been produced from existing academic research alone.

“A report based on hypothetical examples, rather than on
actual NYC systems, will remain abstract and inapplicable in practice,”
they wrote. “The task force cannot issue actionable and credible
recommendations without some knowledge of the systems to which they are
intended to apply … The apparent lack of commitment to transparency on
the part of task force leadership casts doubt on the City’s intentions
to seriously consider or enact the report’s
recommendations—recommendations largely about transparency.”
City officials are also growing impatient. In a March 26
letter to Thamkittikasem, Comptroller Scott Stringer emphasized the
importance of algorithmic accountability and expressed disappointment in
the task force’s work to date, particularly that disclosure of
automated decision systems has not occurred. He requested a list of all
algorithms that inform public services or placement in a public
facility—like school selection, homeless shelter placement, bail
determinations, domestic violence interventions, and child protective
services—by May 26, as well as information about how each is used and
how it was developed.

“Algorithms should be subject to the same scrutiny with
which we treat any regulation, standard, rule, or protocol. It is
essential that they are highly vetted, transparent, accurate and do not
generate injurious, unintended consequences,” Stringer wrote. “Without
such oversight, misguided or outright inaccurate algorithms can fester
and lead to increasingly problematic outcomes for city residents,
employees, and contractors.”

This lack of progress to date reflects the overall
difficulty of regulating technology, a field that’s coming under
increased scrutiny at federal, state, and local levels. This month, the
House and Senate introduced the Algorithmic Accountability Act, which, if passed, would require the FTC to create rules for assessing the impact of automated decision systems.
HUD recently sued Facebook for housing discrimination in its ads, the
New York Civil Liberties Union is suing ICE for its immigrant risk assessment algorithm, and a
Connecticut
judge recently ruled that tenant screening companies that use
algorithmic risk assessments must comply with fair housing rules.

Five months after New York City announced the ADS Task Force, Vermont announced a statewide
Artificial Intelligence Task Force,
with a directive similar to New York City’s: to make
recommendations on the oversight and regulation of algorithmic systems in
use. It has held multiple public meetings and is due to release its report
in June, showing that with determination and proper support from
government institutions, this type of work, while difficult and
uncharted, is possible in a timely manner.

To help improve transparency,
AI Now compiled a list of the automated decision systems it knows the city uses,
though the list is far from exhaustive. The ADS Task Force is due to host
its first public forum on April 30 at New York Law School.