GIST: "It
is the job of U.S. Immigration and Customs Enforcement to determine
— case by case — whether people arrested for immigration offenses ought
to be detained, released on bond or just trusted to show up in court.
For years, ICE has used an algorithm to help make this decision, one
that weighs factors such as ties to the community, criminal history and
past or present substance abuse. But
something remarkable happened in ICE’s New York field office not long
after President Trump came into office: Even though, from 2013 into
2017, the algorithm had recommended that about 47 percent of arrestees
be released, after June 2017 that share plummeted to about 3 percent. According to a
lawsuit
filed last week by the New York Civil Liberties Union and the legal aid
organization Bronx Defenders, ICE secretly changed the algorithm to all
but eliminate the release of defendants,
the Intercept reported this week.
That means thousands of noncitizens in New York City languished for
weeks, and sometimes months, awaiting hearings. The information about
the algorithmic manipulation came to light only because immigration
lawyers noticed the change in outcomes, and the civil rights groups
filed Freedom of Information Act lawsuits to figure out what was going
on. Now those groups have
sued ICE
in federal court, saying the alteration of the algorithm denied due
process to those to whom it was applied. They have asked that human
decision-makers revisit all of the algorithm’s decisions that kept
low-risk individuals in custody. (In theory, an official reviewed the
algorithm’s recommendation, but the program’s answer was embraced some
99 percent of the time, the legal complaint says.)
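To make concrete how quietly such a shift can happen, consider a minimal sketch of a point-style risk tool. Every factor name, weight and cutoff below is invented for illustration; the actual model has never been made public.

```python
# Hypothetical sketch of a point-based risk tool of the kind described
# above. All factors, weights and cutoffs are invented; ICE has not
# disclosed how its real model works.

def risk_score(arrestee: dict) -> int:
    """Sum weighted risk factors into one number (higher = riskier)."""
    score = 0
    if arrestee.get("criminal_history"):
        score += 3
    if arrestee.get("substance_abuse"):
        score += 2
    if arrestee.get("community_ties"):
        score -= 2  # ties to the community lower the score
    return score

RELEASE_CUTOFF = 1         # hypothetical original policy
RELEASE_CUTOFF_2017 = -99  # after the alleged change, no score qualifies

def recommend(arrestee: dict, cutoff: int = RELEASE_CUTOFF) -> str:
    """Map the score to a recommendation an official then signs off on."""
    return "release" if risk_score(arrestee) <= cutoff else "detain"
```

The point of the sketch is that a one-line change to a cutoff is invisible to the people it affects: the interface and the inputs stay the same, and only the aggregate statistics betray it, which is exactly how the immigration lawyers in New York spotted the shift.

Every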
day, government agencies rely on algorithms to make decisions that
affect our rights and liberties. Algorithms help to administer federal
and state benefits, determine the length of criminal sentences and
interpret forensic evidence. Yet the government seldom tells the public
how these algorithms work. When ICE tweaked its risk assessment tool, it
certainly did not notify the tens of thousands of immigrants ensnared
in Trump’s “catch and detain” policy shift. Yet the tweak was, in
effect, a draconian policy change. Its full ramifications are likely
even broader: the tool is used nationally, but the FOIA suit revealed
data only from New York.

When
the government turns to automated decision-making, transparency all too
often falls by the wayside. Critics have frequently pointed out that
algorithms can be biased and faulty — but so can human decision-makers.
However, the opacity of decisions made by software presents a unique set
of difficulties. It is not just that algorithms can, as appears to be
the case here, systematically lead to unjustified outcomes. It is also
that victims of the system and watchdog groups often have no way of
knowing why and how the decisions are made, which forecloses
accountability.

It is hardly only ICE that is using tools like these, after all. Similar programs are
deployed
across many jurisdictions to assess whether an individual charged with a
crime ought to be released from detention before trial. As many
jurisdictions turn away from cash bail,
reformers are implementing risk-based systems
to determine who actually poses a danger to society — and who should go
free while awaiting trial. The impulse is understandable: The goal is
to standardize a process that, in the past, has too often been shaped by
defendants’ poverty and judges’ inconsistent notions about crime,
dangerousness and punishment. Indeed, in some cases criminal justice
reformers have applauded risk assessment algorithms like ICE’s for
reducing pretrial incarceration. When ICE announced its development of a risk assessment tool, some
immigration advocates cheered,
believing it would result in more objective and consistent decisions.
But
the opacity remains a problem. Unsurprisingly, the government would
prefer to keep those technologies a closely guarded secret — avoiding
public scrutiny, judicial review and legislative action. ICE does not
stand out in this regard. In other cases, prosecutors have argued that
algorithms used to interpret forensic evidence or to determine
defendants’ prison sentences amount to trade secrets — they are the
property of private companies — that cannot be disclosed without
violating contracts. On similar grounds, state agencies have resisted
sharing information about automated decisions to fire public employees
or to reduce Medicaid benefits. Democratic
oversight is critical if we are to realize the benefits of automated
decision-making. To start, the government should provide much more
information about when it is using these new tools, and it should
disclose how the algorithms weigh all the factors they consider.
Existing state and federal open records statutes (such as FOIA) provide
some recourse, but not enough. (If you do not know an algorithm exists,
you are unlikely to sue to find out how it works.) At a bare minimum,
the government ought to be obliged to reveal such information when it
uses automated decision-making to deprive individuals of liberty or
property. Some plaintiffs have succeeded in arguing that the
Constitution’s due process protections require this level of
disclosure. Algorithmic transparency requirements could also be imposed
through either state or federal law.
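As a rough illustration, and not any agency's actual practice, such disclosure could be as simple as publishing a machine-readable summary of each tool, its weights and its change history:

```python
# An invented example of what a published disclosure for an automated
# decision tool might contain; no agency currently uses this format.
import json

disclosure = {
    "tool": "pretrial risk assessment",   # hypothetical system
    "decision": "release vs. detention before trial",
    "factors": {                          # weights the tool applies
        "criminal_history": 3,
        "substance_abuse": 2,
        "community_ties": -2,
    },
    "release_cutoff": 1,
    "last_modified": "2017-06-01",        # changes would be dated
    "human_override_rate": 0.01,          # how often officials disagree
}

print(json.dumps(disclosure, indent=2))
```

Even this modest suggestion has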
proved surprisingly controversial. Law enforcement officials, for
instance, have
expressed concern that disclosing how automated decisions are made will allow bad actors to “game” the systems and avoid detection. But
continuing to cloak automated decisions in secrecy will only further
undermine public trust in our institutions, which is hardly high to
begin with. As automated decision-making becomes more sophisticated and
comprehensive, oversight becomes even more critical. As thousands of
undocumented immigrants in New York can attest, it is time to pry open
the black box and find out what the government is up to.