PASSAGE OF THE DAY: "It’s currently unknown
how pervasive these automated systems are in the United States. Richardson
says that part of the reason for this is a lack of transparency around
the acquisition and development of these technologies by jurisdictions.
Many such systems are acquired or developed outside of the normal
procurement process; that is, from federal or third-party grants from
the likes of police organizations or nongovernment organizations with an
interest in law enforcement. In New Orleans, for example, Palantir gave
the [predictive policing] system as an in-kind gift to the police
department. “It didn’t go through the legislative process,” says
Richardson. “It’s only due to some litigation and investigative
journalism that we have some sort of a grasp about how common it is.”"
PUBLISHER'S NOTE: In recent years, I have found myself
publishing more and more posts on the application of artificial
intelligence technology to policing, public safety, and the criminal
justice process, not just in North America, but in countries all over
the world, including China. Although I accept that properly applied
science can play a positive role in our society, I have learned over
the years that technologies introduced for the so-called public good
can eventually be used against the people they were supposed to
benefit. As reporter Sieeka Khan writes in Science Times: "In 2017,
researchers sent a letter to the secretary of the US
Department of Homeland Security. The researchers expressed their
concerns about a proposal to use AI to determine whether someone seeking refuge in the US would become a positive and contributing member of society, or whether they were likely to become a threat or a terrorist. Other government uses of AI are also being questioned, such as attempts at setting bail amounts and criminal sentences, predictive policing, and hiring government workers. All of these attempts have been shown to be prone to technical issues, and limited data can bias their decisions on the basis of gender, race or cultural background. Other AI technologies like automated
surveillance, facial recognition
and mass data collection are raising concerns about privacy, security,
accuracy and fairness in a democratic society. As Trump's executive order demonstrates, there is massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether intentional or not, could work against the principles of international democracies." As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well. The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.
Harold Levy: Publisher: The Charles Smith Blog.
----------------------------------------------------------
STORY: "The future of policing How “dirty data” from civil rights violations leads to bad predictive policing: A new report investigates how 13 jurisdictions, including Chicago and New Orleans, were feeding systems data sullied by “unconstitutional and racially biased stops, searches, and arrests,” by reporter D.J. Pangburn, published by 'Fast Company' on February 2, 2019.
GIST: In March 2015, the American Civil Liberties Union (ACLU) of Illinois published a report
on the Chicago Police Department’s (CPD) stop and frisk practices.
After looking at records from 2012, 2013, and four months of contact
card data from 2014, ACLU of Illinois concluded that many CPD stop and
frisks were unlawful, and that black residents were disproportionately
targeted. The report also noted deficiencies in CPD’s data and data
collection practices, which were, alongside other practices and
procedures, to be independently monitored as part of an August 2015
settlement agreement.

But
the ACLU wasn’t alone in its findings about CPD data policies. A
yearlong U.S. Department of Justice (DOJ) investigation into the fatal shooting of Laquan McDonald found, among other issues, a pattern of poor data collection for identifying and addressing unlawful conduct.

All the
while, CPD had been using its own predictive policing system, which has
existed in some form since at least 2012. Funded by a DOJ grant and
developed by the Illinois Institute of Technology, the Strategic Subject
List (SSL) is an automated assessment tool that uses a number of data
sets to analyze crime, as well as identify and rank individuals as at
risk of becoming a victim or offender in a shooting or homicide. A 2017
Freedom of Information Act request revealed that the data set included
398,684 individuals, with much of the information having to do with
arrests, not convictions–just one of many types of information that can
warp SSL’s automated assessments.

Chicago, the report’s first case
study, is of particular interest in the predictive policing debate. The
city’s example is also included in a new report
published by AI Now–an interdisciplinary research center at New York
University focused on the social implications of artificial
intelligence–about “dirty data” from civil rights violations leading to
bad predictive policing. The report, published last week,
investigates how 13 jurisdictions that had used, were using, or planned
to implement predictive policing systems were feeding these systems data
sullied by “unconstitutional and racially biased stops, searches, and
arrests,” as well as excessive use of force and First Amendment
violations, among other issues. The jurisdictions, which included New
Orleans; Maricopa County, Arizona; Milwaukee; and other cities, had all
entered into notable consent decrees (settlements between two parties)
with the Department of Justice, or some other federal court-monitored
settlements for “corrupt, racially biased, or otherwise illegal policing
practices.”

The automated tools used by public agencies to make
decisions in criminal justice, healthcare, and education are often
acquired and developed in the shadows. However, activists, lawyers, and
lawmakers are working to raise awareness
about these algorithms, with a major effort currently under way in the
state of Washington, where legislators are now debating an algorithmic accountability bill
that would establish transparency guidelines. But one area in the
debate that hasn’t received a great deal of attention is the “dirty
data” used by predictive policing systems.

The report notes that
police data can be biased in two distinct ways. First, police data
reflects police practices and policies, and if a group or geographic
area is “disproportionately targeted for unjustified police contacts and
actions, this group or area will be overrepresented in the data, in
ways that often suggest greater criminality.” Another type of bias
occurs when police departments and predictive policing systems tend to
focus on “violent, street, property, and quality-of-life crimes,” while
white-collar crimes–which some studies suggest occur with higher
frequency than the aforementioned crimes–remain “comparatively
under-investigated and overlooked in crime reporting.”
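The first of these mechanisms can be made concrete with a small simulation. The sketch below is purely illustrative and is not drawn from the AI Now report or from any actual predictive policing product; the area names, offense rates, and patrol shares are invented assumptions. It shows how two areas with identical underlying offense rates end up looking very different in recorded data once one of them is patrolled more heavily, and how a naive model that allocates future patrols in proportion to past recorded incidents would direct even more enforcement to the over-policed area.

```python
# Toy illustration only (hypothetical numbers, not from the AI Now report):
# when one area is patrolled more heavily, more of its offenses are observed
# and recorded, so the data makes it look more "criminal" than an identical
# area that is patrolled less.

import random

random.seed(0)

# Assumption: both areas have the same true underlying offense rate.
TRUE_OFFENSE_RATE = {"area_a": 0.05, "area_b": 0.05}

# Assumption: area_a receives roughly twice the patrol hours of area_b.
PATROL_SHARE = {"area_a": 0.67, "area_b": 0.33}


def simulate_recorded_incidents(n_patrol_hours: int = 10_000) -> dict:
    """Count recorded incidents per area under unequal patrol allocation."""
    recorded = {"area_a": 0, "area_b": 0}
    for _ in range(n_patrol_hours):
        # Each patrol hour is spent in an area according to PATROL_SHARE.
        area = "area_a" if random.random() < PATROL_SHARE["area_a"] else "area_b"
        # An offense is only recorded if an officer is there to observe it.
        if random.random() < TRUE_OFFENSE_RATE[area]:
            recorded[area] += 1
    return recorded


recorded = simulate_recorded_incidents()
total = sum(recorded.values())

# A naive "predictive" step: allocate future patrols in proportion to
# historical recorded incidents, which reflect patrol intensity, not risk.
suggested_allocation = {area: count / total for area, count in recorded.items()}

print("Recorded incidents:", recorded)
print("Patrol allocation a naive model would suggest:", suggested_allocation)
```

In a typical run, the more heavily patrolled area records roughly twice as many incidents despite an identical true offense rate, so the "data-driven" allocation sends it still more patrols. That feedback loop is the overrepresentation the report describes; the second mechanism, the comparative neglect of white-collar crime, skews the picture in a similar way by leaving whole categories of offenses out of the data.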
Rashida Richardson, director of policy research at AI Now, tells Fast Company that it was relatively easy to find public records of police misconduct
in the targeted jurisdictions. However, information regarding police
data sharing practices–what data and with which other jurisdictions it
is shared, as well as information on predictive policing systems–was
more difficult to find. In other instances, the evidence was
inconclusive about a direct link between policing practices and the data
used in the predictive policing system.

“We didn’t
have to do [Freedom of Information Act requests] or any formal public
records requests,” says Richardson. “Part of the methodology was trying
to rely on strictly what was already publicly available because the
theory is that this is the type of information that the public should
already have access to.”

“In some jurisdictions that have more
recent consent decrees–those being Milwaukee, Baltimore, and
Chicago–it’s a little bit harder because there is a lack of public
information,” she adds. “A lot of the predictive policing pilots or use
cases are often funded through federal dollars, so there were sometimes
records through the DOJ that they provided a grant to the jurisdiction,
but then no other documentation on the local level about how that money
was used.”

Richardson says that HunchLab and PredPol are the two
most common predictive policing systems among the 13 jurisdictions. IBM and Motorola also offer some type of predictive policing system, while other jurisdictions develop their own in-house.

It’s currently unknown
how pervasive these automated systems are in the United States. Richardson
says that part of the reason for this is a lack of transparency around
the acquisition and development of these technologies by jurisdictions.
Many such systems are acquired or developed outside of the normal
procurement process; that is, from federal or third-party grants from
the likes of police organizations or nongovernment organizations with an
interest in law enforcement. In New Orleans, for example, Palantir gave
the [predictive policing] system as an in-kind gift to the police
department. “It didn’t go through the legislative process,” says
Richardson. “It’s only due to some litigation and investigative
journalism that we have some sort of a grasp about how common it is.”

For
there to be unbiased predictive policing systems, Richardson says there
must be reform of both policing and the criminal justice system.
Otherwise, it will continue to be difficult to trust that information
coming from what she calls a “broken system” can be implemented in a
nondiscriminatory way.

“One day in the future, it may
be possible to use this type of technology in a way that would not
produce discriminatory outcomes,” says Richardson. “But the problem is
that there are so many embedded problems within policing and, more
broadly, within criminal justice that it would take a lot of fundamental
changes, not only within data practices but also how these systems are
implemented for there to be a fair outcome.”
https://www.fastcompany.com/90312369/how-dirty-data-from-civil-rights-violations-leads-to-bad-predictive-policing
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/c