PASSAGE OF THE DAY: "Bill had ended up on what is known as
the Gangs Matrix, a controversial database created by the Metropolitan
Police (the Met) in the aftermath of London’s 2011 riots
to purportedly identify and surveil not only those at risk of
committing gang-related violence, but also potential victims of it.
Based on a number of variables such as previous
offences, patrol logs, social media activity and friendship networks,
the matrix relies on a mathematical formula to calculate a “risk score” –
red, amber, or green – for each person, in reference to the likelihood
they will be involved in gang violence. This intelligence in theory
guides an efficient use of police resources and aids court prosecutions.
But critics argue it is one of the most flawed policing initiatives in
modern times. In a report last year,
human rights charity Amnesty International described it as “a racially
biased database criminalising a generation of young black men”,
revealing that 35 per cent of those on the matrix had no police
intelligence linking them to gang violence and had never been charged
with a crime."
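The Met has never published the formula behind these red, amber and green bands. Purely as a hypothetical illustration of how a weighted score with that kind of banding can work (every variable name, weight and threshold below is an invented assumption, not the Met's actual method), a minimal sketch might look like this:

```python
# Hypothetical illustration only: the Met's real scoring formula is not public.
# All variable names, weights and thresholds here are invented for explanation.

def risk_band(previous_offences: int,
              patrol_log_mentions: int,
              flagged_social_posts: int,
              flagged_associates: int) -> str:
    """Combine a few counts into one score, then band it red/amber/green."""
    score = (3 * previous_offences
             + 1 * patrol_log_mentions
             + 2 * flagged_social_posts
             + 2 * flagged_associates)
    if score >= 20:
        return "red"    # treated as highest likelihood of involvement
    if score >= 8:
        return "amber"
    return "green"      # lowest band, where most people on the matrix sit

# A person with no offences can still be banded amber on association alone,
# which mirrors the criticism quoted in the passage above.
print(risk_band(previous_offences=0, patrol_log_mentions=3,
                flagged_social_posts=2, flagged_associates=4))  # prints "amber"
```

Even in this toy version, someone who has never been charged can be pushed into a higher band purely by counts of flagged associates and shared posts, which is exactly the mechanism critics of the real matrix object to.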
------------------------------------------------------
PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.
Harold Levy: Publisher: The Charles Smith Blog.
----------------------------------------------------------
STORY: "The grim reality of life under Gangs Matrix, London's controversial predictive policing tool," by Peter Yeung, published by Wired on April 2, 2019.
SUB-HEADING: "AI and machine learning software was meant to make policing fairer and more accountable – but it hasn't worked out that way."
GIST: The first time Bill was stopped and searched by police,
he was standing outside a friend’s house in south London. “The police
pulled up on us, three cars,” he says. “I asked them why they were
searching us, and one said, ‘Because I want to’.” Bill was 11. That was
nearly a decade ago.
Pupils from different
neighbourhoods around the area, such as Brixton, Peckham, and Tulse Hill,
went to Bill’s secondary school. Some areas were particularly rough,
and gangs with minors in them were common. Four boys died in a
three-year period back then, he says. One was shot while lying in bed.
Another was stabbed in the heart. “One of them was my close bredrin,”
says Bill. “I didn’t really think too much of it. I just had to keep it
moving, you know?” From then on, officers took a keen
interest in Bill, as they did with many other young black boys from
underprivileged backgrounds in the area. When he was 14, police arrested
him twice in two weeks – without charge. “That’s that network. That
little matrix that they have,” he adds. “They know that people who I
know have been arrested for drugs, so they assume that I’m going to have
drugs on me.” Bill had ended up on what is known as
the Gangs Matrix, a controversial database created by the Metropolitan
Police (the Met) in the aftermath of London’s 2011 riots
to purportedly identify and surveil not only those at risk of
committing gang-related violence, but also potential victims of it. Based on a number of variables such as previous
offences, patrol logs, social media activity and friendship networks,
the matrix relies on a mathematical formula to calculate a “risk score” –
red, amber, or green – for each person, in reference to the likelihood
they will be involved in gang violence. This intelligence in theory
guides an efficient use of police resources and aids court prosecutions. But critics argue it is one of the most flawed policing initiatives in modern times. In a report last year,
human rights charity Amnesty International described it as “a racially
biased database criminalising a generation of young black men”,
revealing that 35 per cent of those on the matrix had no police
intelligence linking them to gang violence and had never been charged
with a crime. Sharing certain YouTube videos of grime or drill music,
meanwhile, is considered a key indicator of gang affiliation. The implications of being on the matrix can be
chilling, but finding out why you are on it, let alone how to be
removed, is extremely difficult. One family received a letter warning
they would be evicted from their home if their son didn’t stop his
involvement with gangs – but he had been dead for more than a year. A
disabled mother's council-provided car was seized after her son – who
acted as her carer and was registered to drive the car – was arrested
without charge or further action. In Bill’s case, he was forced out of
his mother’s house and put into a residential care home due to being on
the matrix. He was later banned from attending the South London Learning
Centre. In November, the Information Commissioner’s
Office (ICO), Britain’s watchdog for data use, ruled that the matrix
breached data protection rules. The investigation found
that it did not clearly distinguish between victims and perpetrators of
crime, some boroughs were keeping informal lists of those who were
supposed to have been removed from the matrix, and there was “blanket
sharing [of data] with third parties” including schools, job centres,
and housing associations. A separate, damning review
published the following month by the Mayor of London’s Office, which
oversees the Met, found that although there is a need to address
violence in the capital, the number of young black people on the matrix
was “disproportionate to their likelihood of criminality and
victimisation.” The review ordered the force to radically reform the
tool within a year. The Met said in a statement at the time that it
“does not believe that the Gangs Matrix directly discriminates against
any community”. But according to data obtained by WIRED via a
freedom of information request, children as young as 13 are currently
listed on the Met’s Gangs Matrix. In total, the list contains some 3,000
people, mostly young, black boys, including 55 children under 16. More
than 7,000 individuals have been on the matrix at some point. About 80
per cent of those on the list are described as “African-Caribbean”, 12
per cent are from other ethnic minority backgrounds, and just eight per
cent are “white European”. Yet the vast majority are considered to pose
little threat of violence by even the police, with 65 per cent currently
rated with a green risk score, 30 per cent amber, and five per cent
red. As a result, campaigners are now calling for the
tool, the remit of which covers more than eight million people, to be
scrapped. “I think it’s deeply problematic,” says Tanya O’Carroll,
director of Amnesty’s global technology and human rights programme.
“It’s a rudimentary use of data in poorly thought-out ways that ends up
being extremely discriminatory to young black boys. The way the matrix
works, intelligence about people is essentially hearsay – feeling, not
fact.” The matrix is part of a growing trend of
police forces across the UK using open-source intelligence, big data and
machine learning as part of their crime-fighting efforts. A report published by Liberty
in February revealed that at least 14 forces across the UK – around a
third – are already using what has been coined “predictive policing”. It
outlined two main strands: “predictive mapping”, which identifies areas
where crime will likely occur, and “individual risk assessment”, which
predicts how likely an individual is to commit crime. However, the
first has led to over-policing of certain communities and the second
facilitates racial profiling, critics argue, while the broader issue of
predictive policing is legally ambiguous, lacking accountability and not
proven to be effective. The Gangs Matrix does not
exactly implement artificial intelligence or machine learning, unlike
the tools of many other forces, says Hannah Couchman, a policy and
campaigns officer at Liberty who authored the report. But there is still
the concept of “pre-criminality” – being investigated by police without
reasonable grounds, she adds. It comes into conflict with age-old concepts
such as innocence until proven guilty and probable cause. “We are really seeing a pattern of police forces rolling out these technologies without sufficient protection,” says Couchman. Kent Police, the first UK police force to try to use computer algorithms to predict crime, ended its five-year deal with the US company PredPol last March, citing difficulties in proving that the technology could reduce crime. South Wales Police and the Met are testing Automated Facial Recognition (AFR) – despite high rates of incorrect identifications,
particularly for women and black people. Avon and Somerset police have
started using a broad mapping program to assess the likelihood of things
such as being a victim of stalking and taking stress-related sick
leave.
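None of the forces named above publish their models, so the following is only an illustrative sketch, under assumptions of my own, of the "predictive mapping" strand the Liberty report describes: count past incidents per map cell and treat the busiest cells as likely future hotspots. The grid, the data and the simple ranking are all invented; real products such as PredPol use proprietary methods.

```python
# Toy sketch of the "predictive mapping" strand described above: count past
# incidents per map cell and flag the busiest cells as likely future hotspots.
# The data and grid are invented; real systems use proprietary models.
from collections import Counter

# (x, y) grid cells where past incidents were recorded (made-up example data)
past_incidents = [(2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 4)]

def predicted_hotspots(incidents, top_n=2):
    """Rank cells by historical incident count; top cells get extra patrols."""
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

print(predicted_hotspots(past_incidents))  # [(2, 3), (5, 1)]
```

The over-policing concern described above follows naturally from a design like this: patrols sent to the flagged cells record more incidents there, and those new records feed back into the next round of counts.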
Despite the significant problems, however, the
UK’s national coordination body for law enforcement rejects any
criticism. “For many years police forces have looked to be innovative in
their use of technology to protect the public and prevent harm and we
continue to develop new approaches to achieve these aims,” says Jon
Drake, intelligence lead for the National Police Chiefs' Council.
But
while police use of big data may be inevitable, a significant concern
is the lack of accountability in these systems, whose processes remain a
mystery to even the officers tasked with deploying them – and the
experts that have built them. Automation bias, the hesitancy to overrule
computers’ automated decisions, is also a significant problem, says
Nick Jennings, a professor at Imperial College London. “We generate so much data from so many devices
to make decisions, we can’t possibly do it ourselves and we need
computer support,” he adds. “But even for those that build AI programs,
it’s not easy to understand the logic behind the most powerful machines.
So we need to be really careful about the kind of information that we
feed in, and what biases it may have.” In January,
the British government acknowledged those challenges by launching the
Centre for Data Ethics and Innovation, the first body of its kind in the
world, which will provide advice on “data-enabled technologies”. The
Nuffield Foundation’s new £5 million Ada Lovelace Institute will play a
similar role, considering the ethical questions raised by big data,
algorithms and AI. West Midlands police is also believed to be
introducing an ethics committee as part of its predictive analytics. However,
a lack of transparency continues to plague the field. It has now
emerged that a secret new system called the Concern Hub, headed by a
central team at Scotland Yard that will liaise between the Met and hubs in each
of Greater London’s 32 boroughs, has already been undergoing an
unpublicised trial in the capital. A spokesperson
for the Met says that the Concern Hub is “a new multi-agency diversion
initiative” set to launch in south-east London in April, with a wider
rollout across the city in the coming months. The aim is “to safeguard
young people at significant risk of becoming involved in violence,
drugs, or gang activity.” These secretive approaches to policing could
compromise trust, says David Lammy, the Labour MP for Tottenham in north
London. “It is very worrying that another similar tool is being created
without sufficient consultation with communities.” Jude Lanchin, an
associate in criminal law for Bindmans who has worked on a number of
cases involving the Matrix, calls the new methods just as opaque as
ever. Lawyers with significant experience of working
on cases where defendants were described as “gang nominals”, a term
used by the Gangs Matrix to denote association with – but not necessarily
membership of – gangs, also lambast the lack of transparency by police
prosecutors in the courts. “The phrase ‘gang nominal’
is cited as evidence without an explanation as to what that
intelligence is based on,” says Suzanne O’Connell, a youth courts
specialist for Tuckers Solicitors who first worked on a case involving
the matrix in 2012. She estimates that 80 per cent of her court cases
involve a reference to the term. “This whole cloud of secrecy is very
difficult to challenge,” she adds. The troubling rise
of murder in London is, however, an issue that can’t be ignored. There
were 135 people murdered or unlawfully killed in London last year alone,
the highest in a decade. More than 40 per cent were men under 30.
Reports of stabbings in the capital come and go almost daily. But
the focus on gangs has always been flawed. The Conservative government
initially blamed gangs for the 2011 riots, but later reviews found the causes
behind the deadly period of looting, arson and civil unrest to be far
broader. A study by the Mayor’s Office for Policing and Crime also estimated
that more than 80 per cent of all knife-crime incidents resulting in
injury to a victim under 25 were deemed to be non-gang related. Put simply, the causes behind violence in London
are deep-seated and may go far beyond criminal groups and the limits
of technology. “It reflects the bias in society,” thinks Katrina
Ffrench, chief executive of the campaign group Stopwatch. “I don’t
expect it to not be racist, why would you?"
The entire story can be read at:
https://www.wired.co.uk/article/gangs-matrix-violence-london-predictive-policing
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/c