Friday, May 31, 2019

Bulletin: Criminalizing Reproduction: Louisiana governor signs oppressive heartbeat abortion bill: CNN;


PUBLISHER'S NOTE:

I have taken on the theme of criminalizing reproduction - a natural theme for a Blog concerned with flawed science in its myriad forms and its flawed devotees (like Charles Smith) - as I am utterly opposed to the current movement in the United States and some other countries - thankfully not Canada any more - towards imprisoning women and their physicians on the basis of sham science (or any other basis). Control over their reproductive lives is far too important to women in America and anywhere else: it is what allows them to participate equally in the economic and social life of their nations without fear of losing their freedom at the hands of political opportunists and fanatics. I will continue to follow relevant cases such as those of Purvi Patel and Bei Bei Shuai - and the mounting wave of legislative attacks aimed at chipping away at Roe v. Wade and ultimately dismantling it.


Harold Levy: Publisher; The Charles Smith Blog;

------------------------------------------------------------

BULLETIN:"Louisiana Democratic Gov. John Bel Edwards signed a bill Thursday banning abortions once a heartbeat is detectable with no exceptions for rape or incest, according to his office. The measure, which passed the state House by 79-23 on Wednesday, would "prohibit the abortion of an unborn human being with a detectable heartbeat," which can occur as early as six weeks into a pregnancy, before many women know they're pregnant. Companion legislation passed the state Senate earlier this month."

https://www.cnn.com/2019/05/30/politics/louisiana-governor-signs-abortion-bill/index.html

------------------------------

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher; The Charles Smith Blog.

Bulletin: False Confessions: The Central Park Five: Ava DuVernay series 'When They See Us' premieres today on Netflix..."The four part limited series will focus on the five teenagers from Harlem -- Antron McCray, Kevin Richardson, Yusef Salaam, Raymond Santana and Korey Wise. Beginning in the spring of 1989, when the teenagers were first questioned about the incident, the series will span 25 years, highlighting their exoneration in 2002 and the settlement reached with the city of New York in 2014."


Link to official trailer: (I got goosebumps just from watching it! HL);

https://www.youtube.com/watch?v=u3F9n_smGWY

---------------------------------------------------------------

"Based on a true story that gripped the country, When They See Us will chronicle the notorious case of five teenagers of color, labeled the Central Park Five, who were convicted of a rape they did not commit. The four part limited series will focus on the five teenagers from Harlem -- Antron McCray, Kevin Richardson, Yusef Salaam, Raymond Santana and Korey Wise. Beginning in the spring of 1989, when the teenagers were first questioned about the incident, the series will span 25 years, highlighting their exoneration in 2002 and the settlement reached with the city of New York in 2014. When They See Us was created by Ava DuVernay, who also co-wrote and directed the four parts. Jeff Skoll and Jonathan King from Participant Media, Oprah Winfrey from Harpo Films, and Jane Rosenthal, Berry Welsh and Robert De Niro from Tribeca Productions will executive produce the limited series alongside DuVernay through her banner, Forward Movement. In addition to DuVernay, Attica Locke, Robin Swicord, and Michael Starrburry also serve as writers on the limited series. The series stars Emmy Award® Nominee Michael K. Williams, Academy Award® Nominee Vera Farmiga, Emmy Award® Winner John Leguizamo, Academy Award® Nominee and Emmy Award® Winner Felicity Huffman, Emmy Award® Nominee Niecy Nash, Emmy Award® Winner and two-time Golden Globe Nominee Blair Underwood, Emmy Award® and Grammy Award® Winner and Tony Award® Nominee Christopher Jackson, Joshua Jackson, Omar Dorsey, Adepero Oduye, Famke Janssen, Aurora Perrineau, Dascha Polanco, William Sadler, Jharrel Jerome, Jovan Adepo, Aunjanue Ellis, Kylie Bunbury, Marsha Stephanie Blake, Storm Reid, Chris Chalk, Freddy Miyares, Justin Cunningham, Ethan Herisse, Caleel Harris, Marquis Rodriguez, and Asante Blackk. When They See Us premieres May 31 only on Netflix."

-----------------------------------------------------------------

Watch When They See Us:

https://www.netflix.com/title/80200549

-----------------------------------------------------------------

Facial recognition technology: Canada: Significant development: Canadian Civil Liberties Association calls on Toronto police to stop using facial recognition technology - or risk lawsuits - The Toronto Star (reporters Jim Rankin and Wendy Gillis) reports..."Toronto police should stop using facial recognition technology or face the prospect of class-action lawsuits, says the head of the Canadian Civil Liberties Association. Michael Bryant on Thursday called on the Toronto Police Services Board to place a moratorium on the use of the technology, “because it renders all of us walking ID cards.”"


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well."

Harold Levy: Publisher: The Charles Smith Blog.

---------------------------------------------------------------------

QUOTE OF THE DAY: "Michael Bryant on Thursday called on the Toronto Police Services Board to place a moratorium on the use of the technology, “because it renders all of us walking ID cards.” He called the use of the technology “carding by algorithm, and notoriously unreliable,” and, in a written statement, likened it to police “fingerprinting and DNA swabbing everybody at Yonge and Bloor during rush hour” and running the results through databases. Continued use of the technology leaves the board open to lawsuits and, at the very least, requires formal oversight, Bryant said."

----------------------------------------------------------------------

PASSAGE OF THE DAY: "As reported earlier this week by the Star, Toronto police have been using facial recognition technology driven by artificial intelligence for more than a year. Police say it’s an efficient tool that has led to arrests in major crimes, including homicides. But it also comes with criticisms that the technology is an invasion of privacy and overreach by police and state. San Francisco, a tech-centric city, recently banned the tool. A London, U.K., policing ethics panel this month concluded that the technology should not be used if police can’t prove it works equally well with people of all ethnic and racial backgrounds and women. The panel, set up to advise London city hall, noted there are “important ethical issues to be addressed” but concluded that does not mean the technology should not be used at all, reported The Guardian. "

----------------------------------------------------------------------

STORY: "Toronto police should drop facial recognition technology or risk lawsuits, civil liberties association tells board," by reporters Jim Rankin and Wendy Gillis, published by The Toronto Star on May 30, 2019.
GIST:"Toronto police should stop using facial recognition technology or face the prospect of class-action lawsuits, says the head of the Canadian Civil Liberties Association. Michael Bryant on Thursday called on the Toronto Police Services Board to place a moratorium on the use of the technology, “because it renders all of us walking ID cards.” He called the use of the technology “carding by algorithm, and notoriously unreliable,” and, in a written statement, likened it to police “fingerprinting and DNA swabbing everybody at Yonge and Bloor during rush hour” and running the results through databases. Continued use of the technology leaves the board open to lawsuits and, at the very least, requires formal oversight, Bryant said. Toronto police Deputy Chief James Ramer told the board the use of the technology is nothing like the controversial practice of carding. “It’s not indiscriminate, it’s not random,” he said. “It’s very specific.” The technology saves victims of crimes from having to go through police mugshot databases, Ramer said. On Thursday, the police board passed a motion to receive Chief Mark Saunders’ report on the use of the technology and the deputations from Bryant and others. As reported earlier this week by the Star, Toronto police have been using facial recognition technology driven by artificial intelligence for more than a year. Police say it’s an efficient tool that has led to arrests in major crimes, including homicides. But it also comes with criticisms that the technology is an invasion of privacy and overreach by police and state. San Francisco, a tech-centric city, recently banned the tool. A London, U.K., policing ethics panel this month concluded that the technology should not be used if police can’t prove it works equally well with people of all ethnic and racial backgrounds and women. The panel, set up to advise London city hall, noted there are “important ethical issues to be addressed” but concluded that does not mean the technology should not be used at all, reported The Guardian. Research has shown that differences in race and gender can lead the technology to return false positives. Some systems kick out higher false-positive rates for Black women, compared to white men. Toronto police ran 1,516 facial recognition searches using about 5,000 still and video images between March and December of last year, according to Saunders’ report to the board. They were cross-checked against the service’s mugshot database of 1.5 million individuals, resulting in matches in about 60 per cent of the searches. Of those, 80 per cent of the matches resulted in identifying criminal offenders. There is no count available for how many led to arrests, since the technology identifies potential matches that must be further investigated further using other police methods. Toronto police said they have no plans to extend matches beyond the mugshot database, and that real-time facial recognition, such as searching faces in crowds, is not being used. The technology was used in the investigation into the Gay Village serial murders to help determine the identity of one of the victims."

The entire story can be read at: 
https://www.thestar.com/news/gta/2019/05/30/toronto-police-should-drop-facial-recognition-technology-or-risk-lawsuits-civil-liberties-association-tells-board.html
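A technical aside from the publisher: the "false positive" problem referred to in the story is usually measured as a per-group false-positive rate. The minimal Python sketch below shows how that rate is computed; every count in it is hypothetical, invented for illustration, and is not a figure from the Toronto Police Service, the Saunders report, or any study.

# A minimal sketch of per-group false-positive rates, the metric behind
# findings that some systems err more often for Black women than white men.
# All counts below are hypothetical, for illustration only.

counts = {
    # group: (false_positives, true_negatives) from a hypothetical evaluation
    "group_a": (8, 992),
    "group_b": (27, 973),
}

for group, (fp, tn) in counts.items():
    fpr = fp / (fp + tn)  # false-positive rate = FP / (FP + TN)
    print(f"{group}: false-positive rate = {fpr:.1%}")

Run as written, this prints 0.8% for one group and 2.7% for the other: the same software, evaluated the same way, can burden one community with more than three times as many wrongful matches.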
 
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher; The Charles Smith Blog.

Thursday, May 30, 2019

Technology Series: (Part Fifteen): A visit to Ecuador - where, according to New York Times reporters Paul Mozur, Jonah M. Kessel, and Melissa Chan, "This voyeur’s paradise is made with technology from what is fast becoming the global capital of surveillance: China. Ecuador’s system, which was installed beginning in 2011, is a basic version of a program of computerized controls that Beijing has spent billions to build out over a decade of technological progress. According to Ecuador’s government, these cameras feed footage to the police for manual review. But a New York Times investigation found that the footage also goes to the country’s feared domestic intelligence agency, which under the previous president, Rafael Correa, had a lengthy track record of following, intimidating and attacking political opponents." ("Made in China, Exported to the World: The Surveillance State.")


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.

Harold Levy: Publisher: The Charles Smith Blog;

-----------------------------------------------------------

PASSAGE OF THE DAY: "Under President Xi Jinping, the Chinese government has vastly expanded domestic surveillance, fueling a new generation of companies that make sophisticated technology at ever lower prices. A global infrastructure initiative is spreading that technology even further. Ecuador shows how technology built for China’s political system is now being applied — and sometimes abused — by other governments. Today, 18 countries — including Zimbabwe, Uzbekistan, Pakistan, Kenya, the United Arab Emirates and Germany — are using Chinese-made intelligent monitoring systems, and 36 have received training in topics like “public opinion guidance,” which is typically a euphemism for censorship, according to an October report from Freedom House, a pro-democracy research group."

--------------------------------------------------------------

STORY: "Made in China, Exported to the World: The Surveillance State," by reporters Paul Mozur,  Jonah M. Kessel, and Melissa Chan, published by The New York Times on April 24, 2019.

SUB-HEADING: "In Ecuador, cameras capture footage to be examined by police and domestic intelligence. The surveillance system’s origin: China:

PHOTO CAPTION: "Is Chinese-style surveillance becoming normalized? A Times investigation found the Chinese surveillance state is spreading past its borders."

GIST: (This is but  a taste. The entire story is well worth the read, at the link below: HL): "The squat gray building in Ecuador’s capital commands a sweeping view of the city’s sparkling sprawl, from the high-rises at the base of the Andean valley to the pastel neighborhoods that spill up its mountainsides. The police who work inside are looking elsewhere. They spend their days poring over computer screens, watching footage that comes in from 4,300 cameras across the country. The high-powered cameras send what they see to 16 monitoring centers in Ecuador that employ more than 3,000 people. Armed with joysticks, the police control the cameras and scan the streets for drug deals, muggings and murders. If they spy something, they zoom in. This voyeur’s paradise is made with technology from what is fast becoming the global capital of surveillance: China. Ecuador’s system, which was installed beginning in 2011, is a basic version of a program of computerized controls that Beijing has spent billions to build out over a decade of technological progress. According to Ecuador’s government, these cameras feed footage to the police for manual review. But a New York Times investigation found that the footage also goes to the country’s feared domestic intelligence agency, which under the previous president, Rafael Correa, had a lengthy track record of following, intimidating and attacking political opponents. Even as a new administration under President Lenín Moreno investigates the agency’s abuses, the group still gets the videos. Under President Xi Jinping, the Chinese government has vastly expanded domestic surveillance, fueling a new generation of companies that make sophisticated technology at ever lower prices. A global infrastructure initiative is spreading that technology even further. Ecuador shows how technology built for China’s political system is now being applied — and sometimes abused — by other governments. Today, 18 countries — including Zimbabwe, Uzbekistan, Pakistan, Kenya, the United Arab Emirates and Germany — are using Chinese-made intelligent monitoring systems, and 36 have received training in topics like “public opinion guidance,” which is typically a euphemism for censorship, according to an October report from Freedom House, a pro-democracy research group. With China’s surveillance know-how and equipment now flowing to the world, critics warn that it could help underpin a future of tech-driven authoritarianism, potentially leading to a loss of privacy on an industrial scale. Often described as public security systems, the technologies have darker potential uses as tools of political repression. “They’re selling this as the future of governance; the future will be all about controlling the masses through technology,” Adrian Shahbaz, research director at Freedom House, said of China’s new tech exports. Companies worldwide provide the components and code of dystopian digital surveillance and democratic nations like Britain and the United States also have ways of watching their citizens. But China’s growing market dominance has changed things. Loans from Beijing have made surveillance technology available to governments that could not previously afford it, while China’s authoritarian system has diminished the transparency and accountability of its use.".........The odds are against Ecuador’s police force. Quito has more than 800 cameras. But during a Times visit, 30 police officers were on duty to check the footage. 
In their gray building atop the hill, officers spend a few minutes looking at footage from one camera and then switch. Preventing crime is only part of the job. In a control room, dispatchers supported responses to emergency calls. Most of the time, no one was on the other side of the lens. It was a reminder that the system, and others like it, are more easily used for snooping than crime prevention. Following someone on the streets requires a small team, while large numbers of well-coordinated police are necessary to stop crime."

The entire story can be read at:
https://www.nytimes.com/2019/04/24/technology/ecuador-surveillance-cameras-police-government.html
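A quick back-of-the-envelope from the publisher on the Quito numbers quoted above (800-plus cameras, 30 officers on duty during the Times visit). The camera and officer figures come from the story; the derived ratios are mine, computed in this short Python sketch for illustration.

quito_cameras = 800      # "Quito has more than 800 cameras"
officers_on_duty = 30    # officers checking footage during the Times visit

print(f"cameras per on-duty officer: {quito_cameras / officers_on_duty:.0f}")           # ~27
print(f"share of feeds watched at any moment: {officers_on_duty / quito_cameras:.1%}")  # ~3.8%

At roughly 27 feeds per officer, any given camera is being watched only a few per cent of the time - which is exactly the Times's point: such systems lend themselves more readily to retrospective snooping than to real-time crime prevention.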

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher; The Charles Smith Blog.


---------------------------------------------------------------


Tuesday, May 28, 2019

Technology Series: (Part Fourteen): Bulletin: Major Revelation: Toronto Star reports that the Toronto Police Service has been using facial recognition for more than a year. (And this story appears to be the very first time that the public was made aware that the force was using this technology. HL). (Read Publisher's Note One for my mea culpa. HL):


PUBLISHER'S NOTE ONE: MEA CULPA: Oooops. I always suspected it would happen. But not so quickly. Not so soon. Sitting at my breakfast. Dipping into my favourite newspaper the Toronto Star, my home for many years - over a cup of Maxwell instant. (Forgot to re-order the Nespresso.) And there it is - on page one, above the fold. Big black, bold heading: "Toronto cops using facial recognition technology: Police say it helps ID suspects more efficiently, but critics are wary of how tool can be used." For shame, Levy. This is happening in your own backyard. And you didn't warn your readers! Well, in my defence, I had no idea. This didn't start with a study, a report to the police board, or maybe consultations with the Canadian Civil Liberties Association and other community groups. Nada. Just a story in the Toronto Star. (Did I say it was my favourite newspaper?) Long after the horse is out of the barn. (Just like Torontonians only discovered that the Toronto force had moved to sleek, furtive, super-hero, futuristic police cars - the kind that you don't notice until it's too late and you are pulled over - when more and more people were pulled over by them!) Sorry, the chief said sheepishly. Maybe I should have told you. (Now they are all over the place.) And now this. Levy. You wrote about the hidden expansion of facial recognition and other applications - without consideration of effectiveness, risk and invasions of privacy - in China, India, North America, South America, the Philippines, and elsewhere, while governments sit by and do nothing, except bemoan the fact, and schedule legislative committee meetings where the legislators can grandstand and display their ignorance, doing nothing as the technologies spread, unexamined, unchecked, like viruses. For shame, Levy. You have to do better. End of rant... Read the story.

Harold Levy: Publisher: The Charles Smith Blog.

-----------------------------------------------------------

PUBLISHER'S NOTE TWO: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.

----------------------------------------------------------

QUOTE OF THE DAY: "This technology is being put in place without any legislative oversight and we need to hit the pause button,” NDP MP Charlie Angus told the Star on Monday. The technology is “moving extremely fast,” said Angus, who is examining the ethics of artificial intelligence as part of a House of Commons Standing Committee on Access to Information, Privacy and Ethics. In San Francisco, city officials banned the use of the technology by police and other agencies earlier this month, citing concerns about potential misuse by government authorities. The city’s police department had previously been using the tool, but stopped in 2017. One of the city legislators told reporters that the move was about having security without becoming a security state."

----------------------------------------------------------

STORY: "Toronto police have been using facial recognition technology for more than a year," by Kate Allen, Sciencer and Technology Reporter, and Wendy Gillis, Crime Reporter, published by The Toronto Star on May 28, 2019.

GIST:  "Toronto police have been using facial recognition technology for more than a year — a tool police say increases the speed and efficiency of criminal investigations and has led to arrests in major crimes including homicides. But the emerging technology — which relies on artificial intelligence — has generated enough privacy and civil liberty concerns that the city of San Francisco, worried about police and state overreach, recently became the first U.S. city to ban the tool. Toronto police say that facial recognition technology is being used to compare images of potential suspects captured on public or private cameras to its internal database of approximately 1.5 million mugshots. According to a report submitted by Chief Mark Saunders to the Toronto police services board, the technology is generating leads in investigations, particularly as a growing number of crimes are being captured on video through surveillance cameras. Since the system was purchased in March 2018 — at a cost $451,718 plus annual maintenance and support fees — officers have conducted 2,591 facial recognition searches. The report was submitted in advance of Thursday’s board meeting. The goal of purchasing the system was to identify suspects more efficiently and quickly, including violent offenders. It will also help police conclude major investigations with fewer resources and help tackle unsolved crimes, Saunders said. Funding for the system was provided through a provincial policing modernization grant. But critics are wary of facial recognition technology for reasons including its potential to be misused by police or other government agencies as technological advancements outpace oversight. “This technology is being put in place without any legislative oversight and we need to hit the pause button,” NDP MP Charlie Angus told the Star on Monday. The technology is “moving extremely fast,” said Angus, who is examining the ethics of artificial intelligence as part of a House of Commons Standing Committee on Access to Information, Privacy and Ethics. In San Francisco, city officials banned the use of the technology by police and other agencies earlier this month, citing concerns about potential misuse by government authorities. The city’s police department had previously been using the tool, but stopped in 2017. One of the city legislators told reporters that the move was about having security without becoming a security state. According to Saunders’ report, Toronto police ran 1,516 facial recognition searches between March and December last year, using approximately 5,000 still and video images. The system was able to generate potential mugshot matches for about 60 per cent of those images. About 80 per cent of those matches led to the identification of criminal offenders. The total number of arrests the technology has generated is undetermined, the report states, because unlike fingerprint matches, the facial recognition tool only identifies potential candidates and arrests are made only after further investigation produces more evidence. “Many investigations were successfully concluded due to the information provided to investigators, including four homicides, multiple sexual assaults, a large number of armed robberies and numerous shooting and gang related crimes,” Saunders wrote. Other jurisdictions have come under fire for using facial recognition on crowds in real-time, such as scanning attendees at major sports events to identify the subjects of outstanding warrants and arrest them on the spot. 
In emailed responses to questions by the Star, Staff Inspector Stephen Harris, Forensic Identification Services, said Toronto police have no plans to extend the use of facial recognition technology beyond comparisons to its pre-existing mugshot database. Harris and Saunders both emphasized that Toronto police does not use real-time facial recognition and has no legal authorization to do so. Last year, Toronto police used the technology during their investigation into serial killer Bruce McArthur. Investigators located what they believed was a post-mortem image of an unknown man on McArthur’s computer. Hoping to identify him, they used the software and found, within their police database, a mugshot image of Dean Lisowick. In documents filed with the courts during the McArthur probe, an investigator notes that there were “undeniable physical similarities” between the two images, including distinctive moles. A relative of Lisowick’s later confirmed the match, and police charged McArthur in Lisowick’s death three days later. Canadians need to have a discussion about what are legitimate uses of the technology — and what aren’t, said Angus. Police using the technology to identify someone caught committing a crime on surveillance footage is reasonable, Angus said, but measures need to be put in place to stop what is determined to be unacceptable use, such as real-time monitoring at a rally. Research has shown that facial recognition technology has racialized false positive rates: some systems are more likely to produce an inaccurate match for Black women than white men, for example. “This strikes me as particularly important, given all the concerns around carding and other kinds of ethnic and racialized surveillance that have taken place by TPS in the past,” said Chris Parsons, research associate at the University of Toronto’s Citizen Lab. Asked by the Star about its false positive rate overall and for different racial and ethnic groups, Harris said that Toronto police “does not use facial recognition to make a positive ID. Suspect identifications are only made after further investigation and evidence leads us to that conclusion.” Civil liberties advocates also appreciated that Toronto police were disclosing details about their facial recognition technology, but wondered why it took so long. The force began a year-long pilot project for the technology in September, 2014, and sent four forensic officers for training at an FBI division in West Virginia before that. “The fact that there has been very little — virtually no — public conversation about the fact that this is happening, despite the fact that they’ve been looking into it for at least the past five years ... raises questions for me,” says Brenda McPhail, director of the privacy, technology and surveillance project at the Canadian Civil Liberties Association. “Being open and accountable and transparent about the ways that new surveillance technologies are being integrated into municipal policing is essential to maintaining public trust, and to enable the kinds of conversations that can help Toronto police understand the concerns of city residents.” Saunders’ report also says that the force conducted a Privacy Impact Assessment for the technology in 2017. The system is only used in criminal investigations, and the only officers with access to it are six FBI-trained personnel. No other databases besides lawfully obtained mugshots are used. Calgary Police Service was the first Canadian force to begin using the technology. 
In 2014, it signed a contract for its facial recognition software and said the technology would be used as an “investigative tool” to compare photos and videos from video surveillance against the service’s roughly 300,000 mugshot images. Asked if images captured by Toronto police’s body-worn cameras could be used with the facial recognition system, Harris said investigators could only do so if a suspect was on camera committing a criminal offence. In that case, investigators would still have to get the court’s permission to use the facial technology during the probe. The Toronto police board is scheduled to hear discussion of Saunders’ report Thursday."
The entire story can be read at:
https://www.thestar.com/news/gta/2019/05/28/toronto-police-chief-releases-report-on-use-of-facial-recognition-technology.html
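A back-of-the-envelope from the publisher on the figures in this story: 1,516 searches, matches in about 60 per cent, identifications in about 80 per cent of those. The short Python sketch below simply multiplies the reported numbers out; because the percentages are approximate, the derived counts are estimates of mine and do not appear in the Saunders report itself.

searches = 1516      # searches run March-December 2018, per the report
match_rate = 0.60    # "about 60 per cent" returned potential mugshot matches
ident_rate = 0.80    # "about 80 per cent" of matches identified an offender

matches = searches * match_rate
identified = matches * ident_rate
print(f"estimated searches with candidate matches: {matches:.0f}")  # ~910
print(f"estimated identifications: {identified:.0f}")               # ~728

In other words, roughly half of all searches (about 728 of 1,516) ended in an identification - while the report, notably, gives no false-positive figures at all.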

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher; The Charles Smith Blog.

Monday, May 27, 2019

Central Park Five Jogger Case: Mini-series to debut on Netflix on Friday, May 31: Toronto Star Deputy Editor Debra Yeo: "The boys’ convictions were vacated in 2002, after they had already spent years in prison, when the real rapist confessed. They were awarded $41 million (U.S.) in 2014 to settle a lawsuit against the city. Yet, there are still people who maintain the so-called “Central Park Five” were involved in the attack, among them Donald Trump, who took out full page ads advocating for the death penalty after the five were arrested. When They See Us sets out a heartbreakingly convincing case that they were victims of an egregious miscarriage of justice by a system that concluded their mere presence in the park that night was proof of their guilt." (Bravo Debra, my former Toronto Star friend and colleague. As usual, really nice piece! Cheers. Harold.)


PUBLISHER'S NOTE: This Blog is interested in false confessions because of the disturbing number of exonerations in the USA, Canada and multiple other jurisdictions throughout the world where, in the absence of incriminating forensic evidence, the conviction is based on self-incrimination – and because of the growing body of scientific research showing how vulnerable suspects are to widely used interrogation methods such as the notorious ‘Reid Technique.’

Harold Levy: Publisher; The Charles Smith Blog:

-------------------------------------------------------------

PASSAGE OF THE DAY: "When They See Us shows detectives using manipulation, intimidation and violence to interrogate the boys without lawyers or family members present, wearing down the frightened, tired and hungry teens during hours of questioning until they agreed to make videotaped “confessions” in the hopes they’d be allowed to go home. There was no DNA or other physical evidence tying the boys to the attack, nor any eyewitnesses, but they were convicted in two separate trials thanks to those confessions, despite discrepancies in the accounts and the fact they had recanted. They served six to 14 years behind bars (in adult prisons in Wise’s case) before serial rapist Matias Reyes confessed. I tell Jackson, Underwood and fellow American actor Christopher Jackson, who plays Raymond’s lawyer, that I watched the series’ courtroom scenes feeling the teens would have to be exonerated based on the lack of evidence, despite knowing the outcome of the case. “You just described our experience shooting it,” says Christopher Jackson. “I would imagine it’s the same (experience) that the viewer has,” feeling “shock, frustration, astonishment” and also “impotence we couldn’t change what we knew was coming. “One of the things I want people to come away knowing is that this kind of thing happens all the time,” adds Underwood. “A lot of Black and brown people who see this project, they know what to expect. They’re gonna get railroaded. When people survive it and come out the other end, it’s something to celebrate.” The Central Park Five did indeed come out the other end."

-------------------------------------------------------------

STORY:  "Here’s why the story of the Central Park Five has to be told today, says Joshua Jackson,"  by reporter Debra Yeo, published by The Toronto Star on May 26, 2019. (Debra Yeo is a deputy entertainment editor and a contributor to the Star’s Entertainment section.)

GIST: Canadian actor Joshua Jackson was just 10 and living in British Columbia when five teenagers, four Black, one Hispanic, were charged with raping and assaulting the white woman known as the “Central Park Jogger” in New York in April 1989. “I don’t think I ever had a formed opinion of it at that age,” he says in a phone interview. But that doesn’t alter his passion for telling the story now of what happened to those boys. He co-stars as the lawyer of one of the teens in Ava DuVernay’s Netflix miniseries When They See Us. The boys’ convictions were vacated in 2002, after they had already spent years in prison, when the real rapist confessed. They were awarded $41 million (U.S.) in 2014 to settle a lawsuit against the city. Yet, there are still people who maintain the so-called “Central Park Five” were involved in the attack, among them Donald Trump, who took out full page ads advocating for the death penalty after the five were arrested. When They See Us sets out a heartbreakingly convincing case that they were victims of an egregious miscarriage of justice by a system that concluded their mere presence in the park that night was proof of their guilt. “Part of the reason why I think this story is so important and pertinent right now, I think we still have that same rush to judgment,” says Jackson, who plays lawyer Mickey Joseph, who defended Antron McCray, just 15 when he was arrested. “Part of the way that judgment is formed is by media narratives. The broad public narrative was that these children, these boys, were thugs, a gang, were a wolf pack wilding out in the park. It is important to remember that didn’t happen by accident. “It was part of a concerted effort to dehumanize these children so they could be railroaded through the system.” American actor Blair Underwood plays Bobby Burns, lawyer for Yusef Salaam, who was also 15 at his arrest. Underwood calls what happened to the teens “a slow lynching” as opposed to today’s “swift and unjust administration of judgment against undeserving Black men and women and children,” as seen in cellphone and body cam videos of Black people assaulted and killed by police. “It’s all in the same category of crimes against humanity,” he says. And Jackson has a special message for Canadian viewers who might be tempted to “wag (their) fingers and say, ‘Gosh, it’s so terrible down there.’” The injustice visited on people of colour in When They See Us “is very much what we do to the First Nations,” he says. “It’s the robbing of a culture and the systematic injustice visited upon a culture. That is our corollary.” On April 19, 1989, 28-year-old investment banker Trisha Meili was raped, savagely beaten and left for dead while out jogging in Central Park. That same night, Kevin Richardson and Raymond Santana, both 14, were arrested for “unlawful assembly” for being part of a group of teens accused of harassing joggers, cyclists and others in a different part of the park. Police quickly concluded the boys had been involved in Meili’s attack. McCray, Salaam and Korey Wise, 16, were brought in for questioning the next day. When They See Us shows detectives using manipulation, intimidation and violence to interrogate the boys without lawyers or family members present, wearing down the frightened, tired and hungry teens during hours of questioning until they agreed to make videotaped “confessions” in the hopes they’d be allowed to go home. 
There was no DNA or other physical evidence tying the boys to the attack, nor any eyewitnesses, but they were convicted in two separate trials thanks to those confessions, despite discrepancies in the accounts and the fact they had recanted. They served six to 14 years behind bars (in adult prisons in Wise’s case) before serial rapist Matias Reyes confessed. I tell Jackson, Underwood and fellow American actor Christopher Jackson, who plays Raymond’s lawyer, that I watched the series’ courtroom scenes feeling the teens would have to be exonerated based on the lack of evidence, despite knowing the outcome of the case. “You just described our experience shooting it,” says Christopher Jackson. “I would imagine it’s the same (experience) that the viewer has,” feeling “shock, frustration, astonishment” and also “impotence we couldn’t change what we knew was coming. “One of the things I want people to come away knowing is that this kind of thing happens all the time,” adds Underwood. “A lot of Black and brown people who see this project, they know what to expect. They’re gonna get railroaded. When people survive it and come out the other end, it’s something to celebrate.” The Central Park Five did indeed come out the other end. We see the real men’s faces at the end of the miniseries, just before the credits roll. Four of the five are fathers now. Santana founded a clothing company and Salaam is an author and public speaker. Wise, the only one of the five to remain in New York, started the Korey Wise Innocence Project, which provides free legal counsel to people who are wrongfully convicted. Joshua Jackson had a chance to meet the men while making When They See Us “and shared this very intense experience of reading through the first two scripts in front of them. We were recreating the end of their childhoods, the worst experience of any of their lives.” Afterwards, he and other actors at the read-through asked them questions. “I was amazed at their grace in the situation: to invite us into their lives, to revisit that space with people who were mostly strangers, to give us access to that first-hand. “We did the best we could to honour their story.”
When They See Us debuts May 31 on Netflix."

The entire story can be read at:
https://www.thestar.com/entertainment/television/2019/05/26/heres-why-the-story-of-the-central-park-five-has-to-be-told-today-says-joshua-jackson.html

Read the entire Wikipedia account at the link below. Here is a passage - but a very relevant one: "Accusations by Donald Trump: On May 1, 1989, real estate magnate Donald Trump called for the return of the death penalty when he took out full-page advertisements in all four of the city's major newspapers. Trump said he wanted the "criminals of every age" who were accused of beating and raping a jogger in Central Park 12 days earlier "to be afraid".[82] The advertisement, which cost an estimated $85,000,[82] said, in part, "Mayor Koch has stated that hate and rancor should be removed from our hearts. I do not think so. I want to hate these muggers and murderers. They should be forced to suffer ... Yes, Mayor Koch, I want to hate these murderers and I always will. ... How can our great society tolerate the continued brutalization of its citizens by crazed misfits? Criminals must be told that their CIVIL LIBERTIES END WHEN AN ATTACK ON OUR SAFETY BEGINS!" (Trump's caps); [83] In a 1989 interview with CNN, Trump said to Larry King: "The problem with our society is the victim has absolutely no rights and the criminal has unbelievable rights" and that "maybe hate is what we need if we're gonna get something done."[84] Lawyers for the five defendants said that Trump's advertisement had inflamed public opinion. After Reyes confessed to the crime and said he acted alone, one of the defendants' lawyers, Michael W. Warren, said, "I think Donald Trump at the very least owes a real apology to this community and to the young men and their families."[82] Protests were held outside Trump Tower in October 2002 with protestors chanting, "Trump is a chump!"[82] Trump was unapologetic at the time, saying, "I don't mind if they picket. I like pickets."[82] After the city announced in June 2014 that they would settle with the defendants for more than $40 million, Trump wrote an opinion article for the New York Daily News. He called the settlement "a disgrace" and said that the group's guilt was still likely: "Settling doesn't mean innocence. ... Speak to the detectives on the case and try listening to the facts. These young men do not exactly have the pasts of angels."[85] According to Yusef Salaam, Trump "was the fire starter", as "common citizens were being manipulated and swayed into believing that we were guilty." Salaam and his family received death threats after papers ran Trump's full-page ad. Warren argued that Trump's advertisements played a role in securing conviction, saying that "he poisoned the minds of many people who lived in New York City and who, rightfully, had a natural affinity for the victim," and that "notwithstanding the jurors' assertions that they could be fair and impartial, some of them or their families, who naturally have influence, had to be affected by the inflammatory rhetoric in the ads." The Guardian wrote in 2016 that the case and the media attention reflected the racial dynamics at the time; a similar attack took place soon after in Brooklyn on May 2, 1989,[86] involving a black woman who was raped and thrown from the roof of a four-story building, but received little media attention.[42] Her case was brought to Trump's attention. He visited the victim in the hospital and promised to pay her medical expenses.[87][88] It is not known whether Trump actually paid anything.[89] In October 2016, when Trump campaigned to be president, he declared that the Central Park Five were guilty and stated that their convictions should never have been vacated. 
Trump told CNN: "They admitted they were guilty. The police doing the original investigation say they were guilty. The fact that that case was settled with so much evidence against them is outrageous. And the woman, so badly injured, will never be the same."[90] Conservative commentator Ann Coulter presented an argument describing the actions of the attack, Trump's ad, and the nuances of the case within the prism of DNA knowledge of the 1980s.[91] Trump's statement attracted criticism from the Central Park Five themselves[92] as well as others, including Republican U.S. Senator John McCain, who called Trump's responses "outrageous statements about the innocent men in the Central Park Five case" and cited it as one of many causes prompting him to retract his endorsement of Trump.[93] Salaam said that he had falsely confessed out of coercion, after having been mistreated by police while in custody, deprived of food, drink or sleep for over 24 hours.[94] Documentarian Ken Burns called Trump's comments "out and out racist" and "the height of vulgarity"."


Technology Series: (Part Thirteen): Artificial Intelligence: A Visit to Colombia: Reporter Laura Marcela Zuñiga observes that Colombia has turned to data analytics and AI - but cautions that getting the right results will be challenging. Her story in InSight Crime is headed: "Will an Algorithm Help Colombia Predict Crime?"... "An initiative to create an artificial intelligence system that analyzes massive amounts of data to recognize criminal patterns underscores how Colombia seeks to use advanced technology to respond to crime threats. The system, which is expected to first be deployed in Bogotá, will use crime data to pinpoint hotspots, identify patterns, and predict where crimes are likely to occur."



PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use the AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society or if they are likely to become a threat or a terrorist. The other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences on criminals, predictive policing and hiring government workers. All of these attempts have been shown to be prone to technical issues and a limit on the data can cause bias on their decisions as they will base it on gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As the executive order of Trump demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether it is intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.


Harold Levy: The Charles Smith Blog:


-----------------------------------------------------------------

PASSAGE OF THE DAY: "The use of advanced technology to collect and analyze crime data has shown some success in Colombia. But the proposed artificial intelligence system — while innovative and a potential gamechanger in attacking crime — faces vast difficulties in setting up and comes with some potential drawbacks. When describing the system, Gomez Jaramillo said deciding data inputs and designing a reliable algorithm is an enormous hurdle. “If the [data] are not relevant,” he told Semana, “then we would have nothing to interpret.” There is also the concern of machine bias. US authorities have used algorithms to aid judges in determining sentences for crimes by providing scores on the likelihood that the person will commit another crime. Yet an investigation by news organization Pro Publica found that such algorithms were “remarkably unreliable in forecasting violent crime,” and showed bias against black people. Countries in Latin America have deployed ineffective artificial intelligence systems. Uruguay, for example, implemented a predictive policing software, but the program was scrapped after three years due to the system’s “high degree of opacity and itspotential to reinforce discrimination and exclusion,” according to a World Wide Web Foundation report. Even with such issues, it is clear that pattern-recognition tools that can analyze massive data sets are the future of policing, and Colombia appears to be on the cutting edge."

-------------------------------------------------------------------

STORY: "Will an Algorithm Help Colombia Predict Crime?," by reporter   Laura Marcela Zuñiga, published by InSight Crime on May 2, 2019.

SUB-HEADING: Colombia has turned to data analytics and AI to fight crime, but getting the right results will be challenging.

GIST: "An initiative to create an artificial intelligence system that analyzes massive amounts of data to recognize criminal patterns underscores how Colombia seeks to use advanced technology to respond to crime threats. The system, which is expected to first be deployed in Bogotá, will use crime data to pinpoint hotspots, identify patterns, and predict where crimes are likely to occur. To develop the system, authorities in Bogotá teamed with the mathematics faculty at the Universidad Nacional and a private company that specializes in predictive modeling. The project is expected to be finished in less than three years, Semana reported. Through data from the police, prosecutors, security cameras, emergency dispatchers, surveys, and other sources, the system’s aim will be to detect the “largest patterns to know the when, how and why of criminal acts in Bogotá, and realize interventions at the local level,” said Francisco Gomez Jaramillo, one of the project’s directors and a mathematics professor at Universidad Nacional. This is just the latest effort by Colombian authorities to incorporate data into their crime-fighting strategies. Others include the online crime reporting portal Adenunciar, and the use of data crunching software by the Attorney General’s Office to detect false claims made within the country’s public health system. The latter found some three million false claims costing some $724 million Colombian pesos (around $228 million), according to a report by the Attorney General’s Office. Attorney General Néstor Humberto Martínez has also credited his office’s use of computer-assisted data analysis to detect an armed robbery ring. The group convinced victims that a fictitious military officer had discovered a “caleta,” or stash, of buried drug money, and then offered to exchange the US cash at half the price. When the victims arrived with the money, the group robbed them at gunpoint. Through data analysis, prosecutors identified 58 cases where the same method was used across eight departments of Colombia, Martínez said in a news release. Colombia has long used DNA in forensic investigations, but the Attorney General’s Office recently asked for funding to create a DNA database for the purpose of crime solving. Martinez stated that such a system would improve the country’s investigative standards by 80 percent. However, the ambitious proposal would have to undergo a long legislative process to receive approval. InSight Crime Analysis: The use of advanced technology to collect and analyze crime data has shown some success in Colombia. But the proposed artificial intelligence system — while innovative and a potential gamechanger in attacking crime — faces vast difficulties in setting up and comes with some potential drawbacks. When describing the system, Gomez Jaramillo said deciding data inputs and designing a reliable algorithm is an enormous hurdle. “If the [data] are not relevant,” he told Semana, “then we would have nothing to interpret.” There is also the concern of machine bias. US authorities have used algorithms to aid judges in determining sentences for crimes by providing scores on the likelihood that the person will commit another crime. Yet an investigation by news organization Pro Publica found that such algorithms were “remarkably unreliable in forecasting violent crime,” and showed bias against black people. Countries in Latin America have deployed ineffective artificial intelligence systems. 
Uruguay, for example, implemented predictive policing software, but the program was scrapped after three years due to the system’s “high degree of opacity and its potential to reinforce discrimination and exclusion,” according to a World Wide Web Foundation report. Even with such issues, it is clear that pattern-recognition tools that can analyze massive data sets are the future of policing, and Colombia appears to be on the cutting edge."
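(For readers curious about the mechanics: the "hotspot" analysis described above can be pictured with a toy sketch. The Python snippet below is illustrative only - the incident coordinates, grid size and alert threshold are invented for the example and are not taken from the Bogotá project. It simply buckets crime reports into a spatial grid and flags unusually busy cells, which is the simplest form of the pattern detection the story describes. HL);

```python
import numpy as np

# Hypothetical incident coordinates (latitude, longitude) -- invented for
# illustration. Real inputs would come from police reports, emergency
# dispatch logs, security cameras, and surveys, as the story notes.
incidents = np.array([
    [4.6091, -74.0817],
    [4.6094, -74.0820],
    [4.6098, -74.0815],
    [4.7110, -74.0721],
])

def hotspot_cells(points, cell_size=0.005):
    """Bucket incidents into a lat/lon grid and count events per cell.

    Cells with unusually high counts are flagged as hotspots. A production
    system would layer temporal models and many more data sources on top
    of a simple count like this.
    """
    cells = np.floor(points / cell_size).astype(int)
    return np.unique(cells, axis=0, return_counts=True)

cells, counts = hotspot_cells(incidents)
cutoff = counts.mean() + counts.std()  # naive "unusually busy" threshold
for cell, n in zip(cells, counts):
    if n >= cutoff:
        print(f"hotspot near {cell * 0.005} with {n} incidents")
```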

The entire story can be read at: 
https://www.insightcrime.org/news/analysis/data-intelligence-helping-predict-crime-in-colombia/


Sunday, May 26, 2019

Charles Ray Finch: North Carolina: Extraordinary (welcome) development: Wrongly convicted and sentenced to death "based upon false forensic testimony and an eyewitness identification manipulated by police misconduct," Finch has been freed from prison after 43 years, The Death Penalty Information Center reports.



PUBLISHER'S NOTE: How horrifying. Charles Ray Finch spent almost half a century behind bars in North Carolina before finally being declared "actually innocent" and released (on May 23). The reason? False forensic testimony (ballistics) and police misconduct (a rigged lineup). A miscarriage of justice of this magnitude is almost unimaginable - short of execution of the innocent, which will remain a risk until the death penalty is abolished in America, which can't happen soon enough. Kudos to the Duke Wrongful Convictions Clinic for the battle it has fought on behalf of Finch - its first case - for fifteen years.

Harold Levy: Publisher: The Charles Smith Blog.

---------------------------------------------------------------

QUOTE OF THE DAY: "In an interview, Finch told WNCN-TV, “[w]hen I was picked up, they didn't question me or nothing. They put me there in a line-up. Straight in a line-up. And they put me in a line-up with a black leather coat on.” Chief Deputy Tony Owens claimed that he had put the jacket on another man in the lineup, but photos the defense had discovered showed that Finch was the only person in the three lineups wearing a coat."

 ---------------------------------------------------------

PASSAGE ONE OF THE DAY: "If charges are not refiled, Finch will become the 166th former U.S. death-row prisoner to have been exonerated since 1973. He will be the second death-sentenced prisoner to have waited more than four decades to be exonerated. In March 2019, Clifford Williams, Jr. was exonerated in Florida 42 years after his wrongful conviction and death sentence."

-----------------------------------------------------------

PASSAGE TWO OF THE DAY: "A state forensic witness testified at the trial that the victim had died from two shotgun wounds, and a shotgun shell was found in Finch’s car...In 2013, testimony by Dr. John Butts, then North Carolina's Chief Medical Examiner, revealed that the victim had been killed by a pistol, not a shotgun, and North Carolina State Crime Laboratory Special Agent Peter Ware, the forensic scientist manager for the lab’s firearm toolmark section, testified that the bullet found at the scene and the shell found in Finch’s car did not come from the same firearm."

--------------------------------------------------------------

PASSAGE THREE OF THE DAY: "A store employee who saw the killer flee the scene told police that the killer had been wearing a three-quarter length jacket. An eyewitness later identified Finch in three different lineups...Finch also presented testimony that the eyewitness identification procedures had been unduly suggestive. In an interview, Finch told WNCN-TV, “[w]hen I was picked up, they didn't question me or nothing. They put me there in a line-up. Straight in a line-up. And they put me in a line-up with a black leather coat on.” Chief Deputy Tony Owens claimed that he had put the jacket on another man in the lineup, but photos the defense had discovered showed that Finch was the only person in the three lineups wearing a coat. “That’s one of the highlights at the evidentiary hearing,” said Jim Coleman, Finch’s long-time lawyer and the director of the Duke Wrongful Convictions Clinic. “[W]e were able to expose that [Owens] had lied about the line-up and he had dressed Ray in a coat and he was the only one wearing a coat in the line-up.”

------------------------------------------------------------------

POST: "Former North  Carolina Death-Row Prisoner Charles Ray Finch Freed (81) After 43 Years," published by Death Penalty Information Center on May 25, 2019.

GIST: "A North Carolina man wrongly convicted and sentenced to death based upon false forensic testimony and an eyewitness identification manipulated by police misconduct has been freed from prison after 43 years. On May 23, 2019, federal district court judge Terrence Boyle ordered North Carolina to release former death-row prisoner Charles Ray Finch...was freed  from custody, five months after a unanimous panel of the U.S. Court of Appeals for the Fourth Circuit found Finch “actually innocent” of the murder. Finch, now 81 years old, was freed from Greene Correctional Institution in Maura, North Carolina, that afternoon. Finch’s daughter, Katherine Jones-Bailey, was two years old when he was convicted and sentenced to death. “I knew the miracle was going to happen,” she said about her father’s release. “I just didn’t know when.” Following the appeals court ruling, Finch’s lawyers from the Duke Wrongful Convictions Clinic filed a motion in federal district court seeking his immediate release. The North Carolina Attorney General’s office joined in the motion. The district court formally overturned Finch’s conviction and gave Wilson County prosecutors 30 days to decide whether to retry him. With no credible evidence of guilt, a retrial is considered unlikely. If charges are not refiled, Finch will become the 166th former U.S. death-row prisoner to have been exonerated since 1973. He will be the second death-sentenced prisoner to have waited more than four decades to be exonerated. In March 2019, Clifford Williams, Jr. was exonerated in Florida 42 years after his wrongful conviction and death sentence.  Finch was convicted in 1976 of murdering a grocery store clerk during an attempted robbery. He was sentenced to death under the mandatory death-sentencing statute then in effect in North Carolina. A state forensic witness testified at the trial that the victim had died from two shotgun wounds, and a shotgun shell was found in Finch’s car. A store employee who saw the killer flee the scene told police that the killer had been wearing a three-quarter length jacket. An eyewitness later identified Finch in three different lineups. Shortly thereafter, the U.S. Supreme Court struck down the sentencing statute and, in 1977, the North Carolina Supreme Court vacated Finch’s death sentence and resentenced him to life in prison. In 2013, testimony by Dr. John Butts, then North Carolina's Chief Medical Examiner, revealed that the victim had been killed by a pistol, not a shotgun and North Carolina State Crime Laboratory Special Agent Peter Ware, the forensic scientist manager for the lab’s firearm toolmark section, testified that the bullet found at the scene and the shell found in Finch’s car did not come from the same firearm. Finch also presented testimony that the eyewitness identification procedures had been unduly suggestive. In an interview, Finch told WNCN-TV, “[w]hen I was picked up, they didn't question me or nothing. They put me there in a line-up. Straight in a line-up. And they put me in a line-up with a black leather coat on.” Chief Deputy Tony Owens claimed that he had put the jacket on another man in the lineup, but photos the defense had discovered showed that Finch was the only person in the three lineups wearing a coat. “That’s one of the highlights at the evidentiary hearing,” said Jim Coleman, Finch’s long-time lawyer and the director of the Duke Wrongful Convictions Clinic. 
“[W]e were able to expose that [Owens] had lied about the line-up and he had dressed Ray in a coat and he was the only one wearing a coat in the line-up.” Coleman and the clinic have represented Finch for fifteen years, and Finch was the clinic’s first client. “We have students who work their hearts out on these cases,” Coleman said. “We feel an enormous sense of vindication.”

The entire post can be read at:
https://deathpenaltyinfo.org/node/7405

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic"  section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award"- and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com.  Harold Levy: Publisher; The Charles Smith Blog.

Technology Series: (Part Twelve): Artificial intelligence: (Gone wild!) A visit to India, where the use of artificial intelligence is sizzling - and somewhat scary too... As per the 'YourStory' story: "How this Gurugram startup is helping police, Indian army catch bad guys using AI," by reporter Ramarko Sengupta... "Staqu is using artificial intelligence to simplify the process of locating criminal suspects and correctly identifying them using a simple app. And then there are the spy glasses."


PUBLISHER'S NOTE: In recent years, I have found myself publishing more and more posts on the application of artificial intelligence technology to policing, public safety, and the criminal justice process, not just in North America, but in countries all over the world, including China. Although I accept that properly applied science can play a positive role in our society, I have learned over the years that technologies introduced for the so-called public good can eventually be used against the people they were supposed to benefit. As reporter Sieeka Khan writes in Science Times: "In 2017, researchers sent a letter to the secretary of the US Department of Homeland Security. The researchers expressed their concerns about a proposal to use AI to determine whether someone who is seeking refuge in the US would become a positive and contributing member of society, or whether they are likely to become a threat or a terrorist. Other government uses of AI are also being questioned, such as the attempts at setting bail amounts and sentences for criminals, predictive policing, and hiring government workers. All of these attempts have been shown to be prone to technical issues, and limits on the data can bias their decisions on the basis of gender, race or cultural background. Other AI technologies like automated surveillance, facial recognition and mass data collection are raising concerns about privacy, security, accuracy and fairness in a democratic society. As Trump's executive order demonstrates, there is a massive interest in harnessing AI for its full, positive potential. But the dangers of misuse, bias and abuse, whether intentional or not, have the chance to work against the principles of international democracies. As the use of artificial intelligence grows, the potential for misuse, bias and abuse grows as well." The purpose of this 'technology' series is to highlight the dangers of artificial intelligence - and to help readers make their own assessments as to whether these innovations will do more harm than good.

Harold Levy: Publisher: The Charles Smith Blog.

------------------------------------------------------------

PASSAGE ONE  OF THE DAY:  "To make the system watertight, Staqu also started to add voice samples of convicted criminals to the database. “The thing with voice samples is that they also come in very handy in identifying ransom callers, who tend to be repeat offenders,” adds the tech entrepreneur, who hails from UP’s Azamgarh. Staqu claims to be the only company in the world that has crossed 90 percent accuracy (94.3 percent, to be precise) in matching voice samples."
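(An aside on the accuracy claim: a figure like "94.3 percent" is, in essence, the fraction of evaluation trials in which the matcher names the right person. The toy Python sketch below shows the bookkeeping; the trial data and numbers are invented for illustration and say nothing about how Staqu actually evaluates its system. HL);

```python
# Toy accuracy evaluation for a voice-matching system. Each trial pairs a
# speaker's true identity with the identity the system predicted. All of
# the data here is invented for illustration.
trials = [
    ("caller_a", "caller_a"),
    ("caller_b", "caller_b"),
    ("caller_c", "caller_c"),
    ("caller_d", "caller_b"),  # one misidentification
]

correct = sum(truth == predicted for truth, predicted in trials)
accuracy = correct / len(trials)
print(f"matching accuracy: {accuracy:.1%}")  # 75.0% on this toy set
```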

-------------------------------------------------------------

PASSAGE TWO OF THE DAY: "There’s one more nifty product from Staqu that seems like it’s right out of a sci-fi movie. The company has developed AI-powered glasses that law enforcement officials can use to identify criminals.  The glasses are fitted with cameras that can take photos of people around and run it past the criminal database. If it finds a match, it displays the information (name, criminal history etc.) through a projector fitted inside. “These glasses are particularly useful at a rally or a VIP event. They are also a great asset to have as part of the prime minister’s, and chief ministers’ security,” Atul points out. Staqu has done a pilot project for this with the Punjab Police and with the Dubai Police."

--------------------------------------------------------------

PASSAGE THREE OF THE DAY: "Predictive policing and proactive monitoring: Staqu has also started to work with some police forces in the area of predictive policing, where the AI-enabled system actively monitors CCTV feeds and sends out real-time alerts in case something is amiss. For example, if too many people have gathered close to the prime minister’s residence at an odd time of the night, the system would send out an alert to the concerned authorities. The company now also wants to branch out into proactive monitoring, wherein if a known highway robber is spotted on a CCTV camera, an alert would go out immediately, before the criminal gets a chance to commit another crime."
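(A note on how such an alert rule might look in code: the sketch below is purely illustrative - the 15-person limit, the overnight window and the function names are my own assumptions, not Staqu's parameters. In a real deployment, the person count would come from a detection model running on the CCTV feed. HL);

```python
from datetime import datetime, time

# Hypothetical parameters -- the story gives no numbers, so these are
# invented for illustration: flag gatherings of 15 or more people
# between 11 p.m. and 5 a.m. near a protected location.
CROWD_LIMIT = 15
NIGHT_START, NIGHT_END = time(23, 0), time(5, 0)

def is_odd_hour(ts: datetime) -> bool:
    """True when the timestamp falls in the overnight window."""
    t = ts.time()
    return t >= NIGHT_START or t <= NIGHT_END

def should_alert(person_count: int, ts: datetime) -> bool:
    """Return True if an alert should go out for this CCTV frame.

    In a real system, person_count would come from a person-detection
    model running on the video feed; here it is just an integer.
    """
    return is_odd_hour(ts) and person_count >= CROWD_LIMIT

# Example: 22 people detected at 1:30 a.m. triggers an alert.
print(should_alert(22, datetime(2019, 5, 26, 1, 30)))  # True
```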

----------------------------------------------------------------

STORY: "How this Gurugram startup is helping police, Indian army catch bad guys using AI," by Ramarko Sengupta, published by 'YourStory' on April 30,  2019. (Ramarko Sengupta is a Senior Editor at 'YourStory'.  "YourStory cover stories  of young entrepreneurs, startup owners and change-makers.")

SUB-HEADING: "Staqu is using artificial intelligence to simplify the process of locating criminal suspects and correctly identifying them using a simple app. And then there are the spy glasses."

GIST: "Violence cost India a staggering $1.19 trillion in 2017 in terms of constant purchasing power parity (PPP) according to a report released by the Institute for Economics and Peace last year. Purchasing power parity is a theory of measuring economic variables in different countries so that exchange rate variations do not distort comparisons. Expenses related to preventing, containing and dealing with the consequences of violence, along with, expenses on military and security were included in the cost. That figure is equal to nine percent of India’s GDP (gross domestic product) for that year. Nine percent of GDP that the country lost to violence. To address the issue of violence and curb its impact on the country’s economy, Gurugram-based Staqu has been working with police forces from across the country to mitigate crime. The company has also been working with the Indian Army, a fact that it has revealed for the first time, exclusively to YourStory. The company’s Co-founder and CEO, 29-year old Atul Rai is understandably reticent about the work they have been doing with the defence forces for the past one year.
“The only thing I can reveal is that the work we have been doing with the Indian Army is in the area of aerial imaging analysis and some other security aspects which are too sensitive to be revealed,” Rai told YourStory.
As far as working with the police is concerned, Staqu has so far teamed up with eight state police forces across the country, including Punjab, Rajasthan, Uttarakhand, Uttar Pradesh, and Telangana, to check and solve crimes using artificial intelligence (AI). Once the ongoing general elections conclude, Staqu will add two more state police forces to its portfolio of customers. Overseas, Staqu has also worked with Dubai Police. So, how exactly is Staqu helping the police curb and solve crime? Staqu realised that police forces in India “did not really have any tech on the ground” to help them curb or solve crimes. “And that’s when we realised that there is a use case for us,” says Atul. In the safety and security space, Staqu offers three products: ABHED for fingerprint, facial and voice analysis, Jarvis for video analytics, and Pine for big data on criminals. These products can be accessed on multiple platforms, including a video wall panel, desktop, and mobile app. To start with, criminal records data had never been digitized, so Staqu began its work there. Then it created an app called ABHED (Artificial Intelligence Based Human Efface Detection) for the Rajasthan Police in late 2017 to create a digital record of convicted criminals – a database that can be updated and is searchable. For instance, if the police intercept or come across a suspect, they can take a picture of the person and run it for matches against the criminal database through the app. Similarly, if a person is booked for a crime, their details can be added to the database via the app, on the spot. Staqu then added another provision, to run fingerprints via the app to “eliminate the 1 percent chance of mistaken identity that may take place with visual identification”. To make the system watertight, Staqu also started to add voice samples of convicted criminals to the database. “The thing with voice samples is that they also come in very handy in identifying ransom callers, who tend to be repeat offenders,” adds the tech entrepreneur, who hails from UP’s Azamgarh. Staqu claims to be the only company in the world that has crossed 90 percent accuracy (94.3 percent, to be precise) in matching voice samples. To further boost visual identification, especially from CCTV footage, which tends to be of poor quality, Staqu has also developed a low-resolution image search that will help accurately identify criminals. Atul shares an instance where the police were looking for a robber in Ghaziabad. They used Staqu’s facial recognition software to match a sketch of the suspect against the database. As a result, they were able to catch the robber and recover Rs 2.5 lakh from him. So far, the company has helped solve 1,100 criminal cases. Since inception, the platform has amassed over 10 lakh criminal records, and is adding records of 1,000 criminals every day. Straight out of a Hollywood thriller: There’s one more nifty product from Staqu that seems like it’s right out of a sci-fi movie. The company has developed AI-powered glasses that law enforcement officials can use to identify criminals. The glasses are fitted with cameras that can take photos of people nearby and run them against the criminal database. If the system finds a match, it displays the information (name, criminal history etc.) through a projector fitted inside. “These glasses are particularly useful at a rally or a VIP event. They are also a great asset to have as part of the prime minister’s, and chief ministers’, security,” Atul points out. 
Staqu has done a pilot project for this with the Punjab Police and with the Dubai Police. Predictive policing and proactive monitoring: Staqu has also started to work with some police forces in the area of predictive policing, where the AI-enabled system actively monitors CCTV feeds and sends out real-time alerts in case something is amiss. For example, if too many people have gathered close to the prime minister’s residence at an odd time of the night, the system would send out an alert to the concerned authorities. The company now also wants to branch out into proactive monitoring, wherein if a known highway robber is spotted on a CCTV camera, an alert would go out immediately, before the criminal gets a chance to commit another crime. Not always the crime fighter: Staqu did not start its journey fighting crime. In fact, it was launched in 2015 as an AI startup focussed on data and ways to monetise it. When they started, the low-hanging fruit was image analysis for e-commerce companies. It was basically tech that would help a consumer click a picture of something they liked and, using that picture, run a search for it on e-commerce platforms. The company still operates in this area, although the main focus has shifted to security. Clients in the e-commerce space currently include Paytm Mall and Jaypore. From there, Staqu went on to partner with Indian smartphone makers such as Karbonn, Intex, Gionee and Lava to integrate AI in their devices. This would ensure that users would only see relevant content and advertisements based on their interests and usage. Staqu also started partnering with search engines such as Microsoft Bing for the same purpose. It was in 2017 that Atul and his co-founders realised that no other company was working with AI and law enforcement. Getting into the domain was initially an experiment, but once they saw good traction, they decided to dive right in. At present, 60 percent of the company’s business comes from security. The private affair: The Gurugram-based startup, which says it achieved breakeven in 2018, also caters to the private sector and has clients like L&T and Wipro as part of its portfolio. Staqu is working with the two corporate giants on smart city projects to monitor speeding vehicles and unlawful activities in public spaces, among other things. It also works with the chemicals industry to ensure that safety and security are maintained on factory premises through active, AI-enabled visual monitoring. For example, if there are workers in an area they shouldn’t be in, or if there is an intruder in the building, or if workers are not wearing safety gear, the system alerts the authorities and security staff in real time. The early days and funds: Back in 2015, the founders started out with capital of Rs 19 lakh from personal savings and money borrowed from family and friends. Atul reminisces about the early days: “The hardware was expensive, each GPU cost Rs 4 lakh. And we needed 4 GPUs, and the rent for the flat where we worked was around Rs 30,000 per month. So, for one year, we did not draw any salary. In 2016, we started drawing a salary of around Rs 5,000 per month.” Then in June 2016, the company received its first round of funding, when Indian Angel Network (IAN) invested $500,000. Currently, Staqu is in the process of raising its Series A funding from a prominent US-based venture capital firm. “We are also looking for Indian investors,” adds Atul. 
The people behind Staqu: Atul and his three co-founders were all early members of a Delhi-based tech startup called Cube26, which Paytm’s parent company, One97 Communications, acquired last year. The four wanted to start their own AI company and left in 2015 to set up Staqu. Commenting on the name of the company, Atul says,
“Stack and queue are two data structures in computer science. We created a hybrid and named our startup Staqu.”
Atul holds a master’s degree in AI from Manchester in the UK, and worked there for some time before moving to the University of Szeged, Hungary, to work as a research associate in machine learning and computer vision. After the stint in Hungary, he moved back to India. Two of his other co-founders, Chetan Rexwal (28) and Anurag Saini (28), are both engineering graduates from Indraprastha University, Delhi, while the fourth co-founder, Pankaj Sharma (26), is a BTech graduate from Jamia Millia Islamia University, Delhi. The 35-member team at Staqu is a young one, with a median age of 26 years. Competition and the road ahead: According to Atul, the company has no competition in India as far as the security space is concerned. Globally, there is Palantir Technologies in the US, which works in a similar space, as does SenseTime of China. Staqu is currently exploring Western markets, as well as Eastern Europe, Africa and Southeast Asia, and believes it can make some inroads there. Explaining the move, Atul says,

“The tech is there, but at a high cost. We can help bring that down and are already in discussions with some companies there. We would not like to enter directly but through collaborations with local companies as they would be in a better position to deal with their authorities.”
When asked if other companies with similar expertise entering the security space in India is a worry for them, Atul says, “Our database is our advantage. Working with the police, we have picked up expertise and research, and have an early mover advantage. It’s not easy to just parachute into this space.”
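(One final illustrative aside: at its core, matching a photo against a database of the kind ABHED maintains comes down to comparing face "embeddings" - numerical fingerprints produced by a recognition model. The Python sketch below is a toy version built on my own assumptions: the 0.8 threshold, the record IDs and the random vectors standing in for real embeddings are all invented. The "1 percent chance of mistaken identity" the story mentions lives in exactly this similarity-threshold trade-off. HL);

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_suspect(query, database, threshold=0.8):
    """Compare one face embedding against a database of known records.

    `database` maps a record ID to a stored embedding. The threshold is
    arbitrary here; real deployments tune it to trade false matches
    against misses.
    """
    best_id, best_score = None, threshold
    for record_id, stored in database.items():
        score = cosine_similarity(query, stored)
        if score >= best_score:
            best_id, best_score = record_id, score
    return best_id, best_score

# Toy embeddings stand in for the output of a face-recognition model.
rng = np.random.default_rng(0)
db = {"record_001": rng.normal(size=128), "record_002": rng.normal(size=128)}
query = db["record_001"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(match_suspect(query, db))  # expected: ('record_001', ~0.99)
```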

The entire story can be read at:
 https://yourstory.com/2019/04/gurugram-startup-indian-army-artificial-intelligence