Sunday, January 19, 2020

Technology: Is 'deepfake' evidence being introduced into the courts? The good news (first): Experts interviewed by Politico's very talented Artificial Intelligence Correspondent Janosch Delcker for a story headed 'Welcome to the age of uncertainty' say they are not aware of any instance in which a deepfake video has appeared as evidence in a courtroom. The bad news? The technology is developing so rapidly that it could only be a matter of time, the experts said, before judges in family court, for example, will have to decide whether to trust footage of a mother drinking while taking care of her child.


QUOTE OF THE DAY: “It’s taken everyone by surprise how quickly the technology has been advancing,” said Alexa Koenig, the executive director of the human rights center at the UC Berkeley School of Law. “All of a sudden, there’s this sense among people who have been paying attention to the use of video as evidence that we need to figure out what we’re going to do to ensure that what we introduce in courtrooms is actually what it purports to be.”

--------------------------------------------------------------

PASSAGE OF THE DAY: "Beyond politics, the uncertainty introduced by deepfakes is also gearing up to be a major headache for lawyers. In 2017, the chief prosecutor at the International Criminal Court in The Hague was lauded for setting a new legal milestone: the first arrest warrant based on videos posted on social media. Investigators had found several videos showing a man, whom they identified as special forces commander Mahmoud Mustafa Busayf Al-Werfalli, wearing camouflage and a black T-shirt with the logo of a Libyan elite military unit, carrying out a series of gruesome executions in or near war-torn Benghazi. The arrest warrant, which accused Al-Werfalli of 33 murders, was heralded as a sign the ICC had caught up to the realities of an age in which 500 hours of video are uploaded to YouTube every minute. Fast forward two years, however, and advances in deepfake technology have made mining open sources and platforms such as Facebook, Twitter and YouTube to gather evidence and build cases a lot riskier."

---------------------------------------------------------------

STORY" "Welcome to the age of uncertainty," by Janosch Delcker, published by Politico on December 17, 2019. (Janosch Delcker is POLITICO’s Artificial Intelligence Correspondent based in Berlin, covering how the ascent of big data, machine learning and automation is changing politics and policy-making.  He joined POLITICO in 2015 as a political reporter and spent almost three years covering German politics before becoming AI correspondent in January 2018.)

SUB-HEADING: "If everything can be faked, how can we know if something is real?"

GIST:  “Deepfake” technology may have already destabilized national politics for the first time.
When Ali Bongo, the president of Gabon, appeared on video to give his traditional New Year’s address last year, he looked healthy — but something about him was off. His right arm was strangely immobile, and he mumbled through parts of his speech. Some of his facial expressions seemed odd.
It could have been sickness. Bongo, whose family has ruled the oil-rich African nation for 50 years, had suffered a stroke on a trip to Saudi Arabia three months earlier and hadn’t been seen since. He had gone through “a difficult period,” he told viewers, but had recovered thanks to “the grace of God.” Or it could have been something else. National newspapers ran headlines suggesting the president’s appearance in the video could have been the product of deepfake technology, which uses artificial intelligence to produce convincing fake videos that make people appear to do or say things they never did. Speculation mounted feverishly, culminating a week later in an attempted coup d’état by members of the military. The plotters seized the state broadcaster and deployed through the capital, only to be put down in a matter of hours by loyal units of the military.
“It’s not just that you might make people believe that something that’s fake is real. But that you might make them believe that something that’s real is fake” — Lilian Edwards, professor
Much has been written about the possibility that deepfakes will one day be used to roil politics with disinformation. The example from Gabon indicates that it needn’t take a skillfully produced video to cause disruption. The technology’s very existence can be enough to inject uncertainty into an already volatile situation. “It’s not just that you might make people believe that something that’s fake is real,” said Lilian Edwards, professor of law, innovation and society at Newcastle University. “But that you might make them believe that something that’s real is fake.”
One year later, the debate over whether that was the real Bongo in the video remains unsettled. “It’s a total information disorder,” said Julie Owono, the executive director of the Paris-based digital rights organization Internet Without Borders, which teamed up with the U.S. publication Mother Jones to analyze the footage. The researchers found no evidence the clip of Bongo’s address was doctored, but they also couldn’t definitively rule out the possibility that it was. And so speculation about Bongo’s health has continued unabated. “People are even considering that someone else might be impersonating the president, and may be exercising the highest office of the country — which is quite frankly, very frightening,” said Owono.
* * *
Deepfake technology, like many innovations, started with pornography. In late 2017, a user named “u/deepfakes” — a reference to the machine-learning technique at the core of most cutting-edge AI today — posted a video on Reddit. The clip, taken from a porn film, had been altered with free software to superimpose the faces of female celebrities on the bodies of porn actresses. Reports of the “AI-assisted fake porn” spread quickly. New software, online platforms and even marketplaces, where professional deepfake creators offered their services, popped up everywhere across the web. The technology is still overwhelmingly used to create “non-consensual deepfake pornography,” according to an October 2019 study by Dutch cybersecurity company Deeptrace.
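For readers curious about the mechanics: the face-swap technique popularized by that Reddit account is commonly described as a pair of autoencoders that share a single encoder. The shared encoder learns pose, expression and lighting common to both faces, while a separate decoder per identity learns each person's appearance; encoding person A's face and decoding it with person B's decoder yields B's likeness in A's pose. Below is a minimal illustrative sketch of that architecture in Python using the PyTorch library. It is not from the story; the layer sizes are arbitrary assumptions, and a real system would also need face detection, alignment, a training loop and blending.

import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared encoder learns identity-independent structure
        # (pose, expression, lighting) from 64x64 RGB face crops.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
        # One decoder per identity learns that person's appearance.
        self.decoders = nn.ModuleDict(
            {name: self._make_decoder() for name in ("a", "b")}
        )

    @staticmethod
    def _make_decoder():
        return nn.Sequential(
            nn.Linear(256, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, face, identity):
        # Training reconstructs each person's own faces ("a" with "a");
        # swapping feeds person A's face but decodes with "b".
        return self.decoders[identity](self.encoder(face))

model = FaceSwapAutoencoder()
swapped = model(torch.rand(1, 3, 64, 64), identity="b")  # A's pose, B's face
print(swapped.shape)  # torch.Size([1, 3, 64, 64])

The unsettling point of the sketch is the asymmetry: nothing about the swap requires the target's cooperation, only enough ordinary footage of both faces to train the two decoders.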
But as deepfakes proliferate, experts are warning that the technology is likely to cross over into the mainstream, where it could be used to manipulate elections, trigger social unrest or fuel diplomatic tensions. In May, a video of Speaker of the U.S. House of Representatives Nancy Pelosi, one of President Donald Trump’s most outspoken opponents, went viral. It had been slowed down to make the 79-year-old look drunk. In November 2018, a former Trump spokesperson posted a video that had been accelerated to make it seem like a CNN reporter had pushed a White House staffer. The clip also took social media by storm. Both videos were examples of so-called cheap fakes, created with basic video-editing tools. Quickly debunked, they nonetheless dominated the global news cycle for several days.
As the technology becomes more sophisticated, it will become harder to expose deepfakes for what they are. Already, one of the greatest challenges of fighting fake videos is the difficulty of proving conclusively whether footage has been tampered with. Research groups from California to Bavaria, many of them with the support of governments or tech companies, are trying to beat deepfakes at their own game, deploying the same machine-learning technology used to generate them to train AI systems to detect them. But none of these tools will be able to detect every deepfake video or to label it as such with full certainty, researchers admit. The contest between fakers and debunkers is likely to remain a game of cat and mouse, with creators working to evade detection by forensic tools — particularly if they’re released to the public.
Detecting fakeries is only half the battle. Research suggests that news coverage of deepfakes, even if it is to debunk them, instills suspicion among viewers. Many assume there must be some truth to the content. Deepfakes have also created what American law professor Danielle Citron describes as a “liar’s dividend” — difficulty in authenticating videos allows those caught on camera to claim the footage was fabricated. When video emerged during the 2016 U.S. presidential campaign of then-candidate Trump bragging about women that he could “grab ... by the pussy,” he admitted to making the comments and apologized. Just over a year later — after deepfake technology had become better known — the New York Times reported Trump had changed tack to suggest the video wasn’t authentic. “The more people are educated about the advent of deepfakes, the more they may disbelieve real recordings,” Citron told members of the U.S. House intelligence committee during a June hearing. “Regrettably and perversely, the liar’s dividend grows in strength as people learn more about the dangers of deepfakes.”
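To make concrete how little skill a "cheap fake" demands, here is a minimal illustrative sketch (not from the story) that rewrites a clip at 75 percent of its original frame rate using Python and the OpenCV library, producing the kind of slowed, slurred effect attributed to the Pelosi video. The file names are placeholders.

import cv2

# Open the source clip and read its properties; "speech.mp4" is a
# placeholder name, not a real file.
src = cv2.VideoCapture("speech.mp4")
fps = src.get(cv2.CAP_PROP_FPS)
width = int(src.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Write the identical frames back out at 75% of the original frame
# rate: nothing is synthesized, yet speech and gestures turn sluggish.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
dst = cv2.VideoWriter("slowed.mp4", fourcc, fps * 0.75, (width, height))

ok, frame = src.read()
while ok:
    dst.write(frame)
    ok, frame = src.read()

src.release()
dst.release()

Because nothing here is synthesized, the machine-learning detectors described above, which are trained to spot artifacts of AI generation, offer little help against such edits; clips of this kind are typically exposed by comparison with the original broadcast rather than by forensic tools.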
* * *
Beyond politics, the uncertainty introduced by deepfakes is also gearing up to be a major headache for lawyers. In 2017, the chief prosecutor at the International Criminal Court in The Hague was lauded for setting a new legal milestone: the first arrest warrant based on videos posted on social media. Investigators had found several videos showing a man, whom they identified as special forces commander Mahmoud Mustafa Busayf Al-Werfalli, wearing camouflage and a black T-shirt with the logo of a Libyan elite military unit, carrying out a series of gruesome executions in or near war-torn Benghazi. The arrest warrant, which accused Al-Werfalli of 33 murders, was heralded as a sign the ICC had caught up to the realities of an age in which 500 hours of video are uploaded to YouTube every minute. Fast forward two years, however, and advances in deepfake technology have made mining open sources and platforms such as Facebook, Twitter and YouTube to gather evidence and build cases a lot riskier.
“It’s taken everyone by surprise how quickly the technology has been advancing,” said Alexa Koenig, the executive director of the human rights center at the UC Berkeley School of Law. “All of a sudden, there’s this sense among people who have been paying attention to the use of video as evidence that we need to figure out what we’re going to do to ensure that what we introduce in courtrooms is actually what it purports to be.”
Legal experts interviewed by POLITICO said they are not aware of any instance in which a deepfake video had appeared as evidence in a courtroom. “It’s a curiosity to most lawyers at the moment — if they’ve heard of it at all,” said Newcastle University’s Edwards, a longtime internet governance expert. But it could only be a matter of time, they said, before judges in family court, for example, will have to decide whether to trust footage of a mother drinking while taking care of her child.
Digital forensics experts argue that, in general, videos should only be used in court if they can be matched by corroborating evidence such as witness testimony or additional footage shot from other angles.
“Because [the West is] not where things happen first. They only happen there when it’s too late, once things have become too big to solve” — Julie Owono, executive director of Internet Without Borders
“If we find a video and it shows an incident — fake or real — then we’re going to look for additional information about that incident,” said Eliot Higgins, the founder of investigative journalism group Bellingcat, which has trained investigators at the International Criminal Court in open-source investigation. For important cases, like those in front of the ICC, “I don’t think any lawyer in the world is going to a court case with just one video,” Higgins said.
The uncertainty surrounding the Ali Bongo video might be the first example of deepfake technology — whether it was used in Gabon or not — injecting dangerous levels of uncertainty into politics. It’s unlikely to be the last. “One big lesson we should take from what’s happened in Gabon is, really, to pay more attention to what’s happening outside the borders of Europe, or the U.S., or the Western world more generally,” said Owono of Internet Without Borders. “Because [the West is] not where things happen first. They only happen there when it’s too late, once things have become too big to solve.”

The entire story can be read at:
https://www.politico.eu/article/deepfake-videos-the-future-uncertainty/
https://www.google.ca/amp/s/www.politico.eu/article/deepfake-videos-the-future-uncertainty/amp/

PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith. Information on "The Charles Smith Blog Award" - and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html Please send any comments or information on other cases and issues of interest to the readers of this blog to: hlevy15@gmail.com. Harold Levy: Publisher: The Charles Smith Blog;

 ---------------------------------------------------------------

FINAL WORD: (Applicable to all of our wrongful conviction cases): "Whenever there is a wrongful conviction, it exposes errors in our criminal legal system, and we hope that this case — and lessons from it — can prevent future injustices."

Lawyer Radha Natarajan:
 https://www.providencejournal.com/news/20191210/da-drops-murder-charge-against-taunton-man-who-served-35-years-for-1979-slaying

---------------------------------------------------------------