PUBLISHER'S NOTE: Artificial intelligence, once the stuff of science fiction, has become all too real in our modern society, especially in the American criminal justice system. As the ACLU's Lee Rowland puts it: "Today, artificial intelligence. It's everywhere — in our homes, in our cars, our offices, and of course online. So maybe it should come as no surprise that government decisions are also being outsourced to computer code. In one Pennsylvania county, for example, child and family services uses digital tools to assess the likelihood that a child is at risk of abuse. Los Angeles contracts with the data giant Palantir to engage in predictive policing, in which algorithms identify residents who might commit future crimes. Local police departments are buying Amazon's facial recognition tool, which can automatically identify people as they go about their lives in public." The algorithm is finding its place deeper and deeper in the nation's courtrooms, in decisions that were once the exclusive province of judges, such as bail and even the sentence to be imposed. I am pleased to see that a dialogue has begun on the effect that the increasing use of these algorithms in our criminal justice systems is having on our society and on the quality of decision-making inside courtrooms. As Lee Rowland asks about this brave new world, "What does all this mean for our civil liberties and how do we exercise oversight of an algorithm?" In view of the importance of these issues - and the increasing use of artificial intelligence by countries for surveillance of their citizens - it's time for yet another technology series on The Charles Smith Blog focusing on the impact of science on society and criminal justice. Up to now I have been identifying the appearance of these technologies. Now at last I can report on the realization that some of them may be two-edged swords - and on the growing pushback. The following article on 'deepfake' videos causes me great concern. Video evidence plays a huge role in courts in North America and elsewhere. Yet we are told: "Software to create deepfakes is available for free online, and it doesn’t require advanced production skills to use. It works by feeding hundreds of pictures of a person’s face into a machine learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look like it's coming from the victim. The results are sometimes so seamless that it's difficult to tell with the naked eye that the videos are fraudulent." I remember all the excitement in my university days when the movie "Deep Throat" came out. Stand by for the release of "Deep Fake!"
Harold Levy: Publisher; The Charles Smith Blog.
------------------------------------------------------------
PASSAGE OF THE DAY: "Warner said the easily accessible technology used to make the videos could “usher in an unprecedented wave of false and defamatory content.” In his policy paper, he wrote, “Just as we’re trying to sort through the disinformation playbook used in the 2016 election and as we prepare for additional attacks in 2018, a new set of tools is being developed that are poised to exacerbate these problems." Software to create deepfakes is available for free online, and it doesn’t require advanced production skills to use. It works by feeding hundreds of pictures of a person’s face into a machine learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look like it's coming from the victim. The results are sometimes so seamless that it's difficult to tell with the naked eye that the videos are fraudulent. Lawmakers caution that it's a tool that could send the fake news crisis into overdrive. Think about it: Realistic-looking videos appearing to show politicians taking bribes or uttering inflammatory statements could be used to try to sway an election. Or doctored footage purporting to show officials announcing military action could trigger a national security crisis."
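TECHNICAL NOTE: For readers who want to picture the "machine learning algorithm" described in the passage above, here is a minimal sketch of the shared-encoder, two-decoder autoencoder idea behind early face-swap tools. It is illustrative only, written in Python with PyTorch: the network sizes, class names and random stand-in "face crops" are my own assumptions for the sake of a self-contained example, not the code of any actual deepfake application.

# A minimal sketch (assumed PyTorch implementation) of the autoencoder-based
# face-swap technique: one shared encoder, one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 face crop into a latent code (pose/expression)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific person's face from the shared latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # learns to draw person A
decoder_b = Decoder()  # learns to draw person B
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.MSELoss()

# Stand-in data: in reality, the "hundreds of pictures" the article mentions,
# cropped and aligned to 64x64. Random tensors keep this sketch runnable.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # real training runs for many thousands of steps
    opt.zero_grad()
    # Both identities share one encoder, so the latent code tends to capture
    # pose and expression rather than identity; identity lives in the decoders.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: encode frames of person B, decode with person A's decoder.
# The result shows A's face performing B's movements and expressions.
with torch.no_grad():
    fake = decoder_a(encoder(faces_b))
print(fake.shape)  # torch.Size([8, 3, 64, 64])

The design choice that makes the swap work is the shared encoder: because one encoder must serve both identities, it is pushed to learn identity-independent features, while each decoder memorizes one person's appearance. Feeding person B's encoded frames through person A's decoder therefore renders A's face performing B's motions, which is the seamlessness the article warns about.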
STORY:"The Cybersecurity 202: Doctored videos could send fake news crisis into overdrive, lawmakers warn," by reporter Derek Hawkins, published by The Washington Post on July 31, 2018. (Derek Hawkins is a cybersecurity policy reporter and author of The Cybersecurity 202 newsletter.)
GIST: "Two lawmakers are warning that the country is woefully unprepared for the rise of deepfakes, alarmingly realistic videos that appear to show people doing things they didn’t do. Sens. Mark R. Warner (D-Va.) and Marco Rubio (R-Fla.) are exploring ways to curb the trend of doctored videos before it becomes too widespread, saying they could wreak havoc if used in disinformation campaigns like the one conducted by the Russian government in 2016. In a wide-ranging technology policy paper Monday, Warner floated the idea of holding social media platforms liable for failure to take down deepfakes. And Rubio in a recent speech called on government and political leaders to treat them as a national security threat. The attention from lawmakers means deepfakes are no longer a fringe issue but a more serious front in the fight against fake news, and tech companies may soon feel pressure to get ahead of them. But any policy solution would have to balance the harm to potential victims against free-speech rights for people who use deepfakes for creative or satirical purposes. Warner said the easily accessible technology used to make the videos could “usher in an unprecedented wave of false and defamatory content.” In his policy paper, he wrote, “Just as we’re trying to sort through the disinformation playbook used in the 2016 election and as we prepare for additional attacks in 2018, a new set of tools is being developed that are poised to exacerbate these problems." Software to create deepfakes is available for free online, and it doesn’t require advanced production skills to use. It works by feeding hundreds of pictures of a person’s face into a machine learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look like it's coming from the victim. The results are sometimes so seamless that it's difficult to tell with the naked eye that the videos are fraudulent. Lawmakers caution that it's a tool that could send the fake news crisis into overdrive. Think about it: Realistic-looking videos appearing to show politicians meeting taking bribes or uttering inflammatory statements could be used to try to sway an election. Or doctored footage purporting to show officials announcing military action could trigger a national security crisis. “This all sounds fantastic, it all sounds exaggerated, it all sounds hyperbolic. But the capability to do all of this is real and exists now, the willingness exists now, all that's missing is the execution. And we are not ready for it,” Rubio said in a speech earlier this month at the right-leaning Heritage Foundation. “I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” he said. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light. But they didn’t use this. Imagine using this. Imagine injecting this in an election.” To chip away at the problem, Warner has proposed is amending the Communications Decency Act to hold social media platforms liable under state law if they don’t take down deepfakes and other manipulated content shown in court to be defamatory. Right now, the law provides immunity for platforms in such cases. 
“Currently the onus is on victims to exhaustively search for, and report, this content to platforms — who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future,” Warner wrote in his policy proposal. The platforms, he said, were “in the best place to identify and prevent this kind of content from being propagated.” Legislation to do this would almost certainly run into opposition from civil liberties groups. This year, organizations such as the Electronic Frontier Foundation lobbied unsuccessfully against a similar carve-out in the Communications Decency Act that sought to hold media platforms liable for sex trafficking. The groups said the move, while well-intended, was so broadly written that it criminalized protected speech. “Any effort on this front would need to address the challenge of distinguishing true deepfakes aimed at spreading disinformation from satire or other legitimate forms of entertainment or parody,” Warner wrote. “Attempting to distinguish between true disinformation and legitimate satire could prove difficult,” he said, but “courts already must make distinction between satire and defamation/libel.” Deepfakes started cropping up last year on Reddit after a user superimposed the faces of Gal Gadot, Taylor Swift and other celebrities onto the faces of actors in pornographic videos. They've also been used to lampoon President Trump by pasting his face over Russian President Vladimir Putin and German Chancellor Angela Merkel. And the comedian Jordan Peele used the technology to graft President Barack Obama's face over his own in a widely-circulated public service announcement warning of the dangers of deepfakes. “It’s only a matter of time until ‘deepfake’ videos become a household term,” Rubio told me in an email. Rubio hasn’t offered any concrete policy proposals yet. For now, he told me, he’s simply trying to sound the alarm in hopes of bringing new ideas to the table. “I’m working to raise awareness,” he said, “and find ways to address this threat from foreign actors and criminals and defend our elections this fall and in the future.”
GIST: "Two lawmakers are warning that the country is woefully unprepared for the rise of deepfakes, alarmingly realistic videos that appear to show people doing things they didn’t do. Sens. Mark R. Warner (D-Va.) and Marco Rubio (R-Fla.) are exploring ways to curb the trend of doctored videos before it becomes too widespread, saying they could wreak havoc if used in disinformation campaigns like the one conducted by the Russian government in 2016. In a wide-ranging technology policy paper Monday, Warner floated the idea of holding social media platforms liable for failure to take down deepfakes. And Rubio in a recent speech called on government and political leaders to treat them as a national security threat. The attention from lawmakers means deepfakes are no longer a fringe issue but a more serious front in the fight against fake news, and tech companies may soon feel pressure to get ahead of them. But any policy solution would have to balance the harm to potential victims against free-speech rights for people who use deepfakes for creative or satirical purposes. Warner said the easily accessible technology used to make the videos could “usher in an unprecedented wave of false and defamatory content.” In his policy paper, he wrote, “Just as we’re trying to sort through the disinformation playbook used in the 2016 election and as we prepare for additional attacks in 2018, a new set of tools is being developed that are poised to exacerbate these problems." Software to create deepfakes is available for free online, and it doesn’t require advanced production skills to use. It works by feeding hundreds of pictures of a person’s face into a machine learning algorithm that then maps them onto video of another person’s body. Anything the person in the video does or says can be made to look like it's coming from the victim. The results are sometimes so seamless that it's difficult to tell with the naked eye that the videos are fraudulent. Lawmakers caution that it's a tool that could send the fake news crisis into overdrive. Think about it: Realistic-looking videos appearing to show politicians meeting taking bribes or uttering inflammatory statements could be used to try to sway an election. Or doctored footage purporting to show officials announcing military action could trigger a national security crisis. “This all sounds fantastic, it all sounds exaggerated, it all sounds hyperbolic. But the capability to do all of this is real and exists now, the willingness exists now, all that's missing is the execution. And we are not ready for it,” Rubio said in a speech earlier this month at the right-leaning Heritage Foundation. “I know for a fact that the Russian Federation at the command of Vladimir Putin tried to sow instability and chaos in American politics in 2016,” he said. “They did that through Twitter bots and they did that through a couple of other measures that will increasingly come to light. But they didn’t use this. Imagine using this. Imagine injecting this in an election.” To chip away at the problem, Warner has proposed is amending the Communications Decency Act to hold social media platforms liable under state law if they don’t take down deepfakes and other manipulated content shown in court to be defamatory. Right now, the law provides immunity for platforms in such cases. 
“Currently the onus is on victims to exhaustively search for, and report, this content to platforms — who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future,” Warner wrote in his policy proposal. The platforms, he said, were “in the best place to identify and prevent this kind of content from being propagated.” Legislation to do this would almost certainly run into opposition from civil liberties groups. This year, organizations such as the Electronic Frontier Foundation lobbied unsuccessfully against a similar carve-out in the Communications Decency Act that sought to hold media platforms liable for sex trafficking. The groups said the move, while well-intended, was so broadly written that it criminalized protected speech. “Any effort on this front would need to address the challenge of distinguishing true deepfakes aimed at spreading disinformation from satire or other legitimate forms of entertainment or parody,” Warner wrote. “Attempting to distinguish between true disinformation and legitimate satire could prove difficult,” he said, but “courts already must make distinction between satire and defamation/libel.” Deepfakes started cropping up last year on Reddit after a user superimposed the faces of Gal Gadot, Taylor Swift and other celebrities onto the faces of actors in pornographic videos. They've also been used to lampoon President Trump by pasting his face over Russian President Vladimir Putin and German Chancellor Angela Merkel. And the comedian Jordan Peele used the technology to graft President Barack Obama's face over his own in a widely-circulated public service announcement warning of the dangers of deepfakes. “It’s only a matter of time until ‘deepfake’ videos become a household term,” Rubio told me in an email. Rubio hasn’t offered any concrete policy proposals yet. For now, he told me, he’s simply trying to sound the alarm in hopes of bringing new ideas to the table. “I’m working to raise awareness,” he said, “and find ways to address this threat from foreign actors and criminals and defend our elections this fall and in the future.”
- The entire story can be read at the link below:
- https://www.washingtonpost.com/news/powerpost/paloma/the-cybersecurity-202/2018/07/31/the-cybersecurity-202-doctored-videos-could-send-fake-news-crisis-into-overdrive-lawmakers-warn/5b5f39c91b326b0207955e39/?utm_term=.e47ec0d6b289
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/
Harold Levy: Publisher; The Charles Smith Blog.
---------------------------------------------------------------------