PASSAGE OF THE DAY: "When fake news meets deep fake video, the possibilities are harrowing."
-----------------------------------------------------------------
SECOND PASSAGE OF THE DAY: "While most of us are aware that images can be altered through Photoshop and digital editing, it is not widely recognized that in the last year a series of technological leaps has made it possible to create “audio and video of real people saying and doing things they never said or did,” the authors said. Originally phrased as a single word “deepfakes,” the technology showed up first in 2017 in the world of pornography, where major celebrities’ faces were face-swapped onto bodies of adult-film actresses. Just as alarmingly, users of a deep fake app began taking the faces of real people—former girlfriends, for example—and putting them into porn videos. In February 2018, Reddit, which had become a center for deep fake discussion and sharing, banned certain subreddits, saying it was updating its “site-wide rules against involuntary pornography and sexual or suggestive content involving minors.” But at the same time, technological leaps have opened up this idea to uses far beyond pornography. With the use of neural networks, computers can “produce altered (or even wholly invented) images, videos, and audios that are more realistic and more difficult to debunk than they have been in the past.”
STORY: "True Lies: Fake-Video Technology Called ‘Toxic’ Threat to National Security," by reporter Nancy Bilyeau, published by The Crime Report on August 30, 2018; (Nancy Bilyeau is Deputy Editor/Digital Services of The Crime Report.)
GIST: "The rapidly evolving technology that produces “deep
fakes”—convincing videos of people saying and doing things that are
completely concocted—is now widely accessible, and creating nightmarish
scenarios of extortion and theft, according to a forthcoming paper in
the California Law Review. The paper, titled “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,”
also argues that phony videos have wider consequences for democracies,
including the disruption of elections and military conflict. “The ability to distort reality has taken an exponential leap forward
with ‘deep fake’ technology,” according to the paper, written by Robert
Chesney of the University of Texas School of Law and Danielle Keats
Citron of the University of Maryland Francis King Carey School of Law. In one possible scenario cited by the authors, an authentic-looking
video of a political candidate doing something shocking or
criminal—“taking bribes, displaying racism, or engaging in
adultery”—could be released during what the authors describe as a
critical window: appearing near enough to an election day to sway a
number of votes, but not allowing enough time for convincing denial and
debunking. While most of us are aware that images can be altered through
Photoshop and digital editing, it is not widely recognized that in the
last year a series of technological leaps has made it possible to create “audio and video of real people saying and doing things they
never said or did,” the authors said. Originally phrased as a single word “deepfakes,” the technology
showed up first in 2017 in the world of pornography, where major
celebrities’ faces were face-swapped onto bodies of adult-film
actresses. Just as alarmingly, users of a deep fake app began taking the
faces of real people—former girlfriends, for example—and putting them
into porn videos. In February 2018, Reddit, which had become a center for deep fake
discussion and sharing, banned certain subreddits, saying it was
updating its “site-wide rules against involuntary pornography and sexual
or suggestive content involving minors.” But at the same time, technological leaps have opened up this idea to uses far beyond pornography. With the use of neural networks, computers can “produce altered (or
even wholly invented) images, videos, and audios that are more realistic
and more difficult to debunk than they have been in the past.” Technology exists to create extremely convincing audio, particularly
if many voice recordings of the targeted subject exist. Last year, University of Washington researchers, employing a neural network tool, created a video of former President Barack Obama in which his lips moved convincingly as he said things he had not actually said. Such experiments have gone beyond the classroom. “Diffusion has begun for deep fake technology,” with people using
apps and readily accessible computer programs, wrote Chesney and Citron. The bad news: there are few laws or regulations to address the
proliferation of fake videos. Current criminal or civil law in the U.S.
does not ban their creation or distribution. “There will be no shortage of harmful exploitations,” the authors
wrote. “Some will be in the nature of theft, such as stealing people’s
identities to extract financial or some other benefit. Others will be in
the nature of abuse, commandeering a person’s identity to harm them or
those who care about them.” As the authors point out, “The marketplace of ideas already suffers
from truth decay as our networked information environment interacts in
toxic ways with our cognitive biases.” A deep fake video might not have gotten very far when traditional
media controlled the dissemination of news. But now, such a video could
explode into the public awareness through a person sharing it on social
media. If one influencer picks up the video, it goes viral in hours. State-sponsored willingness to engage in fakery to swing elections
was demonstrated in France last year. To prevent the election of
Emmanuel Macron as president of France, “Russia mounted a covert action
program blending cyber-espionage and information manipulation.” Poor-quality faked documents and smart countermeasures by Macron’s
team defused the attack. But Russia might have done permanent damage if
it “distributed a deep fake consisting of seemingly real video or audio
evidence persuasively depicting Macron speaking or doing something
shocking,” the paper said. A particularly dangerous use of deep fakes would be in a country engaged in armed conflict, the authors say. One example is a deep fake video of someone in the U.S. military “burning a Koran or killing a civilian” in the Middle East. “There is no question that deep fakes will play a role in future
armed conflicts. Information operations of various kinds have long been
an important aspect of warfare.” Said Chesney and Citron: “The risks to our democracy and to national security are profound.”"
PUBLISHER'S NOTE: I am monitoring this case/issue. Keep your eye on the Charles Smith Blog for reports on developments. The Toronto Star, my previous employer for more than twenty incredible years, has put considerable effort into exposing the harm caused by Dr. Charles Smith and his protectors - and into pushing for reform of Ontario's forensic pediatric pathology system. The Star has a "topic" section which focuses on recent stories related to Dr. Charles Smith. It can be found at: http://www.thestar.com/topic/charlessmith
Information on "The Charles Smith Blog Award" - and its nomination process - can be found at: http://smithforensic.blogspot.com/2011/05/charles-smith-blog-award-nominations.html
Please send any comments or information on other cases and issues of
interest to the readers of this blog to: hlevy15@gmail.com.
Harold Levy: Publisher; The Charles Smith Blog.
The entire story can be read at:
https://thecrimereport.org/2018/08/30/true-lies-fake-video-technology-called-toxic-threat-to-national-security/