PUBLISHER'S NOTE: (1): This Blog has been addressing the risk that growing reliance on algorithms in courtrooms in North America and elsewhere can lead to the falsification of audio, video, and documentary evidence in criminal and civil courts - with potentially severe ramifications for justice systems (and society at large) throughout the world. Here is more grist for our mill.
Harold Levy: Publisher: The Charles Smith Blog:
————————————————————
PUBLISHER'S NOTE: (2): The Washington Post makes a pretty good point in its editorial calling deepfakes "a reason to despair about the digital future": "Convincingly edited video could confuse military officers in the field. The ensuing uncertainty could also be exploited to undermine journalistic credibility; tomorrow's deepfake may be today's 'fake news.'" From the point of view of this Blog, if a convincingly edited video could confuse military officers in the field, it could also be powerful evidence in a courtroom, criminal or civil - and difficult to detect and counter. This raises the question of how effectively today's judges - all too many of whom are far from computer-savvy - can handle such evidence. Ergo, I continue to follow forensic developments in this nefarious technology on this Blog.
Harold Levy: Publisher: The Charles Smith Blog.
----------------------------------------------------------
QUOTE OF THE DAY: "Perhaps the scariest part of these Frankenstein-ish creations is how easy they are to make, especially when the software for a specific application — such as pornography — is publicly available. A layman can simply plug sufficient photos or footage into prewritten code and produce a lifelike lie about his or her subject. Deepfakery is democratizing, and malicious actors, however unsophisticated, are increasingly able to harness it. Deepfakes are also inherently hard to detect."
PASSAGE OF THE DAY: "Convincingly edited video could confuse military officers in the field. The ensuing uncertainty could also be exploited to undermine journalistic credibility; tomorrow’s deepfake may be today’s “fake news.” Perhaps the scariest part of these Frankenstein-ish creations is how easy they are to make, especially when the software for a specific application — such as pornography — is publicly available. A layman can simply plug sufficient photos or footage into prewritten code and produce a lifelike lie about his or her subject. Deepfakery is democratizing, and malicious actors, however unsophisticated, are increasingly able to harness it. Deepfakes are also inherently hard to detect."
EDITORIAL: "A reason to despair about the digital future," published by The Washington Post on January 6, 2019.
GIST: "A despairing prediction for the digital future came from an unlikely
source recently. Speaking of “deepfakes,” or media manipulated through
artificial intelligence, the actress Scarlett Johansson told The Post that “the Internet is a vast wormhole of darkness that eats itself.”
A
stark view, no doubt, but when it comes to deepfakes, it may not be
entirely unmerited. The ability to use machine learning to simulate an
individual saying or doing almost anything poses personal and political
risks that societies around the world are ill-equipped to guard against. Ms.
Johansson’s comments appeared in a report in The Post about how
individuals’ faces, and celebrities’ faces in particular, are grafted
onto pornographic videos and passed around the Web — sometimes to
blackmail, sometimes just to humiliate. But deepfakes could also have
applications in information warfare. A foreign adversary hoping to
influence an election could plant a doctored clip of a politician
committing a gaffe. Convincingly edited video could confuse military
officers in the field. The ensuing uncertainty could also be exploited
to undermine journalistic credibility; tomorrow’s deepfake may be
today’s “fake news.” Perhaps
the scariest part of these Frankenstein-ish creations is how easy they
are to make, especially when the software for a specific application —
such as pornography — is publicly available. A layman can simply plug
sufficient photos or footage into prewritten code and produce a lifelike
lie about his or her subject. Deepfakery is democratizing, and
malicious actors, however unsophisticated, are increasingly able to
harness it. Deepfakes are also inherently hard
to detect. The technology used to create them is trained in part with
the same algorithms that distinguish fake content from real — so any
strides in ferreting out false content will soon be weaponized to make
that content more convincing. This means online platforms have their
police work cut out for them, though investment in staying one step
ahead, along with algorithmic tweaks to demote untrustworthy sources and
de-emphasize virality, will always be needed. Some suggest holding
sites liable for the damages caused by deepfakes if companies do too
little to remove dangerous content. Like
technical solutions, policy answers to the deepfake problem are elusive,
but steps can be taken. Many harmful deepfakes are already illegal
under copyright, defamation and other laws, but Congress should tweak
existing fraud-related regulations to cover the technology explicitly —
amping up penalties and bringing federal resources, as well as public
attention, to bear on a devilish problem. Humans have so far hardly had
to think about what happens when someone else uses our faces. To avoid
that wormhole of darkness, we will have start thinking hard."
The entire editorial can be read at:
https://www.washingtonpost.com/opinions/a-reason-to-despair-about-the-digital-future-deepfakes/2019/01/06/7c5e82ea-0ed2-11e9-831f-3aa2c2be4cbd_story.html
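----------------------------------------------------------
PUBLISHER'S NOTE: (3): For readers who wonder what the editorial means when it says deepfake technology "is trained in part with the same algorithms that distinguish fake content from real," the short sketch below illustrates the underlying idea: the adversarial training loop of a generative adversarial network, the class of technique most deepfake software builds on. It is a minimal, hypothetical illustration written in Python with the PyTorch library, using toy one-dimensional numbers in place of images; it is not the code of any actual deepfake tool.

# Minimal, illustrative sketch (assumes the PyTorch library) of adversarial
# training: a "generator" (the faker) and a "discriminator" (the detector)
# are trained against each other. Toy 1-D numbers stand in for images.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # "Real" data: samples from a Gaussian the generator must learn to mimic.
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1) Train the detector to tell real samples from the current fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()  # generator frozen here
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the faker to fool the freshly improved detector - every
    #    gain in detection is immediately recycled into better fakes.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

The point for courts is in step 2: the moment the detector improves, that improvement is fed straight back into making the fakes more convincing - which is why the editorial warns that "any strides in ferreting out false content will soon be weaponized."

Harold Levy: Publisher: The Charles Smith Blog.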