Monday, October 11, 2021

Get Ready for Deepfakes

The Tom Cruise TikTok deepfakes last spring didn’t spur me to write about deepfakes, not even when Justin Bieber fell for them so hard that he challenged the deepfake to a fight.  When 60 Minutes covered the topic last night, though, I figured I’d best get to it before I missed this particular wave.



We’re already living in an era of unprecedented misinformation/disinformation, as we’ve seen repeatedly with COVID-19 (e.g., hydroxychloroquine, ivermectin, anti-vaxxers), but deepfakes should alert us that we haven’t seen anything yet. 

ICYMI, here’s the 60 Minutes story:

The trick behind deepfakes is a type of deep learning called a “generative adversarial network” (GAN), in which two neural networks compete: one generates fake media (e.g., audio or video) while the other tries to tell the fakes from the real thing.  They can be trying to replicate a real person, or creating entirely fictitious people.  The more they iterate, the more realistic the output gets. 
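For the technically curious, here’s a rough sketch of what that contest looks like in code — a toy PyTorch training loop, not any actual deepfake system; the network sizes and data shapes are made-up placeholders just to show the adversarial back-and-forth:

```python
# A minimal, illustrative GAN training loop in PyTorch.
# Architectures and dimensions are toy placeholders, not a real deepfake model.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., a flattened 28x28 grayscale image

# Generator: turns random noise into a fake sample.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator to tell real from fake.
    fake_batch = G(torch.randn(batch_size, latent_dim)).detach()
    d_loss = (loss_fn(D(real_batch), real_labels) +
              loss_fn(D(fake_batch), fake_labels))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator.
    g_loss = loss_fn(D(G(torch.randn(batch_size, latent_dim))), real_labels)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

# Each call is one round of the adversarial contest; over many rounds the
# generator's output becomes harder and harder to distinguish from real data.
```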

Audio deepfake technology is already widely available, and already fairly good.  The software takes a sample of someone’s voice and “learns” how that person speaks.  Type in a sentence, and the software generates audio that sounds like the real person. 
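If you’re curious what that looks like under the hood, here’s a heavily simplified sketch of the usual recipe — a speaker encoder that boils a voice sample down to a “voiceprint,” plus a synthesizer conditioned on it.  Every module name and size below is an illustrative assumption, not any actual product’s code:

```python
# A highly simplified sketch of speaker-conditioned speech synthesis
# ("voice cloning"); real systems are far larger and more sophisticated.
import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    """Turns a short voice sample (as a mel spectrogram) into a fixed-size 'voiceprint'."""
    def __init__(self, n_mels=80, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)
    def forward(self, mel):                 # mel: [batch, frames, n_mels]
        _, h = self.rnn(mel)
        return torch.nn.functional.normalize(h[-1], dim=-1)  # [batch, emb_dim]

class Synthesizer(nn.Module):
    """Generates a mel spectrogram from text, conditioned on the voiceprint."""
    def __init__(self, vocab_size=80, emb_dim=128, n_mels=80):
        super().__init__()
        self.text_emb = nn.Embedding(vocab_size, emb_dim)
        self.decoder = nn.GRU(emb_dim * 2, 256, batch_first=True)
        self.to_mel = nn.Linear(256, n_mels)
    def forward(self, text_ids, speaker_emb):  # text_ids: [batch, characters]
        t = self.text_emb(text_ids)
        s = speaker_emb.unsqueeze(1).expand(-1, t.size(1), -1)
        out, _ = self.decoder(torch.cat([t, s], dim=-1))
        return self.to_mel(out)                 # mel frames to feed a vocoder

# Usage: embed a voice sample, then synthesize arbitrary text "in" that voice.
enc, synth = SpeakerEncoder(), Synthesizer()
voice_sample = torch.randn(1, 200, 80)   # stand-in for a real recording
text = torch.randint(0, 80, (1, 40))     # stand-in for encoded text
mel = synth(text, enc(voice_sample))     # a vocoder would turn this into audio
```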

Credit: Deepfake Challenge
The technology has already been used to trick an executive into sending money into an illicit bank account, by deepfaking his boss’s voice.  “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” a company spokesperson told The Washington Post.

One has to assume that Siri or Alexa would fall for such deepfaked voices as well. 

Audio deepfakes are scary enough, but video takes it to another level.  As the saying goes, seeing is believing.  A cybercrime expert told The Wall Street Journal: “Imagine a video call with [a CEO’s] voice, the facial expressions you’re familiar with. Then you wouldn’t have any doubts at all.” 

As is often the case, the porn industry is an early adopter of the new technology.  Last month MIT Technology Review reported on a site that allows someone to upload a picture of a face, and see that face morphed into an adult video.  The impacts on innocent victims are horrifying. 

That particular site (which Technology Review now says is no longer available) was not the first such porn site to use the technology, probably didn’t have the most realistic deepfakes, and won’t be the last.  Sadly, though, deepfake porn is far from the biggest problem we’re likely to have with the technology.

We’re going to see mainstream actors in movies that they never filmed.  We’re going to see dead actors in new movies.  We’re going to see deepfaked business executives saying all sorts of ridiculous things (Mark Zuckerberg may already be a deepfake).  We’re going to see politicians saying things that make their opponents look good. 

Martin Ford, writing in MarketWatch, warns:

A sufficiently credible deepfake could quite literally shift the arc of history—and the means to create such fabrications might soon be in the hands of political operatives, foreign governments or just mischievous teenagers.

Hany Farid, a UC Berkeley professor, told NPR: "Now you have the perfect storm.  I can create this content easily, inexpensively and quickly, I can deliver it en masse to the world, and I have a very willing and eager public that will amplify that for me."

Nina Schick.  Credit: 60 Minutes
Similarly, technology consultant Nina Schick, who has written a book on deepfakes, told 60 Minutes: “the fact that AI can now be used to make images and video that are fake, that look hyper realistic. I thought, well, from a disinformation perspective, this is a game-changer.”

Imagine what the COVID misinformation crew could do with a deepfake Dr. Fauci.

He has been, in many ways, the face of modern medicine and science during the pandemic.  There are countless hours of video/audio of him over the last eighteen months.  He’s usually been right, sometimes been wrong, but has done his best to follow the science.  COVID-19 skeptics/deniers constantly parse his words looking for inconsistencies, for times when he was wrong, for any opportunity to challenge his expertise.

With deepfakes, we could have him telling people not to bother with masks or even vaccines.  His deepfake could tout unproven and even unsafe remedies, and denounce the FDA, the CDC, even President Biden.  Heck, they could have President Biden attacking Dr. Fauci and praising Donald Trump (conversely, of course, a deepfake Trump could urge vaccine mandates). 

We struggle now to find the best health information, about COVID and anything else that worries us about our health.  We look for credible sources, we look for reputable people’s opinions, and we use that information to make our health decisions.  But, as Ms. Schick said on 60 Minutes, deepfakes are “going to require all of us to figure out how to maneuver in a world where seeing is not always believing.”       

That will not be easy.

We’re just starting to realize how deepfakes may impact healthcare.  In a recent Nature article, Chen et al. warned:

...in healthcare, the proliferation of deepfakes is a blind spot; current measures to preserve patient privacy, authentication and security are insufficient. For instance, algorithms for the generation of deepfakes can also be used to potentially impersonate patients and to exploit PHI, to falsely bill health insurers relying on imaging data for the approval of insurance claims and to manipulate images sent from the hospital to an insurance provider so as to trigger a request for reimbursement for a more expensive procedure.

The authors believe that there is a role for synthetic data in healthcare, but say: “it is urgent to develop and refine regulatory frameworks involving synthetic data and the monitoring of their impact in society.”

So it is generally.  The technology for detecting deepfakes is improving but, of course, so is the technology for creating them.  It’s an arms race, like everything with cybersecurity.  As Ms. Schick pointed out on 60 Minutes, “The technology itself is neutral.”  How it is used is not.

She also believes, though: “It is without a doubt one of the most important revolutions in the future of human communication and perception. I would say it's analogous to the birth of the internet.”

I’m not sure I’d go that far. 

Doctored audio/video have been with us for pretty much all of the time we’ve had audio/video; deepfake technology just takes it to a new, and more convincing, level.  We still haven’t figured out how to use the internet responsibly, and, if they do nothing more, deepfakes remind us that we’d better do so soon. 
