Playing Whack-a-Mole With Deepfakes

Tech advancements plus ill intent are creating a deepfake nightmare. Can the legal system keep up?

When I was 15, I spent an all-night house party holed up in my friend’s dad’s office room, drunkenly kissing a boy named Tom.

The next day, Tom sent an email to all of our friends: “I f***d Emma while she was asleep,” he wrote. I remember my confusion. Was this a joke? If it was, would our friends realize that? And if it wasn’t, then what was it? A threat? A confession? An audacious brag?

In 2001, that kind of reputational damage was close to the worst thing a teenage boy could do online to a girl his age. But my eldest daughter, who turns 5 this year, will come of age in an online world in which the power dynamics are far more pernicious.

Already, “nudification” apps, which allow users to create deepfake pornography of unsuspecting victims, are invading schools. Governments are racing to introduce legislation to prevent their use without the subjects’ consent, but the sheer ubiquity of these apps means that cracking down on the deepfakes they produce is quickly becoming an impossible game of whack-a-mole.

The use of nudification apps is growing: A 2019 study by the cybersecurity company Deeptrace found that 96 percent of online deepfake content was nonconsensual pornography. Research by the intelligence company Sensity counted 85,000 harmful deepfake videos online in December 2020, a number the company says had doubled every six months over the preceding two years. In January, X (née Twitter) was forced to suspend searches for Taylor Swift after deepfake pornographic videos of her went viral.

Schools are trying to figure out how to respond to the epidemic.