Life in a nightmarish dystopia isn’t all bad. For Winston Smith, the tragic everyman of George Orwell’s 1984, the “greatest pleasure in life” is his work at the Ministry of Truth, where lies are manufactured and truths tossed down memory holes. To help “correct” the historical record, Winston takes pride in inventing fake statistics, fake events, and fake people, even revising newspapers to include phantoms like Comrade Ogilvy, a made-up war hero. In essence, the Ministry of Truth is a fake news factory.

In the real world of 2020, we’re witnessing an inversion of the top-down fakery of 1984. High-quality fakes are bubbling up as AI is increasingly democratized. Meanwhile, members of “the state,” far from carrying out the efficient machinations imagined by Orwell, are themselves flummoxed by, and often the targets of, malicious fakes (like the “Drunk Pelosi” video). In a recent essay about the enduring relevance of 1984, George Packer writes, “The Ministry of Truth is Facebook, Google, and cable news. We have met Big Brother and he is us.” Any one of us can be a propagandist. What’s more, the powers that be lack an effective regulatory mechanism for dealing with the next phase of the disinformation age, when indistinguishable fakes will flood the internet. While countless commentators have viewed 1984 as a black cauldron simmering with horrors to avoid, we may need to salvage the idea of a Ministry of Truth in order to preserve what’s left of our shared reality.

Consider that our forgeries are already much better and far stranger than those of Big Brother. Right now, you can experience the pleasure of a well-done fake with only a click or a tap, no Winston Smith required. Visit the website This Person Does Not Exist, and refresh as many fictitious comrades as your heart desires. When the digital ghosts get too weird, you can marvel at a deepfake face swap: Behold, if you dare, Steve Buscemi as Jennifer Lawrence. Or you can fake yourself with FaceApp.

Fakery isn’t always harmless. More than 90 percent of deepfakes are pornographic, and much of that content is “revenge porn.” Criminals have deployed deepfaked audio to impersonate CEOs. Synthetic content, experts warn, could be used to influence elections, sway financial markets, or trigger wars. Back in June 2019, House Democrat Adam Schiff, a man permanently on the cusp of letting out a long, tired sigh, led a congressional hearing on deepfakes. All told, it was a gloomy affair.

That day, representatives learned that a “high school kid with a good graphics card can make this stuff.” That the creators of malicious deepfakes (the bad guys) and those working to identify and intercept fake content (the good guys) are locked in an unending arms race. Hany Farid, an expert in digital forensics at UC Berkeley, has said, “We are outgunned … The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1.” Finally, representatives learned of the tipping point of indistinguishability: In a few years, it will be impossible for the naked eye to distinguish a real video from a deepfake. The prospects are harrowing: perfect fakes, creatable by anyone, unleashed at scale and difficult to discern. It’s no wonder that during the hearing Washington representative Denny Heck repeatedly quoted from Dante’s Inferno: “Abandon hope all ye who enter here.”

Putting aside such brash pessimism, what can be done? The platforms on which fake content appears (Facebook, Instagram, YouTube, Twitter) have taken some steps to combat disinformation. In a recent blog post, Facebook pledged to “remove misleading manipulated media” if “it is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.” This is important, but detection is a serious challenge, and few trust Big Tech to fully self-regulate. Sure, there’s a profusion of laws around identity theft and defamation that might dissuade creators of harmful fakes, but it’s unclear who will enforce them or how.

As law professors Danielle Citron and Robert Chesney describe in their paper “Deepfakes: A Looming Challenge for Privacy, Democracy, and National Security,” three federal agencies (the FCC, the FEC, and the FTC) could in theory regulate the dissemination of fake content, but “on close inspection, their potential roles appear quite limited.” The FCC’s jurisdiction is limited to radio and television. The FEC is concerned only with the electoral process. The FTC oversees “fake advertising,” but deepfakes aren’t typically hawking products or services.
