Last month, the British television network Channel 4 broadcast an “alternative Christmas address” by Queen Elizabeth II, in which the 94-year-old monarch was shown cracking jokes and performing a dance popular on TikTok. Of course, it wasn’t real: The video was produced as a warning about deepfakes—apparently real images or videos that show people doing or saying things they never did or said. If an image of a person can be found, new technologies using artificial intelligence and machine learning now make it possible to show that person doing almost anything at all. The dangers of the technology are clear: A high-school teacher could be shown in a compromising situation with a student, or a neighbor could be depicted as a terrorist.
Can deepfakes, as such, be prohibited under American law? Almost certainly not. In U.S. v. Alvarez, decided in 2012, a badly divided Supreme Court held that the First Amendment prohibits the government from regulating speech simply because it is a lie. The case concerned a man who falsely claimed that he was a recipient of the Medal of Honor. Lying about receiving that medal is a crime under the 2006 Stolen Valor Act, but the Supreme Court struck down the key provision of that law, ruling that the lie was protected by the First Amendment. The plurality opinion declared that “permitting the government to decree this speech to be a criminal offense…would endorse government authority to compile a list of subjects about which false statements are punishable. That governmental power has no clear limiting principle…. Were this law to be sustained, there could be an endless list of subjects the National Government or the States could single out.”
Under existing law, a key question is whether deepfakes cause sufficient harm. If they are libelous, they could be regulated under current legal standards, which allow plaintiffs to recover damages when, for example, a speaker hurts their reputation by spreading what the speaker knows to be a lie.
But deepfakes need not be libelous. They might be positive, making people look impressive or wonderful. A deepfake might be used, say, to advertise an energy pill by showing an 80-year-old man taking the pill and then dunking a basketball like LeBron James. And if a deepfake shows a politician doing something amazing or heroic, there’s no libel.
Does the government have the right to ban these kinds of deepfakes? According to the plurality opinion in Alvarez, “The remedy for speech that is false is speech that is true. This is the ordinary course in a free society.” By this standard, the best response to many deepfakes is a smile and a laugh—or counter-speech and disclosure—rather than censorship. Social media platforms are not bound by the First Amendment, but in cases in which people could be misled, such platforms might want to label deepfakes as such rather than take them down, at least as a matter of course. Twitter has voluntarily adopted an approach of this kind, potentially adding labels to manipulated media but taking down tweets that contain them only if they are “likely to cause harm.”