Nonconsensual sexually explicit deepfakes of Taylor Swift went viral on X on Wednesday, amassing over 27 million views and more than 260,000 likes in 19 hours before the account that posted the images was suspended.

Deepfakes portraying Swift nude and in sexual scenarios continue to proliferate on X, including reposts of the viral images. Such images can be generated with AI tools that produce entirely new, fake images, or created by taking a real photo and “undressing” it with AI tools.

The origin of the images isn’t clear, but a watermark on them indicates that they came from a years-old website known for publishing fake nude images of celebrities. The site has a section titled “AI deepfake.”

Reality Defender, an AI-detection software company, scanned the images and said there was a high likelihood they were created with AI technology.

The mass proliferation of the images for nearly a day shines a spotlight on the increasingly alarming spread of AI-generated content and misinformation online. Despite the escalation of the issue in recent months, tech platforms like X, which have developed their own generative-AI products, have yet to deploy or discuss tools to detect generative-AI content that goes against their guidelines.

The most viewed and shared deepfakes of Swift portrayed her nude in a football stadium. Swift has faced months of misogynistic attacks for supporting her partner, Kansas City Chiefs player Travis Kelce, by attending NFL games. Swift acknowledged the backlash during an interview with Time, saying, “I have no awareness of if I’m being shown too much and pissing off a few dads, Brads, and Chads.”

X didn’t immediately respond to a request for comment. A representative for Swift declined to comment on the record.

X has banned manipulated media that could cause harm to specific people, but it has repeatedly been slow to address sexually explicit deepfakes on the platform, or failed to do so at all. In early January, a 17-year-old Marvel star spoke out about finding sexually explicit deepfakes of herself on X and being unable to get them removed. As of Thursday, NBC News was still able to find such content on X. And in June 2023, an NBC News review found nonconsensual sexually explicit deepfakes of TikTok stars circulating on the platform; after X was contacted for comment, only some of the material was removed.

According to some of Swift’s fans, neither Swift nor X was responsible for taking down the most prominent images of the artist; the removals were the result of a mass-reporting campaign.

After “Taylor Swift AI” trended on X, Swift’s fans began flooding the hashtag with positive posts about her, according to an analysis by Blackbird.AI, a firm that uses AI technology to protect organizations from narrative-driven online attacks. “Protect Taylor Swift” also began trending on Thursday.

One of the people who took credit for the reporting campaign shared with NBC News two screenshots of notifications she received from X, showing that her reports had resulted in the suspension of two accounts that shared Swift deepfakes for breaking X’s “abusive behavior” rule.

The woman who shared the screenshots, who communicated via direct message on the condition of anonymity, said she has been increasingly disturbed by the recent effects of AI deepfake technology on everyday women and girls.

“They don’t take our suffering seriously, so now it’s in our hands to mass report these people and get them suspended,” the woman who reported the Swift deepfakes wrote in a direct message.

In the U.S., dozens of high-school-age girls have reported being victimized by deepfakes. There is currently no federal U.S. law governing the creation and spread of nonconsensual sexually explicit deepfakes.

Rep. Joe Morelle, D-N.Y., who introduced a bill in May 2023 that would criminalize nonconsensual sexually explicit deepfakes at the federal level, posted on X about the Swift deepfakes, writing, “Yet another example of the destruction deepfakes cause.” The bill has not moved forward since its introduction, despite a prominent teen deepfake victim rallying behind it in early January.

Carrie Goldberg, a lawyer who has represented victims of deepfakes and other forms of nonconsensual sexually explicit material for more than a decade, said that even tech companies and platforms with rules against deepfakes fail to prevent such material from being posted and spreading rapidly through their services.

“Most human beings don’t have millions of fans who will go to bat for them if they’ve been victimized,” Goldberg said. “Even those platforms that do have deepfake policies, they’re not great at enforcing them, or especially if content has spread very quickly, it becomes the typical whack-a-mole scenario.”

“Just as technology is creating the problem, it’s also the obvious solution,” she continued. “AI on these platforms can identify these images and remove them. If there’s a single image that’s proliferating, that image can be watermarked and identified as well. So there’s no excuse.”

This article originally appeared on NBCNews.com.
