Fake, artificial-intelligence-generated nude and sexually suggestive photos of Taylor Swift continue to circulate on social media platforms days after they first spread on Elon Musk’s X, and they have now made their way to Instagram and Facebook.
Basic keyword searches of those platforms on Monday quickly turned up the fake images of the singer, despite some efforts to limit their spread. On Saturday, X stopped Swift’s name from working in its search function, though alternate phrasings still turned up the pictures. As of Tuesday morning, however, X had unblocked searches of Swift’s name.
The continued proliferation of sexually explicit deepfakes of Swift on leading social media platforms underscores the challenge of stopping the spread of fake images once they’re disseminated on the internet — it’s difficult, if not impossible, to contain them. Similar material can be created and posted almost immediately, even if the original content has been taken down.
A representative for Swift didn’t immediately respond to a request for comment. Neither did X, the company previously known as Twitter.
Meta, which owns both Facebook and Instagram, said in a statement: “No one should ever have to experience online abuse like this.” On Monday, Meta removed eight Instagram posts and one Facebook post containing sexually suggestive and sexually explicit deepfake images of Swift that were flagged by NBC News. The platform left up an AI-generated Facebook post that depicted a pregnant Swift, as well as one that depicted Swift kissing the head coach of the Kansas City Chiefs.
“We strongly condemn the content that has appeared across different internet services, and we worked quickly to remove it from ours,” Meta said in the statement. “We continue to monitor our platforms for this violating content and will take appropriate action as needed.”
A handful of fake images of Swift first began to spread after an unidentified X user published them on Wednesday. That post was viewed more than 27 million times before Swift’s fans mass-reported the account, resulting in its suspension.
On Saturday, X temporarily blocked searches for “Taylor Swift” and “Taylor Swift AI.” Searching those terms resulted in a loading page that never returned results. But terms like “T Swift,” “Taylor Swift deepfake” and “Taylor Swift nude” still returned search results that included those sexually explicit deepfakes of Swift, and other fake images such as one that portrayed her as a Nazi.
The images first posted on X last week have now spread to other platforms. A search for “Taylor Swift AI” on Instagram on Monday surfaced sexually suggestive and explicit deepfakes of Swift, alongside posts discussing the viral images, within the top 100 results. On Facebook, a search for “Taylor Swift AI” surfaced AI-generated images of Swift, including the sexually suggestive deepfake images as well as others that were not part of the initial viral incident.
The rapid evolution of AI technology has made once-arduous media manipulation readily available to the public, with some smartphone apps now capable of near-seamless face swaps. Widely available generative AI tools from major tech companies can create entirely new, lifelike images from text prompts. Swift is just one of many prominent victims of this technology.
Image-based sexual abuse targeting celebrities has appeared in top Google and Bing search results, as well as on websites that monetize the content. In recent months, teenage girls have spoken out about being victimized with sexually explicit deepfakes at school. One teen victim has pushed for what would be the first federal law to criminalize nonconsensual sexually explicit deepfakes.
But the fake Swift images have spread farther and sparked more outrage than any other deepfakes to date. Some U.S. politicians have reacted vigorously, and White House press secretary Karine Jean-Pierre said Congress should take action.
Source: NBCNews.com