Researchers have slammed US officials for not rolling out stricter AI rules before popstar Taylor Swift became a victim of deepfakes.

Images showing the Grammy-winning singer in a series of sexual acts while dressed in Kansas City Chiefs memorabilia and in the stadium were viewed 47 million times online before being removed.

A professor at George Washington University Law School said that if proper legislation had been ‘passed years ago’, Swift and others would not have experienced such abuse.

‘We are too little, too late at this point,’ said Mary Anne Franks.

‘It’s not just going to be the 14-year-old girl or Taylor Swift. It’s going to be politicians. It’s going to be world leaders. It’s going to be elections.’ 

Nonconsensual, sexually explicit deepfake images of Taylor Swift circulated on social media and were viewed 47 million times before they were taken down

A group of teenage girls were targeted by deepfake images at a New Jersey high school when their male classmates started sharing fake nude photos of them in group chats.

On October 20, one of the boys in the group chat reportedly spoke about it to one of his classmates, who brought it to school administrators.

‘My daughter texted me, “Mom, naked pictures of me are being distributed.” That’s it. Heading to the principal’s office,’ one mom told CBS News.

She added that her daughter, who is 14, ‘started crying, and then she was walking in the hallways and seeing other girls of Westfield High School crying.’

But it wasn’t until deepfake photos of Taylor Swift went viral that lawmakers pushed to take action.

X shut down the account that originally posted the graphic deepfake images of Swift for violating platform policy, but it was too late – the images had already been reposted 24,000 times.

A 404 Media report revealed the images might have originated in a group on Telegram, where users reportedly joked about how the images of Swift went viral.

X said its teams were taking ‘appropriate action’ against the accounts that posted the deepfakes, and that it was monitoring the situation and removing the images.

Last week, U.S. senators introduced the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act) shortly after Swift became a victim of the technology.

‘Although the imagery may be fake, the harm to the victims from the distribution of sexually explicit “deepfakes” is very real,’ Senate Majority Whip Dick Durbin (D-Illinois) said last week.

‘Victims have lost their jobs, and they may suffer ongoing depression or anxiety.

‘By introducing this legislation, we’re giving power back to the victims, cracking down on the distribution of “deepfake” images, and holding those responsible for the images accountable.’

Lawmakers proposed the DEFIANCE Act, which would allow people to sue those who created deepfake content of them

Lawmakers introduced the Preventing Deepfakes of Intimate Images Act last year – which would make it illegal to share nonconsensual deepfake pornography – but it has not been passed.

‘If there had been legislation passed years ago when advocates were saying this is what’s bound to happen with this kind of technology, we might not be in this position,’ Franks, a professor at George Washington University Law School and president of the Cyber Civil Rights Initiative, told Scientific American.

Franks said lawmakers are doing too little too late.

‘We can still try to mitigate the disaster that’s emerging,’ she explained.

Women are ‘canaries in the coal mine,’ Franks said, referring to how AI-generated abuse disproportionately affects women.

She added that eventually, ‘it’s not just going to be the 14-year-old girl or Taylor Swift. It’s going to be politicians. It’s going to be world leaders. It’s going to be elections.’

A 2023 study found that in the last five years, there has been a 550 percent rise in the creation of doctored images, with 95,820 deepfake videos posted online last year alone.

In a DailyMail.com/TIPP poll, 75 percent of respondents agreed that people who share deepfake pornographic images online should face criminal charges.

Deepfake technology uses AI to manipulate a person’s face or body, and there are currently no federal laws in place to protect people against the creation or sharing of such images.

Representative Joseph Morelle (D-New York), who introduced the Preventing Deepfakes of Intimate Images Act, called on other lawmakers to step up and take urgent action against the rise in deepfake images and videos.

Images and videos ‘can cause irrevocable emotional, financial, and reputational harm,’ Morelle said, adding: ‘And unfortunately, women are disproportionately impacted.’

75 percent of people agree that people who share deepfake pornographic images online should face criminal charges

Yet for all their talk, there are still no set guardrails to protect Americans from falling victim to nonconsensual deepfake images or videos.

‘It is clear that AI technology is advancing faster than the necessary guardrails,’ said Congressman Tom Kean, Jr., who proposed the AI Labeling Act in November of last year.

The Act would require AI companies to add labels to any content generated by AI and force them to take responsible steps to prevent the publication of non-consensual content.

‘Whether the victim is Taylor Swift or any young person across our country – we need to establish safeguards to combat this alarming trend,’ said Kean.

However, there is one major hiccup in all the legislative hoopla: whom to charge with a crime once a law criminalizing deepfakes is passed.

It is highly unlikely that the person at fault will step forward and identify themselves, and forensic analysis cannot always identify and prove which software created the content, according to Amir Ghavi, lead counsel on AI at the law firm Fried Frank.

And even if law enforcement could determine where the content originated, they might be prohibited from taking action by Section 230 of the Communications Decency Act, which says websites aren’t responsible for what users post.

Regardless, the potential barriers aren’t slowing down politicians in the wake of Swift’s run-in with sexually explicit deepfake content.

‘Nobody—neither celebrities nor ordinary Americans—should ever have to find themselves featured in AI pornography,’ said Senator Josh Hawley (R-Missouri).

Speaking about the DEFIANCE Act, he said: ‘Innocent people have a right to defend their reputations and hold perpetrators accountable in court. This bill will make that a reality.’

This post first appeared on Dailymail.co.uk
