Microsoft CEO Satya Nadella said Friday that the company has to “move fast” on combating nonconsensual sexually explicit deepfake images, after AI-generated fake nude pictures of Taylor Swift went viral this week.

In an exclusive interview with NBC News’ Lester Holt, Nadella commented on the “alarming and terrible” deepfake images of Swift posted on X that by Thursday had been viewed more than 27 million times. The account that posted them was suspended after it was mass-reported by fans of Swift.

“Yes, we have to act,” Nadella said in response to a question about the deepfakes of Swift. “I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this.”


X didn’t respond to an NBC News request for comment about the deepfake images of Swift, while the singer’s representative declined to comment on the record.

Microsoft has invested in and created artificial intelligence technology of its own. It is a primary investor in OpenAI, one of the leading AI organizations and the creator of ChatGPT, and it has integrated AI tools into its own products, including Copilot, an AI chatbot on Microsoft’s search engine, Bing.

“I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced,” Nadella said. “And there’s a lot to be done and a lot being done there.”

“But it is about global, societal, you know, I’ll say convergence on certain norms,” he continued. “Especially when you have law and law enforcement and tech platforms that can come together, I think we can govern a lot more than we give ourselves credit for.”

404 Media reported that the deepfake images of Swift that went viral on X were traced back to a Telegram group chat, where members said they used Microsoft’s generative-AI tool, Designer, to make such material. NBC News has not independently verified that reporting. Nadella didn’t comment directly on 404 Media’s report, but in a statement to 404 Media, Microsoft said it was investigating the reports and would take appropriate action to address them.

“Our Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service,” Microsoft said in its statement to 404 Media. “We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users.” 
