SAN FRANCISCO — Meta said Wednesday it would begin forcing political advertisers to disclose when they use altered or digitally created media, such as a deepfake video of a candidate, as the tech industry braces for a wave of video, images and audio made with artificial intelligence ahead of the 2024 election. 

Meta, which owns Facebook and Instagram, said in a blog post that it would require advertisers to disclose during the ad-buying process “whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered.” 

Nick Clegg, Meta’s president for global affairs, said in a statement that the policy would go into effect worldwide early next year — just in time for the 2024 presidential primaries and caucuses. 

The social media company said in the blog post, “If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser.” Meta also said it would put a label on such ads. 

The policy stops short of banning altered media altogether, in effect conceding that AI-generated media is here to stay. In April, the Republican National Committee used AI to create a 30-second ad imagining a dystopian second term for President Joe Biden. In March, critics of former President Donald Trump circulated fake AI-generated images of Trump being arrested.

But if advertisers use synthesized media, they will need to disclose it to ensure people are not misled, Meta said. 

The new policy echoes a similar move by Google, which announced in September that it would require advertisers to disclose “synthetic” media. Google and Meta are the two largest internet ad companies by total sales, so their decisions can become de facto standards online.

Meta has been embroiled in fights over altered videos for years. In 2019, Facebook refused to take down doctored videos of then-House Speaker Nancy Pelosi, D-Calif., prompting Pelosi to accuse the California company of lying to the public. The platform changed its policies the next year to ban or label certain posts with manipulated media. 

But advances in generative AI in the past year have led to more realistic fakes created with far less effort than a few years ago, posing a challenge to online platforms, candidates and voters. 

Meta still bans manipulated media in some cases laid out in its rulebook for users.

It said the new advertising policy will apply to situations such as depicting “a real person as saying or doing something they did not say or do” — a form of advanced video or audio editing known as deepfake technology, used recently to impersonate well-known figures such as Tom Hanks.

The policy will also apply to ads that “depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened,” and to ads that “depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.” 

The disclosure requirement will not apply if the digital editing is “inconsequential or immaterial” to the issues raised in an ad, Meta said. 
