The White House has struck a deal with major AI developers—including Amazon, Google, Meta, Microsoft, and OpenAI—that commits them to take action to prevent harmful AI models from being released into the world.

Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to carry out internal tests and permit external testing of new AI models before they are publicly released. The tests will look for problems including biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also participated in the agreement.

“Companies have a duty to ensure that their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” White House special adviser for AI Ben Buchanan told reporters in a briefing yesterday. The risks that companies were asked to look out for include privacy violations and even potential contributions to biological threats. The companies also committed to publicly reporting the limitations of their systems and the security and societal risks they could pose.

The agreement also says the companies will develop watermarking systems that make it easy for people to identify audio and imagery generated by AI. OpenAI already adds watermarks to images produced by its DALL-E image generator, and Google has said it is developing similar technology for AI-generated imagery. Helping people discern what’s real and what’s fake is a growing issue as political campaigns appear to be turning to generative AI ahead of US elections in 2024.
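To make the watermarking idea concrete, here is a minimal sketch of one naive approach: hiding a short identifier in an image’s least-significant bits. This is purely illustrative, not the method OpenAI or Google use, and the embed_watermark and read_watermark helpers and the TAG string are hypothetical names for this example.

```python
# Illustrative only: a toy least-significant-bit (LSB) watermark. Real
# provenance systems rely on far more robust embedding or signed metadata.
from PIL import Image
import numpy as np

TAG = "AI-GENERATED"  # hypothetical marker string for this sketch

def embed_watermark(path_in: str, path_out: str, tag: str = TAG) -> None:
    """Hide a short tag in the least significant bits of the red channel."""
    img = np.array(Image.open(path_in).convert("RGB"))
    bits = [int(b) for byte in tag.encode() for b in f"{byte:08b}"]
    flat = img[..., 0].flatten()
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # overwrite LSBs
    img[..., 0] = flat.reshape(img.shape[:2])
    Image.fromarray(img).save(path_out, format="PNG")  # lossless format

def read_watermark(path: str, length: int = len(TAG)) -> str:
    """Recover the tag by reading the same LSB positions back."""
    img = np.array(Image.open(path).convert("RGB"))
    bits = img[..., 0].flatten()[: length * 8] & 1
    return bytes(
        int("".join(map(str, bits[i : i + 8])), 2) for i in range(0, len(bits), 8)
    ).decode(errors="replace")
```

A scheme this simple is easily destroyed by compression or resizing; production systems aim for watermarks that survive such transformations or attach cryptographically signed provenance metadata instead.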

Recent advances in generative AI systems that can create text or imagery have triggered a renewed AI arms race among companies adapting the technology for tasks like web search and writing recommendation letters. But the new algorithms have also revived concerns about AI reinforcing oppressive social systems like sexism or racism, boosting election disinformation, or becoming a tool for cybercrime. As a result, regulators and lawmakers in many parts of the world, including Washington, DC, have stepped up calls for new regulation, including requirements to assess AI before deployment.

It’s unclear how much the agreement will change how major AI companies operate. Already, growing awareness of the potential downsides of the technology has made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publicizes some information, like the intended use cases and ethical considerations for certain AI models. Meta and OpenAI sometimes invite external experts to try to break their models, an approach dubbed red-teaming.

“Guided by the enduring principles of safety, security, and trust, the voluntary commitments address the risks presented by advanced AI models and promote the adoption of specific practices—such as red-team testing and the publication of transparency reports—that will propel the whole ecosystem forward,” Microsoft president Brad Smith said in a blog post.

The societal risks the agreement commits companies to watch for do not include the carbon footprint of training AI models, a concern now commonly cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of high-powered computer processors running for extended periods.
