Seven leading artificial intelligence companies have agreed to a handful of industry best practices, a first step toward more meaningful regulation, the White House announced Thursday.
The companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — have agreed to principles that include security safeguards, transparency with the public and internal testing of their products before releasing them publicly.
In a call with reporters Wednesday evening previewing the announcement, a White House official, who asked not to be named under the terms of the call, said that President Joe Biden will eventually sign an executive order regulating AI more strongly, though officials are still working out the details.
“The White House is actively developing executive action to govern the use of AI for the president’s consideration,” the official said. “This is a high priority for the president.”
Top executives from the companies involved will also meet with White House officials on Friday, including Microsoft President Brad Smith, Google and Alphabet President of Global Affairs Kent Walker, Amazon Web Services CEO Adam Selipsky, Meta President of Global Affairs Nick Clegg, OpenAI President Greg Brockman, Anthropic CEO Dario Amodei and Inflection AI CEO Mustafa Suleyman.
The seven companies have each agreed to hire independent experts to probe their systems for vulnerabilities and to share information they discover with each other, governments and researchers, the official said. They also agreed to develop so-called “watermarking” mechanisms to help users identify when content they see or hear is generated by AI.
The commitments are voluntary and not legally binding.
Gary Marcus, a leading artificial intelligence critic, said that the commitments were important but noted that they don’t compel AI companies to say what data they’re using to train their models.
“I think it’s a great first step and it doesn’t go far enough,” Marcus said. “First of all, because it’s voluntary. Secondly, one of the most important things we need here is what data are being used to train the models, and that’s not part of this.”
“Red teaming is great. Having the companies share information is terrific. An agreement about watermarking is terrific. These are all good steps. But until we have real transparency around data we’re not done,” he added.
Generative AI systems like OpenAI’s ChatGPT became a sensation late last year, prompting a swarm of new users and leaving the U.S. government scrambling to find an appropriate role. In May, OpenAI CEO Sam Altman courted members of Congress and asked them to regulate the industry. In June, Biden flew to San Francisco to meet with leaders from some of the Silicon Valley companies leading AI in the U.S.
The White House official said that the emphasis of the agreement is on creating transparency that will allow experts to better scrutinize AI systems.
“What the companies are committing to is independent analysis by domain experts, and setting up a broader, multifaceted regime to ensure that that analysis is credible and trustworthy,” the official said. “And they’re also committing, more generally, to engaging with academia and civil society and the U.S. and other governments on establishing best practices for safeguarding these systems and then adhering to those practices.”