Team behind ChatGPT say equivalent of atomic watchdog is needed to guard against risks of ‘superintelligent’ AIs

The leaders of the ChatGPT developer OpenAI have called for the regulation of “superintelligent” AIs, arguing that an equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.

In a short note published to the company’s website, co-founders Greg Brockman and Ilya Sutskever and the chief executive, Sam Altman, call for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose.
