Team behind ChatGPT say equivalent of atomic watchdog is needed to guard against risks of ‘superintelligent’ AIs

The leaders of the ChatGPT developer OpenAI have called for the regulation of “superintelligent” AIs, arguing that an equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something with the power to destroy it.

In a short note published to the company’s website, co-founders Greg Brockman and Ilya Sutskever and the chief executive, Sam Altman, call for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose.
