ARTIFICIAL intelligence systems could lead to the extinction of humanity, leading experts in the industry are now warning.
The warning was issued by hundreds of AI experts, academics and other notable figures including Sam Altman, the CEO of OpenAI, which launched the popular chatbot program ChatGPT.
It comes as a number of experts, policymakers, and others have grown increasingly concerned about the capabilities of AI systems in recent months.
The experts detailed that they hope to open a discussion with their cautionary statement, saying it is meant to “create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”
The warning reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
In addition to Altman, dozens of executives and research figures at other high-profile technology and AI organizations such as Google, Google DeepMind, and TED signed the statement.
Hundreds of professors from universities around the globe, including Rutgers University, New York University, Stanford University, Cambridge University, and the Korea Advanced Institute of Science and Technology, also added their signatures.
Even Grimes, a musician and artist who was previously in a relationship with Twitter owner and Tesla CEO Elon Musk, supported the warning.
The statement comes months after Musk, along with over 1,000 other AI experts, signed an open letter urging a pause on creating new AI systems "more powerful" than ones like ChatGPT.
There is an ongoing debate within the technology and artificial intelligence community on where the line should be drawn with future development.
Some top figures, like Microsoft co-founder Bill Gates, have pushed for more development and research.
Gates told Reuters in April that he doesn’t think asking “one particular group to pause solves challenges.”
“Clearly there’s huge benefits to these things… what we need to do is identify the tricky areas,” he said.
At the same time, some experts have dismissed concerns about AI causing the extinction of humanity as unrealistic.
“Current AI is nowhere near capable enough for these risks to materialize,” Princeton University computer scientist Arvind Narayanan told the BBC.
“As a result, it’s distracted attention away from the near-term harms of AI.”