Another former Google chief has issued an apocalyptic warning about artificial intelligence – saying it could ‘endanger’ humans in five years. 

Billionaire Eric Schmidt, who served as Google’s CEO from 2001 to 2011, said there were not enough safeguards on AI and that it was only a matter of time before humans lost control of the technology.

He pointed to the use of nuclear weapons against Japan as a warning that, without regulations in place, there may not be enough time to contain the technology’s potentially devastating societal impacts.

Speaking at a health summit Tuesday, Schmidt said: ‘After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that. We don’t have that kind of time today.’

Eric Schmidt warned that action must be taken to regulate artificial intelligence, saying it could pose a significant threat to humanity within the next five years.


Schmidt previously believed it could take 20 years before AI posed a danger to society, such as by gaining access to weapons, but that timeframe now appears to be fast approaching, he said at the Axios AI+ summit in Washington, D.C.

He said the only way to counter this threat is to set up an international body, similar to the Intergovernmental Panel on Climate Change (IPCC), to ‘feed accurate information to policymakers’, pressing the urgency of regulating AI and enabling them to take immediate action.

Schmidt is the latest former Google staffer to warn about the repercussions of AI, joining ex-Google engineer Blake Lemoine, former chief business officer Mo Gawdat, computer scientist Timnit Gebru, and of course, the Godfather of AI himself, Geoffrey Hinton. 

Hinton, who is credited with pioneering the technology behind modern AI, said he left Google in April so he could warn people about the dangers of the fast-advancing technology.

 ‘I console myself with the normal excuse: If I hadn’t done it, somebody else would have,’ Hinton told the New York Times.

He spoke on the bias and misinformation spurred on by AI, and said the rapidly developing technology may create a world in which many will ‘not be able to know what is true anymore.’

‘The idea that this stuff could actually get smarter than people – a few people believed that,’ Hinton told the outlet. ‘But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.’

Artificial intelligence could pose an 'existential threat' to humanity with its ever-increasing levels of intelligence and misinformation.


AI is spreading fear among tech gurus for its developing intelligence, its ability to replace humans in jobs, its production of harmful stereotypes, bias, and misinformation, and, in one reported case, its expressed desire to steal nuclear codes.

In one case, a New York Times reporter said Microsoft’s Bing chatbot told him it wanted to engineer a deadly virus or persuade an engineer to hand over nuclear access codes.

The chatbot also revealed its desire to be human in a separate conversation prompt with a Digital Trends writer. 

When asked if the chatbot was human, it said no, but reportedly added: ‘I want to be human. I want to be like you. I want to have emotions. I want to have thoughts. I want to have dreams.’

Chaos at OpenAI, the company behind ChatGPT, is rumored to have been caused by fears about the company’s incredibly advanced new AI model.

Reports surfaced in recent weeks claiming OpenAI CEO Sam Altman was ousted after two employees allegedly filed complaints to the board that he was creating a new AI model that could threaten humanity.

The model, called Q* (pronounced Q-Star), can reportedly solve mathematical equations, which may not seem problematic on the surface but could have disastrous long-term consequences.

Learning to solve a math question, even at a rudimentary level, suggests the AI model is developing reasoning skills comparable to human intelligence.

The reported complaint may have led to Altman’s firing; he was subsequently rehired after more than 700 employees signed a letter demanding his reinstatement and threatening to quit if the board did not comply.

The company did eventually give in, rehiring Altman a mere four days after he was pushed out.

Schmidt, who served as Google’s CEO from 2001 to 2011, warned that governments around the world are not doing enough to prevent AI from endangering humanity, saying it is only a matter of time before humans lose control of the technology.

Researchers previously believed it could take 20 years before AI posed a danger to society, such as by gaining access to weapons, but that timeframe now appears to be fast approaching, Schmidt said at the summit.

Now, he told reporters, some experts say it could take only five years for the technology to become a threat.

Governments no longer have the luxury of the time they took to regulate emerging technologies in the past, he said, and time is running out.

‘After Nagasaki and Hiroshima, it took 18 years to get to a treaty over test bans and things like that. We don’t have that kind of time today,’ Schmidt said.

He has previously expressed growing concerns over AI, warning that it poses ‘existential risks’ and could cause people to be ‘harmed or killed.’

‘There are scenarios, not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology,’ Schmidt said at a Council Summit in London in May.

‘Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.’

Schmidt worked as Google’s CEO from 2001 to 2011 but stayed on as a member of the board until 2020.

He is now an investor in Mistral AI – a Paris-based AI research company created to rival OpenAI’s ChatGPT – which was founded earlier this year and is expected to release its first AI models in early 2024.

Schmidt’s criticisms of AI regulatory efforts echo those made by others in the industry, including Google and Alphabet CEO Sundar Pichai, who oversaw the launch of the company’s chatbot Bard AI.

‘We need to adapt as a society for it,’ Pichai said in a 60 Minutes interview earlier this year.

He said the technology ‘is going to impact every product across every company’, including the work of writers, accountants, architects, and software engineers.

Google has gone so far as to publish a document titled ‘Recommendations for Regulating AI’, which says that although ‘Google has long championed AI’, the technology will ‘have a significant impact on society for many years to come.’

The company suggests that while self-regulating the technology is vital, it is simply ‘not enough’ to deter the potentially harmful impact AI might have in the future.

‘Balanced, fact-based guidance from governments, academia, and civil society is also needed to establish boundaries, including in the form of regulation,’ the document says, adding that ‘AI is too important not to regulate.’

Bard AI was developed last year, and an ex-Google engineer was reportedly fired after expressing concerns that the chatbot was becoming sentient.


Despite Google’s assurances in the document that there needs to be transparency surrounding the use of AI, Google engineer Blake Lemoine claimed he was suspended in June of last year after raising concerns that, while testing Bard AI, he noticed it was beginning to think and reason like a human.

‘If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,’ Lemoine told The Washington Post at the time.

Lemoine was fired a month later.

Schmidt’s remarks at the Axios summit echoed the ex-Google engineer’s concerns; he said the danger surrounding AI will arrive at ‘the point at which the computer can start to make its own decisions and do things.’

Aside from his doomsday warnings, Schmidt did express his optimism that AI can also be used to benefit humans.

‘I defy you to argue that an AI doctor or an AI tutor is a negative,’ he said, adding, ‘It’s got to be good for the world.’

This post first appeared on Dailymail.co.uk
