ELON Musk’s claims that artificial intelligence (AI) ‘will kill us all’ have ‘no proof – yet’, according to a former responsible AI programme manager at Google.

Toju Duke, who worked at Google for nearly a decade, told The Sun: “I’ve not seen any proof with the AI we’re dealing with today.”

Toju Duke, a former responsible AI programme manager at Google. Credit: Toju Duke / LinkedIn

Tesla and SpaceX founder Elon Musk. Credit: AFP

The eccentric billionaire has been a staunch critic of AI, and outspoken about the dangers it poses – yet his company xAI unveiled its very own chatbot called Grok just last month.

Despite this new AI offering, while attending the UK’s global AI Safety Summit in early November, Musk said: “There is some chance, above zero, that AI will kill us all.

“I think it’s low but there is some chance.”

The dangers Musk, and experts like Duke, talk about include human rights violations, the reinforcement of dangerous stereotypes, privacy violations, copyright infringement, misinformation and cyber attacks.

Some even fear AI’s potential use in bio and nuclear weaponry.

“There is no evidence of that happening yet,” says Duke.

“But of course, it’s something that could be potentially a risk in the future.”

For now, Duke suggests, the more grandiose fears about AI amount to runaway pessimism.

“The only thing I see that makes people think these things is with the likes of generative AI, they’re saying it has some form of emergent properties, where it is coming up with capabilities it was not trained to come up with,” explained Duke.

Emergent properties are behaviours that arise from an AI system’s interactions with human users rather than being explicitly programmed or designed by its creators.

“I think that’s where the fear comes in, you know, if it carries on like this, how far can it go?” Duke added.

Duke, who founded her organisation Diverse AI to improve diversity in the AI sector, doesn’t think humans have many excuses if an intelligent machine does in fact ‘go rogue’.

“Ultimately we’re the ones building it,” she explained.

“We’re the ones training these models… I don’t think we have any excuses whatsoever.”

Humans must train AI much as we raise children, says Duke – with a level of cause-and-effect parenting.

“It’s like bringing up a child,” she said, adding that AI developers must encourage reinforcement learning over unsupervised learning.

Otherwise, AI will “do things beyond what it’s meant to” by chasing positive reinforcement.

Still, the influence of a global framework under which every country is held responsible mustn’t be ignored.

“The responsible AI framework – if it’s implemented from the get-go, then some of these concerns will be non-existent,” Duke urged.

“AI’s being used in government and, because it has all these inherent issues, it’s very important the right frameworks are put in place… it has its good and bad sides definitely, and we need to be aware of the bad sides.

“But if we work on it properly, then it will be for the good of everyone.”
