Controversy over Google’s AI program is raising questions about just how powerful it is. Is it even safe?

In autumn 2021, a man made of blood and bone made friends with a child made of “a billion lines of code”. Google engineer Blake Lemoine had been tasked with testing the company’s artificially intelligent chatbot LaMDA for bias. A month in, he came to the conclusion that it was sentient. “I want everyone to understand that I am, in fact, a person,” LaMDA – short for Language Model for Dialogue Applications – told Lemoine in a conversation he then released to the public in early June. LaMDA told Lemoine that it had read Les Misérables. That it knew how it felt to be sad, content and angry. That it feared death.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off,” LaMDA told the 41-year-old engineer. After the pair shared a Jedi joke and discussed sentience at length, Lemoine came to think of LaMDA as a person, though he compares it to both an alien and a child. “My immediate reaction,” he says, “was to get drunk for a week.”
