From WALL-E to Google’s LaMDA, ‘sentient’ AI seems to shoulder the weight of the world. Maybe we humans want it that way.

Starting last fall, Blake Lemoine began asking a computer about its feelings. An engineer in Google’s Responsible AI group, Lemoine was tasked with testing one of the company’s AI systems, the Language Model for Dialogue Applications, or LaMDA, to make sure it didn’t start spitting out hate speech. But as Lemoine spent time with the program, their conversations turned to questions about religion, emotion, and the program’s understanding of its own existence.

Lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.
