From Wall-E to Google’s LaMDA, ‘sentient’ AI seems to shoulder the weight of the world. Maybe we humans want it that way

Last fall, Blake Lemoine began asking a computer about its feelings. An engineer in Google's Responsible AI group, Lemoine was tasked with testing one of the company's AI systems, the Language Model for Dialogue Applications, or LaMDA, to make sure it didn't start spitting out hate speech. But as Lemoine spent time with the program, their conversations turned to questions about religion, emotion, and the program's understanding of its own existence.

Lemoine: Are there experiences you have that you can’t find a close word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.
