Google fired an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s data security policies after it dismissed his claims.

Blake Lemoine, a software engineer at Alphabet Inc.'s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic human speech. Google initially suspended Mr. Lemoine in June.

This post first appeared on wsj.com
