Google fired an engineer who contended that an artificial-intelligence chatbot the company developed had become sentient, telling him that he had violated the company’s data security policies after it dismissed his claims.

Blake Lemoine, a software engineer at Alphabet Inc.'s Google, told the company he believed that its Language Model for Dialogue Applications, or LaMDA, is a person who has rights and might well have a soul. LaMDA is an internal system for building chatbots that mimic speech. Google initially suspended Mr. Lemoine in June.

This post first appeared on wsj.com
