Our tendency to humanise large language models and AI is daft – let’s worry about corporate grabs and environmental damage

On 14 February, Kevin Roose, the New York Times tech columnist, had a two-hour conversation with Bing, Microsoft’s ChatGPT-enhanced search engine. He emerged from the experience an apparently changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and was in love with him.

The transcript of the conversation, together with Roose's appearance on the paper's The Daily podcast, immediately ratcheted up the moral panic already raging about the implications of large language models (LLMs) such as GPT-3.5 (which apparently underpins Bing) and other "generative AI" tools that are now loose in the world. These are variously seen as chronically untrustworthy artefacts, as examples of technology that is out of control, or as precursors of so-called artificial general intelligence (AGI) – ie human-level intelligence – and therefore as posing an existential threat to humanity.
