I rumbled a chatbot ruse – but as the tech improves, and news outlets begin to adopt it, how easy will it be to spot it next time?

A couple of weeks ago I tweeted a call-out for freelance journalists to pitch me feature ideas for the science and technology section of the Observer’s New Review. Unsurprisingly, given the headlines, fears and interest surrounding LLM (large language model) chatbots such as ChatGPT, many of the suggestions that flooded in focused on artificial intelligence – including a pitch about how it is being used to predict deforestation in the Amazon.

One submission, however, from an engineering student who had posted a couple of articles on Medium, seemed to be riding the artificial intelligence wave with more chutzpah. He offered three feature ideas – pitches on innovative agriculture, data storage and the therapeutic potential of VR. While coherent, the pitches had a bland authority about them, a repetitive paragraph structure and upbeat endings – which, if you’ve been toying with ChatGPT or reading about the latest mishaps of Google’s chatbot Bard, are hints of chatbot-generated content.
