Readers (and ChatGPT) respond to articles on artificial intelligence chatbots and other content-creating AI tools

Re Chris Moran’s article (ChatGPT is making up fake Guardian articles. Here’s how we’re responding, 6 April), barely a day passes without new risks arising from the use of artificial intelligence to generate factual material. This exciting new technology already offers journalists, whether from mainstream media or niche online sites, the promise of rapid newsgathering, analysis of complex data and near-instantaneous stories written to order. Almost irresistible, especially for news publishers on a budget. But the potential threats to news authenticity, the difficulty for both journalists and consumers in verifying seemingly plausible information, and the near certainty of bad actors creating convincing but spurious content become more concerning the more you think about them.

This is a challenge for all media. With audio and video increasingly capable of digital generation, the risk to the reputation of print, online and broadcast journalism requires an industry-wide response. It is urgent that publishers and regulators come together to agree best practice. This month, Impress, the regulator formed in the wake of the Leveson inquiry, has started the ball rolling, with all its publishers now required to ensure human editorial oversight of digitally generated material and to signal to readers when AI content is included.

