Artificial intelligence is not artificial consciousness – but it still needs to be regulated to keep people safe

Probably the best software program for impersonating humans ever released to the public is ChatGPT. Such is its appeal that within days of its launch last week, the boss of OpenAI, the artificial intelligence company behind the chatbot, tweeted that 1 million people had logged on. Facebook and Spotify took months to attract that level of engagement. Its allure is obvious: ChatGPT can generate jokes, craft undergraduate essays and create computer code from a short writing prompt.

There’s nothing new in software that produces fluent and coherent prose. ChatGPT’s predecessor, the Generative Pretrained Transformer 3 (GPT-3), could do that. Both were trained on an unimaginably large amount of data to answer questions in a believable way. But ChatGPT has been fine-tuned by being fed data on human “conversations”, which significantly increased the truthfulness and informativeness of its answers.
