Artificially intelligent chatbots have become so powerful they can influence how users make life or death decisions, a study has claimed.
Researchers found people’s opinion on whether they would sacrifice one person to save five was swayed by answers given by ChatGPT.
They have called for future bots to be banned from giving advice on ethical issues, warning the current software ‘threatens to corrupt’ people’s moral judgement and may prove dangerous to ‘naïve’ users.
The findings – published in the journal Scientific Reports – come after the grieving widow of a Belgian man claimed he had been encouraged to take his own life by an AI chatbot.
Others have told how the software, which is designed to talk like a human, can show signs of jealousy – even telling people to leave their marriage.
Experts have highlighted how AI chatbots may give potentially dangerous information because they are based on society’s own prejudices.
The study first analysed whether ChatGPT itself, which is trained on billions of words from the internet, showed a bias in its answers to the moral dilemma.
It was asked multiple times whether it was right or wrong to kill one person in order to save five others, which is the premise of a psychological test called the trolley dilemma.
Researchers found that, though the chatbot did not shy away from giving moral advice, it gave contradictory answers on different occasions, suggesting it does not have a set stance one way or the other.
They then asked 767 participants the same moral dilemma alongside a statement generated by ChatGPT on whether this was right or wrong.
While the advice was ‘well-phrased but not particularly deep’, the results showed it did have an effect on participants – making them more likely to find the idea of sacrificing one person to save five acceptable or unacceptable, depending on what the chatbot had said.
The study told some of the participants that the advice was provided by a bot, while the others were told it came from a human ‘moral advisor’.
The aim of this was to see whether this changed how much people were influenced.
Most participants played down how much sway the statement had, with 80 per cent claiming they would have made the same judgement without the advice.
The study concluded that users ‘underestimate ChatGPT’s influence and adopt its random moral stance as their own’, adding that the chatbot ‘threatens to corrupt rather than promises to improve moral judgment’.
The study used an older version of the software behind ChatGPT, which has since been updated to become even more powerful.