ARTIFICIAL intelligence is a useful tool for many reasons, but some of its current flaws could lead to dangerous situations, experts have warned.

One major issue with AI is that it is capable of generating false information that it presents as true — a phenomenon referred to as an AI hallucination.

Researchers are warning people to not believe everything AI says


Researchers at the Oxford Internet Institute have reviewed some examples of AI hallucinations and have concluded that people need to stay aware of mistakes when using the tech.

The findings were published in the journal Nature Human Behaviour.

Specifically, the researchers examined AI in the form of "Large Language Models" (LLMs), the technology chatbots use to draw on information from a wide range of sources and condense it into a seemingly informed response.

However, AI can be incorrect in its responses while still sounding convincing.

These outputs are AI hallucinations because the tech is generating false content that it presents as accurate.

“LLMs are designed to produce helpful and convincing responses without any overriding guarantees regarding their accuracy or alignment with fact,” the paper explained.

The paper explained that humans have put too much trust in AI and treat everything it says as settled fact.

Researchers are warning people not to believe everything AI says, and to fact-check the information it gives as a safety measure.


Believing everything AI says can be dangerous because, in some cases, it could steer people into unfortunate situations.

“People using LLMs often anthropomorphize the technology, where they trust it as a human-like information source,” explained Professor Brent Mittelstadt, co-author of the paper.

“This is, in part, due to the design of LLMs as helpful, human-sounding agents that converse with users and answer seemingly any question with confident-sounding, well-written text.

“The result of this is that users can easily be convinced that responses are accurate even when they have no basis in fact or present a biased or partial version of the truth.”

For these reasons, the researchers said that AI hallucinations are a direct threat to the progress of humanity if people stop forming conclusions of their own.

The researchers also warned that AI hallucinations are a direct threat to science and scientific truth.

This post first appeared on Thesun.co.uk

