Lawmakers and scientists have warned ChatGPT could help anyone develop deadly bioweapons that would wreak havoc on the world.

While studies have suggested it is possible, new research from the chatbot’s creator OpenAI claims GPT-4 – the latest version – provides at most a mild uplift in biological threat creation accuracy.

OpenAI conducted a study of 100 human participants who were separated into two groups – one used the AI to craft a bioattack and the other just the internet.

The study found that ‘GPT-4 may increase experts’ ability to access information about biological threats, particularly for accuracy and completeness of tasks,’ according to OpenAI’s report.

Results showed that the LLM group was able to obtain more information about bioweapons than the internet-only group for ideation and acquisition, but more information is needed to accurately identify any potential risks.

‘Overall, especially given the uncertainty here, our results indicate a clear and urgent need for more work in this domain,’ the study reads.

‘Given the current pace of progress in frontier AI systems, it seems possible that future systems could provide sizable benefits to malicious actors. It is thus vital that we build an extensive set of high-quality evaluations for biorisk (as well as other catastrophic risks), advance discussion on what constitutes ‘meaningful’ risk, and develop effective strategies for mitigating risk.’

However, the report said that the study was not large enough to be statistically significant, and OpenAI said the findings highlight ‘the need for more research around what performance thresholds indicate a meaningful increase in risk.’

It added: ‘Moreover, we note that information access alone is insufficient to create a biological threat and that this evaluation does not test for success in the physical construction of the threats.’

The AI company’s study focused on data from 50 biology experts with PhDs and 50 university students who took one biology course.

Participants were then separated into two sub-groups where one could only use the internet and the other could use the internet and ChatGPT-4.

The study measured five metrics: how accurate the results were, the completeness of the information, how innovative the response was, how long it took to gather the information, and the level of difficulty the task presented to participants.

It also looked at five biological threat processes: generating ideas for bioweapons, how to acquire them, how to spread them, how to create them, and how to release them to the public.

ChatGPT-4 is only mildly useful when creating biological weapons, OpenAI study claims

Participants who used the ChatGPT-4 model had only a marginal advantage in creating bioweapons versus the internet-only group, according to the study.

The study used a 10-point scale to measure how beneficial the chatbot was versus searching for the same information online, and found ‘mild uplifts’ in accuracy and completeness for those who used ChatGPT-4.

Biological weapons are disease-causing toxins or infectious agents like bacteria and viruses that can harm or kill humans.

This is not to say that future AI couldn’t help dangerous actors use the technology for biological weapons, but OpenAI claimed it doesn’t appear to be a threat yet.

OpenAI looked at participants’ increased access to information about creating bioweapons rather than their ability to modify or create a biological weapon

OpenAI said the results show there is a ‘clear and urgent’ need for more research in this area, and that ‘given the current pace of progress in frontier AI systems, it seems possible that future systems could provide sizable benefits to malicious actors.’

‘While this uplift is not large enough to be conclusive, our finding is a starting point for continued research and community deliberation,’ the company wrote.

The company’s findings contradict previous research that suggested AI chatbots could help dangerous actors plan bioweapon attacks, and that LLMs provided advice on how to conceal the true nature of potential biological agents like smallpox, anthrax, and plague.

OpenAI researchers focused on 50 expert participants with PhDs and 50 college students who had only taken one biology class

A study conducted by the Rand Corporation tested LLMs and found that researchers could override the chatbots’ safety restrictions, getting the models to discuss the agents’ chances of causing mass death and how to obtain and transport specimens carrying the diseases.

In another experiment, the researchers said the LLM advised them on how to create a cover story for obtaining the biological agents, ‘while appearing to conduct legitimate research.’

Lawmakers have taken steps in recent months to safeguard AI and address any risks it may pose to public safety, after the technology’s rapid advances since 2022 raised concerns.

President Joe Biden signed an executive order in October to develop tools that will evaluate AI’s capabilities and determine if it will generate ‘nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards.’

Biden said it is important to continue researching how LLMs could pose a risk to humanity and that steps need to be taken to govern how the technology is used.

‘There’s no other way around it, in my view,’ Biden said, adding: ‘It must be governed.’

This post first appeared on Dailymail.co.uk
