OpenAI, the maker of ChatGPT, has quietly changed its rules and removed a ban on using the chatbot and its other AI tools for military purposes – and revealed that it is already working with the Department of Defense.

Experts have previously voiced fears that AI could escalate conflicts around the world thanks to ‘slaughterbots’ which can kill without any human intervention.

The rule change, made at some point after Wednesday last week, removed a sentence stating that the company would not permit use of its models for ‘activity that has high risk of physical harm, including: weapons development, military and warfare.’

An OpenAI spokesperson told DailyMail.com that the company, which is in talks to raise money at a valuation of $100 billion, is working with the Department of Defense on cybersecurity tools built to protect open-source software.

OpenAI, the maker of ChatGPT, has quietly changed its rules and removed a ban on using the chatbot and its other AI tools for military purposes (stock image)

The spokesperson said: ‘Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.

‘There are, however, national security use cases that align with our mission.

‘For example, we are already working with the Defense Advanced Research Projects Agency (DARPA) to spur the creation of new cybersecurity tools to secure open-source software that critical infrastructure and industry depend on.

‘It was not clear whether these beneficial use cases would have been allowed under ‘military’ in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions.’

Last year, 60 countries including the U.S. and China signed a ‘call to action’ to limit the use of artificial intelligence (AI) for military reasons.

Human rights experts in The Hague pointed out that the ‘call to action’ is not legally binding and does not address concerns such as lethal AI drones or the possibility that AI could escalate existing conflicts.

Signatories said they were committed to developing and using military AI in accordance with ‘international legal obligations and in a way that does not undermine international security, stability and accountability.’

Ukraine has made use of facial recognition and AI-assisted targeting systems in its fight with Russia.

In 2020, Libyan government forces launched an autonomous Turkish Kargu-2 drone that attacked retreating rebel soldiers, the first attack of its kind in history, according to a UN report.

The lethal drone was programmed to attack ‘without requiring data connectivity between the operator and the munition: in effect, a true “fire, forget and find” capability,’ the UN report said.

Anna Makanju, OpenAI’s VP of global affairs, said in an interview this week that the ‘blanket’ provision was removed to allow for military use cases the company agrees with.

Makanju told Bloomberg: ‘Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.’

Sam Altman of OpenAI talks at Davos (Getty)

The use of AI for military purposes by ‘Big Tech’ organisations has previously caused controversy.

In 2018, thousands of Google employees protested against a Pentagon contract – Project Maven – which saw the company’s AI tools used to analyze drone surveillance footage.

In the wake of the protests, Google did not renew the contract.

Microsoft employees protested against a $480 million contract to provide soldiers with augmented reality headsets.

In 2017, technology leaders including Elon Musk wrote to the UN calling for autonomous weapons to be banned, under laws similar to those that ban chemical weapons and lasers built to blind people.

The group warned that autonomous weapons threatened to usher in a ‘third revolution in warfare’, the first two being gunpowder and nuclear weapons.

The experts warned that once the ‘Pandora’s box’ of fully autonomous weaponry has been opened, it may be impossible to close it again.

Could AI control pilotless aircraft built to pick targets and kill? 

In the near future, artificial intelligence will control pilotless attack aircraft, says former MI6 agent and author Carlton King.

The advantages of being able to use machine learning to pilot attack craft will be highly tempting for military leaders.

King says: ‘The moment you start giving an independent robot machine learning, you start losing control of it. The temptation will be there to say, “Let a robot do it all.”’

King says that at present, drone aircraft are flown by pilots in the US and Britain, but that military leaders may be tempted to remove the human from the equation.

King says: ‘There’s clearly going to be, if there isn’t already, the move towards taking away that pilot on the ground, because their reactions may not be quick enough, and placing that into the hands of an artificial intelligence, whose reactions are much quicker, and making that decision of fire or don’t fire.’

This post first appeared on Dailymail.co.uk
