“One wonders if the vision of a rapid, overwhelming, swarm-like robotics technology is really consistent with a human being in the loop,” says Ryan Calo, a professor at the University of Washington. “There’s tension between meaningful human control and some of the advantages that artificial intelligence confers in military conflicts.”
AI is moving quickly into the military arena. The Pentagon has courted tech companies and engineers in recent years, aware that the latest advances are more likely to come from Silicon Valley than from conventional defense contractors. This has produced controversy, most notably when employees of Google, another Alphabet company, protested an Air Force contract to provide AI for analyzing aerial imagery. But AI concepts and tools that are released openly can also be repurposed for military ends.
DeepMind released details and code for a groundbreaking AI algorithm only a few months before the anti-AI weapons letter was issued in 2015. The algorithm used a technique called reinforcement learning to play a range of Atari video games with superhuman skill. It attained expertise through repeated experimentation, gradually learning which maneuvers led to higher scores. Several companies participating in AlphaDogfight used the same idea.
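The core idea is easy to see in miniature. Below is a minimal sketch of tabular Q-learning, the technique at the heart of DeepMind's Atari system (which swaps the table for a deep neural network reading raw screen pixels); the toy corridor environment, reward, and parameters are invented for illustration, not taken from DeepMind's code.

```python
import random

# Toy 1-D corridor: states 0..5, start at state 0, +1 reward for reaching state 5.
# Environment, reward, and hyperparameters here are illustrative assumptions.
N_STATES = 6
ACTIONS = (-1, +1)                 # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# Q-table: the current estimate of future score for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def greedy(state):
    """Pick the best-known action, breaking ties at random."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit what has worked so far.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Core update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After enough trial and error, the greedy policy heads straight for the reward.
print([greedy(s) for s in range(N_STATES - 1)])  # expected: [1, 1, 1, 1, 1]
```

DeepMind's version replaced the lookup table with a convolutional network and the corridor with Atari screens, but the learning loop is recognizably the same: try actions, observe scores, and reinforce whatever worked.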
DeepMind has released other code with potential military applications. In January 2019, the company released details of a reinforcement learning algorithm capable of playing StarCraft II, a sprawling space strategy game. Another Darpa project called Gamebreaker encourages entrants to generate new AI war-game strategies using StarCraft II and other games.
Other companies and research labs have produced ideas and tools that may be harnessed for military AI. A reinforcement learning technique released in 2017 by OpenAI, another AI company, inspired the design of several of the agents involved with AlphaDogfight. OpenAI was founded by Silicon Valley luminaries including Musk and Sam Altman to “avoid enabling uses of AI … that harm humanity,” and the company has contributed to research highlighting the dangers of AI weapons. OpenAI declined to comment.
Some AI researchers feel they are simply developing general-purpose tools. But others are increasingly worried about how their research may end up being used.
“At the moment I’m deep in a crossroads in my career, trying to figure out whether ML can do more good than bad,” says Julien Cornebise, an associate professor at University College London who previously worked at DeepMind and ElementAI, a Canadian AI firm.
Cornebise also worked on a project with Amnesty International that used AI to detect villages destroyed in the Darfur conflict from satellite imagery. He and the other researchers involved chose not to release their code for fear that it could be used to target vulnerable villages.
Calo of the University of Washington says it will be increasingly important for companies to be upfront with their own researchers about how their code might be released. “They need to have the capacity to opt out of projects that offend their sensibilities,” he says.
It may prove difficult to deploy the algorithms used in the Darpa contest in real aircraft, since the simulated environment is so much simpler than real flight. There is also much to be said for a human pilot’s ability to understand context and apply common sense when faced with a new challenge.
Still, the deathmatch showed the potential of AI. After many rounds of virtual combat, the AlphaDogfight contest was won by Heron Systems, a small AI-focused defense company based in California, Maryland. Heron developed its own reinforcement learning algorithm from scratch.
In the final matchup, a US Air Force fighter pilot with the call sign “Banger” engaged with Heron’s program using a VR headset and a set of controls similar to those inside a real F-16.
In the first battle, Banger banked aggressively in an attempt to bring his adversary into sight and range. But the simulated enemy turned just as fast, and the two planes became locked in a downward spiral, each trying to zero in on the other. After a few turns, Banger’s opponent timed a long-distance shot perfectly, and Banger’s F-16 was hit and destroyed. Four more dogfights between the two opponents ended roughly the same way.
Brett Darcey, vice president of Heron, says his company hopes the technology eventually finds its way into real military hardware. But he also thinks the ethics of such systems are worth discussing. “I would love to live in a world where we have a polite discussion over whether or not the system should ever exist,” he says. “If the United States doesn’t adopt these technologies somebody else will.”