The researchers warned that while AI is becoming more powerful and increasingly accessible, there is almost no regulation or oversight of the technology, and only limited awareness, even among researchers like Ekins, of its potential malicious uses.
“It is particularly tricky to identify dual use equipment/material/knowledge in the life sciences, and decades have been spent trying to develop frameworks for doing so. There are very few countries that have specific statutory regulations on this,” says Filippa Lentzos, a senior lecturer in science and international security at King’s College London and a coauthor on the paper. “There has been some discussion of dual use in the AI field writ large, but the main focus has been on other social and ethical concerns, like privacy. And there has been very little discussion about dual use, and even less in the subfield of AI drug discovery,” she says.
Although a significant amount of work and expertise went into developing MegaSyn, hundreds of companies around the world already use AI for drug discovery, according to Ekins, and most of the tools needed to repeat his VX experiment are publicly available.
“While we were doing this, we realized anyone with a computer and the limited knowledge of being able to find the datasets and find these types of software that are all publicly available and just putting them together can do this,” Ekins says. “How do you keep track of potentially thousands of people, maybe millions, that could do this and have access to the information, the algorithms, and also the know-how?”
Since March, the paper has amassed over 100,000 views. Some scientists have criticized Ekins and the authors for crossing a gray ethical line in carrying out their VX experiment. “It really is an evil way to use the technology, and it didn’t feel good doing it,” Ekins acknowledged. “I had nightmares afterward.”
Other researchers and bioethicists have applauded the researchers for providing a concrete, proof-of-concept demonstration of how AI can be misused.
“I was quite alarmed on first reading this paper, but also not surprised. We know that AI technologies are getting increasingly powerful, and the fact they could be used in this way doesn’t seem surprising,” says Bridget Williams, a public health physician and postdoctoral associate at the Center for Population-Level Bioethics at Rutgers University.
“I initially wondered whether it was a mistake to publish this piece, as it could lead to people with bad intentions using this type of information maliciously. But the benefit of having a paper like this is that it might prompt more scientists, and the research community more broadly, including funders, journals and pre-print servers, to consider how their work can be misused and take steps to guard against that, like the authors of this paper did,” she says.
In March, the US Office of Science and Technology Policy (OSTP) summoned Ekins and his colleagues to the White House for a meeting. The first thing OSTP representatives asked, according to Ekins, was whether he had shared any of the deadly molecules MegaSyn had generated with anyone. (OSTP did not respond to repeated requests for an interview.) Their second question was whether they could have the file containing all the molecules. Ekins says he turned them down. “Someone else could go and do this anyway. There’s definitely no oversight. There’s no control. I mean it’s just down to us, right?” he says. “There’s just a heavy dependence on our morals and our ethics.”