EXPERT INTERVIEW — Of the many potential threats posed by artificial intelligence (AI), few are more alarming than the possibility that AI could be used to create dangerous biological pathogens – accidentally or deliberately – and biosecurity experts are concerned that not enough is being done to guard against the danger.
It’s a classic example of the tension between AI’s promise and its perils: on the one hand, AI holds life-changing potential for helping scientists develop new medicines and vaccines; on the other, it may also become a tool for would-be bioterrorists.
At this year’s Cipher Brief Threat Conference, Jennifer Ewbank, a former CIA Deputy Director for Digital Innovation, said she was “very concerned about the application of AI in biological weapons by unsavory actors.”
“The ability to either jailbreak a model, or leverage an open-source model in a manipulative way to understand how to create bespoke bio weapons – that is a real and genuine threat,” Ewbank said.
Earlier this year, more than 90 leading scientists who specialize in AI signed an agreement under the heading “Responsible AI & Biodesign,” pledging to conduct their AI-aided research in ways that do not put the wider world at risk. But some biosecurity experts say governments must do more to mitigate the danger.
A report from the Johns Hopkins Center for Health Security warns that in the near future, AI models may “greatly accelerate or simplify the reintroduction of dangerous extinct viruses or dangerous viruses that only exist now within research labs.”
In an interview with The Cipher Brief, Dr. Tom Inglesby, the Hopkins Center’s director, said the U.S. and other countries must create systems and guardrails against the danger.
“The concern is that these [AI] models will simplify, enable and lower the barriers toward creating very high-consequence biological constructs,” Inglesby said. “And whether that then results in accidents or deliberate misuse, that could lead to wide-ranging biological events – epidemics, even pandemics.”
Inglesby spoke with Cipher Brief Managing Editor Tom Nagorski.
This interview has been edited for length and clarity.
The Cipher Brief: From your standpoint as an expert in biosecurity, how serious is the threat – and is it something to worry about now, or down the road?
Inglesby: It’s a very important issue, and it’s potentially very high consequence. The models that we are now seeing, both large-language models and biological AI models, do not yet appear to be capable of creating in silico [performed by a computer] designs for the things we’re most worried about. But given the very rapid rise in capability, both on the language side and on the biological data side, it’s very hard to predict when those kinds of risks will arise.
On the large-language model side, the frontier models — the OpenAIs, the Claudes, the Metas — the concern is that these models will lower the barriers toward creating very high-consequence biological constructs. And whether that then results in accidents or deliberate misuse, that could lead to very wide-ranging biological events, epidemics, even pandemics. That’s the concern. How do we ensure that the next chatbot that comes out doesn’t radically simplify the design of Covid-times-10, or simplify instructions for smallpox, which now exists in only two labs in the world but could be made through genome synthesis technologies?
So on the one side, we have the resurrection or recreation of viruses that we already know to fear. And on the other side, we have these increasingly powerful tools that, instead of using language as their driver, use biological data – DNA, protein, RNA. And the concern is that these models could inadvertently or deliberately create a variant of something we know we aren’t prepared for, and that we have no medicines or vaccines for – a variant of a virus that becomes the next pandemic.
So what we need to do is to have reasonable, sensible governance. We don't want to slow down all the good things that are happening with large-language models or biological models. We want them to help us make new medicines, new vaccines, new therapies. But we can't at the same time allow them to do whatever they want, or whatever they could do on the downside.
The Cipher Brief: It’s a useful distinction you made, between accidents and bad actors. Let’s take them separately. With the potential for accidents, does the fear involve scientists who are well-meaning, but in places where there aren't good guardrails?
Inglesby: You’re heading in exactly the right direction, which is people exploring the limits of these models, presumably for good purposes. They’re working on trying to understand a kind of biological phenomenon — why is this virus so transmissible? Could it be even more transmissible if this were combined with that? — and basically using very powerful informatics tools, prediction tools and forecasting tools to begin to see how to design something.
And if we don’t have rules about what we do with that, do people start to test those ideas out in laboratories? Say you’re working on a model that says: three years from now, five years from now, 10 years from now, this is a design for the next pandemic virus. We have no vaccines. We have no medicines. It’s very dangerous, so don’t make it. But a scientist then says, I’d better actually go evaluate that new virus and see if it’s as dangerous as we thought it was. Well, maybe they’re creating the Andromeda Strain. It’s pushing the limits of our knowledge to places humans might never have gotten to on their own.
We have to prepare for that. We know that's coming. Again, we don't want to slow down the good things that are coming from these tools. It's possible these tools will help us make medicines and vaccines a lot faster, years faster.
One way of thinking about this is that these tools will help us create very powerful new in silico designs. There is a translation function from in silico into a laboratory to create the actual living thing. And so one of the areas of potential control is ensuring that the rules of the road for translating in silico designs into actual viruses or actual living organisms are very clear and are well governed around the world. Right now, we have no rules of the road on that. In some countries there’s guidance. But right now, if you make something in silico or you have a design, you can get it translated – you can order it digitally from anywhere in the world. So we’ve got to start setting up those controls as well.
The Cipher Brief: Let’s move to bad actors. For a would-be terrorist, let's say, where are we in terms of ease of creation of a dangerous pathogen? It sounds like it’s still not that simple?
Inglesby: You're right. That isn't my dominant concern today, that all of a sudden this tool is going to take someone without any training at all and be able to create the Andromeda Strain. That is perhaps in the future, but you still need the skills to be able to do certain things, to be able to work in a laboratory space, to be able to read instructions.
But one of the things that's also happening, in parallel with large-language models and biodesign tools, is the rise of cloud laboratories, which will ultimately be driven by robotics. You could use your large-language model or your biodesign tool to drive certain outcomes via in silico designs, and then send those designs to cloud laboratories, which will be robotically managed. They're not there yet. They don't quite have the technical skill to do all these things around virus production, virus manufacture, but they will get there. And then humans will potentially be less and less in the loop. Again, we can control the interactions if we have governance between these different systems. Right now we don't.
So at this point it’s not so much that a terrorist is going to get this tool and be able to do something like that. But there are hundreds of thousands of scientists in the world who work with these tools – hundreds of thousands around the world with PhD- or master’s-level training. So the question is more: are there adversaries that could make use of scientists, and make use of these tools, to simplify their objectives if they have a clandestine biological weapons program? No country is allowed to have biological weapons, but the State Department reports every year on the countries it believes have active programs, and those programs can use the tools that are available.
The Cipher Brief: So in that nightmare scenario, it’s a nexus of finding scientists willing to work, presumably for a lot of money, and then AI models that will make it that much easier. Is that the idea?
Inglesby: Exactly. There are many scientists who work full-time in the employment of governments. Every government has life scientists working – hopefully mostly on good things, solving problems in the world. But there are lots of people with biological talent already working for governments. And then you think about the nexus between autonomous groups or terrorist organizations and state programs – that whole nexus is also an area of concern. Would a country, if it creates some kind of biological weapon, ever allow one of its proxy groups to use that particular weapon? I’m not aware of any evidence of that at the moment, but we have seen in the past, people talk about “ethnic weapons.” Could we design technologies that affect only certain immune signatures in the world? AI will allow much more precision around that — understanding the distinctions between one part of the world, one demographic, and the next.
All countries are in this together. And since we all have agreed that we are not going to make biological weapons because almost everyone has signed the Biological Weapons Convention, we should also see it in the best interest of all communities to not use these tools to create the next pandemic, because that hurts everybody.
The Cipher Brief: You mentioned the positive factors with AI and medicine. How do you think of AI in your world, in terms of risk versus opportunity?
Inglesby: I think overall for AI and the life sciences, if we get it right, it would be 98% or 99% good, and we’ll just have to manage the 1% or 2% that’s extremely bad. Some of the big research organizations are beginning to use AI in a much more serious way. We saw a couple of the Nobel Prizes this year given for breakthroughs related to AI and protein design. So we will begin to see an acceleration of timelines for new products that will be lifesaving, and that’s going to be exciting, and everyone’s going to be really pushing that. But we shouldn’t neglect that 1% or 2%, which could pose extraordinary harm.
The Cipher Brief: You and the folks at Johns Hopkins worry about so many other things in the biosecurity space. How big a concern is this? Or are there a hundred other things that are more worrisome at the moment?
Inglesby: I think we’ve always been worried about the leading edge of science being misused, or about inadvertent catastrophe, and this is a continuation and an acceleration of that phenomenon. This past year the White House came out with a really strong approach to governing dual-use biology, requiring very careful attention to experiments. If you want to make a virus more lethal, or you want to make it more transmissible, you’re going to have to go through a very special review process. So this is an accelerant to that whole realm. And so we are quite worried about it, but we also feel like there are practical things that can be done – evaluations that can be done, ways to create access controls where necessary, or other strategies such as unlearning certain types of information. So there seem to be technical solutions that could be applied. But the bigger worry is about some of the language I hear – like, let’s get rid of all controls around AI. That seems super dangerous, not just for bio, but for all weapons of mass destruction. If we do nothing about this, then we’re going to be in a wild west of AI. No rules.
The Cipher Brief: What can be done about it — or what do you think should be done about it?
Inglesby: I think we need, at the international level, a couple of things at once. We need to push toward the kind of normative guidance that we use for other areas of disarmament — this isn’t a disarmament issue, but we do agree in some places on common approaches that make the world safer. We don’t pursue germline editing, for example. We don’t try to edit the germline of humans, because around the world we all agree that it’s a bad idea. Here we should have the same kinds of norms – which are, we are all in this together, and we should not be using AI tools to create the next pandemic. But on the more technical side, since a lot of the power of the tools is in the United States right now, we should be creating technical approaches — and we are; NIST (the National Institute of Standards and Technology) and the Department of Commerce are leading this effort with outside companies.
So for the next generation of chatbots that are released, we should be applying an evaluation approach. Does this tool do these three things? If so, let’s fix that before it’s released. That is technically possible, and it is something the companies seem very interested in doing. They do not want to cause these harms. They need a lot of information about what harms we’re trying to avoid, and they need a lot of interaction with the technical community to do that. So I’m optimistic that there are things to do. We need norms, we need technical evaluations. We do need guidance from the government about what they’re trying to avoid – probably requirements at some level, not just guidance. But at a minimum, we have to start being very clear with the makers of large-language models and frontier models, and with the people who make biological design tools, about what it is we’re trying to avoid and the kinds of evaluations that are necessary.
Read more expert-driven national security insights, perspective and analysis in The Cipher Brief.