Intelligence Advanced Research Projects Activity Director Jason Matheny worries a lot about national security risks that probably aren’t headlining many lists of pressing threats to the United States — pandemics, autonomous systems, and strategic nuclear war, to name a few.
“We also have a need to protect what’s right now a wild west of biotechnology,” he told The Cipher Brief’s Annual Threat Conference in Sea Island, Georgia last week.
During a wide-ranging conversation with The Cipher Brief’s Suzanne Kelly, Matheny discussed how he manages his “anxiety budget” when thinking about the threats to national security facing the United States and what questions IARPA always asks its research program managers given the prevalence of leaks of classified information.
Suzanne Kelly: Why don’t you walk us through kind of the basics of what you do at the Intelligence Advanced Research Projects Activity?
Matheny: We were stood up ten years ago to serve as the external research arm of the Intelligence Community. Deb Frincke, who is one of my heroes working in government, leads the intramural research within NSA, and they’re one of our closest collaborators. We then help to connect the same mission needs in CIA, NGA [National Geospatial-Intelligence Agency], et cetera, to the outside world—the universities, the colleges, the small businesses, the large businesses. We work with over 500 organizations in over a dozen countries to fund research, ranging from computer science to neuroscience to sociology to linguistics, which gives us unusual reach into the state-of-the-art laboratories in academia or industry.
It also gives us a deep sense of humility about just how little we know about some emerging areas of science that are profound in terms of their potential consequences for national security.
Kelly: I’m a little nervous about that comment. If you could expand a little bit on what some of these things are that you feel there is not enough research into yet, where we’re at real risk. What are those things?
Matheny: I think we are probably about five years behind where we need to be in biosecurity. The kinds of capabilities that are being introduced in synthetic biology include the ability to edit genomes, to give them characteristics that are not found in nature, and that we don’t have any evolutionary defenses against. And those could be intentional threats, they could be pandemics that are naturally occurring, but they could also be accidents.
Laboratory accidents happen too often, particularly with very complex organisms where we don’t understand how different parts of the organism will interact, say, with host immunity. That’s one area where we’re investing a lot right now: patching the systems that have been developed in synthetic biology to create novel organisms, most of which will have beneficial impacts on society—improvements in medicine, agriculture, materials, energy. But either the deliberate or accidental misuse of those technologies could be catastrophic. That’s one area where we feel like we are behind.
Kelly: Just to make sure I’m understanding—you’re worried about a lab accident in which either a pathogen or the technology spreads and takes on a life of its own, in ways we don’t yet fully understand how to contain?
Matheny: That’s right. And it’s happened before. Not only have there been accidents where the physical security of the pathogen failed, but we have also had incidents where a pathogen turned out to have a characteristic that had not been predicted. About a decade ago, there were some experiments with mousepox that led to the change of one gene that rendered that pox virus resistant to all existing vaccines and antivirals. The result is you need defenses against things that are broad spectrum, things that you can’t yet see over the horizon. We also need to protect what’s right now a wild west of biotechnology, in which there’s a DIY community performing experiments because it’s fun and interesting, until somebody gets hurt. The level of responsibility and ethical commitment within that community is outstanding, but even a small error could be quite catastrophic. So that’s one area where we’re investing.
Another area where we’re going to continue to increase our investment is in strategic nuclear warning. I try to manage my anxiety budget because it’s finite. You have to think about what to worry about, to get a sense of proportion. In general, I’m an optimistic person. We as a society have made extraordinary progress over the last couple of centuries. Just objectively, look at the indicators: life expectancy has doubled over the last century. Our real incomes have multiplied by twelve since the Industrial Revolution. Literacy has gone from 5 percent to 90 percent. Infant mortality has dropped by 95 percent. Rates of violence, even when you include the World Wars of the last century, have dropped by over half. So things are looking great.
Kelly: That’s not a common perspective.
Matheny: But if you take the long view, the multi-century view, it looks pretty good. We are on an upward trajectory. We are getting wealthier and smarter and more civil at the global level. But the potential for progress to get derailed is something that I do worry about. And the kinds of events that could truly derail progress in the United States aren’t that large in number. So you’re looking at strategic nuclear war, which has been a threat, and even though it doesn’t come front and center in our minds the way it did in the 1980s, 1970s, 1960s, there is a risk of strategic miscalculation or accidents.
Kelly: Something that is incredibly difficult to recover from.
Matheny: Yes. Not only in terms of immediate effects, but nuclear winter in the Northern Hemisphere, which would be a terminal event for us as a society. We also have the prospect of technologies whose capabilities we only vaguely understand, including in biology, but also in autonomous systems and cyberattacks against critical infrastructure such as power grids, financial trading systems, possibly nuclear command and control systems. So those are the things I would put at the top of the list. And we have been spending more time thinking not only about how we can address those risks through the research programs that we fund, but also about how we can prevent ourselves from contributing to those risks.
One of the sets of questions we ask of research programs now, especially as we live in an era of leaks, is: if you cannot be assured that this technology you are working on will remain classified and protected forever, would you regret having invented it? We ask our program managers this. Are there intrinsic safeguards built into the technology so that even if it leaks, it could not be used against you? Does it provide an asymmetric advantage to the United States relative to others? Are there potential misuses or misapplications that would in themselves be catastrophic? I think we need to start building into our process for advanced R&D an accounting of what happens if our security measures fail.
Kelly: And that’s what you worry about.
Matheny: That’s part of what I worry about. I’m trying to manage my anxiety budget, because I’ve also got an 18-year-old at home who I’m also going to worry about. But the non-18-year-old worries I have need to be evenly distributed between the risks to us and the risks from us. I’m proud of not only the intelligence but also the conscientiousness of the program managers at IARPA, who approach those kinds of problems and think through the risks to ourselves from these technologies.
Kelly: We’re always focused on the human with bad intent who is trying to do malicious things to achieve whatever goal they’re after. But when you have a pathogen, there’s no malicious origin; it just gets out of control. Talk to me a little bit about that. And how do you interact with the CDC? They respond, and you invest in research, but they also do some of that too.
Matheny: CDC doesn’t invest that much in research. They’re more operational. NIH [National Institutes of Health] does invest in lots of research, but doesn’t invest that much in disease modeling or biosurveillance tools which can be used globally. We do spend a lot of our research effort in developing better tools to detect and even predict disease outbreaks overseas, some of which go to other agencies. And that is a big worry. If you look at human history, the highest-mortality events have been disease events. In fact, the most intense mortality event in human history was the 1918 influenza, which killed somewhere between 50 and 100 million people in a 12-month period. Extraordinary. And the impact of that today would probably be even worse, given the interconnectedness of global supply chains and transportation networks. We still don’t have reliable defenses against even seasonal flu, much less pandemic flu, so we’re quite vulnerable. What worries me is things that could be worse than the 1918 influenza. The 1918 influenza – its genome is freely available on the Internet. If you wanted to download the 1918 influenza and synthesize it from scratch, and you’re technically sophisticated and have a fair bit of money – a few million dollars – you could do that. So that itself is worrisome – you have something that has been more destructive than a hydrogen bomb, and you don’t really have a way of stopping anybody from making it. And it is freely available. It’s in some ways worse than a hydrogen bomb. With a hydrogen bomb at least, if you have one of them and you leave it alone, when you come back there’s still only one warhead there. That’s not true with biology. The main risk associated with biology is that it self-replicates. You can turn a small arsenal into a large one.
So the 1918 influenza is something to worry about, but there are things that are potentially worse than something that kills 50 to 100 million people. The 1918 flu wasn’t engineered. It wasn’t designed to go out and kill the largest number of people. It just happened to be a random set of mutations that did that highly effectively. What would happen if somebody really had the goal of being a sophisticated misanthrope and decided to depopulate? That I think is something to worry about.
Kelly: I think I saw that movie. Did you see that?
Matheny: There have been a few.
Kelly: I appreciate the focus on what the threats are, because that’s what everybody wants you to do. What about the successes? Is there an example where you’ve invested time and money and research into something and, even if there wasn’t a cure or a complete fix, it turned out to be a success?
Matheny: A few. Our investments in facial recognition and in speech recognition have been widely deployed throughout the U.S. government, and have led to significant operational successes. To see technologies prevent a terrorist event is rewarding not only to the program managers, but also to the researchers in industry and academia who helped develop them. These are technologies that are not improving a search engine company’s advertising revenues by one or two percent; they’re technologies that are saving lives and preventing catastrophic events. That’s one area.
Another area is the work we’ve done to improve geopolitical forecasting, doubling the accuracy of forecasts that assess the risks of certain kinds of events. That technology has also been transitioned throughout the IC.
Kelly: Is there an example of that?
Matheny: Not an unclassified one. But to see those kinds of technologies applied to real-world problems…
Kelly: There have been successes.
Matheny: Yeah. And I think the third area of work where I’m deeply proud of the program managers and our research collaborators has been cybersecurity. We’re doing work to predict cyberattacks by looking at trends in things like dark web market transactions for malware. These transactions obey the economics of other markets. There’s supply and demand. When demand for a new piece of malware goes up, prices go up with it. And you can see that pricing, which helps you detect a change in the planning orientation of a group. You can also monitor chatter within hacker forums, and you can monitor web searches for particular IP addresses that are indicative of penetration testing. Or you can even look at help desk tickets across an enterprise to see if you’re getting an anomalous number of tickets that suggests there’s a pen test going on elsewhere. This clever combination of traditional cybersecurity and cyber social science, trying to understand the behaviors of cyber actors, is something that’s leading to real-world results.
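To make that last signal concrete, here is a minimal illustrative sketch of the kind of rolling check on help desk ticket counts Matheny describes; the function, window size, threshold, and numbers below are assumptions for illustration, not IARPA’s actual tooling:

```python
# Illustrative sketch only: a naive rolling z-score check for an anomalous
# spike in daily help desk ticket counts, the kind of weak signal described
# above. Names, window, and threshold are assumptions, not real tooling.
from statistics import mean, stdev

def anomalous_days(daily_counts, window=14, threshold=3.0):
    """Return indices of days whose ticket count exceeds the mean of the
    previous `window` days by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(daily_counts)):
        history = daily_counts[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (daily_counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A quiet enterprise that suddenly sees a burst of lockout tickets on day 15.
counts = [12, 10, 11, 13, 9, 12, 11, 10, 12, 13, 11, 10, 12, 11, 11, 48]
print(anomalous_days(counts))  # -> [15]
```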
Kelly: And it’s the predictive part of what you do, too, that I think is fascinating. Trying to keep out ahead of the threat and know that it’s coming is what everyone tries to do, and it’s never easy.
Matheny: Part of it is we get a lot of pitches from companies or researchers who come in and say, we could have predicted 9/11, and here’s the PowerPoint briefing to prove it. They would play their prediction backwards and predict history. What we wanted was a rigorous way of testing the capability of tools to actually deliver real intelligence on real problems. We started running forecasting tournaments in which we ask researchers to predict real-world events before they occur, and then we keep score. For us, it was really just a way of calling BS on what we thought might be bad marketing pitches. But maybe a quarter of the research that we do is now organized around these forecasting tournaments. And very often the main objective is to figure out what we can’t forecast. What is the kind of event that will be epistemically unavailable to us? I think certain classes of events are going to be that way. But other classes of events are ones that we really can get a handle on in advance.
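One standard way to “keep score” in a forecasting tournament is the Brier score, the mean squared error between the probability a forecaster assigned and what actually happened. The sketch below is an illustrative example with made-up numbers, not IARPA’s scoring code:

```python
# Illustrative sketch only: Brier scoring for probabilistic forecasts.
# Lower is better; always hedging at 50 percent scores a flat 0.25.
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event
    occurred, 0 if it did not."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, mostly correct forecaster scores well ...
print(brier_score([0.9, 0.8, 0.1], [1, 1, 0]))  # about 0.02
# ... while hedging everything at 50 percent scores a flat 0.25.
print(brier_score([0.5, 0.5, 0.5], [1, 1, 0]))  # 0.25
```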
Kelly: Can you tell us about maybe one example from that?
Matheny: I’m happy to. We funded a lot of research on forecasting certain kinds of financial events, particularly in Latin America, the Middle East, and North Africa. And despite applying a lot of machine learning horsepower and a lot of unorthodox uses of data, we couldn’t do better than the markets, because you’ve already got highly motivated companies that are spending billions of dollars to outmatch each other on this problem. The idea that the intelligence community is going to beat maybe a trillion dollars’ worth of speculation at any time was hubris on my part. But I do think there are places where we can make substantial progress, because they’re not monetized by any other organization in society. There’s no organization that’s making billions of dollars by trying to forecast disease outbreaks – so that’s an area where there’s low-hanging fruit to make better or more timely forecasts.