Many view cybersecurity as passively blocking attempts to breach networks, but security experts have long advocated more active measures in defense of sensitive networks. Advances in artificial intelligence and machine learning could make such efforts scalable to the vast connectivity of the modern age. The Cipher Brief spoke with David Brumley, Director of CyLab and a professor at Carnegie Mellon University, about the advent of autonomous hacking bots and the impact they could have in actively defending networks against cybercriminals and nation-states alike.
The Cipher Brief: What is active defense?
David Brumley: As a general term, active defense refers to using offense just enough to deny an enemy an action with respect to a contested area. In today’s Internet, of course, there are a lot of blurry lines about where things begin and end, what constitutes engaging the enemy, and whether you’re engaging them or accidentally someone else. Intellectually, the concept of active defense makes a lot of sense. What do you want to do if someone is attacking you? You may need to take aggressive actions back to counteract what they’re doing.
TCB: So how does this lead to better security?
DB: Active defense is a more proactive way to defend. As a physical analogy, rather than building a really strong wall to protect against an enemy, active defense goes after the enemy before they even reach the wall.
One example could be defending against a botnet – a network of computers that have been compromised. In this scenario, you’ll want to attack the botnet itself and take it down, in order to regain legitimate control of the compromised machines or to prevent a denial-of-service attack.
TCB: There is often a balancing act between offensive and defensive priorities for intelligence agencies and law enforcement, for example, in reporting zero-day vulnerabilities in the software of multinational companies. How does a push toward autonomous defense change this dynamic?
DB: There is a tradeoff between offense and defense when you discover a new vulnerability. If you disclose a vulnerability to a vendor, they’re going to patch it, and you’ll be taking it off the table in terms of being able to use it against an adversary. There are pros and cons to that: once you take it off the table, you can no longer use it to attack the enemy, but they can no longer use it to attack you either.
Suppose U.S. intelligence finds a zero-day in Windows 10. Should they tell Microsoft or not? Suppose they happen to know that some highly wanted criminal is using Windows 10; then what? They’re in a tricky situation: disclosing the zero-day to Microsoft means the criminal’s system gets patched, while withholding it preserves an opportunity to go after that target. That’s really the heart of the matter – should you exploit or should you disclose? It’s not an easy binary decision.
TCB: Considering the role of speed, and the sheer number of people the cybersecurity workforce would need to be effective against persistent breaches from, for example, nation-states like China, what role do you see autonomous bots playing in cyber defense in the future?
DB: In cyberspace, attribution of an attack is difficult, so you may not necessarily want the hacking bots to be completely autonomous.
However, one of the things that we think makes absolute sense is for autonomous bots to be involved in what are called “reflection attacks”: if someone attacks you, they’ve given you their exploit, and you should be able to use it at a time and place of your choosing. Every time someone attacks us with something we didn’t already have and we fail to capture that exploit and keep it for our own use, we’ve made a mistake – a human error in our gameplay. Autonomous bots will make those sorts of errors less and less common. They are going to help us get to a point where this is all systematic.
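As a concrete illustration of the capture step behind a reflection attack, here is a minimal sketch in Python: a listener that archives any unsolicited payload sent to a monitored port so it can be studied, and potentially reused, later. The port number and on-disk layout are hypothetical choices for this sketch; an operational system would fingerprint, deduplicate, and triage what it captures rather than blindly saving bytes.

```python
import socket
import time

# Toy payload-capture listener: records whatever an attacker sends to
# a monitored port so the exploit can be studied and reused later.
# PORT and the file-naming scheme are hypothetical choices.
PORT = 8080

def capture_payloads() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(2.0)
                chunks = []
                try:
                    while True:
                        data = conn.recv(4096)
                        if not data:
                            break
                        chunks.append(data)
                except socket.timeout:
                    pass  # sender went quiet; keep what we have
            payload = b"".join(chunks)
            if payload:
                # Archive the raw bytes with source and timestamp for triage.
                fname = f"payload_{addr[0]}_{int(time.time())}.bin"
                with open(fname, "wb") as f:
                    f.write(payload)

if __name__ == "__main__":
    capture_payloads()
```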
TCB: Could you explain how autonomous hacking bots work?
DB: When we talk about autonomous hacking bots, we’re talking about a program that finds new vulnerabilities, not about a program that replays known vulnerabilities. At the heart of the matter, it’s a search problem. What you’re programming these autonomous bots to do is search for the inputs that push a program off an edge into unintended behavior. Once you find the vulnerability, you exploit it.
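To make the search framing concrete, here is a minimal sketch of that idea in Python: a naive mutation fuzzer that randomly perturbs a seed input and runs a target program on each candidate, flagging inputs that crash it. The target binary, seed, and crash heuristic are hypothetical placeholders, and real systems of the kind Brumley describes layer coverage feedback and symbolic execution on top of this basic loop.

```python
import random
import subprocess
import sys

# Naive mutation-based fuzzer: vulnerability finding as a search over
# program inputs. TARGET and SEED are hypothetical placeholders.
TARGET = "./parser_under_test"   # program whose input space we search
SEED = b"hello world"            # a known-good starting input

def mutate(data: bytes) -> bytes:
    """Flip a few random bytes to explore 'nearby' inputs."""
    buf = bytearray(data)
    for _ in range(random.randint(1, 4)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def crashes(data: bytes) -> bool:
    """Run the target on a candidate input. On POSIX, a negative
    return code means the process was killed by a signal, which
    suggests a memory-safety bug worth triaging."""
    try:
        proc = subprocess.run([TARGET], input=data,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        return False  # hangs can be interesting too, but skip them here
    return proc.returncode < 0

if __name__ == "__main__":
    for i in range(100_000):
        candidate = mutate(SEED)
        if crashes(candidate):
            print(f"crashing input found after {i} tries")
            sys.stdout.buffer.write(candidate)
            break
```

Each crash found this way is only a candidate vulnerability; turning it into a working exploit is the harder step described next.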
The word exploit is both a noun and a verb. As a noun, it’s a sequence of actions that puts the exploiter in an advantageous state. In layman’s terms: if I give this program this file, I will take control of the computer. Then the question becomes: how do I take advantage of that? Do I install malware? Or do I install a key-logger? Autonomous bots can start to reason strategically about what the best decision is, and that’s the most nascent part.
TCB: How would they be deployed?
DB: Right now, the way we find vulnerabilities in software (weapons systems, cars, and so on) is to sit a human down and have them spend a lot of time looking through these programs for edge cases. Because of this, if we have 10 humans, we can look at 10 programs. What autonomous bots allow us to do is look through hundreds of programs in the same amount of time. It’s all about scale.
A startling fact that most people don’t know, but should, is that if you buy a wireless router right now, it’s probably vulnerable. The reason it’s not getting exploited is that there’s so much stuff out there – no one’s gotten around to it. When we start looking at how we deploy autonomous bots, we really see a world where they’re checking the world’s software. Think of it as a world where everything you use has gone through some sort of security check. That’s really going to increase security, because these autonomous bots are going to find vulnerabilities before attackers do.
TCB: What are the drawbacks of using autonomous bots?
DB: Because these bots are autonomous, they’re going to react quickly, so an incorrect attribution can do harm very quickly. Because attribution is so hard, one of the challenges is making sure we build things that have safety built in – that we’d never mistakenly go after someone who wasn’t responsible for doing something bad. If someone walks into a bank with a cap and dark glasses, you don’t assume they’re a bank robber. You might be on alert, but you don’t automatically take action against them. You have to be really careful to be 100 percent certain before you act. That’s one of the biggest dangers.
TCB: Is there a possibility that malicious actors could hijack these bots?
DB: Part of this question comes under the heading of counter-autonomy: you have an autonomous system, and someone goes after the autonomy itself to subvert its actions. That’s something you have to think through, and you have to build in safety mechanisms. Counter-autonomy is really at the forefront of research. It’s the next step, where someone goes after the bots themselves.