We live in an age in which what was once the stuff of science fiction is now reality, changing the way people go about their daily lives. Advances in artificial intelligence and machine learning are the new frontier, and their arrival creates as many risks as opportunities. In the conduct of statecraft, the machine-on-machine paradigm is not confined to conventional weapons; it also marks the beginning of a new mode of conflict on the virtual plane of cyberspace. In the near future, computer algorithms will compete with one another, attempting to detect and patch vulnerabilities within their own systems while exploiting those in the networks of adversaries.
The maxim of cybersecurity professionals has long been that adversaries hold the advantage: time, numerous lines of attack, and the need to find only a single lapse in security. So how will the dawn of autonomous hacking affect both cyber defenses and offensively oriented operations? Will artificially intelligent bots capable of learning on the fly better harden networks, or will they become devastating tools in the arsenals of nefarious actors?
Much of the intrigue surrounding autonomous bots stems from the U.S. Defense Department's DARPA Cyber Grand Challenge last August, which pitted bot against bot: seven bots attacked each other's systems, displaying both offensive search-and-exploit and defensive detect-and-patch capabilities over a 10-hour struggle.
David Brumley, a professor at Carnegie Mellon University and director of CyLab, participated in this battle of the bots, where his security startup, ForAllSecure, won with a fully autonomous, full-spectrum attack-and-defense bot called Mayhem. Brumley describes the process as “a search problem,” where “what you’re programming these autonomous bots to do is search for those things that will cause the program to go off an edge. Once you find the vulnerability, you exploit it.”
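Brumley's "search problem" framing maps closely to what security researchers call fuzzing: generating large numbers of inputs and watching for the rare "edge" input that crashes the program. The sketch below is a deliberately minimal illustration of that idea, not ForAllSecure's actual method (Mayhem combines fuzzing with far more sophisticated program analysis); `target_program` and its crash condition are hypothetical stand-ins for software under test.

```python
import random
import string

def target_program(data: str) -> None:
    """Hypothetical stand-in for a program under test.

    It crashes only on a narrow class of inputs, mimicking the
    kind of edge case a human auditor might never think to try.
    """
    if "%" in data and len(data) > 4:
        raise RuntimeError("crash: unhandled format character")

def fuzz(trials: int = 20_000, seed: int = 7) -> list:
    """Throw random inputs at the target and collect the crashers.

    Each crashing input is a candidate vulnerability for a human
    (or another bot) to triage, exploit, or patch.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    crashers = []
    for _ in range(trials):
        length = rng.randint(0, 12)
        data = "".join(rng.choice(string.printable) for _ in range(length))
        try:
            target_program(data)
        except RuntimeError:
            crashers.append(data)
    return crashers
```

Real tools add coverage feedback, steering new inputs toward unexplored program paths instead of sampling blindly, which is what makes searching hundreds of programs at machine speed tractable.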
This autonomous search capability could revolutionize cyber defense by easing the constraints of a short-staffed workforce. Brumley points out that “right now, the way in which we find vulnerabilities in software, weapons systems, cars, etc. is we sit a human down and have them spend a lot of time looking through these programs for these edge cases. Because of this, if we have 10 humans we can look at 10 programs. What autonomous bots allow us to do is look through hundreds of programs in the same amount of time. It’s all about scale.”
With a plethora of Internet-connected devices and communication lines susceptible to breaches, along with the inherently insecure design of most software, autonomous bots could augment, and perhaps even replace, red teams conducting penetration testing. These bots could eventually hunt intruders while helping build shielding cyber defenses, detecting vulnerabilities both old and new and hardening them against adversaries, thereby blurring the line between offense and defense. But they could also be designed to persistently breach systems with far more aggressive objectives. Given that offensive prowess, states, criminals, and terrorists alike could direct them against the expansive connectivity of today’s world, potentially enabling pervasive surveillance, automated extortion, and even kinetic harm.
Matt Devost, a managing director at Accenture Security and CEO of FusionX, points out that “automated data aggregation and analysis will be used to compile very complete personal profiles around victims of attack that will permit not only direct compromise, such as guessing their password, but also heavily targeted social engineering.” Much like red teaming, surveillance can also be labor intensive. Autonomous bots, according to Devost, “might be created by nations to follow hundreds of individuals to discern travel patterns and relationships, which might identify them as clandestine intelligence operatives, terrorists, or criminals. This kind of tailored tracking is possible in an age in which phones, cars, passports, and other devices all emit unique electronic signatures.”
Former President Barack Obama envisioned “an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles.’” He added, “If that’s its only job, if it’s self-teaching and it’s just a really effective algorithm, then you’ve got problems.” Because these systems would incorporate machine learning, even an autonomous bot ordered merely to break into a weapon system’s network and disable it, rather than to launch anything, could unpredictably calculate that launching the weapon is the most efficient path to its objective. The pace at which artificial intelligence makes decisions based on enormous inputs of information makes it difficult for humans to comprehend, let alone predict, the choices of autonomous bots.
But what if criminals, terrorists, or the like get hold of autonomous bots? Hackers could breach the systems hosting these bots, then commandeer, reverse engineer, alter, and deploy them for their own objectives. Devost suggests that “one of the first tasks that an [autonomous bot] might take is to patch itself,” essentially self-hardening “to decrease the likelihood that someone would be able to target them.” However, according to Devost, “ransomware could be developed to automatically seek out targets, successfully compromise them, and collect the ransom to unlock the victim’s files. Once their creation is unleashed on the world, the attacker would just need to sit back and watch their digital currency account grow.” Autonomous bots create just as much opportunity for non-state actors as they do for states, and, according to Devost, “given attackers have historically demonstrated an innovation advantage, they just might get there first.”
Brumley, on the other hand, focuses on the positive aspects of autonomous bots, arguing that “when we start looking at how we deploy autonomous bots, we really see a world where they’re checking the world’s software,” and “that’s really going to increase security, because these autonomous bots are going to find vulnerabilities before attackers do.”
Levi Maxey is a cyber and technology producer at The Cipher Brief. Follow him on Twitter @lemax13.