The cybersecurity industry is currently enamored with concepts of autonomous defense, including machine learning, behavioral analytics, and artificial intelligence—and rightly so. Programmed to study every vulnerability in the public domain, autonomous bots (autbots)—not to be confused with bots that simply perform repetitive tasks as programmed, like guessing default passwords—could take what they learn from previous human efforts and devise innovative methods to target systems, creatively finding unknown vulnerabilities and crafting patches for them.
As the current shortage of cybersecurity professionals shows, new approaches and heavy use of automation and robotics will be required to address the plethora of current and emerging cyber threats; autbots could help close the workforce gap and provide the scale needed to achieve cyber defense objectives. Autbots could augment existing red teams conducting penetration testing, either by handling some of the more labor-intensive research and assessment and delivering their results to humans for exploitation, or by red teaming on their own, since they would be capable of assessing, attacking, and securing a network fully autonomously.
These technologies, however, will not be the exclusive domain of defenders. They will also be used to attack networks. These autbots will be designed to meet the objectives of their disreputable makers, but might also be able to adapt in unexpected ways. Attack autbots will find victim machines, compromise them, and then learn from the host to aid the next level of attack or target associated trusted systems and networks.
Automated data aggregation and analysis will be used to compile detailed personal profiles of victims, enabling not only direct compromise, such as deducing passwords, but also heavily targeted social engineering. Just as social media companies deliver targeted advertisements based on posts, likes, or views, attackers will use that information for spear phishing campaigns that victims will inevitably fall for.
Autbots might be created by nations to follow hundreds of individuals and discern travel patterns and relationships that might identify them as clandestine intelligence operatives, terrorists, or criminals. This kind of tailored tracking is possible in an age in which phones, cars, passports, and other devices all emit unique electronic signatures. Counterintelligence officials or foreign adversaries could order autbots to find the 19 million security clearance holders compromised in the Office of Personnel Management breach and follow them through their vehicles, their mobile devices, the networks they visit, and their online activity.
Conventional surveillance is extremely labor intensive if done correctly, but with constant connectivity and the number of signatures people leave every day, countries could automate that surveillance at scale using autbots.
While autbots could make attribution more difficult, major countries could deliberately design autbots to be attributable so that competitors are aware of them and their existence acts as a deterrent. If China, for instance, has settled a certain regional security policy issue at the national level, it could take enforcement of that policy outside the human decision-making process. An autbot could act as judge, jury, and enforcer of that policy, ensuring consistency through constant monitoring and automatically triggering disciplinary action in the event of a breach, such as imposing sanctions or even launching kinetic strikes. An international organization, such as the United Nations, could similarly use autbots to enforce international policies, whether adherence to sanctions, commitments to relief efforts, or agreements not to occupy particular territories.
Because of systems’ inherent insecurity, adversaries will be able to hack into and commandeer, change, or duplicate autbots. One of the first tasks an autbot might undertake, however, is to patch itself: analyze its own code, find its vulnerabilities, and ensure they cannot be exploited, essentially hardening itself. This would decrease the likelihood that anyone could target it.
However, because these autbots would incorporate machine learning, there is a fear that they could act unpredictably, which will likely make states reluctant to let autbots independently enforce policy objectives. An autbot might interpret the objective it was coded to achieve in a way that defeats the purpose for which it was launched. For example, an autbot penetrating a conventional weapon system might calculate that, instead of mounting a cyber attack, it could achieve its objective more efficiently by launching the conventional weapon itself.
In addition, ransomware could be developed to automatically seek out targets, successfully compromise them, and collect the ransom to unlock the victim’s files. Once their creation is unleashed on the world, the attacker would just need to sit back and watch their digital currency account grow. Automated ransomware might also target Internet of Things devices. What happens when it is the dead of winter and your Internet-connected thermostat wants you to pay a ransom before you can turn on the heat?
Other autbots might specifically target the industrial control systems underpinning critical infrastructure. Terrorists wielding autbots against critical infrastructure would be far more likely to launch attacks with physical consequences, endangering systems such as communication lines and national airspace; autbots could likewise infect hospital medical devices, preventing them from delivering urgent treatment to patients.
Autbots might also be attached to emerging concepts like digital autonomous corporations, in which the company’s anonymous owners issue directions to the autbot. An anonymous vote, for example, could call for lowering a company’s share price by 10 percent or exposing the private correspondence of a law firm. The autbot would then pursue that objective in ways people might not be able to understand.
It is scenarios like these, and not only our current defensive requirements, that are driving innovation in cyber defense. Given that attackers have historically demonstrated an innovation advantage, they just might get there first. Modern organizations need to consider how these new technologies could affect their security strategies, and we need to drive innovation to defend against these emerging threats.