Should humans delegate the responsibility of decisions over life and death to computer algorithms? The answer is not as simple as it seems—and it is the fundamental question concerning the military application of artificial intelligence in the future.
The world has seen incredible technological advances in recent decades; not least of which is the advent of information technology. Just as nuclear weapons, precision guidance systems and stealth technology set the U.S. military apart from its near-peer adversaries in the past, U.S. research and development institutions now seek innovations in additive manufacturing, advanced materials, synthetic biology and artificial intelligence to do the same today. But there are few technologies that present as many uncertainties for the future as the military application of artificial intelligence to develop lethal autonomous weapons systems.
These so-called “killer robots” are theorized as adaptive, cognitive learning weapons that instantaneously—and autonomously—turn intelligence about their environment into actionable choices about targeting and the application of lethal force. Put simply, these weapons would decide whom to kill, and when, without direct human interaction in the decision loop.
In 2012, the Pentagon issued a directive permitting only semi-autonomous weapons—weapons that can engage only targets selected by a human operator. But this directive expires in 2022, deferring the decision about the future of autonomous weapons for a few years.
Ultimately, the delineation between semi-autonomous and autonomous weapons is a matter of degree, rather than black and white, resulting in ambiguity over what “meaningful human control” over lethal force actually entails. For example, the U.K.’s “fire and forget” Brimstone missile identifies, tracks, and locks onto vehicles autonomously, selecting its own targets but within an area-defined “kill box” set by a human operator.
Some might argue that the landmine, in use for well over a century, already constitutes a lethal autonomous weapon, since it detonates on sensory feedback alone. Along the South Korean border sit stationary sentry guns able to identify, track and engage targets without human intervention. And Israel’s Iron Dome defense system autonomously identifies incoming rockets, calculates their trajectories, and intercepts them so that debris lands in the least populated areas.
But the autonomous weapons of the future would offer an even greater competitive advantage over near-peer adversaries. For example, they could continue operating in contested environments where communications jamming severs the link to human operators.
Warfighting of the future will occur across numerous domains, involving multiple weapons systems coordinating with each other through interlinked computer networks—all at the speed of light. The human brain cannot possibly process these factors at a tempo necessary for meaningful involvement.
Doug Wise, former Deputy Director of the Defense Intelligence Agency, argued in comments to The Cipher Brief that military commanders will no longer be playing “just a traditional game of chess.” Rather, he says, they will be playing “multi-tiered games occurring simultaneously where every chess piece is interoperable, self-aware—autonomous—and we will no longer be able to connect with them. Yet, each of those machines—or autonomous chess pieces—must, in some way, have the core values of the United States of America.”
There have been calls for a ban on lethal autonomous weapons systems—or a moratorium on their deployment until the technology is better understood—from robotics and artificial intelligence experts as well as a broad range of NGOs. But distrust and uncertainty still reign between adversaries, who may develop fully autonomous weapons despite the risks to gain a competitive advantage and ensure their own security—forcing other states to do the same.
Some have argued for international agreement on the use of autonomous weapons. But basic principles of international law governing conflict—such as proportionality, self-defense, and differentiation between combatants and non-combatants—are anything but simple when applied by humans, let alone computer algorithms. Additionally, Paul Scharre, Senior Fellow and Director of the Future of Warfare Initiative at the Center for a New American Security, argues that “the laws of war govern the effects on the battlefield—what actually happens—but they do not necessarily govern the process by which decision-making is made.” “The law is,” Scharre contends, “effectively moot at this point.”
A counterargument to banning lethal autonomous weapons systems is that humans are emotional, irrational, and prone to mistakes; artificial intelligence could avoid human error by delivering greater precision and discrimination between targets, limiting the amount of collateral damage and creating “cleaner warfare.” Angela Kane, a Senior Fellow at the Vienna Center for Disarmament and Non-Proliferation and former High Representative for Disarmament Affairs at the United Nations, argues that “even a (debatably) ‘positive’ aspect as the minimization of casualties could lower the threshold for the use of force, which in turn might exacerbate regional or even global insecurities.” In other words, with fewer lives at stake, states could deploy autonomous weapons without meaningful democratic checks to facilitate wars of aggression and exploitation.
But what happens when autonomous weapons do make mistakes? After all, they are learning machines that are constantly updating themselves, making their decisions unpredictable. When humans make mistakes, it can be chalked up to the “fog of war.” Kane poses the question of “who has the accountability?” Is it the “robot, the manufacturer of the machine, the developer of the technology, or the overseeing officer” that ultimately has responsibility for the robot’s decisions over life and death?
In the end, it is difficult—if not impossible—to determine why exactly an autonomous weapon might make a mistake. Scharre notes that autonomous weapons containing deep learning capabilities present a “black box” whereby “as systems become more complex, humans may not really understand how it is working or where its boundaries are and so they may be unpleasantly surprised by its behavior,” and unable to reverse engineer the process leading to unforeseen behavior.
The Defense Advanced Research Projects Agency (DARPA) is working on developing explainable artificial intelligence to solve this problem. But until a solution is found, military commanders will have trouble deploying autonomous weapons responsibly—or strategically—without truly understanding how these systems reach conclusions.
This article has been updated to reflect that the directive on autonomous weapons is the Pentagon’s policy and expires in 2022, and that calls for a ban are specifically focused on lethal autonomous weapons systems.
Levi Maxey is a cyber and technology producer at The Cipher Brief. Follow him on Twitter @lemax13.