New technologies are changing the face of future warfare, and few will be as impactful as the fusion of artificial intelligence into weapons systems. The Cipher Brief spoke with Paul Scharre, Senior Fellow and Director of the Future of Warfare Initiative at the Center for a New American Security, about the rationale for, and complications presented by, the development of autonomous weapons systems that could eventually choose and engage targets without human involvement.
The Cipher Brief: Where do you see the future of warfare in terms of incorporating artificial intelligence into weapons systems?
Paul Scharre: We are at the beginning of a revolution in artificial intelligence. There are any number of applications for all sorts of industries, and it seems inevitable that militaries will use them as well. Many of these applications are not very contentious—things like better logistics or data processing. But when it comes to how much freedom, or autonomy, is given to machines, it becomes a contentious issue when considering weapons and lethal force.
We already have a lot of automation today, particularly in defensive systems: at least 30 countries have automated defensive systems to shoot down incoming rockets and missiles. But how much autonomy is appropriate going forward? A letter signed last year by more than 3,000 robotics and AI experts called for a ban on autonomous weapons, and a number of NGOs have called for a ban as well.
It is clear artificial intelligence will be important in the warfare of the future; we will see artificial intelligence cognifying weapons, making them smarter. From an intelligence standpoint, is it essential in war to go that final step to autonomous weapons that would go out and select targets on their own? No country has explicitly said it plans to build autonomous weapons, but very few countries have said they will not either. It is too early to say whether such weapons will be a part of war in the future.
TCB: What are some legal and ethical complications presented by autonomous weapons?
PS: There are a number of legal issues that are challenging. Nothing in the laws of war says a machine could not be used to make targeting decisions all on its own. The laws of war govern effects on the battlefield, what actually happens, but they do not necessarily govern the process by which decisions are made. Some have argued this is because the laws of war never had to specify that a human made those decisions; it was always implicit.
The laws of war do raise very serious challenges to autonomous weapons, and, given the state of technology today, it would be hard for autonomous weapons to be used lawfully in certain settings, for example in civilian-populated urban environments. They could be used lawfully today for targeting military ships or submarines, where there clearly are no civilians present. As technology moves forward, autonomy used in the right way could allow weapons to be more humane and more precise, helping to avoid civilian casualties. At the end of the day, the law leaves the question unsettled at this point.
There are many ways to view autonomous weapons from an ethical standpoint. Humans can exhibit empathy in war and exercise restraint, sometimes refraining from killing that would otherwise be lawful. One concern is that autonomous weapons would take away that capacity for empathy; it is hard to say how significant that is. Some argue being killed by an autonomous weapon would violate human dignity, but a lot of things about war are terrible, and we should not glamorize war as it is fought today. Being mowed down by a machine gun or blown up by a bomb is not dignified either.
If people do not feel responsible for killing, instead pointing to the robot's decision-making, would that result in more killing? What would it mean for society if we engaged in warfare and no one felt bad about it? That line of thinking suggests PTSD and moral injury are, in a sense, good things, because they show that we are moral actors, even though they are normally thought of as harmful side effects of war. There aren't any easy answers here.
TCB: Why would countries eventually pursue fully autonomous weapons?
PS: There are a lot of reasons to pursue artificial intelligence: to make weapons more precise, to process data faster. But why go the final step to fully autonomous weapon systems? There are two reasons. First, even though machines may not necessarily make better decisions than humans, they certainly can make decisions faster. Certain defensive systems are an example of where speed has already driven militaries toward automation. These meet the basic criteria for an autonomous weapon: once turned on, they search their space for targets and engage them all on their own. They are used to defend vehicles, ships, or military installations from rockets, artillery, and mortar attacks that may come in so fast that people cannot possibly respond in time.
The second reason is an unmanned, or uninhabited, system that is operating outside of communications. Major military powers are developing advanced combat drones intended to operate in contested airspace where enemies might jam communications. What does the drone do if its communications are cut off? Does it come home, or just take pictures and do surveillance? Can it strike preplanned, fixed targets, like a Tomahawk cruise missile does? What if it comes across a new target that has not been authorized but is a threat? Is it allowed to engage, or to shoot back if someone fires at it? Those are practical questions that militaries will have to address in the next 10 to 15 years.
It is also possible militaries would pursue autonomous weapons out of fear that others are doing so. It is easy to say we should avoid an arms race, but an arms race is at bottom a failure of collective action: countries build these weapons because they fear what the others are doing.
TCB: These machines would constantly be updated and learning from their environments. Could this make their actions unpredictable? If a machine makes a mistake, how can we go back, look at how it arrived at that conclusion, and determine why exactly it made that mistake?
PS: The problem of unpredictability can arise in complex systems, even ones that are ostensibly rule-based and do not employ deep learning. You can get what are called “normal accidents” in complex, tightly coupled systems, where the system acts counterintuitively in ways people did not anticipate, leading to a negative outcome.
This becomes even more difficult in learning systems that acquire data over time. They are programs, but not programs in the sense that someone writes their rules for behavior; they learn over time. For example, a Tetris-playing bot learned to pause the game right before the last brick landed at the very top so that it would never lose. Machines can learn things that are technically within the bounds of what they were programmed to do but are not the kind of behavior you actually want.
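To make that failure mode concrete, here is a minimal, purely hypothetical Python sketch (not from the interview, and not Scharre's example): an agent scored on how long it survives in a toy game discovers that "pausing" maximizes the score it was given while defeating the purpose of playing, in the spirit of the Tetris bot.

```python
import random

random.seed(0)

ACTIONS = ["play", "pause"]
EPISODE_LENGTH = 20          # steps per game
LOSS_PROBABILITY = 0.15      # chance of losing the game on any "play" step

def step(action):
    """One step of the toy game: returns (reward, game_over)."""
    if action == "pause":
        return 1.0, False    # survives the step; the game can never end
    return 1.0, random.random() < LOSS_PROBABILITY   # "play" risks ending the game

def run_episode(action):
    """Follow a single fixed action for a whole episode; return total reward."""
    total = 0.0
    for _ in range(EPISODE_LENGTH):
        reward, game_over = step(action)
        total += reward
        if game_over:
            break
    return total

# Naive "learning": estimate each action's average return from experience.
estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
for _ in range(2000):
    action = random.choice(ACTIONS)
    ret = run_episode(action)
    counts[action] += 1
    estimates[action] += (ret - estimates[action]) / counts[action]

# "pause" ends up with the higher estimated value: the agent learns to stall,
# which satisfies the objective it was given but is not the behavior we wanted.
print(estimates)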
This is magnified even further with deep learning, because the technique of deep learning, using neural networks, is very opaque. It is a “black box”: you feed data in and turn a million different dials until you get the right output. But what is going on inside the box is pretty unclear, which means we need to work on making artificial intelligence more explainable, so that we can peel back the layers and understand what a system is doing and why. As systems become more complex, humans may not really understand how they work or where their boundaries are, and so they may be unpleasantly surprised by their behavior.
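As a rough illustration of the "dials" metaphor (again a hypothetical sketch, not anything from the interview), the short Python example below trains a tiny neural network on the XOR problem. The network gets the right answers, but its learned weight matrices are just numbers that offer little insight into why it answers as it does; the crude perturbation probe at the end hints at the kind of question explainability work tries to answer.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# The network's "dials": two weight matrices and two bias vectors.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

for _ in range(20000):                                 # plain gradient descent
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    grad_output = (output - y) * output * (1 - output)          # backprop
    grad_hidden = (grad_output @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * (hidden.T @ grad_output)
    b2 -= 0.5 * grad_output.sum(axis=0, keepdims=True)
    W1 -= 0.5 * (X.T @ grad_hidden)
    b1 -= 0.5 * grad_hidden.sum(axis=0, keepdims=True)

predictions = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print("predictions:", np.round(predictions, 2).ravel())  # typically near 0, 1, 1, 0
print("learned weights W1:\n", np.round(W1, 2))          # correct, yet opaque numbers

# Crude explainability probe: nudge one input and see how much the answer moves.
x = np.array([[1.0, 0.0]])
base = sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)
for i in range(2):
    nudged = x.copy()
    nudged[0, i] += 0.1
    shifted = sigmoid(sigmoid(nudged @ W1 + b1) @ W2 + b2)
    print(f"sensitivity to input {i}: {abs(shifted - base).item():.3f}")
```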