Lethal autonomous weapons systems (LAWS) are weapons that need little, if any, human interaction to select and engage their targets. As the underlying technology advances, experts both inside and outside the U.S. military are weighing whether fully autonomous weapons are moral, whether their benefits, which can include extraordinary speed and accuracy in war, justify their use, or whether fully automated systems pose a greater risk of disaster because they cannot fully understand the consequences of their decisions.
The Cipher Brief spoke with Paul Scharre, a former U.S. Army Ranger who served in Iraq and Afghanistan. Scharre also helped establish policies on unmanned systems while working for the Office of the Secretary of Defense (2008-2013). He was recently awarded the 2019 William E. Colby Award for his book, Army of None: Autonomous Weapons and the Future of War.
The Cipher Brief: When you think about how quickly technology is evolving, what do you think we most need to pay attention to when it comes to the development and use of autonomous weapons systems?
Scharre: The main challenge is that the technology has evolved in such a way that each generation of advanced military robotic systems incorporates more autonomy; the trend lines point toward more and more autonomy over time. It's clear that in the relatively near future, we'll face important questions about whether we will delegate life-and-death decisions to machines in times of war. One of the questions is: what happens when a Predator drone has as much autonomy as a self-driving car? And how will we feel about machines making these kinds of decisions? On the one hand, machines are more consistent than people. If they have good data and understand their environments well, they often outperform humans. The appeal of self-driving cars is that they might reduce deaths on the road, and one of the arguments in favor of autonomous weapons is that the same technology that can reduce accidents in self-driving cars can be used to reduce civilian deaths in war. On the other hand, anybody who has ever interacted with a computer knows that computers fail and break, and it's easy to imagine the risks and the harm that could come from a computer breaking if it has lethal capabilities.
The Cipher Brief: In the realm of cyber right now, there's a struggle to get other countries to sign on to a set of norms that would make the environment safer for everyone using it. Is it a similar kind of challenge with autonomous weapons? How does the West know that the ethics it has tried to build into the programming will be shared by everybody else in the world?
Scharre: The central challenge in trying to craft constraints on these weapons is that we don't trust each other. That's why we have militaries in the first place. There have been discussions underway at the United Nations since 2014, and it's really a good thing that countries are coming together to discuss the technology. But even if a country with solid principles says, "Look, there are limits to how we want to use this technology. We don't want to cross certain lines," how do you get to the place where two countries can trust each other? The Intermediate-Range Nuclear Forces Treaty just collapsed because Russia was cheating on it. And that's a treaty where it's much easier to verify whether a country is actually abiding by the agreement, because it's much harder to hide things like missile tests than it is to hide software. So, even if you were to look at a robotic system and someone said, "Well, there's always a human; the human is always in control," how do you know that's the case? How can you verify that? And that's a real challenge for any kind of international regulation of this technology.
The Cipher Brief: You have a really interesting background, coming from a Ranger battalion. How has that kinetic experience impacted the way you see this issue as a scholar and an academic?
Scharre: In a couple of ways. In a very visceral sense, the ethical challenges of warfare lead you into complex moral situations. That's the nature of warfare. I talk about some of these incidents in the book, where people have to make tough decisions based on incomplete information while weighing the consequences. One of the important differences between humans and machines is that, while both make mistakes, machines don't necessarily understand the consequences of their actions. A human can understand, "Hey, if I screw this up and I shoot the wrong person," what that means and what the value of a human life is. And that's, I think, an important aspect of human involvement in warfare that we wouldn't want to give up as this technology advances.
The Cipher Brief: What are the next issues people need to consider, given how quickly technology is advancing and how quickly everybody is racing to develop the fastest, smartest computers to guide military actions?
Scharre: One of the really under-explored areas in this space is the effect on stability between nations, and the potential for the desire for greater speed and competitive advantage to lead to a world where militaries are deploying weapons that react at superhuman speeds, which means they can also have accidents at superhuman speeds. When you look at what a world of autonomous weapons might look like in the future, one of the interesting points of comparison is stock trading, where we have machines interacting in a very high-consequence environment at machine speed, making decisions far faster than humans can respond, with access to vast amounts of information. The way regulators in financial markets have dealt with this challenge is by installing circuit breakers that take stocks offline if the price moves too quickly. But what happens in warfare if things begin to spiral out of control? The bottom line is that the challenge is to find ways to use this technology to make warfare more precise and more humane, and to avoid civilian casualties, without losing our humanity in the process. We shouldn't be entranced by the lure of machines being better than humans. In some things they are faster and more precise, but the most advanced cognitive processing system on the planet is still the human brain. Machines, even today's advanced AI systems, lack the ability to understand context and to transfer what they have learned from one area to another. Machine intelligence can be very brittle and can fail quite badly.
Adversaries are always going to try to confront military forces with novel problems, and so the U.S. military's thinking has shifted in recent years toward the concept of centaur warfighting, with humans and machines combined. I think that's the right approach, and that we should look for ways to leverage the best of both human and machine intelligence.