The Department of Defense is seeking a significant budget increase for further development of lethal autonomous weapons systems (LAWS). As weapons technology advances rapidly, policy makers are grappling with the question of just which decisions machines should be making on their own. Should an autonomous weapon make life-and-death decisions? How much input should be reserved for humans?
After protests in Silicon Valley over the way that machine learning is being used in lethal systems, the Pentagon is now seeking input. It has tasked the Defense Innovation Board, made up largely of Silicon Valley executives, to provide guidelines for the application of machine learning in future wars that will likely rely heavily on machines making decisions.
Google employees circulated a petition last year, protesting use of the company’s AI capabilities to support DoD’s Project Maven, a drone imaging program.
BACKGROUND:
- Lethal autonomous weapons systems (LAWS) are weapons that need little, if any, human interaction in order to select and engage their targets.
- While technology to improve weapons systems is advancing, experts both in and outside the U.S. military are considering whether fully autonomous weapons are moral.
- Advocates for LAWS argue that they provide incredible speed and accuracy in war and will allow the U.S. to remain globally competitive with its adversaries. They also argue that LAWS can reduce the number of war casualties by removing warfighters from the battlespace, or from missions that involve potential exposure to harmful material.
- Those opposed to LAWS argue that fully automated systems carry a greater risk of disaster because they cannot fully understand the consequences of their decisions, and because there is a stark lack of accountability for a ‘wrong’ decision.
- The Department of Defense published the ‘Unmanned Systems Roadmap: 2007-2032’, which outlines the potential benefits LAWS can bring to the battlespace in terms of lives and dollars.
- Today’s use of lethal autonomous weapons systems is guided by a directive drafted by the Department of Defense in 2012 that “establishes DoD policy and assigns responsibilities for the development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms.” It also seeks to establish “guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”
- Last June, Google said it was withdrawing from a government program after thousands of company employees signed a petition to end their association with ‘Project Maven,’ which relied on Google’s AI software for a DoD drone imaging program.
- United Nations Secretary-General Antonio Guterres recently called for a ban on lethal autonomous weapons, saying “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.”
- In July of 2018, more than 200 organizations, along with individuals including Elon Musk, signed a pledge saying they would “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”
- The Department of Defense has since asked the Defense Innovation Board to develop ethical principles around the use of AI by the military. The report could be released in June.
The Cipher Brief asked experts to weigh in with their concerns.
Doug Wise, former Deputy Director of the Defense Intelligence Agency; Paul Scharre, former Special Assistant to the Under Secretary of Defense for Policy and author of the award-winning book Army of None; and Radha Iyengar Plumb, adjunct economist at the RAND Corporation and fellow at Stanford’s Center for International Security and Cooperation (CISAC), provide their thoughts on the future of war and how important it is for the U.S. (both the public and private sectors) to get this right.
Doug Wise, Former Deputy Director, Defense Intelligence Agency
“We are falling behind the Chinese and the Russians on the development of a broad artificial intelligence capability. AI will be the enabling technology for autonomous systems, including weapons, so we’ll be in third place with respect to those machines which require sophisticated, reasonably self-aware and independent (autonomous) operating systems. Having said this, I often wonder whether, when we reach a time where conflict is bloodless and the casualties are inanimate objects, our policy makers and our adversaries’ policy makers will be inclined toward more aggression and military recklessness.”
Radha Iyengar Plumb, Adjunct Economist, RAND Corporation & CISAC Fellow, Stanford
"I think the core issue for all of this debate is what is the relevant counterfactual. That is humans are programming machines, making military engagement decisions, making operational decisions, and other types of actions. How do human-programmed machines compare to the humans themselves? Ultimately, military planners and policy makers need to decide the dimensions we care most about and then asses whether the automated actions better achieve that outcome than a human. The ultimate decision is always a human one – the question is really how is it practically accomplished and whether the application is one where automation makes more sense. There are a number of practical and substantive considerations that need to go into this decisions – which ultimately will be a values-based decision by policy makers."
Paul Scharre, Former Special Assistant to the Under Secretary of Defense for Policy
“We shouldn’t be entranced by the lure of machines that are better than humans at some things. They are often faster and more precise, but the most advanced cognitive processing system on the planet is the human brain, and machines, even today’s advanced AI systems, lack the ability to understand context, to translate things they have learned from one area to another. Machine intelligence can be very brittle and can fail quite badly.”
Doug Wise, Former Deputy Director, Defense Intelligence Agency
“Our adversaries are very well aware of our dependencies on space and under-sea cable systems for communications. This awareness applies particularly to our dependence on these communications capabilities to connect with our weapons platforms. We are fast approaching the time when traditional battlefield command and control is no longer possible, either because the latencies are too great or because we cannot communicate with our platforms at all. The need for our unmanned systems to be able to function without the tether to “headquarters” will determine success or failure on the modern battlefield. This will be especially true when our adversaries have deployed and are using equally, if not more, capable AI-enabled weapons systems. The sensor-to-shooter time will be too short for a man in the loop, such as the architecture used by the current Predator unmanned vehicle. Thus, our autonomous weapons will be able to sense their environment and anticipate targets, opportunities and threats independent of manned control, and will be able to set and dynamically adjust on-platform priorities and behaviors in real time.”
Radha Iyengar Plumb, Adjunct Economist, RAND Corporation & CISAC Fellow at Stanford
"One of the important things to consider is how does the design of these systems actually shift decisions from inconsistent ones that require detailed human judgment to consistent ones that remove judgement and discretion? In many cases, weapons systems, whether human or autonomous, are launched based on clear rules-based criteria which may not be significantly altered by machines. In other cases, decision criteria can be incorporated into machine processes (e.g. weather or location related details) and may be more effective. In still other cases these assessments require human judgement and individual responsibility that cannot be delegated. Understanding which situations fall within which of these different conditional decision making paradigms is critical to assessing risk."
Paul Scharre, Former Special Assistant to the Under Secretary of Defense for Policy
“When you look at what the world of autonomous weapons might look like in the future, one of the interesting points of comparison is stock trading, where we have machines interacting in a very high-consequence environment at machine speed, making decisions far faster than humans can respond and with access to information. The way that regulators in financial markets have dealt with this challenge is by installing circuit breakers that take stocks offline if the price moves too quickly. But what happens in warfare if things begin to spiral out of control?”
Radha Iyengar Plumb, Adjunct Economist, RAND Corporation & CISAC Fellow, Stanford
"We need to consider - when is strategic certainty of response – perhaps from credible automation of reaction useful for stability and/or deterrence and when does it add risk? Understanding the strategic objectives of different types of response and the role that automation can play in meeting those objectives can be a more productive way to assess trade-offs in use of autonomous systems in the future rather than treating these as wholly new capabilities in their own right. That is, what is changed relative to having a human do this task and where is the a benefit and where is that a cost?"
Doug Wise, Former Deputy Director, Defense Intelligence Agency
“Most important is that we don’t listen to Elon Musk and others who are mongering fear over the onset of “Skynet” and “Terminators.” We are a long way from the technological sophistication required for this threat, where the machines turn on their creators. We need to invest, we need to include these technologies in weapons systems as they advance, and we need to have a rigorous and healthy debate on the morality and ethics.”
Paul Scharre, Former Special Assistant to the Under Secretary of Defense for Policy
"Advisors are always going to try to confront military forces with novel problems, and so, the way the U.S. military has evolved on this has shifted in recent years to talking about a consequential central war plan, with humans and machines combined, and I think that’s the right approach, and I think that we should look for ways to leverage the best of both human and machine intelligence."
GET INVOLVED IN THE CONVERSATION
Scharre and Plumb took part in the Center for International Security and Cooperation (CISAC) Drell Lecture at Stanford University on Tuesday, April 30; the discussion is available via YouTube and on CISAC’s Facebook page.
Read also When Weapons Go To War in The Cipher Brief