
The Ethics of the Kill Decision: Should Humans Always be in the Loop?

The Cipher Brief’s Academic Incubator partners with national security-focused programs at colleges and universities across the country to share the work of the next generation of national security leaders.

Blaine Ravert is a Political Science and Philosophy major at Westminster College in Fulton, MO.


Tobias T. Gibson is the John Langton Professor of Legal Studies and Political Science, and Security Studies Program Chair at Westminster College.

ACADEMIC INCUBATOR - One of the few stable factors in conflict throughout history has been humans deciding whom to target, even as technology advanced and attacks became more mechanized. As technology now advances at ever-increasing speed, however, fully autonomous weapons are becoming a near-term reality, and that long-standing standard may no longer apply.

The DOD defines an autonomous weapon system as “[a] weapon system that, once activated, can select and engage targets without further intervention by a human operator,” and distinguishes it from a semi-autonomous weapon system, which “[i]s intended to only engage individual targets or specific target groups that have been selected by a human operator.”

Thus, the key distinction between fully and semi-autonomous weapon systems is the degree of control a weapon system has in deciding who and what to target. Peter Asaro, a professor in the School of Media Studies at The New School, argues that “Any automated process, however good it might be, and even if measurably better than human performance, ought to be subject to human review before it can legitimately initiate the use of lethal force.” We share this position because we believe that, ethically and legally, the final decision to use lethal force ought to be made by a human.

Like Asaro, Robert Sparrow, writing in The Journal of Applied Philosophy, has argued that “It is a necessary condition … that someone can be justly held responsible for deaths that occur in the course of the war. [If] this condition cannot be met … it would therefore be unethical to deploy such systems.” Though we concur, we also believe that humans should monitor actions carried out by autonomous systems—and retain veto power if the autonomous systems are about to commit unlawful or unethical actions. This condition would ensure that someone could be held responsible and accountable for actions committed by autonomous weapons.
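
To make this condition concrete, the sketch below, in Python, illustrates one way to picture the control pattern Asaro and Sparrow call for: the machine may nominate a target, but a named human must authorize the engagement and retains veto power, and the authorizing operator is recorded so responsibility can be assigned. This is a minimal illustration only; every class name, field, and threshold is hypothetical, and no fielded system is being described.

```python
# A minimal, purely illustrative sketch of a human-in-the-loop gate:
# the machine nominates, the human decides, and the decision is
# attributable to a named operator. All identifiers are hypothetical.
from dataclasses import dataclass

@dataclass
class TargetNomination:
    target_id: str
    confidence: float   # the system's confidence in its identification
    rationale: str      # evidence a human reviewer can audit

@dataclass
class HumanOperator:
    name: str           # a named human, so responsibility is assignable

    def review(self, nomination: TargetNomination) -> bool:
        """The human makes the final call; the machine only proposes."""
        # Placeholder policy for the sketch: decline anything the system
        # itself is unsure of. In practice this is a human judgment.
        return nomination.confidence >= 0.95

def engage(nomination: TargetNomination, operator: HumanOperator) -> str:
    # Lethal action is gated on explicit human authorization, and the
    # authorizing operator is logged for accountability.
    if operator.review(nomination):
        return f"AUTHORIZED by {operator.name}: engage {nomination.target_id}"
    return f"VETOED by {operator.name}: {nomination.target_id} not engaged"

print(engage(TargetNomination("T-042", 0.91, "sensor signature match"),
             HumanOperator("operator of record")))
# -> VETOED by operator of record: T-042 not engaged
```

The point of the pattern is that the veto sits between target selection and weapon release, so an accountable human, not the automated process, initiates any use of lethal force.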

James E. Baker argues in his book, The Centaur’s Dilemma, that “the commander will and should be held responsible, if he knew of, or had reason to know of, violations of law, including that a weapon or system would not work as intended.” While there will always be some degree of uncertainty in predicting an autonomous weapon system’s actions, that uncertainty can be reduced by strict programming and direct human supervision.

While there are good reasons not to create or use fully autonomous weapon systems, as defined by the DOD, there are arguably equally good reasons to use weapons with a more limited degree of autonomy.

Human beings can have intense, negative emotions, and these feelings can lead them to commit horrible actions on the battlefield. This is one reason semi-autonomous weapons can be useful in combat: as C. Anthony Pfaff, Research Professor for the Military Profession and Ethics at the U.S. Army War College’s Strategic Studies Institute, notes, “they do not suffer from emotions such as anger, revenge, frustration, and others that give rise to war crimes.”

Ronald C. Arkin, Regents’ Professor at the Georgia Institute of Technology, summarizes a variety of sources supporting Pfaff’s argument, including surveys suggesting that soldiers are often driven by negative emotions in combat. Arkin also notes that semi-autonomous systems can take in and act on sensor information at volumes beyond human capacity, and that they will not fall victim to the “shoot first, ask questions later” mentality that can cause needless deaths in combat.

Semi-autonomous weapons can have other benefits as well. One major benefit, which freelance writer Richard Purcell describes, is that these systems can gather and analyze huge amounts of information quickly, enabling them to provide high-quality recommendations on courses of action to human leaders. Similarly, Air Force officers Brad DeWees, Chris Umphres, and Maddy Tung argue that while machines are useful in enumerating courses of action, projecting possible outcomes, and estimating the probability of success for a given course of action, they are not useful in making value judgments. We think this general framework is sound, and that it captures the strengths as well as the weaknesses of autonomous weapons. And, to be sure, as readers of The Cipher Brief know, autonomous aircraft have recently outperformed human pilots in head-to-head simulated dogfights.
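
As a rough illustration of that division of labor, the sketch below has a machine rank hypothetical courses of action by estimated probability of success, while the choice among them, weighing success against risk to non-combatants, is left to a human. The class names, sample options, and numbers are invented for illustration only.

```python
# A purely illustrative sketch: the machine enumerates courses of action
# and estimates outcomes; the value judgment stays with a human.
# All names and figures here are hypothetical.
from dataclasses import dataclass

@dataclass
class CourseOfAction:
    description: str
    p_success: float        # machine-estimated probability of success
    collateral_risk: float  # machine-estimated risk to non-combatants

def recommend(options: list[CourseOfAction]) -> list[CourseOfAction]:
    """The machine's role: rank options by estimated success. It decides nothing."""
    return sorted(options, key=lambda c: c.p_success, reverse=True)

options = [
    CourseOfAction("Delay and strike with a smaller munition", 0.72, 0.05),
    CourseOfAction("Strike immediately with a larger munition", 0.90, 0.30),
]

for coa in recommend(options):
    print(f"{coa.description}: success {coa.p_success:.0%}, "
          f"collateral risk {coa.collateral_risk:.0%}")

# The human's role: decide whether a higher chance of success justifies
# a greater risk to non-combatants -- a value judgment the model cannot make.
```

The design choice worth noticing is that `recommend` returns the full ranked list rather than a single answer: the machine informs the decision but never makes it.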

In conclusion, technology has already changed armed conflict by giving machines greater autonomy in selecting and engaging targets. We are now on the precipice of the next step in this change, in which machines are programmed to select and engage targets without human oversight or decision-making. That step could be dangerous and could undermine fundamental ethical principles of warfare. Acting now, however, before this level of technology emerges, will allow humans to anticipate consequences more effectively, establish firm programmed safeguards, and incorporate legal limits on autonomous weapons systems. With such preventative measures, ethical violations during combat can be minimized, and operations against dangerous targets can be designed to minimize the cost in human life.

The possibility of autonomous weapons opens a variety of legal, ethical, and practical questions involving combat. How the international community decides to answer these questions will have a major impact on the future of armed conflict. While autonomous weapons have the potential to be a positive influence on combat, they also have the potential to unleash terrible consequences upon the world.

