The past week has seen a flurry of news stories on “killer robots,” which wouldn’t be complete without the obligatory Terminator and Robocop images. Countries were supposed to meet this month at the United Nations to discuss lethal autonomous weapons (aka “killer robots”), but meetings have been delayed until November due to funding shortfalls. Instead, we’ve been treated to a spike in media interest courtesy of an open letter signed by over 100 robotics and artificial intelligence company CEOs expressing concern about lethal autonomous weapons. Amidst all of this noise, you might be forgiven for thinking that momentum is building towards a ban. A careful reading of the letter suggests a different story, though.
Numerous sources have portrayed the letter as calling for an international treaty banning autonomous weapons, but a few astute observers have noted it does not. Instead, the letter warns of the dangers of autonomous weapons but leaves the solution vague. The letter’s authors conclude by asking countries engaged in UN discussions to “find a way to protect us from all these dangers.” As Yale’s Rebecca Crootof points out, the nuance was “clearly intentional.” A similar letter in 2015 with many of the same signatories called for “a ban on offensive autonomous weapons beyond meaningful human control.” The Campaign to Stop Killer Robots unambiguously has advocated for a “comprehensive, pre-emptive prohibition on the development, production and use of fully autonomous weapons.” This makes the mushy call to action in the recent letter all the more striking. The letter makes no bones about the dangers of autonomous weapons, calling them a “Pandora’s Box” that should never be opened. Yet it notably does not call for an international treaty to ban them.
If autonomous weapons are so dangerous, why does the letter remain vague on what to do about them? It’s most likely because there isn’t a clear consensus on what the best solution is. Successfully banning weapons is really hard. History is littered with failed attempts, from the Pope’s 12th Century crossbow ban to early 20th Century attempts to regulate submarines and aircraft.
That isn’t to say that weapons bans are impossible. Largely successful examples include prohibitions on land mines, cluster munitions, blinding lasers, chemical and biological weapons, using the environment as a weapon, and placing nuclear weapons in space or on the seabed. In some cases, militaries have refrained from using weapons without formal treaties in place; examples include sawback bayonets, anti-satellite weapons, and neutron bombs. But merely passing an international treaty – as hard as that may be – does not guarantee states will hold back in wartime if they believe it is in their interest not to.
A number of factors influence whether a ban is likely to succeed, including the availability of the weapon, its dangers, and its military benefits. A prerequisite for successful restraint is a clear understanding of which weapons are allowed and which are forbidden. This is a major challenge with autonomous weapons: there is no agreed-upon international definition of what, exactly, constitutes an autonomous weapon.
Militaries have used fire-and-forget homing munitions since World War II and many countries have automated defensive systems to shoot down incoming missiles. Arizona State University’s Heather Roff has compiled an exhaustive database of the various ways automation is already used in weapons. Just like with autonomous features in cars, such as intelligent cruise control, automatic lane keeping, collision avoidance, and self-parking, automation is incrementally creeping into weapons over time. That doesn’t mean it’s impossible to delineate between different forms of autonomy and human control, some of which might be beneficial and others problematic. But to do so requires a more nuanced conversation than much of the public debate.
The fact that nations have already used automation in a variety of ways for over 70 years has led some to conclude that regulation, rather than an outright ban, is a better solution. To some extent, this may be a semantic difference. If nations pass a treaty proscribing some forms of autonomy, advocates of a ban are likely to claim victory, even if the result isn’t everything they hoped for.
A bigger problem is managing the inevitability that some actors will ignore whatever rules the international community agrees upon. Even successful weapons bans like those on land mines, cluster munitions, and chemical and biological weapons have cheaters. Widespread horror at chemical weapons didn’t hold back Saddam Hussein or Bashar al-Assad from using them. This means that regardless of what law-abiding nations decide to do, we’ll have to contend with a future where terrorists and rogue regimes use automation for nefarious purposes.
The recent letter warns that autonomous weapons could be “weapons of terror, weapons that despots and terrorists use against innocent populations.” Indeed, that’s possible, but history suggests that a ban won’t solve that problem. A ban could even be counter-productive if it ties the hands of law-abiding nations from using automation for defensive purposes.
The 2015 open letter attempted to balance these concerns by only advocating for a ban on “offensive autonomous weapons beyond meaningful human control.” While this signals more flexibility than the Campaign to Stop Killer Robots’ call for a “comprehensive, pre-emptive prohibition on … fully autonomous weapons,” qualifiers like “offensive” and “meaningful human control” open up more definitional challenges. The fact that the most recent letter doesn’t even go that far suggests that there is no consensus among signatories on the solution, even if there is agreement that autonomous weapons are a risk.
As one of the letter’s organizers explained, one of the purposes of the letter was to sound the alarm and spur the UN into action. Last year, countries participating in the Convention on Certain Conventional Weapons agreed to form a Group of Governmental Experts (GGE), a more formal body to deliberate on autonomous weapons. Meetings have been delayed this year because some countries haven’t paid their dues, and the letter is a mix of chastising the UN and encouraging it to do more.
The sluggish pace of diplomacy is a marked contrast to rapid developments in autonomous technology. It is notable that over 100 robotics and AI company CEOs are concerned about potentially harmful uses of the technologies they are developing. If this letter helps to accelerate international talks, that would be a good thing; countries could benefit from further discussion to better understand the appropriate roles for autonomy and human control in weapons. It’s also a positive development that as interest has grown, many of those raising the alarm about autonomous weapons appreciate that there are no easy solutions.