Discussions on artificial intelligence (AI) too often revolve around concerns about the sensationalist threat of “killer robots,” usually featured in science-fiction films or computer games. Killing is depicted as easy and rapid, done by steel-clad monsters with super-human abilities.
Let me put this fear to rest. We do not yet have killer robots, though we must acknowledge that the technology is growing ever more sophisticated.
In this context, let me mention the positive aspects of AI, which are all around us yet often overlooked in the debate. The many benefits AI has introduced are evident in the healthcare system, in the availability of data and information, in the day-to-day use of AI-powered devices, and in hazardous-environment and search-and-rescue operations. And, perhaps most importantly, AI has the potential to address other global issues, such as disaster preparedness and response.
The progress made in recent years has been daunting. AI is no longer limited to empowering machines with hand-written algorithms; efforts now focus on pushing its boundaries by enabling machines to acquire cognitive abilities through observation and repetition. Video games, for instance, are used to improve a program’s ability to learn and to maximize its score: a skill that could be transferred to the real world. This transferability is closely tied to AI’s nature as a dual-use technology: civilian applications can also serve military purposes, and vice versa. But before jumping to conclusions about the possibly nefarious implications of AI, we must first acknowledge that we do not yet fully know which solutions are possible within the existing limitations of AI algorithms. And without knowing which problems are currently unsolvable, it is impossible to predict how long it will take until researchers can develop artificial general intelligence, or human-like AI.
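To make the idea of learning through repetition concrete, here is a minimal illustrative sketch, not a depiction of any real military or research system. Everything in it is invented for illustration: a toy “game” consisting of a five-cell corridor, and a tabular Q-learning agent that, purely through repeated play and reward, learns to walk toward the goal cell that maximizes its score.

```python
import random

# Toy "video game": a corridor of 5 cells. The agent starts at cell 0
# and earns a reward of 1 only on reaching cell 4 (the goal).
N_STATES, ACTIONS = 5, [0, 1]        # action 0 = move left, 1 = move right
GOAL = N_STATES - 1

def step(state, action):
    """Apply an action; return (next_state, reward, episode_done)."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(q_values):
    """Pick the best-valued action, breaking ties at random."""
    best = max(q_values)
    return random.choice([a for a in ACTIONS if q_values[a] == best])

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: repeated play gradually improves the value table."""
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        for _ in range(100):                 # cap episode length
            # Explore occasionally; otherwise exploit what was learned so far.
            a = random.choice(ACTIONS) if random.random() < epsilon \
                else greedy(q[state])
            nxt, reward, done = step(state, a)
            # Q-learning update: move toward reward + discounted future value.
            q[state][a] += alpha * (reward + gamma * max(q[nxt]) - q[state][a])
            state = nxt
            if done:
                break
    return q

q_table = train()
# The learned policy: the greedy action in every non-goal state.
policy = [q_table[s].index(max(q_table[s])) for s in range(GOAL)]
print(policy)   # after training, every state's best action is "right" (1)
```

The same trial-and-reward loop, scaled up with neural networks in place of the table, is what underlies game-playing AI systems; the “transferability” question in the text is precisely whether a policy learned this way carries over to consequential real-world settings.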
Let me now turn to one specific area: the military applications of AI. Wars used to be fought by soldiers on land, on water, and in the sky. Now we witness military actions at a remove: soldiers might sit at a computer and launch an attack through unmanned aerial vehicles, aka drones. They can sit thousands of miles from the battlefield, in a comfortable office, and launch a lethal attack based on intelligence obtained from aerial surveillance. The “boots on the ground” have been replaced by men and women operating machines from a distant location. Though I understand why politicians find the “no-boots-on-the-ground” argument compelling, the supposed minimization of casualties will most certainly benefit the attackers rather than those being attacked; figures on collateral damage to civilians, meanwhile, are hard to come by and difficult to estimate.
It has been argued that military AI applications have a positive effect: they could reduce human error on the battlefield. After all, even when a human is in the loop, mistakes can be made due to stress, inattention or other human failings.
While this argument is hard to refute, we can also envision some worrisome scenarios. It is foreseeable that even a (debatably) “positive” aspect such as the minimization of casualties could lower the threshold for the use of force, which in turn might exacerbate regional or even global instabilities. Some commentators have argued that these jus ad bellum concerns alone should be enough to rule out the use of autonomous weapons—even with a human in the loop—as legitimate. What complicates the picture even further is the proliferation risk of a technology that could easily end up in the hands of extremist non-state actors. Drones can be—and are—ordered via the internet; equipping them with arms or explosives is cheap and easy.
Yet the most worrisome scenario is the step towards Lethal Autonomous Weapons Systems, or LAWS: no man or woman in the loop, no human finger on the keyboard or the equipment that sets them off, but a device that is capable not only of destroying infrastructure and other devices but of autonomously planning and executing an action, even against human beings. Who bears accountability—the LAWS itself, the manufacturer of the machine, the developer of the technology, the overseeing officer? How can decisions over life and death be delegated to a non-human?
These are questions that need to be answered as a matter of urgency, even though these weapons do not yet exist. There is no consensus on whether they will emerge in the near or the long term, but the questions are being asked and addressed. Meetings are being held, and concerns are being raised by organizations like the International Committee of the Red Cross, the United Nations Human Rights Council and civil-society organizations like the recently founded Campaign to Stop Killer Robots. The media, too, has increasingly focused attention on the issue.
At the UN Convention on Certain Conventional Weapons (CCW) in Geneva, the informal experts’ meetings on LAWS began consultations in 2014 on the moral and ethical implications of these weapons. They do not meet continuously, but their efforts have led to a heightened awareness of this issue among states. Talks are underway to establish a Group of Governmental Experts (GGE) or a larger, more inclusive open-ended working group, to be convened in New York, where more States have permanent missions than in Geneva. What is clear, however, is that any such group must involve representatives from industry. Governments and businesses must create a synergy to avoid an unintended arms race resulting from the lack of international regulation of the military applications of AI. It is encouraging to note in this context that the AI industry has itself raised ethical concerns and created mechanisms to address them. It is ahead of the curve.
This, then, is the final and most important point: the overarching issue of the degree of human control over LAWS. There is broad agreement on the need to draw the line when it comes to decisions over life and death, which should under no circumstances be deferred to machines. Yet the international community is still at an early stage of consideration. Efforts to negotiate a ban on LAWS have not begun, and delegations are still emphasizing the need to better understand the issue. States have not been able to agree on a definition of LAWS; even the elements of a definition have not been identified. Some States argue that agreement on a definition is premature, considering that LAWS do not yet exist, while others want to move ahead and identify at least the characteristics of LAWS for a working definition.
What is clear is the urgent need to put precautionary measures of some sort in place as soon as possible. If the international community cannot stop the arms race and prevent the development of autonomous weapons, it must, as a matter of priority, ensure that humanitarian principles and ethics apply to them.