Bots on the ground: can the rise of “killer robots” be halted?

Such weapons are already being developed by national militaries and terrorist groups.


Killer robots – the phrase sounds as if it were lifted from a sci-fi film. But this is no mere fantasy.

On 20 August, 116 of the world’s leading artificial intelligence (AI) and robotics researchers signed an open letter urging the United Nations to ban the development and use of killer robots. The letter, signed by tech leaders such as Tesla’s Elon Musk and Mustafa Suleyman of Alphabet’s DeepMind, cited fears that the robots would unleash a “third revolution in warfare” and start a new arms race.

Public discussion of robotics and AI has become widespread following recent technological advances. Even so, the idea of murderous machines seems far-fetched. Yet such weapons are already being developed by national militaries as well as terrorist groups. In the absence of international action, the world risks becoming yet more hazardous.

A killer robot in the vein of RoboCop remains a distant prospect. Depending on the definition, even the landmine, technically the oldest automatically triggered weapon, could fall under the umbrella. But in their current iterations, killer robots, or lethal autonomous weapons (Laws), are specific military robots with the ability to identify and attack targets without human intervention.

Most Laws still maintain a human “on the loop”, but many experts have predicted that fully autonomous ones could become a reality as soon as 2040. Laws come in many guises, for both attack and defence purposes, and they are already being used by countries with advanced military capabilities, such as the US, Germany and India.

Israel’s Iron Dome is a defensive Law, with autonomous targeting and firing capabilities, and is renowned for its efficiency in destroying incoming rockets. The US navy has used radar-guided guns to identify and attack incoming threats since the 1970s. A 2016 Wired article revealed that the Pentagon is researching at least 21 projects involving autonomous weapons. The UK, meanwhile, is working with the arms company BAE Systems to develop the Taranis drone, which would be almost invisible to radar and could incorporate full autonomy by the end of testing.

The Campaign to Stop Killer Robots (CSKR) has been calling for a pre-emptive ban on Law development and use since 2013. The weapons, the group argued, should be added to the list of those banned in 1983 by the UN Convention on Certain Conventional Weapons. Leading tech companies, researchers and organisations such as Human Rights Watch have also been successful in drawing international attention to the issue of killer robots.

In 2015, CSKR organised an open letter, also signed by Musk and Stephen Hawking, which served as the catalyst for the agreement of UN talks. The group was instrumental in the publication of the latest letter to coincide with the International Joint Conference on Artificial Intelligence in Melbourne.

Not all tech experts believe that a ban on Laws is justified. In a research paper for Stanford University, Kenneth Anderson and Matthew Waxman argued, “Just as increased automation in many fields is inevitable, automation in weapons will occur, and is occurring, incrementally” – echoing the sentiment of many other legal researchers.

Military experts have highlighted the difficulty of distinguishing autonomous weapons from others. The Ministry of Defence, meanwhile, responded to the recent open letter by stating that the British government does not support a pre-emptive ban on Laws (though it does not intend to develop the technology). Yet others maintain that a comprehensive ban is the only safe course.

The legal, ethical and humanitarian implications of Laws are profound. Experts have warned that some harmful consequences may not be clear until it is too late. As the latest open letter stated: “They [the weapons] will permit armed conflict to be fought at a scale greater than ever and at timescales faster than humans can comprehend.”

Human Rights Watch has warned of the difficulty of ascribing individual responsibility in the event of an accidental death or a misfire, as well as the danger of “undesirable hacking”. The weapons also risk being deployed by terrorists and extremists of all kinds.

Some of those fears have already been confirmed. A lengthy report on drones by the New America think tank stated that Isis created an “Unmanned Aircraft of the Mujahedeen” unit in 2017. Other terrorist groups such as Hezbollah have long used drones and other unmanned vehicles to carry out surveillance and counter-attacks.

At the end of 2016, 123 countries agreed to hold talks on Laws under the auspices of the UN Convention on Conventional Weapons. But the meetings have yet to take place; the most recent was delayed owing to committee members’ unpaid fees. Time is short to halt the rise of “killer robots”. As the open letter warned: “Once this Pandora’s Box is opened, it will be hard to close.” 

This article first appeared in the 24 August 2017 issue of the New Statesman, Sunni vs Shia