Should scientists be prosecuted for killings carried out by their armed robots?

Using technology about to be approved for medical use, we can now program computers to identify a possible target and decide whether to fire weapons at it.


Should scientists be prosecuted for killings carried out by armed robots? If that sounds like the premise of a science-fiction film, don’t be fooled – the question came up at the UN in Geneva this month. The genre’s power to inspire innovation is well known. Recently, for instance, physicists unveiled a new kind of tractor beam. In sci-fi, this is the kind of pull that can bring a spaceship into docking position; at the moment, we can exert a significant pull only on a centimetre-sized object. Still, that’s quite an achievement for a technology using nothing more than sound.

Sound is a variation in air pressure, with regions of high and low pressure forming waves. Computer algorithms can shape these waves so that their energy exerts a pull. The sonic tractor beam is viewed as a means of moving medicines – a pill, say – around the body to target particular organs. Because sonic sources are already in the medical toolkit – tumours, for instance, are blasted with ultrasound – approval for medical use is expected to come quickly.
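The article doesn’t give the researchers’ actual algorithm, but the core idea – timing an array of ultrasonic transducers so their waves reinforce at a chosen point in space – can be sketched in a few lines. Everything here (array geometry, frequency, function names) is illustrative, not the published method:

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
FREQ = 40_000.0          # 40 kHz, a typical ultrasonic transducer
WAVELENGTH = SPEED_OF_SOUND / FREQ
K = 2 * np.pi / WAVELENGTH  # wavenumber

def focusing_phases(transducers, target):
    """Phase delay for each transducer so that all emitted waves
    arrive at `target` in phase, concentrating pressure there."""
    dists = np.linalg.norm(transducers - target, axis=1)
    return (-K * dists) % (2 * np.pi)

# Toy 8-element linear array with 1 cm pitch, focusing on a
# point 5 cm above the array's centre
xs = (np.arange(8) - 3.5) * 0.01
array = np.stack([xs, np.zeros(8), np.zeros(8)], axis=1)
phases = focusing_phases(array, np.array([0.0, 0.0, 0.05]))
```

Real acoustic traps add further phase patterns on top of this focusing step to create a pressure “pocket” that holds and moves an object, but the shaping principle is the same.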

Probably not quickly enough to reach the market before the new “Luke” hand, however. The US Food and Drug Administration approved this Star Wars-style prosthetic limb for general sale on 9 May. The hand is significant because it is controlled by electrical signals taken from muscle contractions. This has allowed users to perform tasks that are impossible with standard prostheses, such as using key-operated locks and handling fragile objects such as eggs. These ultra-sensitive capabilities result from artificial intelligence: signal processing that learns how to translate the electrical signals from the muscles into the delicate operations that the wearer wants to perform.
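The hand’s signal processing is proprietary, but the learning step it describes – mapping features of muscle-contraction signals to intended movements – can be illustrated with a deliberately simple nearest-centroid classifier. The data and gesture names below are entirely synthetic:

```python
import numpy as np

# Toy "EMG" features: mean signal amplitude on two channels for
# two imagined gestures (synthetic data, for illustration only)
rng = np.random.default_rng(0)
grip_close = rng.normal([0.8, 0.2], 0.05, size=(50, 2))
grip_open = rng.normal([0.2, 0.7], 0.05, size=(50, 2))

# "Training": learn one average feature vector per gesture
centroids = {
    "close": grip_close.mean(axis=0),
    "open": grip_open.mean(axis=0),
}

def classify(sample):
    """Map a new feature vector to the nearest learned gesture."""
    return min(centroids, key=lambda g: np.linalg.norm(sample - centroids[g]))

print(classify(np.array([0.75, 0.25])))  # prints "close"
```

A production prosthesis would use far richer features and models, but the shape of the problem – learn from examples, then decide what a new muscle twitch means – is the one the article describes.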

The hand’s development was largely funded by the Defence Advanced Research Projects Agency (Darpa) and it was approved through the FDA’s “de novo” classification process, designed to speed up the system of bringing first-of-their-kind devices to market. This fast-track route is open only to “low-to-moderate-risk” medical devices. The autonomous, potentially lethal robots that Darpa also has coming off the drawing board are not eligible.

We can now program computers to identify a possible target and decide whether to fire weapons at it. In effect, it is the same programming that allows the Luke hand to decide what an amputee’s muscle twitches mean, or keeps the tractor beam pulling the pill towards your liver. How can we hold the scientists to account for potential misuse?

The question was raised at the UN’s first debate on “LAWS”: lethal autonomous weapons systems. While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that such a ban would be almost impossible to enforce. Much better, he said, to develop these technologies in ways that might make war zones safer for non-combatants. In the meantime, Arkin suggests, if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable.

However, if these are the same scientists and the same basic algorithms used for humanitarian medical purposes, it’s going to be difficult to bring a case. And should we risk putting the brakes on innovation for fear of subsequent misuse? Maybe we should let the robots decide.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article appears in the 21 May 2014 issue of the New Statesman, Peak Ukip