The United Nations is due to debate killer robots later this week. However, "killer robots" do not yet exist in the sense that's relevant to the UN - that is, robots that can independently choose to kill humans. This isn't about drones, even though drones are very good at killing - it's about what happens when the decision to fire a fatal shot, in any context, moves out of the hands of humans and into the circuitry of a computer.
The discussion, to be held during the UN Convention on Conventional Weapons (CCW) in Geneva from 13 May, will take the form of “an informal meeting of experts”, reports the BBC. Its conclusions will be delivered in a report to the CCW committee in November.
Here is how the BBC lays out the issue:
"A killer robot is a fully autonomous weapon that can select and engage targets without any human intervention. They do not currently exist but advances in technology are bringing them closer to reality.
Those in favour of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.
However, those who oppose their use believe they are a threat to humanity and any autonomous 'kill functions' should be banned."
That opposition is represented by the Campaign to Stop Killer Robots - what a name for a campaign group! - which has already produced several of its own reports arguing against autonomous killing machines. The opening debate of the convention will feature the CSKR's Noel Sharkey, a computer scientist from the University of Sheffield whom older readers may recall as a judge on Robot Wars (!); his opponent, from the Georgia Institute of Technology, is roboticist/roboethicist Ronald Arkin. (There's a full itinerary for the convention available here.)
Why the worry, though? Because, on current trends, our ability to create autonomous machines will outpace our ability to program them. This is illustrated quite well by driverless cars, which seem to be on track to enter real-world use around the end of the decade.
A driverless car requires more than a mere ability to drive along paved roads, on a predetermined route, avoiding obstacles. There are myriad scenarios where a driverless car, like a human driver, will have to decide what to do in an emergency. That could mean killing itself. It could mean killing its own passengers.
Imagine driving down a high street when a child runs out into the road. A human might instinctively hit the brakes, trying to stop, even if the car isn't physically able to do so in time. A computer, however, may well calculate that a better course of action would be to steer sharply around the child, narrowly avoiding them. Or it may calculate that clipping the child at an angle will result in, say, a 40 per cent chance of a serious injury, compared to 90 per cent if they are hit straight on. (These statistics are hypothetical, but such things will be modelled by car manufacturers.)
Let’s say a driverless car is confronted with two choices: carry on straight, and plough into another vehicle carrying a family of four; or turn off the road in either direction to avoid the other car, but in the process crash, possibly fatally for its own passengers. What should it choose? What parameters should the car look to maximise? Total lives saved? Should it value two people, alive, but without the use of their legs, as better or worse than one person, alive, intact? What about three? Or four? Does it matter if the car is a Volvo or a convertible?
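To see why this is as much a programming problem as a philosophical one, here is a toy sketch of what a "least expected harm" rule might look like in code. Everything in it is hypothetical - the option names, the probabilities, and above all the ranking rule itself are invented for illustration, not drawn from any real manufacturer's system:

```python
# Purely illustrative sketch. All numbers, names and the ranking rule are
# hypothetical assumptions made for this example - no manufacturer has
# published such a model.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_deaths: float    # expected fatalities if this option is taken
    expected_injuries: float  # expected serious injuries if it is taken

def least_harm(options):
    """Pick the option minimising expected deaths first, then injuries.

    Python compares the tuples element by element, so deaths always
    outrank injuries - itself a moral choice someone had to encode.
    """
    return min(options, key=lambda o: (o.expected_deaths, o.expected_injuries))

options = [
    Option("continue straight into the other car", 1.2, 2.5),
    Option("swerve left into the barrier",         0.6, 1.0),
    Option("swerve right off the road",            0.9, 0.8),
]

print(least_harm(options).name)  # -> swerve left into the barrier
```

The uncomfortable point is not the arithmetic but the `key` function: whoever writes it has decided, in advance and on behalf of everyone on the road, that deaths trump injuries, that all lives weigh equally, and that nothing else counts. Each of those choices could be made differently, and each is a moral judgement frozen into code.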
What if the car chooses to deliberately kill itself, and its own passenger, instead of risking the lives of those in the other car?
Philosophers have struggled for decades with issues like these - they’re known as trolley problems, after the 1967 paper by Philippa Foot which introduced the concept. Imagine watching a runaway train carriage heading down a hill, towards a group of five men working on the tracks - they’re too far away for a shouted warning to reach them, and the carriage will undoubtedly kill them all. However, you’re next to a signal switch. Flick it, and the train is diverted into a siding with only one man at work. He will die, but the five others will live. Do you flick the switch?
Like all good thought experiments, the trolley problem is useful for showing us the gap between the material reality of an ethical quandary and our gut emotional response to it. In brute utilitarian terms, flicking the switch is obviously the right thing to do - but that doesn’t mean we’re comfortable with it, and that discomfort gives us pause to think things through more thoroughly.
Other formulations of the trolley problem - such as pushing a fat man onto the tracks, killing him to stop the train and save the other five men - make it clear that there’s more nuance to this, and that choosing to kill someone feels different from merely letting them die.
However, a driverless car heading down a motorway at 70mph doesn’t have the luxury of ruminating on hypotheticals - it has real lives to consider, and it has to make decisions that were defined, in advance, by human choices. This complicates our ideas of responsibility and of guilt - as Patrick Lin writes at Wired in his excellent feature on ethical autonomous cars:
"Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths."
Which brings us back to killer robots. Emma Woollacott wrote last week in the NS about whether it may be impossible to teach robots to understand ethics in the way that humans do - that is, to feel emotional responses to ethical decisions, such as guilt at breaking a rule. In the absence of that capacity, robots will have to be programmed to respond to situations like the trolley problem in ways that humans would accept.
As the debate at the UN will explore, perhaps the only clean way out of all this - the most morally acceptable way out, if you will - would be to ban “Lethal Autonomous Weapons Systems” (as the UN calls them) altogether. Perhaps philosophers will find that they're suddenly employable, as arms and car manufacturers seek out their advice on acceptable moral frameworks to build into new products. In science fiction we can rely on Asimov's Three Laws of Robotics - if only we could do the same with our robots.