
12 May 2014

This week the UN is going to debate the ethics of killer robots

Machines that can choose who to kill independently of a human operator are coming, to the concern of ethicists and roboticists alike.

By Ian Steadman

The United Nations is due to debate killer robots later this week. However, “killer robots” do not yet exist in the sense relevant to the UN – that is, robots that can independently choose to kill humans. This isn’t about drones, even if drones are very good at killing – it’s more about what happens when the decision to fire a fatal shot, in any context, moves out of the hands of humans and into the circuitry of a computer.

The discussion, to be held during the UN Convention on Conventional Weapons (CCW) in Geneva from 13 May, will take the form of “an informal meeting of experts”, reports the BBC. Its conclusions will be delivered in a report to the CCW committee in November.

Here is how the BBC lays out the issue:

A killer robot is a fully autonomous weapon that can select and engage targets without any human intervention. They do not currently exist but advances in technology are bringing them closer to reality.

Those in favour of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.


However, those who oppose their use believe they are a threat to humanity and any autonomous ‘kill functions’ should be banned.

That opposition is represented by the Campaign to Stop Killer Robots – what a name for a campaign group! – which has already produced several reports of its own arguing against autonomous killing machines. The opening debate of the convention will feature the CSKR’s Noel Sharkey, a computer scientist from the University of Sheffield whom older readers may recall as a judge on Robot Wars; his opponent, from the Georgia Institute of Technology, is roboticist and roboethicist Ronald Arkin. (There’s a full itinerary for the convention available here.)

Why the worry, though? Because, on current trends, our ability to create autonomous machines will outpace our ability to program them. This is illustrated quite well by driverless cars, which seem to be on track to enter real-world use sometime around the end of the decade.

A driverless car requires more than a mere ability to drive along paved roads, on a predetermined route, avoiding obstacles. There are myriad scenarios where a driverless car, like a human driver, will have to decide what to do in an emergency. That could mean killing itself. It could mean killing its own passengers.

Imagine driving down a high street when a child runs out into the road. A human might instinctively hit the brakes, trying to stop, even if the car isn’t physically able to do so in time. A computer, however, may well calculate that a better course of action is to steer sharply around the child, narrowly avoiding them. Or it may calculate that clipping the child at an angle will result in, say, a 40 percent chance of a serious injury, compared to 90 percent if the child is hit straight on. (These statistics are hypothetical, but such things will be modelled by automobile manufacturers.)
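
To make that concrete, here is a minimal sketch of the kind of expected-harm comparison such a system might run. The manoeuvre names and probabilities are invented for illustration, not drawn from any real manufacturer’s model:

```python
# Hypothetical sketch: pick the manoeuvre with the lowest expected number of
# serious injuries. All figures are invented, as in the example above.

def expected_serious_injuries(outcomes):
    """Sum of probability-weighted serious-injury counts."""
    return sum(prob * injuries for prob, injuries in outcomes)

# Each manoeuvre maps to a list of (probability, serious injuries) outcomes.
manoeuvres = {
    "brake_straight":  [(0.9, 1), (0.1, 0)],  # hit the child head on: 90% serious injury
    "swerve_and_clip": [(0.4, 1), (0.6, 0)],  # clip at an angle: 40% serious injury
}

best = min(manoeuvres, key=lambda m: expected_serious_injuries(manoeuvres[m]))
print(best)  # -> swerve_and_clip, under these made-up numbers
```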

Let’s say a driverless car is confronted with two choices: carry on straight, and plough into another vehicle with a family of four inside; or turn off the road in either direction to avoid the other car, but in the process crash, possibly fatally for its own passengers. What should it choose? What parameters should the car look to maximise? Total lives saved? Should it value two people, alive, but without the use of their legs, as better or worse than one person, alive and intact? What about three? Or four? Does it matter if the car is a Volvo or a convertible?
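
One way to see how loaded those questions are: the “right” answer flips depending on how the programmer weights deaths against injuries. The sketch below is purely illustrative – the outcomes and weights are assumptions, not anything a manufacturer has published:

```python
# Hypothetical sketch: the same scenario under two different harm weightings.

def harm_score(deaths, severe_injuries, w_death, w_injury):
    """Weighted harm total; the planner picks the option with the lowest score."""
    return w_death * deaths + w_injury * severe_injuries

# Invented encodings of the two choices described above.
options = {
    "continue_straight": {"deaths": 0, "severe_injuries": 4},  # hit the family's car
    "swerve_off_road":   {"deaths": 1, "severe_injuries": 0},  # kill own passenger
}

for w_death, w_injury in [(10, 1), (2, 1)]:
    choice = min(
        options,
        key=lambda name: harm_score(**options[name], w_death=w_death, w_injury=w_injury),
    )
    print(f"death weight {w_death}, injury weight {w_injury}: {choice}")
# Weight deaths heavily and the car ploughs on, injuring four; weight them
# less and it sacrifices its own passenger. The weights are the ethics.
```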

What if the car chooses to deliberately kill itself, and its own passenger, instead of risking the lives of those in the other car?

Philosophers have struggled for decades with issues like these – they’re known as trolley problems, after the 1967 paper by Philippa Foot that introduced the concept. Imagine watching a runaway train carriage rolling down a hill towards a group of five men working on the tracks – they’re too far away for a shouted warning to reach them, and the carriage will undoubtedly kill them all. However, you’re next to a signal switch. Flick it, and the train is diverted into a siding where only one man is at work. He will die, but the five others will live. Do you flick the switch?

Like all good thought experiments, the trolley problem is useful for showing us the gap between the material reality of an ethical quandary and our gut emotional response to it. In brute utilitarian terms, flicking the switch is obviously the right thing to do – but that doesn’t mean we’re comfortable with it, and that discomfort gives us pause to think things through more thoroughly.

Other formulations of the trolley problem – like pushing a fat man onto the tracks, killing him in order to stop the train and save the other five men – make it clear that there’s more nuance to this, and that something feels wrong about choosing to kill someone compared with letting them die.
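
The gap is easy to see if you write the brute utilitarian rule down: it counts only lives, so it endorses pushing the fat man just as readily as flicking the switch. A toy illustration:

```python
# Toy utilitarian rule: act whenever acting leaves more people alive.

def utilitarian_verdict(lives_if_act, lives_if_abstain):
    return "act" if lives_if_act > lives_if_abstain else "abstain"

# Switch case: flick it and five live, one dies; abstain and only one lives.
print(utilitarian_verdict(lives_if_act=5, lives_if_abstain=1))  # act

# Fat-man case: the arithmetic is identical, so the rule says push him too,
# even though most people's intuitions refuse. The numbers alone can't
# capture the difference between killing and letting die.
print(utilitarian_verdict(lives_if_act=5, lives_if_abstain=1))  # act
```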

However, a driverless car travelling at 70mph down a motorway doesn’t have the luxury of ruminating on the hypothetical – it has real lives to consider, and it has to make decisions that were defined in advance by human choices. This introduces strangeness into our ideas of responsibility and guilt – as Patrick Lin writes at Wired in his excellent feature on ethical autonomous cars:

Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Which brings us back to killer robots. Emma Woollacott wrote last week in the NS about whether it may be impossible to teach robots to understand ethics in the way that humans do – that is, to feel emotional responses to ethical decisions, like feeling guilty for breaking a rule. Absent that capacity, they have to be programmed to respond to situations like the trolley problem in a way that humans would accept.
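
At its crudest, that programming might look like a set of hard constraints checked before any action – rules in place of remorse. The sketch below is loosely in the spirit of Arkin’s proposed “ethical governor”; every field and rule in it is an invented placeholder:

```python
# Hypothetical sketch: explicit rules in place of felt guilt. The Action
# fields and the rules themselves are invented placeholders.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool
    human_authorised: bool

RULES = [
    # Never harm a human without an explicit human decision in the loop.
    lambda a: not (a.harms_human and not a.human_authorised),
]

def permitted(action: Action) -> bool:
    """The machine feels nothing; it only checks every rule."""
    return all(rule(action) for rule in RULES)

print(permitted(Action("issue_warning", harms_human=False, human_authorised=False)))  # True
print(permitted(Action("open_fire", harms_human=True, human_authorised=False)))       # False
```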

As the meeting at the UN will explore, maybe the only clean way out of this entire debate – the most morally acceptable way out, if you will – would be to ban “Lethal Autonomous Weapons Systems” (as the UN calls them) altogether. Perhaps philosophers will find that they’re suddenly employable, as arms and car manufacturers seek out their advice on acceptable moral frameworks to build into new products. In science fiction, we can rely on Asimov’s Three Laws of Robotics – if only we could rely on the same from our real robots.
