13 July 2016

Killer police robots and AI drones will further distance us from the act of killing

A robot was used in Dallas to kill a suspected shooter. Is this the future of policing?

By Barbara Speed

When five Dallas police officers were fatally shot during a peaceful Black Lives Matter protest, officers at the scene took an unprecedented step. After a long stand-off with the alleged shooter in a parking garage, they loaded up a bomb-disposal robot with an explosive and sent it after him. The explosive detonated; the suspect died.

As far as we know, this is the first example of this kind of robot policing anywhere in the world. Bomb disposal robots, like the Northrop Grumman Remotec Andros reportedly used in Dallas, have been used by the military and police since the 1970s, but they have always been used to neutralise weapons – not to administer them. The only remotely similar case, as noted by CNN, involved strapping tear gas capsules to a bomb disposal robot; but this, too, was done to prevent further violence, not to kill or injure.

In Dallas, police based their actions on their certainty that they had the perpetrator, and on the fact that approaching the shooter themselves would put more officers in danger. Dallas Police Chief David Brown told reporters:

“We saw no other option but to use our bomb robot and place a device on its extension for it to detonate where the suspect was.”

Seen from a wider perspective, though, the move seems a continuation of a much longer-running theme in policing, especially in the US. Professor Paul Hirschfield wrote for The Conversation in January 2016 that US law tends towards protecting police, rather than the people they serve. This, he argues, has led to the tensions we now see between ordinary US citizens and the police employed to protect them:


“[The Supreme Court] ruled that laws permitting deadly force to prevent grave and imminent harm from the perspective of a ‘reasonable officer on the scene’ are constitutional”.

Even under these laws, of course, many deaths of black Americans at the hands of police are surely not justified, especially those who were unarmed. A “reasonable” officer would not assume anything about a civilian on the grounds of their race alone. Deadly force in many of these cases was the result of a total miscalculation of the situation at hand. And yet, over and over, these police are not disciplined or prosecuted as a member of the public would be. They have special status.

We know that police mindset plays a huge role in the deaths of those in police custody. That mindset includes racial profiling, but it must also include the knowledge that police are entitled to use deadly force, and are unlikely to be asked to prove that this was “reasonable” or really prevented “grave and imminent harm”. Perhaps the most important word in Hirschfield’s quote is “perspective”: as long as the police officer thinks they are in danger, they are entitled to use force.

So what next? Based on Dallas, it seems that, to protect police officers even further, the killing of armed suspects may be deputised to machines. If we believed these machines might make more reasonable decisions than their human counterparts – looking for a weapon, say, rather than the colour of someone’s skin – then this could be a progressive step. Yet these machines will still be wielded by humans; only this time, the humans will be even further away – and perhaps even more prone to mistakes. Here lies the problem.

There’s a parallel in the world of drones sent into war to conduct airstrikes. As with policing, there’s a positive case for them: they spare soldiers from entering warzones themselves. Yet drones are directed by operators miles, sometimes even thousands of miles, away, and as with policing, this means there is now a far greater gulf between the person making the decision to kill and the person being killed. At that distance, do we have enough information to know killing is the right option? And is the inevitable lack of empathy for a face hundreds of metres, or thousands of miles, away something we should be seeking out?

Interestingly, drones may soon be directed with little to no human input at all. “Sentient” drones, directed by artificial intelligence (AI) algorithms, can identify different sites and calculate whether they should strike. This sounds terrifying, but there is evidence that AI can actually make better decisions than humans, accurately weighing up conflicting factors. Humans are more likely to make a mistake, let emotions override pragmatism, or shoot from a helicopter at a vehicle containing two children as though they’re playing a hilarious videogame.

Algorithms are still written by people, however. How do we program a drone to decide whether to target a weapons factory half a mile from a school? Where are the lines drawn? Scores of children have died in drone strikes in Gaza and Pakistan in the past few years, showing that drones are not as accurate, nor our strike decisions as foolproof, as we’d like to think.

Our lines of ethical acceptability in warfare and civil policing are still blurry and disputed. Adding a new factor that threatens to dehumanise the target and distance the killer from their actions won’t clarify what are essentially moral, human questions. As with guns in the old adage, robots and drones won’t really be doing the killing in the future – it’ll still be plain old people.
