
Should scientists be prosecuted for killings carried out by their armed robots?

Using technology about to be approved for medical use, we can now program computers to identify a possible target and decide whether to fire weapons at it.

Should scientists be prosecuted for killings carried out by armed robots? If that sounds like the premise of a science-fiction film, don’t be fooled – the question came up at the UN in Geneva this month. The genre’s power to inspire innovation is well known. Recently, for instance, physicists unveiled a new kind of tractor beam. In sci-fi, this is the kind of pull that can bring a spaceship into docking position; at the moment, we can exert a significant pull only on centimetre-sized objects. Still, that’s quite an achievement for a technology using nothing more than sound.

Sound is a variation in air pressure, with regions of high and low pressure forming waves. Computer algorithms can shape these waves so that their energy exerts a pull. The sonic tractor beam is viewed as a means of moving medicines – a pill, say – around the body to target particular organs. Because sonic sources are already in the medical toolkit – tumours, for instance, are blasted with ultrasound – approval for medical use is expected to come quickly.
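To make the idea concrete, here is a minimal sketch of the core trick behind acoustic beam-shaping: drive an array of small ultrasonic emitters with per-element phase offsets so their waves arrive in step at a chosen focal point. The array geometry, frequency and focus below are illustrative assumptions; real tractor beams use more elaborate phase patterns, such as “twin traps”, to pull an object rather than merely push it.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20°C
FREQ = 40_000.0         # Hz; a common ultrasonic transducer frequency

def focus_phases(element_positions, focal_point):
    """Phase offset for each emitter so that all waves arrive
    at the focal point in step, concentrating acoustic energy there."""
    distances = np.linalg.norm(element_positions - focal_point, axis=1)
    # Elements nearer the focus must be delayed relative to the
    # farthest one; express each delay as a phase offset.
    delays = (distances.max() - distances) / SPEED_OF_SOUND
    return 2 * np.pi * FREQ * delays

# Illustrative set-up: a 16-element linear array with 5 mm spacing,
# focused 3 cm above its centre.
xs = (np.arange(16) - 7.5) * 0.005
elements = np.column_stack([xs, np.zeros(16), np.zeros(16)])
print(np.degrees(focus_phases(elements, np.array([0.0, 0.0, 0.03]))).round(1))
```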

Probably not quickly enough to reach the market before the new “Luke” hand, however. The US Food and Drug Administration approved this Star Wars-style prosthetic limb for general sale on 9 May. The hand is significant because it is controlled by electrical signals taken from muscle contractions. This has allowed users to perform tasks that are impossible with standard prostheses, such as using key-operated locks and handling fragile objects such as eggs. These ultra-sensitive capabilities result from artificial intelligence: signal processing that learns how to translate the electrical signals from the muscles into the delicate operations that the wearer wants to perform.
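The Luke hand’s actual signal processing is proprietary, but the general approach in myoelectric control is well documented: slice the electrical signal from the muscles into short windows, extract simple features, and train a classifier to map them to intended grips. Here is a minimal sketch using synthetic stand-in data; the electrode count, window length and gesture labels are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mav_features(windows):
    """Mean absolute value per channel: a standard first-cut
    feature for myoelectric (EMG) control."""
    # windows: (n_samples, n_channels, n_timesteps)
    return np.abs(windows).mean(axis=2)

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=300)  # 0=relax, 1=pinch grip, 2=power grip
# Synthetic stand-in for recorded muscle signals: give each "gesture"
# a distinct per-channel amplitude profile so the example is learnable.
profiles = np.array([[1, 1, 1, 1], [2, 1, 1, 2], [1, 3, 3, 1]], dtype=float)
windows = rng.normal(size=(300, 4, 200)) * profiles[labels][:, :, None]

clf = LogisticRegression(max_iter=1000).fit(mav_features(windows), labels)
# At run time the prosthesis would classify each incoming window
# and drive the corresponding hand motion.
print(clf.predict(mav_features(windows[:5])))
```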

The hand’s development was largely funded by the Defence Advanced Research Projects Agency (Darpa) and it was approved through the FDA’s “de novo” classification process, designed to speed up the system of bringing first-of-their-kind devices to market. This fast-track route is open only to “low-to-moderate-risk” medical devices. The autonomous, potentially lethal robots that Darpa also has coming off the drawing board are not eligible.

We can now program computers to identify a possible target and decide whether to fire weapons at it. In effect, it is the same programming that allows the Luke hand to decide what an amputee’s muscle twitches mean, or keeps the tractor beam pulling the pill towards your liver. How can we hold the scientists to account for potential misuse?

The question was raised at the UN’s first debate on “laws”: lethal autonomous weapons systems. While some experts want an outright ban, Ronald Arkin of the Georgia Institute of Technology pointed out that Pope Innocent II tried to ban the crossbow in 1139, and argued that it would be almost impossible to enforce such a ban. Much better, he argued, to develop these technologies in ways that might make war zones safer for non-combatants. In the meantime, Arkin suggests, if these robots are used illegally, the policymakers, soldiers, industrialists and, yes, scientists involved should be held accountable.

However, if these are the same scientists and the same basic algorithms used for humanitarian medical purposes, it’s going to be difficult to bring a case. And should we risk putting the brakes on innovation for fear of subsequent misuse? Maybe we should let the robots decide.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 21 May 2014 issue of the New Statesman, Peak Ukip


2017 is the year we realise we've been doing the Internet wrong

Networks can distribute power or they can centralise it.

A couple of years ago I visited the Manchester tech start-up Reason Digital. They were developing an app to help keep sex workers safe. The nature of sex work means workers are often vulnerable to crime, crimes that can be particularly difficult to solve because witnesses are reluctant to come forward and crime scenes are often public and subject to interference.

Reason Digital thought that if they could alert sex workers to relevant incidents in their vicinity – harassment, a foiled attack – that would help them protect themselves.

So, of course, they created an app which tracked the location and habits of all sex workers in Manchester in a central database and sent out alerts based on where they were and what they were doing, right?

Did they hell.

They knew a real-time, centralised location database would immediately become a target for the very people they wanted to help protect sex workers from. Moreover, they wanted their app to empower sex workers, to put them in control. And they knew that sex workers would be reluctant to hand over any part of their hard-fought privacy.

So the Reason Digital app kept location and other data on each sex worker’s smartphone and let the phone decide which alerts were relevant and what information to share.
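A minimal sketch of that architecture, with hypothetical names and numbers: the server broadcasts the same alert list to every phone, and each phone, which alone knows its owner’s location, filters locally. Compromising the server therefore reveals no one’s whereabouts.

```python
import math
from dataclasses import dataclass

@dataclass
class Alert:
    kind: str   # e.g. "harassment", "foiled attack"
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

class Device:
    """All personal state lives here; nothing is sent upstream."""
    def __init__(self, lat, lon, radius_km=1.0):
        self.lat, self.lon, self.radius_km = lat, lon, radius_km

    def relevant(self, alerts):
        return [a for a in alerts
                if distance_km(self.lat, self.lon, a.lat, a.lon) <= self.radius_km]

# The server only ever broadcasts the same list to every device.
broadcast = [Alert("harassment", 53.4794, -2.2453),
             Alert("foiled attack", 53.5200, -2.3000)]
phone = Device(lat=53.4808, lon=-2.2426)  # location known only to the phone
for alert in phone.relevant(broadcast):
    print("Nearby:", alert.kind)
```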

That is the kind of distributed, autonomous, user-controlled app we just don’t see on the Net. No, all the apps that improve our lives – from Facebook to Uber to match.com – cull the intelligence and data from the user and stick them in the vaults of a company or, occasionally, a government.

Thankfully, the majority of us are nowhere near as vulnerable as the majority of sex workers – to physical crime at least.

But we are increasingly vulnerable to cybercrime, a vulnerability which will increase exponentially once everything is connected to the Internet of Things.

And we are vulnerable to the exploitation of our data, whether through data mining or algorithmic determinism. Google’s search engine can be “gamed” by extremists, used to strengthen hatred and spread stereotypes. I have also been told that one major dating site optimises its matchmaking algorithm for short-term relationships – it means more return business. And Uber has admitted it knows you’re likely to pay more for a ride if your battery is low – which it also knows. Our data is what drives services and profits on the Net, but we’re unable to reap the rewards of the value we create.

That’s why 2017 will be the year we realise we got the Net wrong.

Not the underlying internet, designed by the public and third sectors in the seventies to be as distributed and autonomous as possible.

Or even the World Wide Web, invented in the nineties by the public and private sectors, again without central control.

But the apps developed in the last couple of decades to use the infrastructure of the internet to deliver services.

Networks can distribute power, like the electricity grid, or they can centralise it, like old boys’ networks.

Increasingly, I fear, the Net is doing the latter, for three main reasons.

Firstly, a technical legacy of the early internet: in the days of slow broadband and unreliable devices it made sense to transmit as little as possible and control your user experience by centralising it. That problem is by and large history, but the centralisation remains.

Secondly, these apps were mainly developed by a small group of privileged people – white, male, relatively well-off engineers. That’s why, for example, the biggest campaign of the early internet pioneers was against porn filtering. Yes, for many years the most inspirational internet civil rights struggle was for rich western men to have absolutely untrammelled access to porn. So often I was the only woman at the conference table as this issue was raised again and again, thinking: “Is this really the biggest issue the tech community faces?”

But there is a seam of libertarianism in technology that sees it as above and beyond the state in general and regulation in particular – even as a replacement for it. Who needs a public sector if you have dual-core processing? When tech was the poor relation in the global economy, that could be interesting and disruptive. Now that tech is the global economy, it is self-serving.

And thirdly, these apps were developed in a time of neoliberal consensus. The state was beaten and bowed, shrunk to its role of uprooting barriers and getting out of the way of the brilliant, innovative, invisible hand of the private sector. When I was at Ofcom in the 2000s we strove valiantly, day and night, to avoid any regulation of the internet, even where that regulation would have protected consumer rights and a fairer distribution of power.

As a consequence, the Net is now distributing power – but to the wrong people.

  • It’s not empowering the poor and dispossessed but the rich and self-possessed.
  • It’s not empowering sex workers in Manchester but criminal cartels in China.
  • It’s not empowering the cabbie in Coventry but the $62bn Uber everywhere.
  • It’s not empowering the plucky little startup in rural Hexhamshire but the global enterprise headquartered in Bermuda.
  • It’s not empowering the Nigerian market woman with a yam to sell but the Wall Street stockbroker with your data to market.
  • It’s not empowering the Iranian dissident but the Russian state.

That’s a betrayal of the power and original purpose of the Net: greater human empowerment.

To be sure, some of that is happening. The Arab Spring, for example. Campaigns for the tampon tax and Black Lives Matter are enhanced by the web. Apps such as Pol.is and MassLBP look to make digital democracy work. Institutes like Newcastle’s Digital Civics Institute are working on systems to enable real democratic collaboration. Groups and enterprises such as Medical Confidential, MySociety, Cap Collectif and Delib try to deliver control back to the citizen consumer. The European research project d-cent has helped develop tools that can make deliberative democracy work.

But against that we have the rapacious data centralisation of big companies and, at times, the state.

What we need is a government capable of leading and inspiring the tech sector to empower citizens and consumers: one that ignores the libertarian technocrats who say it’s for them to determine how tech power is distributed, and remembers that the white heat of technology should be at the service of the people, not the other way round. This government has neither the capacity nor the will to take on that mission. As part of our review of industrial strategy, Labour will be examining ways in which tech can be empowered to deliver the economy we want, and people empowered to make the best use of it.

Tech and politics are the twin drivers of progress, and I’m lucky enough to have worked in both. If there is one thing we have seen, it is that as people become richer they have fewer children, more education and a greater sense of privacy and autonomy. 2017 is the year to start giving back to the people the data and control they should never have lost.

Chi Onwurah is the Labour MP for Newcastle upon Tyne Central, and the shadow minister for industrial strategy.