Would you have any ethical qualms about controlling a cockroach's brain?

The RoboRoach will be marketed to US kids from November. It has always seemed mystifying that researchers struggle to see the thorny side of their technologies.

Most people find it much easier to accept praise than to take the blame. It turns out that we don’t always weasel out of things deliberately – it’s just what human beings do.

This revelation comes from a study published this month by neuroscientists at University College London. Volunteers pressed a button that triggered a sound – a cheer, a note of disgust or something neutral – and then estimated the time that had elapsed between pressing the button and hearing the sound.

Though the elapsed time was always the same, the volunteers who got applause underestimated it, while those who got a negative reaction after pressing the button grossly overestimated it.

Patrick Haggard, who led the research, interprets this distortion as showing that people feel more “agency” when things go right: they see a direct connection between their action and a positive result but unconsciously distance themselves from things that go wrong. When children and politicians say, “It wasn’t me,” they might not be lying: that could be their perception.

It is an interesting result to apply to people who put science and technology to work. Take the RoboRoach. From November, kids across the US will be able to buy a kit that allows them to feed a steering signal from a smartphone directly into a cockroach’s brain – creating, in effect, a remote-controlled insect.

The inventors seem not to have any ethical qualms about the idea. Rather, they argue that it is a “great way to learn about neuro-technology”. It is certainly a good way to explore how scientists and engineers filter their sense of responsibility. At best, the RoboRoach encourages the oversimplification of neuroscience. The message is that you can make an electronic incursion into brain circuits and take control of actions. In the US, a few neuroscientists are already testifying in court that an image of a small region of the brain filling with blood can be interpreted to mean that an individual wasn’t responsible for a criminal action. If RoboRoach does create a new generation of neuroscientists, we really are in trouble.

There are deeper issues here. The technology for RoboRoach grew out of projects to co-opt insects as mobile sensor units. Researchers have already performed neurosurgery on beetles, grafting in electronics that make them take off and fly to a specific location. Put a camera, a microphone or a temperature sensor on their backs and you have a new set of eyes and ears. It’s a wonderful idea, say its developers: cyborg beetles could help us find people trapped in collapsed buildings after earthquakes.

Similarly wonderful – superficially, at least – is the Robo Raven, developed at the University of Maryland. It is a rather beautiful drone that flaps its wings, performs aerobatics and was natural-looking enough in field trials to be mobbed by other birds. “This is just the beginning: the possibilities are virtually endless,” says S K Gupta, the lead researcher on the project. One clear possibility is that the Robo Raven will function as a surveillance drone that is almost undetectable in the natural world.

It has always seemed mystifying that researchers struggle to see the thorny side of their technologies. It’s not just a military issue – Google, Facebook and the NSA all think that they are making the world a better place and that any downsides of their operations are not their fault. Now we know why: they can’t help it.


Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 17 October 2013 issue of the New Statesman, The Austerity Pope

Marcus Hutchins: What we know so far about the arrest of the hero hacker

The 23-year-old who stopped the WannaCry malware that attacked the NHS has been arrested in the US.

In May, Marcus Hutchins - who goes by the online name Malware Tech - became a national hero after "accidentally" discovering a way to stop the WannaCry virus that had paralysed parts of the NHS.

Now, the 23-year-old darling of cyber security is facing charges of cyber crime following a bizarre turn of events that has left many baffled. So what do we know about his indictment?

Arrest

Hutchins, from Ilfracombe in Devon, was reportedly arrested by the FBI in Las Vegas on Wednesday, before he could travel home from the cyber security conferences Black Hat and Def Con.

He is now due to appear in court in Las Vegas later today after being accused of involvement with a piece of malware used to access people's bank accounts.

"Marcus Hutchins... a citizen and resident of the United Kingdom, was arrested in the United States on 2 August, 2017, in Las Vegas, Nevada, after a grand jury in the Eastern District of Wisconsin returned a six-count indictment against Hutchins for his role in creating and distributing the Kronos banking Trojan," said the US Department of Justice.

"The charges against Hutchins, and for which he was arrested, relate to alleged conduct that occurred between in or around July 2014 and July 2015."

His court appearance comes after he was arraigned in Las Vegas yesterday. He made no statement beyond a series of one-word answers to basic questions from the judge, the Guardian reports. A public defender said Hutchins had no criminal history and had previously cooperated with federal authorities. 

The malware

Kronos, a so-called Trojan, is a kind of malware that disguises itself as legitimate software while harvesting unsuspecting victims' online banking login details and other financial data.

It emerged in July 2014 on a Russian underground forum, where it was advertised for $7,000 (£5,330), a relatively high figure at the time, according to the BBC.

Shortly after it made the news, a video demonstrating the malware was posted to YouTube, allegedly by Hutchins' co-defendant, who has not been named. Hutchins later tweeted: "Anyone got a kronos sample."

His mum, Janet Hutchins, told the Press Association it is "hugely unlikely" he was involved because he spent "enormous amounts of time" fighting attacks.

Research?

Meanwhile, Ryan Kalember, a security researcher from Proofpoint, told the Guardian that the actions of researchers investigating malware may sometimes look criminal.

“This could very easily be the FBI mistaking legitimate research activity with being in control of Kronos infrastructure," said Kalember. "Lots of researchers like to log in to crimeware tools and interfaces and play around.”

The indictment alleges that Hutchins created and sold Kronos on internet forums including the AlphaBay dark web market, which was shut down last month.

"Sometimes you have to at least pretend to be selling something interesting to get people to trust you,” added Kalember. “It’s not an uncommon thing for researchers to do and I don’t know if the FBI could tell the difference.”

It's a sentiment echoed by US cyber-attorney Tor Ekeland, who told Radio 4's Today Programme: "I can think of a number of examples of legitimate software that would potentially be a felony under this theory of prosecution."

Hutchins could face 40 years in jail if found guilty, Ekeland said, but he added that no victims had been named.

This article also appears on NS Tech, a new division of the New Statesman focusing on the intersection of technology and politics.

Oscar Williams is editor of the New Statesman's sister site NS Tech.