Would you have any ethical qualms about controlling a cockroach's brain?

The RoboRoach will be marketed to US kids from November. It has always seemed mystifying that researchers struggle to see the thorny side of their technologies.

Most people find it much easier to accept approval than to take the blame. It turns out that we don’t always weasel out of things deliberately – it’s just what human beings do.

This revelation comes from a study published this month by neuroscientists at University College London. Volunteers pressed a button that triggered a sound – a cheer, a note of disgust or something neutral – and then estimated the time that had elapsed between pressing the button and hearing the sound.

Though the elapsed time was always the same, the volunteers who got applause underestimated it, while those who got a negative reaction grossly overestimated it.

Patrick Haggard, who led the research, interprets this distortion as showing that people feel more “agency” when things go right: they see a direct connection between their action and a positive result but unconsciously distance themselves from things that go wrong. When children and politicians say, “It wasn’t me,” they might not be lying: that could be their perception.

It is an interesting result to apply to people who put science and technology to work. Take the RoboRoach. From November, kids across the US will be able to buy a kit that allows them to feed a steering signal from a smartphone directly into a cockroach’s brain – creating, in effect, a remote-controlled insect.

The inventors seem not to have any ethical qualms about the idea. Rather, they argue that it is a “great way to learn about neuro-technology”. It is certainly a good way to explore how scientists and engineers filter their sense of responsibility. At best, the RoboRoach encourages the oversimplification of neuroscience. The message is that you can make an electronic incursion into brain circuits and take control of actions. In the US, a few neuroscientists are already testifying in court that an image of a small region of the brain filling with blood can be interpreted to mean that an individual wasn’t responsible for a criminal action. If RoboRoach does create a new generation of neuroscientists, we really are in trouble.

There are deeper issues here. The technology for RoboRoach grew out of projects to co-opt insects as mobile sensor units. Researchers have already performed neurosurgery on beetles, grafting in electronics that make them take off and fly to a specific location. Put a camera, a microphone or a temperature sensor on their back and you have a new set of eyes and ears. It’s a wonderful idea, say its developers: cyborg beetles could help us find people trapped in collapsed buildings after earthquakes.

Similarly wonderful – superficially, at least – is the Robo Raven, developed at the University of Maryland. It is a rather beautiful drone that flaps its wings, performs aerobatics and was natural-looking enough in field trials to be mobbed by other birds. “This is just the beginning: the possibilities are virtually endless,” says S K Gupta, the lead researcher on the project. One clear possibility is that the Robo Raven will function as a surveillance drone that is almost undetectable in the natural world.

It has always seemed mystifying that researchers struggle to see the thorny side of their technologies. It’s not just a military issue – Google, Facebook and the NSA all think that they are making the world a better place and that any downsides of their operations are not their fault. Now we know why: they can’t help it.

Neuroscience and cockroaches: a match made in heaven?

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 17 October 2013 issue of the New Statesman, The Austerity Pope


How virtual reality pigs could change the justice system forever

Lawyers in Canada are aiming to defend their client by asking the judge to don a virtual reality headset and experience the life of a pig.

“These are not humans, you dumb frickin' broad.”

Those were the words truck driver Jeffrey Veldjesgraaf said to animal rights activist Anita Krajnc on 22 June 2015 as she gave water to some of the 190 pigs in his slaughterhouse-bound truck. This week, 49-year-old Krajnc appeared at the Ontario Court of Justice charged with mischief for the deed, which she argues was an act of compassion for the overheated animals. To prove this, her lawyers hope to show a virtual reality video of a slaughterhouse to the judge, David Harris. Pigs might not be humans, but humans are about to become pigs.

“The tack that we’ve taken recognises that Anita hasn’t done anything wrong,” said one of her lawyers, James Silver. Along with testimony from environmental and animal welfare experts, her defence hopes the virtual reality experience, which is planned for when the trial resumes in October, will allow Harris to understand Krajnc’s point of view. Via the pigs’ point of view.

It’s safe to say that the simulated experience of being a pig in a slaughterhouse will not be a pleasant one. iAnimal, an immersive VR video about the lives of farm animals, launched earlier this year and has already changed attitudes towards meat. But whether or not Harris becomes a vegetarian after the trial is not the most pressing aspect of this case. If the lawyers get their wish to bring a VR headset into the courtroom, they will make legal history.

“Virtual reality is a logical progression from the existing ways in which technology is used to illustrate and present evidence in court,” says Graham Smith, a technology lawyer and partner at the international law firm Bird & Bird.

“Graphics, charts, visualisations, simulations and reconstructions, data-augmented video and other technology tools are already used to assist courts in understanding complex data and sequences of events.”

Researchers have already been looking into the ways VR can be used in courts, with particular focus on recreating crime scenes. In May, Staffordshire University launched a project that aims to “transport” jurors into virtual crime scenes, whilst in 2014 researchers at the Institute of Forensic Medicine in Switzerland created a 3D reconstruction of a shooting, including the trajectory of a bullet. Although this will help bring to life complex evidence that might be hard to understand or picture in context, the use of VR in this way is not without its flaws.

“Whether a particular aid should be admitted into evidence can give rise to argument, especially in criminal trials involving a jury,” says Smith. “Does the reconstruction incorporate factual assumptions or inferences that are in dispute, perhaps based on expert evidence? Does the reconstruction fairly represent the underlying materials? Is the data at all coloured by the particular way in which it is presented? 

“Would immersion aid a jury's understanding of the events or could it have a prejudicial impact? At its core, would VR in a particular case add to or detract from the court's ability objectively to assess the evidence?”

The potential for bias is worrying, especially if the VR video is constructed from witness testimony rather than CCTV footage or other quantitative data. To avoid bias, both the defence and prosecution could feasibly recreate an event from different perspectives. If the jury or judge experience the life of a distressed pig on its way to be slaughtered, should they also be immersed in the life of a sweaty trucker, just trying to do his job and panicked by a protester feeding his pigs an unknown substance from a bottle?

“These are not new debates,” says Smith. “Lawyers are used to tackling these kinds of issues with the current generation of illustrative aids. Before too long they will find themselves doing so with immersive VR.”

It seems safe to trust, then, that legal professionals will readily come up with failsafe guidelines for the use of VR in order to avoid prejudice or bias. But beyond legal concerns, there is another issue: ethics.

In 2009, researchers at the University of Leicester discovered that jurors face trauma due to their exposure to harrowing evidence. “The research confirms that jury service, particularly for crimes against people, can cause significant anxiety, and for a vulnerable minority it can lead to severe clinical levels of stress or the symptoms of post traumatic stress disorder,” they wrote.

It’s easy to see how this trauma could be exacerbated by being virtually transported to a scene and watching a crime play out before your eyes. Gamers have already spoken about panic attacks brought on by VR horror games, with Denny Unger, creative director of Cloudhead Games, speculating that they could even cause heart attacks. A virtual reality murder, however simulated, is still viscerally experienced, and could easily cause similar distress.

Then there is the matter of which crimes get the VR treatment. Would courts allow the jury to be immersed in a VR rape? However harrowing and far-fetched that sounds, a virtual reality sexual assault was already screened at the 2015 Sundance Film Festival.

For now, legal professionals have time to consider these issues. By October, Krajnc’s lawyers may or may not have been allowed to use VR in court. If they are, they may make legal history. If they’re not, Krajnc may be found guilty, and could face six months in jail or a $5,000 fine.

Amelia Tait is a technology and digital culture writer at the New Statesman.