Would you have any ethical qualms about controlling a cockroach's brain?

The RoboRoach will be marketed to US kids from November. It has always seemed mystifying that researchers struggle to see the thorny side of their technologies.

Most people find it much easier to accept approval than to take the blame. It turns out that we don’t always weasel out of things deliberately – it’s just what human beings do.

This revelation comes from a study published this month by neuroscientists at University College London. Volunteers pressed a button that triggered a sound – a cheer, a note of disgust or something neutral – and then estimated the time that had elapsed between pressing the button and hearing the sound.

Though the elapsed time was always the same, the volunteers who heard applause underestimated it, while those who got a negative reaction after pressing the button grossly overestimated it.

Patrick Haggard, who led the research, interprets this distortion as showing that people feel more “agency” when things go right: they see a direct connection between their action and a positive result but unconsciously distance themselves from things that go wrong. When children and politicians say, “It wasn’t me,” they might not be lying: that could be their perception.

It is an interesting result to apply to people who put science and technology to work. Take the RoboRoach. From November, kids across the US will be able to buy a kit that allows them to feed a steering signal from a smartphone directly into a cockroach’s brain – creating, in effect, a remote-controlled insect.

The inventors seem not to have any ethical qualms about the idea. Rather, they argue that it is a “great way to learn about neuro-technology”. It is certainly a good way to explore how scientists and engineers filter their sense of responsibility. At best, the RoboRoach encourages the oversimplification of neuroscience. The message is that you can make an electronic incursion into brain circuits and take control of actions. In the US, a few neuroscientists are already testifying in court that an image of a small region of the brain filling with blood can be interpreted to mean that an individual wasn’t responsible for a criminal action. If RoboRoach does create a new generation of neuroscientists, we really are in trouble.

There are deeper issues here. The technology for RoboRoach grew out of projects to co-opt insects as mobile sensor units. Researchers have already performed neurosurgery on beetles, grafting in electronics that make them take off and fly to a specific location. Put a camera, a microphone or a temperature sensor on their back and you have a new set of eyes and ears. It’s a wonderful idea, say its developers: cyborg beetles could help us find people trapped in collapsed buildings after earthquakes.

Similarly wonderful – superficially, at least – is the Robo Raven, developed at the University of Maryland. It is a rather beautiful drone that flaps its wings, performs aerobatics and was natural-looking enough in field trials to be mobbed by other birds. “This is just the beginning: the possibilities are virtually endless,” says S K Gupta, the lead researcher on the project. One clear possibility is that the Robo Raven will function as a surveillance drone that is almost undetectable in the natural world.

It has always seemed mystifying that researchers struggle to see the thorny side of their technologies. It’s not just a military issue – Google, Facebook and the NSA all think that they are making the world a better place and that any downsides of their operations are not their fault. Now we know why: they can’t help it.

Neuroscience and cockroaches: a match made in heaven?

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 17 October 2013 issue of the New Statesman, The Austerity Pope

Google Allo

Google Allo: a chat app like WhatsApp – but with only a cursory consideration for your privacy

When will we stop sacrificing security for stickers of muscular bulls wiggling their butts? 

The world already has enough chat apps. When Google’s latest messaging service Allo launched this morning, a cursory glance showed us it had much the same features as Snapchat, WhatsApp, and Facebook Messenger before it. You can doodle on your pictures! Here’s an emoji with heart eyes! Look at this sticker of a bull twerking! Oh-by-the-way-we’re-reading-your-messages-hope-that’s-not-a-problem-bye!

Just like Facebook, Instagram, Skype, and Snapchat, the messages you send on Google Allo are not automatically end-to-end encrypted. This type of encryption – which WhatsApp began using in April – means that only you and the recipient of your message can read it and nobody in between. Messaging apps without end-to-end encryption can store your messages on their servers and access them at any time, as well as hand them over to the government if required by law. The technology academic and author John Naughton has likened it to “sending your most intimate secrets via holiday postcards” and Edward Snowden went as far as to call Google Allo “dangerous”.
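The property at stake can be shown with a deliberately toy sketch (this is not real cryptography, and not how WhatsApp’s actual Signal-based protocol works): if the two endpoints hold a secret key the server never sees, the server can only ever store and relay unreadable ciphertext.

```python
# Toy illustration of the end-to-end property (NOT production crypto):
# the relay server handles only ciphertext it cannot decrypt.
import secrets


def xor_bytes(data: bytes, pad: bytes) -> bytes:
    """XOR two equal-length byte strings (a one-time-pad toy cipher)."""
    return bytes(a ^ b for a, b in zip(data, pad))


# Alice and Bob share a secret key; the server is never given it.
message = b"meet the lads in Nando's at six-thirty"
shared_key = secrets.token_bytes(len(message))  # one-time pad

ciphertext = xor_bytes(message, shared_key)  # all the server ever stores
decrypted = xor_bytes(ciphertext, shared_key)  # only the endpoints can do this

assert decrypted == message
```

Without end-to-end encryption the server holds `message` itself, which is exactly what lets a provider read, analyse, or hand over your conversations.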

But Google has a reason for not using end-to-end encryption (whether it’s a good one or not is up to you). The app includes Google Assistant, a tool which can answer your questions within any chat. In order for this to work, Google naturally needs to access your messages. Its new “Smart Reply” feature also means it reads and analyses your conversations to give you personalised auto-reply suggestions. Despite originally promising that it would only store your chat history for a limited amount of time, Google has now admitted that it will retain the data unless you personally choose to delete it. The app is actively trying to learn as much about you as possible, and then storing the data. 

But while Google Allo doesn’t automatically offer end-to-end encryption, it is receiving praise for the ability to opt in via “Incognito mode”. Once this mode is selected, you have end-to-end encryption on your messages, and you can set them to expire after a certain period of time. Wonderful. Brilliant. Article over. No more worries.

Except by placing the onus on the user to opt in to privacy (rather than opt in to Google Assistant) Google has played a trick that many companies have played before. Amazon recently launched a UK version of Echo, a “constantly listening” smart device that records and stores all of your questions, and gave users the option to mute the machine if they were concerned about privacy. But by its very nature, no one who desires this device is concerned about privacy.

And so too with Google Allo. Anyone worried about Snowden’s warning won’t download it, and those who do download it are unconcerned about, or unaware of, the lack of end-to-end encryption. Even the name “Incognito mode” makes it sound like something that should be used for shady or saucy goings-on, instead of accepting that, by default, all of your private conversations should stay private.

Which raises the question: why don’t most of us care? Allo’s opt-in encryption is actually a vast improvement on Facebook Messenger’s complete disregard for this privacy measure, and that app has one billion active users. Are we truly so distracted by stickers and emojis that we don’t spare a thought for security? Our general apathy towards personal privacy sets a precedent for a future in which – and really, no tinfoil hats are needed here – none of our conversations are ever private.

You probably don’t care because your conversations are boring (no offence). It doesn’t worry us that the government or the police or big businesses are listening because all we’re talking about is whether to meet the lads in Nando’s at six or six-thirty. But no matter how inane our conversations, we should always protect ourselves from eavesdropping.

This is because, as the way Google search histories are used in court shows, your personal data can easily be misconstrued. If you ever did get in trouble with the police, can you really trust them to understand the private jokes between you and your friends, and not construe malicious meanings in your messages? What if third parties accessed your conversations? Companies already use our social media profiles to target advertisements towards us, but what if they scanned our messages to understand us better? Could your offhand conversation about how sick you’re feeling affect your health insurance claims? Will your message about money trouble prevent you from getting a loan?

These are all hypothetical questions, yes, but they are a path our apathy is driving us down. We’d much rather skip through the Terms and Conditions to get a new flashy feature than really scrutinise the data we’re giving away and how it’s used. Companies know this, which is why they hide behind opt-in features like “Incognito mode” and the “delete chat history” button. They can defend themselves by saying the option is there while simultaneously knowing that most people will never actually use it.

There is no easy way to get the wider world to care about privacy, but thankfully there’s probably no way to get them to care about Allo either. It’s not certain whether the messaging app will fail, but given the fate of Google's previous chat apps (Talk or Hangouts, anyone?), it seems likely. Then again, none of those had a sticker of a muscular bull wiggling its butt.

Amelia Tait is a technology and digital culture writer at the New Statesman.