On manipulating memories, we're not as far behind Hollywood as you might think

Deep brain stimulation is racing ahead, and the ethical issues associated with it are starting to be debated.

Remember Total Recall? When the film came out in 1990, its premise, in which people take virtual holidays using memory manipulation, seemed far-fetched. But on 20 August President Obama’s commission on bioethics debated what we ought to do about memory manipulation. That’s because it is just one of many invasive actions we are beginning to perform on the brain.
 
This month, the first trials of a new technique for controlling Parkinson’s disease began. A German sufferer has had a “deep brain stimulation” device, essentially a pair of electrodes, implanted in his brain. It will monitor the brain’s activity and deliver electrical currents designed to combat tremors and muscle rigidity. A similar technique has been shown, in a few cases, to reverse the shrinkage of brain tissue associated with Alzheimer’s disease. This reversal was not merely structural: it led to improved brain functioning. No one knows how it works; the best guess is that it stimulates the growth of neurons.
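The closed-loop idea behind such a device – sense the brain’s activity, then stimulate only when needed – is simple enough to sketch. Here is a toy illustration in Python; the signal, threshold and pulse parameters are all invented for the example and bear no relation to any real implant.

```python
import random

# Toy sketch of a closed-loop stimulation controller: read a proxy
# signal for tremor-related activity and deliver a pulse only when it
# crosses a threshold. All names and numbers are hypothetical.

TREMOR_THRESHOLD = 0.7  # invented normalised activity level


def read_neural_activity() -> float:
    """Stand-in for the electrode's sensing channel."""
    return random.random()


def deliver_pulse(amplitude_ma: float) -> None:
    """Stand-in for the electrode's stimulation channel."""
    print(f"stimulating at {amplitude_ma} mA")


def control_loop(steps: int = 10) -> None:
    for _ in range(steps):
        activity = read_neural_activity()
        if activity > TREMOR_THRESHOLD:
            # Stimulate only on abnormal activity, rather than
            # delivering a constant current.
            deliver_pulse(amplitude_ma=2.0)


if __name__ == "__main__":
    control_loop()
```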
 
Deep brain stimulation is also a treatment option if you have obsessive-compulsive disorder. OCD appears to arise when electrical circuits conveying signals between the emotional and decision-making parts of the brain become stuck in feedback loops. That leads people to repeat actions compulsively, because the anxieties associated with not having done the task don’t get erased. A jolt of electricity seems to clear the brain jam, however. Similar treatments appear to cure depression in some people.
 
And, true to Hollywood, we are now manipulating memories. We’re not yet at the virtual holiday stage, but mice are starting to have some strange experiences. Last month it was reported that electricity delivered to a mouse’s hippocampus gave it a memory of receiving a shock to the foot.
 
Hence the need for ethical review: it is easy to see how this could eventually be used to create a tool for controlling errant prisoners, say, or mental-health patients. Perhaps you remember the electroconvulsive “therapy” punishment in One Flew Over the Cuckoo’s Nest? It’s still seen as a treatment option for depression, but some think it’s too blunt an instrument. Deep brain stimulation is far less blunt – yet who decides just how blunt is acceptable?
 
There are many other issues to face. As we begin our assault on the brain, we will gather information that might turn out to be problematic. Brain experiments are already suggesting that some people have naturally poor control over impulsive actions, and are more prone to criminal or antisocial behaviour. It is important that such information should not be thrown casually into the public sphere.
 
For all the appropriate caution, let’s acknowledge that some of the things we’re learning to do to the brain are really rather exciting. Having a virtual holiday might sound like a bore, but what about having razor-sharp focus at the flick of a switch? The US military is piloting a scheme that is mind-bendingly futuristic: a DC electrical current applied to the brain that in effect puts you into a high-concentration zone. With “transcranial direct current stimulation”, learning is accelerated and performance in tasks that require mental focus is significantly enhanced.
 
The Americans are using it to improve sniper training but that won’t be the only application. One day soon you might unplug yourself and utter the immortal words: “I know kung fu.” Hollywood races ahead, but we’re not as far behind as you might think.

Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 26 August 2013 issue of the New Statesman, How the dream died


YouTube announces new measures against extremism – but where do they leave the far right?

Videos by alt-right commentators have arguably radicalised many online. Will Google's latest policies do anything to change this?

Within hours of the terrorist attack in Finsbury Park, Tommy Robinson was trending on Twitter. The former leader of the English Defence League accused the Finsbury Park mosque of “creating terrorists” in a series of tweets on his personal account.

More than 17,400 people have now tweeted about the 34-year-old, with many theorising he could have radicalised the attacker who allegedly shouted “I’m going to kill all Muslims” at the scene. At present, there is no evidence that the man arrested by police on suspicion of attempted murder is a fan of Robinson.

“People are saying I’m inciting hate,” said Robinson in a video uploaded to Twitter and YouTube after the attack. “I just tell the facts and the truth and I’m not going to apologise for that…

“If giving you quotes from the Quran that incite murder and war against us is inciting hate, I’m guilty. If telling you all the problematic problems that come from the teachings and scriptures of Islam, I’m guilty. But these are just facts.”

After describing the country as being at “war”, he goes on to say: “Please one person, just one, give me one example of me inciting hate.”

When we talk about radicalisation and terrorism, we are finally coming to understand that this extends beyond the work of Isis.

Just over a year ago, Labour MP Jo Cox was murdered by a white supremacist. This morning, Harry Potter author JK Rowling used Twitter to accuse columnist Katie Hopkins of contributing to radicalisation. The New Statesman’s own Media Mole notes how right-wing tabloids incite hate.

In particular, it is now evident how the far right radicalises online. In December 2016, a man fired three shots in a Washington DC pizza parlour that the alt-right (on 4Chan and YouTube) had accused of being at the centre of a paedophile ring.

The internet arguably allowed Anders Breivik, the Norwegian far right white supremacist who killed 77 people in 2011, to cultivate his extreme views. Alexandre Bissonnette, the white nationalist who murdered six men at a Québec City mosque in January, was described by many as an “internet troll”.

Earlier this year, a report by the Commons home affairs committee accused social media giants of not doing enough to tackle terrorism online. In response to this – and following a series of high-profile brands pulling their advertising from YouTube after their adverts appeared alongside terrorism-related videos – Google, which owns the video-sharing site, has now announced four steps it is taking to fight online terror. But do these reflect the reality that there are many forms of extremism?

Google’s new guidelines speak of “terrorism” and “extremism” in broad terms. This means that videos glorifying or inciting terrorism will be treated the same whether they are from the far right, far left, or pro-Isis organisations.

Three of Google’s four steps for tackling such videos are: using machine learning to identify videos glorifying violence, using a team of human flaggers to identify problematic videos, and using a “redirect method” to send potential Isis recruits towards anti-terror videos. Each of these is concerned with content that either breaks the law or violates YouTube’s policies.

The fourth step (or rather the third, as it is ordered in Google’s blogpost) is focused on content that is legal and does not violate YouTube’s policies. For example, this could include videos that don’t directly incite terrorism, but arguably incite hate, such as those denying the Holocaust.

According to Kent Walker, Google’s general counsel, these could also be “videos that contain inflammatory religious or supremacist content”. Rather than being removed like the other offending videos, these will be hidden behind a warning, not have adverts on them (therefore preventing their creators from making money), and will not be eligible for comments. Essentially, as Walker writes, “that means these videos will have less engagement and be harder to find”.
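Taken together, the four steps amount to a video-by-video decision flow. The sketch below models that flow in Python purely for illustration: the class names, flags and routing logic are my own assumptions about how such a system might be structured, not anything Google has published.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical model of the moderation flow the article describes.
# The categories and routing rules are illustrative assumptions.


class Action(Enum):
    REMOVE = auto()         # breaks the law or violates YouTube policy
    REDIRECT = auto()       # serve counter-narrative content instead
    LIMITED_STATE = auto()  # warning screen, no adverts, no comments
    ALLOW = auto()


@dataclass
class Video:
    incites_violence: bool     # e.g. flagged by ML models or human reviewers
    targets_recruitment: bool  # e.g. matches known recruiting material
    borderline_hateful: bool   # legal, but inflammatory or supremacist


def moderate(video: Video) -> Action:
    if video.incites_violence:
        return Action.REMOVE
    if video.targets_recruitment:
        return Action.REDIRECT
    if video.borderline_hateful:
        # The "fourth step": not removed, but demonetised, placed
        # behind a warning, stripped of comments and not recommended.
        return Action.LIMITED_STATE
    return Action.ALLOW


if __name__ == "__main__":
    video = Video(incites_violence=False,
                  targets_recruitment=False,
                  borderline_hateful=True)
    print(moderate(video))  # Action.LIMITED_STATE
```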

It remains to be seen whether – or how – this will apply to the content of Tommy Robinson. YouTube’s steps will be taken on a video-by-video basis, meaning no far right commentator will be banned outright. Instead, YouTube simply won’t promote any offending videos, meaning they will not appear in their subscribers’ recommended feeds and will be difficult to find on the site.

In this way, Google has remained committed to free speech while doing more to tackle extremism on YouTube. Those like Robinson who claim to just “tell the facts” could arguably now be held to account for their actions. Many on the far right are careful to not explicitly advocate violence. Nevertheless, the loaded language used in their videos could arguably incite hate.

Paul Joseph Watson, a right-wing conspiracy theorist YouTuber with nearly one million subscribers, has never advocated terrorism, but has videos entitled “Islam is NOT a Religion of Peace” and “Chuck Johnson: Muslim Migrants Will Cause Collapse of Europe”.

In the past I have argued that allowing Google and YouTube to censor us in the name of “extremism” and “terrorism” is a troubling trend, but with these new promises, the company has walked the delicate line between the law and free speech. By allowing hateful, but not illegal, content to be hosted on its site and yet restricted from a wider audience, YouTube is taking a stand against extremists of all kinds.

Amelia Tait is a technology and digital culture writer at the New Statesman.
