Sir Malcolm Rifkind, the head of the ISC, has said companies like Facebook offer terrorists a "safe haven". Photo: Getty Images

Making Facebook an arm of MI5 won't be a guarantee against terrorism

The security services want social networks like Facebook to be more forthcoming with material posted by users that might indicate a threat to national security. But the root causes of terrorism will never be fixed with data alone.

Today sees the publication of a proposed new anti-terrorism bill, the Counter-Terrorism and Security Bill 2014-15, ahead of its scheduled second reading in Parliament tomorrow, on 27 November. Yesterday, meanwhile, saw Parliament's Intelligence and Security Committee (ISC) publish its report into the murder of Lee Rigby in May 2013 by Michael Adebolajo and Michael Adebowale, with specific blame laid at the feet of social media companies for not doing enough to prevent it.

The report itself doesn't name the single company it considers most at fault, but it didn't take long for journalists to work out that Facebook was the alleged culprit. Most of the major papers today carry the damning verdict on their front pages with reference to Mark Zuckerberg's company, with the Mail reporting Rigby's sister as saying that Facebook had "blood on their hands" for their inaction, and that she held them "partly responsible" for his death. According to the report, some of Adebowale's activity on Facebook had come to the attention of site moderators, who had deleted some of his posts (and even his account) on more than a few occasions. According to Sir Malcolm Rifkind, the chair of the ISC, had this information been passed on to MI5, the attack on Rigby could have been prevented.

Is this true? Well, yes, but also no, with caveats for either. This is the old conflict between what the state wants to know and what the individual wishes to keep private, but with the added complications that come with the internet, and social networks, and national sovereignty. 

Treating this as a clear binary between privacy and security isn't enough, as evidenced by Facebook making people unhappy on both sides of the divide for what it does: it actively monitors everything that its users do (so that it can make money off them and/or remove inappropriate/illegal content, upsetting privacy advocates in the process), but it also doesn't necessarily report posts that could signify an illegal activity to the relevant authorities (which annoys bodies like MI5, who would find that kind of content extremely useful). 

Put this in the context of the British government's attitude towards data collection over the last 15 or so years, and the conclusions of the ISC report make sense. The UK political establishment, across all three main parties, tends heavily towards a policy of better safe than sorry, and of expanding data collection whenever a blind spot in intelligence gathering is exposed. For example, when the European Court of Justice ruled earlier this year that the EU-wide directive mandating the collection and retention of customer metadata by ISPs violated the human right to privacy, Parliament's response (with Tory, Labour and Lib Dem backing) was to rush through new legislation to reintroduce the mandate into British law as soon as possible. (Other countries, like Sweden, went the other way, even opting not to charge those ISPs that had ignored the original directive.)

Facebook, then, is ripe, low-hanging fruit which, if plucked, would make the lives of spooks hugely easier. However, they can't pluck it - because Facebook is an American company, legally obligated to obey only American warrants. That's the problem identified by Rifkind. Facebook may not have had a legal obligation to hand over the kind of data it held on Adebowale, but it arguably had a moral one. What's more troubling, though, is that the demand here is that Facebook choose to hand over possible evidence of a user planning a terror attack before a body like MI5 asks for it. (As far as we know, MI5 didn't ask Facebook about Adebowale's messages until after the attack.)

Follow this chain of thought any distance, however, and the problematic consequences are obvious. If Facebook accepted that any national government could demand the personal information of any user for matters of national security, that opens up all kinds of precedents for human rights abuses in nations where activists or persecuted groups use Facebook as a communication medium. (Arguably, that doesn't just mean, for example, Russia. It can also mean the UK, where many groups face surveillance from the police.)

This doesn't stop the British government from repeatedly trying to act as if US-based tech companies should listen to these kinds of requests, though, as evidenced by the Guardian's report on Silicon Valley's response to the ISC findings. In techno-libertarian California, not only is this kind of attitude seen as undemocratic, it's seen as part of a wider campaign against web freedom altogether. ("Nice fucking timing," said one executive, referring to the new bill published today.)

Privacy and civil rights campaigners tend to talk about this issue using the metaphor of physical mail - we can accept the need to open the letters of someone suspected of plotting a terrorist attack, given the oversight of a judge and a warrant, but we otherwise expect nobody but the sender and the receiver to know what's in each envelope. That isn't a perfect analogy for what social networks are, though, and that's crucial here. Facebook, like every other social network of any notable size, has a business model based almost entirely on eavesdropping on the conversations its users are having with each other so it can try to show them relevant ads. At the same time, things posted there can be intended for an audience of a few people, yet mechanisms are in place that can mean a pseudo-private conversation quickly spreads among millions of other people.

The constant surveillance, even of private messages, is currently the focus of a large EU-wide civil suit, and the question of corporate surveillance - which we legally, if not morally, consent to - is a problem all of its own. But I would argue that, when it comes to state surveillance, we should think of a site like Facebook less as a postal service and more as a pub or cafe, with the staff coming round to pick up the empty mugs taking the role of the algorithms that listen in on our passing conversations. It's a service as much as a medium, and one which puts up ads on the walls to pay the rent.

In that kind of scenario, what should someone do if they hear a customer use the word "kill"? Or "explosion"? Facebook has 1.3 billion users who between them send upwards of 500 billion messages a day - and while the site does have some sophisticated scanning methods for trying to keep on top of, for example, potential cases of child grooming, it is limited in its ability to parse nuance or humour. The vast daily stream of social media content flagged as needing human moderation has created, in turn, a huge outsourced moderation industry that is both horrible for those who have to work within it and still far from perfect at catching everything abhorrent (as ongoing problems with the online harassment of women and people of colour have shown).
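
To make the scale problem concrete, here is a deliberately naive keyword filter, written as a Python sketch. It is not a description of Facebook's actual scanning systems, whose details aren't public; it simply shows how a bare watchlist of alarming words drowns in everyday speech while missing anything phrased indirectly.

```python
# A toy keyword filter - illustrative only, not Facebook's real system.
# It flags ordinary chatter and misses anything phrased indirectly.
WATCHLIST = {"kill", "explosion", "bomb"}

def flag_message(text: str) -> bool:
    """Return True if any watchlist word appears in the message."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & WATCHLIST)

messages = [
    "this heat is going to kill me",            # benign, flagged anyway
    "the finale was an explosion of colour",    # benign, flagged anyway
    "meet me at the usual place at nine",       # worrying in context, not flagged
]

for message in messages:
    print(flag_message(message), "-", message)
```

Scale that up to hundreds of millions of posts an hour and the filter's mistakes, not its hits, become the dominant workload.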

And don't forget the Twitter joke trial, which came about because a human being - not an algorithm - passed a tweet that was clearly a joke on to the police, just in case. Knowing how fallible these systems already are, it isn't reasonable to think that the solution to an intelligence gap is to pile yet more data onto the heap that is already there. Facebook may act like a private surveillance agency, but it doesn't necessarily follow that the best thing is to combine its abilities with the state's - it won't double the accuracy, but it could well double the margin of error.

In the ISC's ideal world, Facebook would be an extension of the surveillance state - taking on the cost of identifying and assessing likely leads from among billions of posts each day, with the most promising passed on down the line. This is the fallacy that affects so much of the tech industry: that technology will fix everything, and that everything in the physical world is reflected in how people act online. But MI5 still doesn't know how Adebolajo and Adebowale planned their attack, even now. The data was the missing link here, but it's dangerous to assume it always will be.

The ISC report makes it clear that both Adebolajo and Adebowale were under MI5 surveillance for years over a range of real-world links to other suspected extremists, but Facebook gets the blame because a single message about killing a soldier could have alerted the authorities that Adebowale was a more serious risk. Yet what's also clear is that the government's approach to tackling the root causes of terrorism - to stopping young men and women choosing to commit acts of terror in the first place - is failing. What's the better investment here: removing yet another layer of the privacy the average citizen can expect, or spending some time and money on something that gets to the root of the issue?

Ian Steadman is a staff science and technology writer at the New Statesman. He is on Twitter as @iansteadman.


Don’t shoot the messenger: are social media giants really “consciously failing” to tackle extremism?

MPs today accused social media companies of failing to combat terrorism, but just how accurate is this claim? 

Today’s home affairs committee report, which said that internet giants such as Twitter, Facebook, and YouTube are “consciously failing” to combat extremism, was criticised by terrorism experts almost immediately.

“Blaming Facebook, Google or Twitter for this phenomenon is quite simplistic, and I'd even say misleading,” Professor Peter Neumann, an expert on radicalisation at King's College London, told the BBC.

“Social media companies are doing a lot more now than they used to - no doubt because of public pressure,” he went on. The report, however, labels the 14 million videos Google have removed in the last two years, and the 125,000 accounts Twitter has suspended in the last one, a “drop in the ocean”.

It didn’t take long for the sites involved to reject the claims, which follow a 12-month inquiry into radicalisation. A Facebook spokesperson said they deal “swiftly and robustly with reports of terrorism-related content”, whilst YouTube said they take their role in combating the spread of extremism “very seriously”. This time last week, Twitter announced that it had suspended 235,000 accounts for promoting terrorism in the last six months - a period which, incidentally, runs past February, when the committee stopped counting.

When it comes to numbers, it’s difficult to determine what is and isn’t enough. There is no magical number of Terrorists On The Internet that experts can compare the number of deletions to. But it’s also important to judge the companies’ efforts within the realm of what is actually possible.

“The argument is that because Facebook and Twitter are very good at taking down copyright claims they should be better at tackling extremism,” says Jamie Bartlett, Director of the Centre for the Analysis of Social Media at Demos.

“But in those cases you are given a hashed file by the copyright holder and they say: ‘Find this file on your database and remove it please’. This is very different from extremism. You’re talking about complicated nuanced linguistic patterns each of which are usually unique, and are very hard for an algorithm to determine.”
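
For comparison, a minimal sketch of the hash-matching workflow Bartlett describes might look like the Python below. It assumes the rights-holder has supplied a set of known file digests; real takedown systems use perceptual hashes such as PhotoDNA rather than plain SHA-256, which changes completely if a single byte of the file changes.

```python
# Sketch of hash-based takedown matching, assuming a rights-holder has
# supplied digests of the infringing files (hypothetical value below).
import hashlib

KNOWN_INFRINGING_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # hypothetical
}

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_infringing(path: str) -> bool:
    return sha256_of(path) in KNOWN_INFRINGING_HASHES
```

There is no equivalent digest for a sentence expressing extremist intent, which is exactly the gap Bartlett is pointing at.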

Bartlett explains that a large team of people would have to work on building this algorithm by trawling through cases of extremist language, which, as Thangam Debbonaire learned this month, even humans can struggle to identify.

“The problem is when you’re dealing with linguistic patterns even the best algorithms work at 70 per cent accuracy. You’d have so many false positives, and you’d end up needing to have another huge team of people that would be checking all of it. It’s such a much harder task than people think.”
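
Bartlett's numbers are worth running. The figures below are illustrative assumptions rather than real platform statistics - one billion posts a day, one post in 100,000 genuinely extremist, and a classifier that gets both classes right 70 per cent of the time - but the shape of the result is what matters.

```python
# Back-of-the-envelope arithmetic for the review burden a 70-per-cent-accurate
# classifier creates at social-media scale. All figures are assumptions.
posts_per_day = 1_000_000_000      # assumed daily volume
base_rate = 1 / 100_000            # assumed share of posts that are extremist
true_positive_rate = 0.70          # extremist posts correctly flagged
false_positive_rate = 0.30         # innocent posts wrongly flagged

extremist_posts = posts_per_day * base_rate
innocent_posts = posts_per_day - extremist_posts

genuine_hits = extremist_posts * true_positive_rate
false_alarms = innocent_posts * false_positive_rate

print(f"Genuine hits per day: {genuine_hits:,.0f}")    # ~7,000
print(f"False alarms per day: {false_alarms:,.0f}")    # ~300,000,000
```

On those assumptions, each genuine hit arrives buried under roughly 40,000 false alarms, every one of which needs a human to check it - the "huge team of people" Bartlett describes.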

Finding and deleting terrorist content is also only half of the battle. When it comes to videos and images, thousands of people could have downloaded them before they were deleted. During his research, Bartlett has also discovered that when one extremist account is deleted, another inevitably pops up in its place.

“Censorship is close to impossible,” he wrote in a Medium post in February. “I’ve been taking a look at how ISIL are using Twitter. I found one user name, @xcxcx162, who had no less than twenty-one versions of his name, all lined up and ready to use (@xcxcx1627; @xcxcx1628, @xcxcx1629, and so on).”

Beneath all this, there might be another, fundamental flaw in the report’s assumptions. Demos argue that there is no firm evidence that online material actually radicalises people, and that much of the material extremists view and share is often from mainstream news outlets.

But even if total censorship were possible, that doesn't necessarily make it desirable. Bartlett argues that deleting extreme content would diminish our critical faculties, and that exposing people to it allows them to see for themselves that terrorists are “narcissistic, murderous, thuggish, irreligious brutes.” Complete censorship would also ruin social media for innocent people.

“All the big social media platforms operate on a very important principle, which is that they are not responsible for the content that is placed on their platforms,” he says. “It rests with the user, because if they were legally responsible for everything that’s on their platform – and this is a legal ruling in the US – they would have to check every single thing before it was posted. Given that Facebook deals with billions of posts a day, that would be the end of the entire social media infrastructure.

“That’s the kind of trade off we’d be talking about here. The benefits of those platforms are considerable and you’d be punishing a lot of innocent people.”

No one is denying that social media companies should do as much as they can to tackle terrorism. Bartlett thinks that platforms can do more to remove information under warrant or hand over data when the police require it, and making online policing 24/7 is an important development “because terrorists do not work 9 to 5”. At the end of the day, however, it’s important for the government to accept technological limitations.

“Censorship of the internet is only going to get harder and harder,” he says. “Our best hope is that people are critical and discerning and that is where I would like the effort to be.” 

Amelia Tait is a technology and digital culture writer at the New Statesman.