Doing science the Wonga way

The model used by the payday loans company might finally make science work for everyone. Could we be about to enter the age of Wonga science?

Occasionally a corporate entity tries to get science done its way. Google, for instance, sponsors various researchers to find out whether their worthy, planet-improving idea can work. But what would we get if the payday loans company Wonga.com sponsored science?

It’s not an idle question. Just recently, up in committee room 17 of the House of Commons, Chi Onwurah, Labour’s shadow science minister, gathered academics and asked for thoughts on the public role of science and how we should fund it. The responses weren’t terribly conclusive or enlightening. But one interesting thing came up – the origins of Wonga.

Wonga’s eye-watering prices (borrowing £400 for 28 days will cost you £117.48, for example) have been the subject of questions downstairs in the Commons and the Lords. Stella Creasy MP is trying to get the Financial Services Authority to cap the rate of interest a company can charge. She is supported in the other chamber by the future archbishop of Canterbury, who has called Wonga’s business model “morally wrong”.

Apparently the algorithm behind Wonga.com was originally developed to detect banking fraud. The subtext in Onwurah’s meeting was clear – Wonga is an evil application of perfectly good algorithms, and if someone had said those algorithms could lead to Wonga, questions would have been asked of those funding their development. Especially, perhaps, if Onwurah were in charge. When Wonga ploughed £24m into Newcastle Football Club in exchange for on-shirt advertising, Onwurah, MP for Newcastle Central, expressed outrage. She called Wonga a source of “debt and misery”.

There are two reasons to take issue with this. First, many people are clearly happy to pay hundreds of pounds for a short-term loan. Wonga’s reported customer satisfaction is above Apple’s and far above that recorded by any of the high-street banks. Second, Onwurah’s remit is innovation, science and digital infrastructure and yet she slurs a company that has used science and digital infrastructure to innovate. The firm is expanding into the US and is on course to become a billion-dollar company next year.

The good news is that the government will soon have a Wonga-friendly chief scientific adviser. Mark Walport is at present the director of the Wellcome Trust, the UK’s largest scientific and medical research charity and an investor in Wonga. When Creasy challenged Walport about this, he replied that he finds Wonga “extremely engaging”, with a good business model and a willingness to listen to feedback.

This bodes extremely well for Walport’s stint as the UK’s most influential scientist. Clearly, he’s not populist, he’s not swayed by conflicts with authority and he’s not averse to a bit of level-headed thinking.

Coming round

If Onwurah comes round, she and Walport might even usher in the age of Wonga science. This would be open to no-fuss funding of projects and people that are currently considered unfundable, ending the pyramid scheme that makes life easy for established professors and near-impossible for those trying to become established. It would reward people who cross disciplines to achieve optimum productivity (one of Wonga’s co-founders, Jonty Hurwitz, trained as a mathematician and physicist and then became a software engineer and entrepreneur). Pursuing interdisciplinary research is widely known as a fast track to the funding wilderness.

Wonga science would present straight-talking science advice to government and pursue research that has no useful application in sight. It would also encourage scientists to take things we already have and find entirely new purposes for them. Most appealing, it might show us gaps in our scientific research that no one even realised were there. The Wonga model might finally make science work for everyone.


Michael Brooks holds a PhD in quantum physics. He writes a weekly science column for the New Statesman, and his most recent book is At the Edge of Uncertainty: 11 Discoveries Taking Science by Surprise.

This article first appeared in the 17 December 2012 issue of the New Statesman, Will Europe ever go to war again?


Don’t shoot the messenger: are social media giants really “consciously failing” to tackle extremism?

MPs today accused social media companies of failing to combat terrorism, but just how accurate is this claim? 

Today’s home affairs committee report, which said that internet giants such as Twitter, Facebook, and YouTube are “consciously failing” to combat extremism, was criticised by terrorism experts almost immediately.

“Blaming Facebook, Google or Twitter for this phenomenon is quite simplistic, and I’d even say misleading,” Professor Peter Neumann, an expert on radicalisation from King’s College London, told the BBC.

“Social media companies are doing a lot more now than they used to - no doubt because of public pressure,” he went on. The report, however, labels the 14 million videos Google has removed in the past two years, and the 125,000 accounts Twitter has suspended in the past year, a “drop in the ocean”.

It didn’t take long for the sites involved to reject the claims, which follow a 12-month inquiry into radicalisation. A Facebook spokesperson said they deal “swiftly and robustly with reports of terrorism-related content”, whilst YouTube said they take their role in combating the spread of extremism “very seriously”. This time last week, Twitter announced that they’d suspended 235,000 accounts for promoting terrorism in the past six months – a period that, incidentally, runs beyond February, when the committee stopped counting.

When it comes to numbers, it’s difficult to determine what is and isn’t enough. There is no magical number of Terrorists On The Internet that experts can compare the number of deletions to. But it’s also important to judge the companies’ efforts within the realm of what is actually possible.

“The argument is that because Facebook and Twitter are very good at taking down copyright claims they should be better at tackling extremism,” says Jamie Bartlett, Director of the Centre for the Analysis of Social Media at Demos.

“But in those cases you are given a hashed file by the copyright holder and they say: ‘Find this file on your database and remove it please’. This is very different from extremism. You’re talking about complicated, nuanced linguistic patterns, each of which is usually unique and very hard for an algorithm to determine.”
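
Bartlett’s distinction is easy to see in miniature. Below is a minimal, purely illustrative Python sketch – the “database” of known files and the sample uploads are invented – showing why copyright matching is an exact lookup against a fingerprint the platform has been handed, whereas extremist speech offers no fingerprint to look up.

```python
import hashlib

# Purely illustrative toy version of the hash-based matching Bartlett
# describes for copyright takedowns. The "database" and the sample
# uploads are invented for this sketch.
known_hashes = {
    hashlib.sha256(b"contents of a known infringing file").hexdigest(),
}

def matches_known_file(uploaded_bytes: bytes) -> bool:
    """Exact lookup: the upload's hash is either in the database or it isn't."""
    return hashlib.sha256(uploaded_bytes).hexdigest() in known_hashes

print(matches_known_file(b"contents of a known infringing file"))  # True
print(matches_known_file(b"a freshly worded extremist post"))      # False
# Extremist text has no such fingerprint: every post is phrased differently,
# so there is nothing to look up, only linguistic patterns to guess at.
```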

Bartlett explains that a large team of people would have to work on building this algorithm by trawling through cases of extremist language, which, as Thangam Debbonaire learned this month, even humans can struggle to identify.

“The problem is when you’re dealing with linguistic patterns even the best algorithms work at 70 per cent accuracy. You’d have so many false positives, and you’d end up needing to have another huge team of people that would be checking all of it. It’s a much harder task than people think.”
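
A rough back-of-the-envelope calculation shows why that matters. In the sketch below, the daily post volume and the share of posts that are genuinely extremist are invented assumptions, and “70 per cent accuracy” is read loosely as a 70 per cent true-positive rate paired with a 30 per cent false-positive rate; only that headline figure comes from Bartlett.

```python
# Back-of-the-envelope sketch of Bartlett's false-positive point.
# The volume and base rate below are invented assumptions; only the
# 70 per cent figure comes from the article.
posts_per_day = 1_000_000_000     # assumed daily posts on a large platform
extremist_share = 1 / 100_000     # assumed: 1 in 100,000 posts is extremist

true_positive_rate = 0.70         # classifier catches 70% of real cases
false_positive_rate = 0.30        # and wrongly flags 30% of innocent posts

extremist_posts = posts_per_day * extremist_share
innocent_posts = posts_per_day - extremist_posts

caught = extremist_posts * true_positive_rate
wrongly_flagged = innocent_posts * false_positive_rate

print(f"Extremist posts correctly flagged: {caught:,.0f}")
print(f"Innocent posts wrongly flagged:    {wrongly_flagged:,.0f}")
```

On those assumptions the false flags outnumber the genuine hits by a factor of tens of thousands, which is why the “huge team of people” checking the output becomes the real constraint.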

Finding and deleting terrorist content is also only half of the battle. When it comes to videos and images, thousands of people could have downloaded them before they were deleted. During his research, Bartlett has also discovered that when one extremist account is deleted, another inevitably pops up in its place.

“Censorship is close to impossible,” he wrote in a Medium post in February. “I’ve been taking a look at how ISIL are using Twitter. I found one user name, @xcxcx162, who had no less than twenty-one versions of his name, all lined up and ready to use (@xcxcx1627; @xcxcx1628, @xcxcx1629, and so on).”

Beneath all this, there might be another, fundamental flaw in the report’s assumptions. Demos argue that there is no firm evidence that online material actually radicalises people, and that much of the material extremists view and share is often from mainstream news outlets.

But even if total censorship were possible, that doesn’t necessarily make it desirable. Bartlett argues that deleting extreme content would diminish our critical faculties, and that exposing people to it allows them to see for themselves that terrorists are “narcissistic, murderous, thuggish, irreligious brutes.” Complete censorship would also ruin social media for innocent people.

“All the big social media platforms operate on a very important principle, which is that they are not responsible for the content that is placed on their platforms,” he says. “It rests with the user because if they were legally responsible for everything that’s on their platform – and this is a legal ruling in the US – they would have to check every single thing before it was posted. Given that Facebook deals with billions of posts a day that would be the end of the entire social media infrastructure.

“That’s the kind of trade-off we’d be talking about here. The benefits of those platforms are considerable and you’d be punishing a lot of innocent people.”

No one is denying that social media companies should do as much as they can to tackle terrorism. Bartlett thinks that platforms can do more to remove information under warrant or hand over data when the police require it, and making online policing 24/7 is an important development “because terrorists do not work 9 to 5”. At the end of the day, however, it’s important for the government to accept technological limitations.

“Censorship of the internet is only going to get harder and harder,” he says. “Our best hope is that people are critical and discerning and that is where I would like the effort to be.” 

Amelia Tait is a technology and digital culture writer at the New Statesman.