Just how full of fakes is Twitter? Photo: Getty

Why fake Twitter accounts are a political problem

The rise in the use of Twitter bots and automated accounts, particularly by politicians and campaigns, is skewing what we see as trends.

In recent years, the phrase “trending on Twitter” has become shorthand for any issue that’s capturing public interest on a massive scale. Journalists and politicians cite popular hashtags as evidence of grassroots support.

Increasingly, though, this chatter isn’t coming from real people at all. Along with the rise in Twitter use has come a boom in so-called “Twitter bots” – automated accounts whose tweets are generated entirely by computer.

Many users, for example, have been surprised to encounter beautiful women lurking in chat rooms who seem unaccountably keen to discuss porn and recommend their favourite sites. Such bots exist entirely to entice other users to click on promotional links, generating revenue for their controllers.

Some bots are harmless, or even funny: @StealthMountain, for example, automates the pedant in all of us by replying: “I think you mean ‘sneak peek’” to tweets that include the phrase ‘sneak peak’.
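A bot like this needs little more than a pattern match and a canned reply. A minimal sketch of the matching logic (the function name and structure are illustrative, not @StealthMountain's actual code, and the real bot would be driven by a live stream from the Twitter API):

```python
import re

# Case-insensitive whole-phrase match for the misspelling.
SNEAK_PEAK = re.compile(r"\bsneak peak\b", re.IGNORECASE)

def pedant_reply(tweet_text):
    """Return the correction if the tweet contains 'sneak peak', else None."""
    if SNEAK_PEAK.search(tweet_text):
        return "I think you mean 'sneak peek'"
    return None
```

Everything else such a bot does – polling the search API, posting the reply – is plumbing around this one check.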

It’s not clear just how many of Twitter’s 255m active users are fake – but it’s a lot. According to the company itself, the figure is about five per cent, kept down by a team of 30 people who spend their days weeding out the bots. However, two Italian researchers last year calculated that the true figure was 10 per cent, and other estimates have placed the figure even higher.

Now, researchers at Indiana University have created a new tool, BotOrNot, designed to identify Twitter bots from their patterns of activity.

“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” says Professor Fil Menczer, director of the university’s Centre for Complex Networks and Systems Research.

“Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”

BotOrNot analyses more than 1,000 features of an account – from its friend network to the content and timing of its tweets – to deduce the likelihood that it is fake, with 95 per cent accuracy, the team says.

Meanwhile, a tool developed by the social media analytics firm Socialbakers uses similar criteria to estimate what percentage of a user’s followers are fake. These criteria include the ratio of followers to followed accounts and the number of retweets and links.
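Socialbakers has not published its exact formula, but a heuristic of this kind can be sketched as a handful of threshold checks on an account's numbers – every threshold below is invented for illustration:

```python
def fake_follower_score(followers, following, tweets,
                        retweet_ratio, link_ratio):
    """Fraction of bot-like signals an account trips, in [0, 1].

    All thresholds are invented for illustration; Socialbakers'
    real criteria and weights are not public.
    """
    signals = [
        following > 0 and followers / following < 0.05,  # follows far more than it is followed
        tweets < 10,                                     # near-empty timeline
        retweet_ratio > 0.9,                             # timeline is almost all retweets
        link_ratio > 0.9,                                # nearly every tweet carries a link
    ]
    return sum(signals) / len(signals)
```

An account tripping all four signals scores 1.0; a busy, well-followed account with original tweets scores 0.0. Real tools combine far more features, weighted by training against known bots.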

Tools such as these are now starting to quantify a trend noticed by researchers over the last two or three years: the use of bots for political purposes. Having thousands of followers retweeting their every word makes politicians look popular, and can turn a pet cause into a top trend worldwide. The practice is known as astroturfing – the creation of fake grass-roots support.

Three years ago, for example, it was alleged that over 90 per cent of Newt Gingrich’s followers showed all the hallmarks of being fake; more recently, during the 2012 Mexican elections, researchers found that the Institutional Revolutionary Party was using tens of thousands of bots to push its messages onto Twitter’s list of top trends.

This month’s elections in India have attracted their fair share of bot activity, too. During India’s last visit to the polls, only one politician had a Twitter account, boasting just 6,000 followers. This time round, more than 56m election-related tweets were sent between 1 January and polling day on 12 May. During the same period, prime ministerial candidate Narendra Modi boosted his follower count by 28 per cent, hitting nearly four million.

However, according to Socialbakers, all is not what it seems: nearly half of Modi’s followers look suspicious. Modi has form here: late last year, when Time started monitoring Twitter for its Person of the Year award, local media soon spotted a pattern. Thousands of Modi’s followers were tweeting “I think Narendra Modi should be #TIMEPOY” at regular intervals, 24 hours a day – while a rival army of bots was tweeting the opposite.

And don't think it can’t happen here. Bots are easily and cheaply bought, with the going rate around a thousand followers for a dollar; more if you want them to like or share your posts. In 2012, the Respect candidate for Croydon North, Lee Jasper, admitted that his by-election campaigners had been using Twitter bots to boost his apparent popularity in the same way: “It’s all part of modern campaigning,” he said.

Meanwhile, applying the Socialbakers tool to leading UK political accounts suggests that most have a preponderance of genuine followers. One notable exception is @Number10gov, the prime minister's official account: as many as half of its followers appear to be bots, with names such as “@vsgaykjppvw”, “@zekumovuvuc” and “@zong4npp”.
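Handles like these are easy to flag programmatically. One crude signal is a long run of consonants, which is rare in human-chosen names – the four-letter threshold below is an assumption for illustration, not any vendor's published rule:

```python
import re

# Four or more consecutive consonants rarely occur in human-chosen handles.
CONSONANT_RUN = re.compile(r"[bcdfghjklmnpqrstvwxz]{4,}", re.IGNORECASE)

def looks_gibberish(handle):
    """Flag handles containing a long consonant run; a crude, illustrative test."""
    return bool(CONSONANT_RUN.search(handle.lstrip("@")))
```

A handle like “@vsgaykjppvw” trips this check; ordinary names do not. On its own it would miss many bots (and flag some unusual real names), which is why tools such as BotOrNot combine hundreds of such features.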

Still, it's possible that @Number10gov doesn't mind this too much: the BotOrNot tool calculates there’s a 72 per cent chance that it's a bot itself. Maybe we should just leave them to talk amongst themselves...


Did your personality determine whether you voted for Brexit? Research suggests so

The Online Privacy Foundation found Leave voters were significantly more likely to be authoritarian and conscientious. 

"Before referendum day, I said the winners would be those who told the most convincing lies," Paul Flynn, a Labour MP, wrote in these pages. "Leave did." The idea that those who voted for Brexit were somehow manipulated is widely accepted by the Remain camp. The Leave campaign, so the argument goes, played on voters' fears and exploited their low numeracy. And new research from the Online Privacy Foundation suggests this argument may, in part at least, be right. 

Over the last 18 months the organisation has researched differences in personality traits, levels of authoritarianism, numeracy, thinking styles and cognitive biases among EU referendum voters. It conducted a series of studies, capturing over 11,000 responses to self-report psychology questionnaires and controlled experiments, with the final results scheduled to be presented at the International Conference on Political Psychology in Copenhagen in October 2017.

The researchers questioned voters using the "Five Factor Model", which consists of five broad personality traits – Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. They also measured authoritarianism, which is treated as a disposition rather than a personality trait. Authoritarians have a more black-and-white view of the world around them, are more concerned with the upkeep of established societal traditions and tend to be less accepting of outsiders.

So what did they uncover? Participants expressing an intent to vote to leave the EU reported significantly higher levels of authoritarianism and conscientiousness, and lower levels of openness and neuroticism than voters expressing an intent to vote to remain. (Conscientiousness is associated with dependability, dutifulness, focus and adherence to societal norms in contrast to disorganisation, carelessness and impulsivity.)

Immigration in particular seems to have affected voting. While authoritarians were much more likely to vote Leave to begin with, those who were less authoritarian became increasingly likely to vote Leave if they expressed high levels of concern over immigration. These findings chime with research by Professors Marc Hetherington and Elizabeth Suhay, which found that Americans became susceptible to "authoritarian thinking" when they perceived a grave threat to their safety.

Then there's what you might call the £350m question – did Leave voters know what they were voting for? When the Online Privacy Foundation researchers compared the two groups, Leave voters displayed significantly lower levels of numeracy and reasoning, and appeared more impulsive, than Remain voters. In all three areas, older voters performed significantly worse than young voters intending to vote the same way.

Even when voters were able to interpret statistics, their ability to do so could be overcome by partisanship. In one striking study, when voters were asked to interpret statistics about whether a skin cream increases or decreases a rash, they were able to interpret them correctly roughly 57 per cent of the time. But when voters were asked to interpret the same set of statistics, but told they were about whether immigration increases or decreases crime, something disturbing happened. 

If the statistics didn't support a voter's view, their ability to correctly interpret the numbers dropped, in some cases, by almost half.
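The trap in this kind of task is that the raw counts point one way while the rates point the other: the group with more improvers in absolute terms can still have the worse improvement rate. The counts below are illustrative, chosen to have that shape, not taken from the study itself:

```python
def better_outcome(improved_a, worsened_a, improved_b, worsened_b):
    """Compare improvement *rates*, not raw counts, between two groups."""
    rate_a = improved_a / (improved_a + worsened_a)
    rate_b = improved_b / (improved_b + worsened_b)
    return "A" if rate_a > rate_b else "B"

# Illustrative counts: group A has more improvers in absolute terms
# (223 vs 107), but a lower improvement rate (75% vs 84%).
print(better_outcome(223, 75, 107, 21))  # prints "B"
```

Answering correctly means computing both ratios; glancing at the biggest number in the table – the intuitive shortcut most participants took when the topic was politically charged – gives the wrong answer.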

Before Remoaners start to crow, this study is not an affirmation that "I'm smart, you're dumb". Further research could be done, for example, on the role of age and education (young graduates were far more likely to vote Remain). But in the meantime, there is a question that needs to be answered - are political campaigners deliberately exploiting these personality traits? 

Chris Sumner, from the Online Privacy Foundation, warns that in the era of Big Data, clues about our personalities are collected online: "In the era of Big Data, these clues are aggregated, transformed and sold by a burgeoning industry."

Indeed, Cambridge Analytica, a data company associated with the political right in the UK and US, states on its website that it can "more effectively engage and persuade voters using specially tailored language and visual ad combinations crafted with insights gleaned from behavioral understandings of your electorate". It will do so through a "blend of big data analytics and behavioural psychology". 

"Given the differences observed between Leave and Remain voters, and irrespective of which campaign, it is reasonable to hypothesize that industrial-scale psychographic profiling would have been a highly effective strategy," Sumner says. By identifying voters with different personalities and attitudes, such campaigns could target "the most persuadable voters with messages most likely to influence their vote". Indeed, in research yet to be published, the Online Privacy Foundation targeted groups with differing attitudes to civil liberties based on psychographic indicators associated with authoritarianism. The findings, says Sumner, illustrate "the ease with which individuals' inherent differences could be exploited". 

Julia Rampen is the digital news editor of the New Statesman (previously editor of The Staggers, The New Statesman's online rolling politics blog). She has also been deputy editor at Mirror Money Online and has worked as a financial journalist for several trade magazines. 
