
Why fake Twitter accounts are a political problem

The rise in the use of Twitter bots and automated accounts, particularly by politicians and campaigns, is skewing what we see as trends.

In recent years, the phrase “trending on Twitter” has become shorthand for any issue that’s capturing public interest on a massive scale. Journalists and politicians cite popular hashtags as evidence of grassroots support.

Increasingly, though, this chatter isn’t coming from real people at all. Along with the rise in Twitter use has come a boom in so-called “Twitter bots” – automated accounts whose tweets are generated entirely by computer.

Many users, for example, have been surprised to encounter beautiful women lurking in chat rooms who seem unaccountably keen to discuss porn and recommend their favourite sites. Such bots exist entirely to entice other users to click on promotional links, generating revenue for their controllers.

Some bots are harmless, or even funny: @StealthMountain, for example, automates the pedant in all of us by replying: “I think you mean ‘sneak peek’” to tweets that include the phrase ‘sneak peak’.

It’s not clear just how many of Twitter’s 255m active users are fake – but it’s a lot. According to the company itself, the figure is about five per cent, kept down by a team of 30 people who spend their days weeding out the bots. However, two Italian researchers last year calculated that the true figure was 10 per cent, and other estimates have placed the figure even higher.

Now, researchers at Indiana University have created a new tool, BotOrNot, designed to identify Twitter bots from their patterns of activity.

“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” says Professor Fil Menczer, director of the university’s Centre for Complex Networks and Systems Research.

“Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”

BotOrNot analyses over 1,000 features of an account – from its friend network to the content of messages and the times of day they’re sent – to deduce the likelihood that an account is fake; the team says it does so with 95 per cent accuracy.
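The general shape of such a classifier is easy to sketch. BotOrNot itself uses over 1,000 features and a trained model; the handful of features and hand-picked weights below are invented purely to illustrate the approach:

```python
import math

def bot_score(followers: int, friends: int,
              tweets_per_day: float, pct_retweets: float) -> float:
    """Toy bot-likelihood score in (0, 1); higher means more bot-like.

    Features and weights are illustrative, not BotOrNot's real model.
    """
    # Bots often follow many accounts while attracting few followers.
    friend_ratio = friends / max(followers, 1)
    # Combine features linearly, then squash with a logistic function
    # so the result reads as a probability-like score.
    z = (0.8 * math.log1p(friend_ratio)
         + 0.05 * tweets_per_day        # relentless round-the-clock posting
         + 2.0 * pct_retweets           # mostly amplifying, rarely original
         - 3.0)                         # bias: assume human by default
    return 1 / (1 + math.exp(-z))
```

A real system would learn the weights from labelled examples rather than hard-coding them, but the pipeline – extract behavioural features, combine them, output a probability – is the same.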

Meanwhile, a tool developed by social media analytics firm Socialbakers uses similar criteria – such as the ratio of followers to followed accounts and the number of retweets and links – to estimate what percentage of a user’s followers are fake.
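Socialbakers’ exact criteria are proprietary, but a crude version of the idea can be sketched as a per-account heuristic applied across a user’s follower list (the thresholds here are invented for illustration):

```python
def looks_fake(followers: int, following: int, tweets: int) -> bool:
    """Crude heuristic: follows vastly more accounts than follow it back,
    and has barely ever tweeted. Thresholds are illustrative only."""
    return following > 50 * max(followers, 1) and tweets < 5

def fake_follower_pct(accounts: list[tuple[int, int, int]]) -> float:
    """Percentage of (followers, following, tweets) records that look fake."""
    if not accounts:
        return 0.0
    return 100 * sum(looks_fake(*a) for a in accounts) / len(accounts)
```

Run over the thousands of accounts following a politician, even rules this simple will surface clusters of near-identical, near-silent followers created in bulk.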

Tools such as these are now starting to quantify a trend noticed by researchers over the last two or three years: the use of bots for political purposes. Having thousands of followers retweeting their every word makes politicians look popular, and can turn a pet cause into a top trend worldwide. The practice is known as astroturfing – the creation of fake grass-roots support.

Three years ago, for example, it was alleged that over 90 per cent of Newt Gingrich’s followers showed all the hallmarks of being fake; more recently, during the 2012 Mexican elections, researchers found that the Institutional Revolutionary Party was using tens of thousands of bots to push its messages onto Twitter’s list of top trends.

This month’s elections in India have attracted their fair share of bot activity, too. During India’s last visit to the polls, only one politician had a Twitter account, boasting just 6,000 followers. This time round, more than 56m election-related tweets were sent between 1 January and polling day on 12 May. During the same period, prime ministerial candidate Narendra Modi boosted his follower count by 28 per cent, hitting nearly four million.

However, according to Socialbakers, all is not what it seems: nearly half Modi’s followers look suspicious. Modi has form here: late last year, when Time started monitoring Twitter for its Person of the Year award, local media soon spotted a pattern. Thousands of Modi’s followers were tweeting “I think Narendra Modi should be #TIMEPOY” at regular intervals, 24 hours a day – while a rival army of bots was tweeting the opposite.

And don’t think it can’t happen here. Bots are easily and cheaply bought, with the going rate around a dollar for a thousand followers; more if you want them to like or share your posts. In 2012, Lee Jasper, the Respect candidate for Croydon North, admitted that his by-election campaigners had been using Twitter bots to boost his apparent popularity in the same way: “It’s all part of modern campaigning,” he said.

Meanwhile, applying the Socialbakers tool to leading UK political accounts, it appears that most have a preponderance of genuine followers. One notable exception is @Number10gov, the prime minister’s official account: as many as half the followers of this account appear to be bots, with names such as “@vsgaykjppvw”, “@zekumovuvuc” and “@zong4npp”.

Still, it’s possible that @Number10gov doesn’t mind this too much: the BotOrNot tool calculates there’s a 72 per cent chance that it’s a bot itself. Maybe we should just leave them to talk amongst themselves...


Age verification rules won't just affect porn sites – they'll harm our ability to discuss sex

Relying on censorship to avoid talking about sex lets children down.

The British have a long history of censoring sex. In 1580, politician William Lambarde drafted the first bill to ban "licentious" and "hurtful... books, pamphlets, ditties, songs, and other works that promote the art of lascivious ungodly love". Last week, the UK government decided to have another crack at censorship, formally announcing that age verification for all online pornographic content will be mandatory from April 2018.

It is unclear at this point what this mandatory check will entail, but it's expected that you will need to submit your credit card details to a site before being allowed to access adult content (credit cards can’t be issued to under-18s).

The appointed regulator will almost certainly be the British Board of Film Classification, which will have the authority to levy fines of up to £250,000 or shut down sites that do not comply. These measures are directly linked to research conducted in 2016 by the NSPCC, the Children’s Commissioner and Middlesex University, which surveyed more than 1,000 11-to-16-year-olds about viewing online pornography and found that over half had accessed it.

Digital minister Matt Hancock said age verification "means that while we can enjoy the freedom of the web, the UK will have the most robust internet child protection measures of any country in the world". And who can argue with that? No sane adult would think that it’s a good idea for children to watch hardcore pornography. And because we all agree kids should be watching Peppa Pig rather than The Poonies, the act has been waved through virtually unchallenged.

So, let’s put the issue of hardcore pornography to one side, because surely we are all in agreement. I’m asking you to look at the bigger picture. It’s not just children who will be censored and it’s not just Pornhub and Redtube which will be forced to age check UK viewers. This act will potentially censor any UK site that carries adult content, which the BBFC broadly defines as content “produced solely or principally for the purposes of sexual arousal”.

I am a UK academic and research the history of sexuality. I curate the online research project www.thewhoresofyore.com, where academics, activists, artists and sex workers contribute articles on all aspects of sexuality in the hope of joining up conversations around sex that affect everyone. The site also archives many historical images, from the erotic brothel frescoes of Pompeii to early Victorian daguerreotypes of couples having sex. And yet, I do not consider myself to be a porn baron. These are fascinating and important historical documents that can teach us a great deal about our own attitudes to sex and beauty.

The site clearly signposts the content and asks viewers to click to confirm they are over 18, but under the Digital Economy Act this will not be enough. Although the site is not for profit and educational in purpose, some of the historical artefacts fit the definition of “pornographic” and are thereby liable to fall foul of the new laws.

And I’m not the only one; erotic artists, photographers, nude models, writers, sex shops, sex education sites, burlesque sites, BDSM sites, archivists of vintage erotica, and (of course) anyone in the adult industry who markets their business with a website, can all be termed pornographic and forced to buy expensive software to screen their users or risk being shut down or fined. I have contacted the BBFC to ask if my research will be criminalised and blocked, but was told “work in this area has not yet begun and so we are not in a position to advice [sic] you on your website”. No one is able to tell me what software will need to be purchased if I am to collect viewers’ credit card details, how I would keep them safe, or how much this would all cost. The BBFC suggested I contact my MP for further details. But she doesn’t know either.

Before we even get into the ethical issues around adults having to enter their credit card details into a government database in order to look at legal content, we need to ask: will this work? Will blocking research projects like mine make children any safer? Well, no. The laws will have no power over social media sites such as Twitter, Snapchat and Periscope which allow users to share pornographic images. Messenger apps will still allow users to sext, as well as stream, send and receive pornographic images and videos. Any tech-savvy teenager knows that Virtual Private Network (VPN) software will circumvent UK age verification restrictions, and the less tech-savvy can always steal their parents’ credit card details.

The proposed censorship is unworkable and many sites containing nudity will be caught in the crossfire. If we want to keep our children “safe” from online pornography, we need to do something we British aren’t very good at doing: we need to talk openly and honestly about sex and porn. This is a conversation I hope projects like mine can help facilitate. Last year, Pornhub (the biggest porn site in the world) revealed ten years of user data. In 2016, Brits visited Pornhub over 111 million times, and 20 per cent of those UK viewers were women. We are watching porn and we need to be open about this. We need to talk to each other and we need to talk to our kids. If you’re relying on government censorship to get you out of that tricky conversation, you are letting your children down.

The NSPCC report into children watching online pornography directly asked the participants about the effectiveness of age verification, and said the children "pointed out its limitations". When asked what intervention would most benefit them, this was the overwhelming response: "Whether provided in the classroom, or digitally, young people wanted to be able to find out about sex and relationships and about pornography in ways that were safe, private and credible." I suggest we listen to the very people we are trying to protect and educate, rather than eliminate. 

Dr Kate Lister researches the history of sexuality at Leeds Trinity University