
Why fake Twitter accounts are a political problem

The rise in the use of Twitter bots and automated accounts, particularly by politicians and campaigns, is skewing what we see as trends.

In recent years, the phrase “trending on Twitter” has become shorthand for any issue that’s capturing public interest on a massive scale. Journalists and politicians cite popular hashtags as evidence of grassroots support.

Increasingly, though, this chatter isn’t coming from real people at all. Along with the rise in Twitter use has come a boom in so-called “Twitter bots” – automated accounts whose tweets are generated entirely by computer.

Many users, for example, have been surprised to encounter beautiful women lurking in chat rooms who seem unaccountably keen to discuss porn and recommend their favourite sites. Such bots exist entirely to entice other users to click on promotional links, generating revenue for their controllers.

Some bots are harmless, or even funny: @StealthMountain, for example, automates the pedant in all of us by replying: “I think you mean ‘sneak peek’” to tweets that include the phrase ‘sneak peak’.

It’s not clear just how many of Twitter’s 255m active users are fake – but it’s a lot. According to the company itself, the figure is about five per cent, kept down by a team of 30 people who spend their days weeding out the bots. However, two Italian researchers last year calculated that the true figure was 10 per cent, and other estimates have placed the figure even higher.

Now, researchers at Indiana University have created a new tool, BotOrNot, designed to identify Twitter bots from their patterns of activity.

“Part of the motivation of our research is that we don’t really know how bad the problem is in quantitative terms,” says Professor Fil Menczer, director of the university’s Centre for Complex Networks and Systems Research.

“Are there thousands of social bots? Millions? We know there are lots of bots out there, and many are totally benign. But we also found examples of nasty bots used to mislead, exploit and manipulate discourse with rumors, spam, malware, misinformation, political astroturf and slander.”

BotOrNot analyses over 1,000 features of an account – from its friend network to the content of messages and the times of day they’re sent – to deduce the likelihood that an account is fake, with 95 per cent accuracy, according to the team.
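Neither the article nor the researchers spell out exactly which features BotOrNot weighs, but the general approach – turn an account’s behaviour into numbers and feed them to a supervised classifier – can be sketched. The feature names, toy training data and choice of scikit-learn below are illustrative assumptions, not the tool’s actual implementation:

```python
# Illustrative sketch of feature-based bot classification, in the spirit of
# BotOrNot. Feature names and training rows are invented; the real tool
# uses over 1,000 features and its own labelled corpus of accounts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [followers/following ratio, tweets per hour,
#            fraction of tweets containing links, mean seconds between tweets]
X_train = np.array([
    [0.90, 0.3, 0.10, 9000.0],   # human-like account
    [1.50, 0.5, 0.20, 7200.0],   # human-like account
    [0.01, 6.0, 0.95,  600.0],   # bot-like account
    [0.02, 8.0, 0.90,  450.0],   # bot-like account
])
y_train = np.array([0, 0, 1, 1])  # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Score a new account: estimated probability that it is a bot.
new_account = np.array([[0.015, 7.0, 0.92, 500.0]])
print(clf.predict_proba(new_account)[0][1])
```

In practice such a classifier would be trained on many thousands of labelled accounts and, as noted above, on far more than four features.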

Meanwhile, a tool developed by social media analytics firm Socialbakers uses similar criteria to discover what percentage of a user’s followers are fake. These include the proportion of followers to followed accounts and the number of retweets and links.
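As a rough illustration of how such follower audits work, a toy version built only on the signals mentioned above (the follower-to-followed ratio, retweets and links) might look like this – the thresholds and field names are invented for the sketch, not taken from Socialbakers:

```python
# Crude heuristic for flagging suspicious followers. Thresholds are
# illustrative only; the real Socialbakers tool weighs its signals differently.
from dataclasses import dataclass

@dataclass
class Follower:
    followers: int
    following: int
    tweets: int
    retweets: int
    tweets_with_links: int

def looks_fake(f: Follower) -> bool:
    signals = 0
    if f.following > 0 and f.followers / f.following < 0.05:
        signals += 1  # follows many accounts but is followed by almost no one
    if f.tweets > 0 and f.retweets / f.tweets > 0.9:
        signals += 1  # almost nothing but retweets
    if f.tweets > 0 and f.tweets_with_links / f.tweets > 0.9:
        signals += 1  # almost every tweet pushes a link
    return signals >= 2

def fake_follower_percentage(followers: list[Follower]) -> float:
    if not followers:
        return 0.0
    return 100 * sum(looks_fake(f) for f in followers) / len(followers)

# Example: one bot-like follower, one human-like follower.
audience = [
    Follower(followers=2, following=900, tweets=50, retweets=48, tweets_with_links=47),
    Follower(followers=300, following=280, tweets=1200, retweets=200, tweets_with_links=100),
]
print(fake_follower_percentage(audience))  # 50.0
```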

Tools such as these are now starting to quantify a trend noticed by researchers over the last two or three years: the use of bots for political purposes. Having thousands of followers retweeting their every word makes politicians look popular, and can turn a pet cause into a top trend worldwide. The practice is known as astroturfing – the creation of fake grassroots support.

Three years ago, for example, it was alleged that over 90 per cent of Newt Gingrich’s followers showed all the hallmarks of being fake; more recently, during the 2012 Mexican elections, researchers found that the Institutional Revolutionary Party was using tens of thousands of bots to push its messages onto Twitter’s list of top trends.

This month’s elections in India have attracted their fair share of bot activity, too. During India’s last visit to the polls, only one politician had a Twitter account, boasting just 6,000 followers. This time round, more than 56m election-related tweets were sent between 1 January and polling day on 12 May. During the same period, prime ministerial candidate Narendra Modi boosted his follower count by 28 per cent, hitting nearly four million.

However, according to Socialbakers, all is not what it seems: nearly half of Modi’s followers look suspicious. Modi has form here: late last year, when Time started monitoring Twitter for its Person of the Year award, local media soon spotted a pattern. Thousands of Modi’s followers were tweeting “I think Narendra Modi should be #TIMEPOY” at regular intervals, 24 hours a day – while a rival army of bots was tweeting the opposite.

And don't think it can’t happen here. Bots are easily and cheaply bought, with the going rate around a thousand followers for a dollar; more if you want them to like or share your posts. In 2012, the Respect candidate for Croydon North, Lee Jasper, admitted that his by-election campaigners had been using Twitter bots to boost his apparent popularity in the same way: “It’s all part of modern campaigning,” he said.

Meanwhile, applying the Socialbakers tool to leading UK political accounts, it appears that most have a preponderance of genuine followers. One notable exception is @Number10gov, the prime minister's official account: as many as half the followers of this account appear to be bots, with names such as “@vsgaykjppvw”, “@zekumovuvuc” and “@zong4npp”.

Still, it's possible that @Number10gov doesn't mind this too much: the BotOrNot tool calculates there’s a 72 per cent chance that it's a bot itself. Maybe we should just leave them to talk amongst themselves...


How virtual reality pigs could change the justice system forever

Lawyers in Canada are aiming to defend their client by asking the judge to don a virtual reality headset and experience the life of a pig.

“These are not humans, you dumb frickin' broad.”

Those were the words truck driver Jeffrey Veldjesgraaf said to animal rights activist Anita Krajnc on 22 June 2015 as she gave water to some of the 190 pigs in his slaughterhouse-bound truck. This week, 49-year-old Krajnc appeared at the Ontario Court of Justice charged with mischief for the deed, which she argues was an act of compassion for the overheated animals. To prove this, her lawyers hope to show a virtual reality video of a slaughterhouse to the judge, David Harris. Pigs might not be humans, but humans are about to become pigs.

“The tack that we’ve taken recognises that Anita hasn’t done anything wrong,” said one of her lawyers, James Silver. Along with testimony from environmental and animal welfare experts, her defence hope the virtual reality experience, which is planned for when the trial resumes in October, will allow Harris to understand Krajnc’s point of view. Via the pigs’ point of view.

It’s safe to say that the simulated experience of being a pig in a slaughterhouse will not be a pleasant one. iAnimal, an immersive VR video about the lives of farm animals, launched earlier this year and has reportedly already shifted some viewers’ attitudes towards meat. But whether or not Harris becomes a vegetarian after the trial is not the most pressing aspect of this case. If the lawyers get their wish to bring a VR headset into the courtroom, they will make legal history.

“Virtual reality is a logical progression from the existing ways in which technology is used to illustrate and present evidence in court,” says Graham Smith, a technology lawyer and partner at the international law firm Bird & Bird.

“Graphics, charts, visualisations, simulations and reconstructions, data-augmented video and other technology tools are already used to assist courts in understanding complex data and sequences of events.”

Researchers have already been looking into the ways VR can be used in courts, with particular focus on recreating crime scenes. In May, Staffordshire University launched a project that aims to “transport” jurors into virtual crime scenes, whilst in 2014 researchers at the Institute of Forensic Medicine in Switzerland created a 3D reconstruction of a shooting, including the trajectory of a bullet. Although this will help bring to life complex evidence that might be hard to understand or picture in context, the use of VR in this way is not without its flaws.

“Whether a particular aid should be admitted into evidence can give rise to argument, especially in criminal trials involving a jury,” says Smith. “Does the reconstruction incorporate factual assumptions or inferences that are in dispute, perhaps based on expert evidence? Does the reconstruction fairly represent the underlying materials? Is the data at all coloured by the particular way in which it is presented? 

“Would immersion aid a jury's understanding of the events or could it have a prejudicial impact? At its core, would VR in a particular case add to or detract from the court's ability objectively to assess the evidence?”

The potential for bias is worrying, especially if the VR video was constructed from witness testimony rather than CCTV footage or other quantitative data. One way to mitigate this would be for both the defence and the prosecution to recreate an event from their different perspectives. If the jury or judge experience the life of a distressed pig on its way to be slaughtered, should they also be immersed in the life of a sweaty trucker, just trying to do his job and panicked by a protester feeding his pigs an unknown substance from a bottle?

“These are not new debates,” says Smith. “Lawyers are used to tackling these kinds of issues with the current generation of illustrative aids. Before too long they will find themselves doing so with immersive VR.”

It seems safe to trust, then, that legal professionals will readily come up with failsafe guidelines for the use of VR in order to avoid prejudice or bias. But beyond legal concerns, there is another issue: ethics.

In 2009, researchers at the University of Leicester found that jurors can suffer trauma as a result of exposure to harrowing evidence. “The research confirms that jury service, particularly for crimes against people, can cause significant anxiety, and for a vulnerable minority it can lead to severe clinical levels of stress or the symptoms of post traumatic stress disorder,” they wrote.

It’s easy to see how this trauma could be exacerbated by being virtually transported to a scene and watching a crime play out before your eyes. Gamers have already spoken about panic attacks brought on by VR horror games, with Denny Unger, creative director of Cloudhead Games, speculating that they could cause heart attacks. A virtual reality murder, however simulated, still feels real, and could easily cause similar distress.

Then there is the matter of which crimes get the VR treatment. Would courts allow the jury to be immersed in a VR rape? However harrowing and far-fetched that sounds, a virtual reality depiction of sexual assault was screened at the 2015 Sundance Film Festival.

For now, legal professionals have time to consider these issues. By October, Krajnc’s lawyers may or may not have been allowed to use VR in court. If they are, they may make legal history. If they are not, Krajnc may be found guilty and face six months in jail or a $5,000 fine.

Amelia Tait is a technology and digital culture writer at the New Statesman.