How Facebook’s algorithm could influence the US election result

Fake political news is on the rise – and reveals the power that social media has over our choices.

By Amelia Tait

“But why would you lie on the internet?” This is what my mother said when I told her that she had shared a fake news story on Facebook . . . again. The answer to her question is simple: to make money. The two Canadian teenagers behind the hoax website Hot Global News happily admit that they have earned as much as $900 for a single article from advertising. The more important question is: why does Facebook host their lies?

It is one thing to stumble on a news story on Facebook (as 66 per cent of its users do) and quite another for the site to push news, or what purports to be news, on to your timeline. Facebook was criticised recently for doing the latter, after it sacked as many as 26 contractors who used to curate its “trending” sidebar: a list of the most popular news stories on the network.

The contractors were replaced by an algorithm (overseen by a smaller team), which went on to promote a fake story claiming that the Fox News journalist Megyn Kelly had been fired for supporting Hillary Clinton (61,000 people posted about it). According to the Guardian, this showed that Facebook had “lost the fight against fake news”. In reality, it never took up arms at all.

Facebook began in 2004 as a place for friends to connect, but in 2007 it started allowing businesses and media outlets to create “pages” to promote themselves. These quickly became popular. Nine years later, Facebook is the biggest source of traffic to news websites, ahead of Google.

In January 2015 the site updated its algorithm to stop displaying posts that people reported as hoaxes. The system relies on users – who often respond positively to stories, true or false, that align with their beliefs – to flag dubious content. This is a problem when it comes to political stories. The New York Times reported last month that there is an abundance of political websites “made specifically for Facebook and cleverly engineered to reach audiences exclusively in the context of the newsfeed”. Most seem to care less about the truth than their advertising revenues.
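To make that weakness concrete, here is a minimal sketch in Python of a flag-based demotion heuristic. It is purely illustrative: the Post structure, the penalty weight and the flag counts are invented for the example, and nothing here reflects Facebook’s actual code. The 61,000 engagement figure for the Kelly story is the one reported above.

```python
# A toy model of flag-based hoax demotion (illustrative assumptions only;
# this is not Facebook's algorithm).

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagements: int   # likes, shares and comments
    hoax_flags: int    # times users reported the post as a hoax

def feed_score(post: Post, flag_penalty: float = 50.0) -> float:
    """Rank by raw engagement, demoted by user hoax reports."""
    return post.engagements - flag_penalty * post.hoax_flags

posts = [
    # A false story that matches readers' beliefs: heavily shared, rarely flagged.
    Post("Megyn Kelly fired for backing Clinton", engagements=61_000, hoax_flags=40),
    # The later debunk: accurate, but far less engaging.
    Post("No, Megyn Kelly was not fired", engagements=4_000, hoax_flags=0),
]

for p in sorted(posts, key=feed_score, reverse=True):
    print(f"{feed_score(p):>10.0f}  {p.title}")
```

Because readers who agree with a story engage with it rather than report it, the penalty term barely registers, and the hoax outranks its own debunk.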

Facebook could blacklist these sites, but it argues that this would be a form of censorship, so it is reluctant to do so. Meanwhile, the company’s profits are tied to how engaged its users are, and fake political websites are highly engaging (many have millions of “likes” on their pages), so removing them is not in Facebook’s interest. With the new algorithm seeking out content seemingly based on popularity alone, Facebook might even begin to promote them.

Nor is it in Facebook’s power to create an algorithm that can accurately detect false stories. “Algorithms are good at processing data at a massive scale,” says Alex Krasodomski-Jones, a researcher at Demos’s Centre for the Analysis of Social Media. “But, at least for the moment, they aren’t able to replicate a human’s intelligence. Deciphering a sarcastic tweet or recognising a fake news story requires a level of human investigation that is beyond a machine.”
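Krasodomski-Jones’s point can be illustrated with a deliberately crude sketch: a keyword-based detector (an invented example, not any real system) that matches surface features but cannot read tone or context.

```python
# A naive keyword-based "fake news" detector, to show why surface
# features are not enough (purely illustrative; no real system is this simple).

SUSPICIOUS_WORDS = {"shocking", "fired", "hoax", "you won't believe"}

def looks_fake(headline: str) -> bool:
    text = headline.lower()
    return any(word in text for word in SUSPICIOUS_WORDS)

# A sarcastic but harmless headline trips the detector...
print(looks_fake("Shocking: local man enjoys quiet weekend"))              # True
# ...while a soberly worded fabrication sails straight through.
print(looks_fake("Megyn Kelly departs Fox News over Clinton endorsement")) # False
```

Judging truth requires context and investigation, not word-matching, and that is precisely what current algorithms lack.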

This doesn’t absolve Facebook, but the solution isn’t as simple as bringing back the humans. The decision to fire the trending topics team came after the network was accused of bias – critics complained that the website had failed to promote conservative news stories in its sidebar. It’s now in a quandary: human beings are considered too biased, but robots not biased enough.

Ultimately, Facebook wants to shift the onus on to users. In a recent Q&A, its co-founder and CEO, Mark Zuckerberg, insisted that his business was “a tech company, not a media company”. His conviction doesn’t make the statement true, but it does show that the site will continue to avoid responsibility for disseminating fake news.

Where does that leave us? In an environment where fake or misleading news posts often get several times as many likes, shares and comments as the posts that later debunk them, can we trust people – not the 26 former Facebook employees, but the network’s 1.17 billion monthly active users – to care about the truth?

Yet Facebook’s attitude may turn out to be a blessing. The coverage of the US presidential election and the EU referendum has led many to suggest that we are living in an era of “post-truth politics”, in which emotions have more power than facts. Relying on algorithms to weed out lies could further weaken our critical faculties. If Facebook continues as it is, people will be forced to scrutinise stories on the site for themselves. This could eventually diminish the power of political lies and help to end the post-truth era. Forget artificial intelligence: it’s time to use our own. Sorry, Mum.

This article appears in the 07 Sep 2016 issue of the New Statesman, The Three Brexiteers