Over the past couple of years, there has been a spate of celebrity death hoaxes online. Macaulay Culkin allegedly died, as did Chris Brown and the Queen. Of course, you’d hope that most web users are savvy enough not to trust the kind of sites that featured the stories. But what tripped up far more people than the sites themselves was the fact that the stories appeared on Facebook’s “trending topics” bar, thereby appearing legitimate and, crucially, true.
At the time, I wrote a piece in frustration at the fact that Facebook wanted to have its cake and eat it: it wanted to act as a news source, without actually being one. Verifying stories is the most basic role of a news organisation. Even if Facebook simply chose certain sites it was willing to believe, the outcome would be far better than allowing popular posts to reign, wherever they may have come from.
This belief is why my reaction to Facebook’s recent “trending topics” scandal differed from that of most other journalists. Tech site Gizmodo reported allegations from ex-Facebook employees that the site was “suppressing” right-wing news stories. The story is still rolling on, but Facebook’s explanation – that it removes some topics or stories from certain sites for “consistency and neutrality” – seems convincing.
The ex-employees claimed that stories from right-wing sites including Breitbart were not included unless “mainstream” sites like the New York Times also covered them, but it’s hard to prove that this reflects political bias rather than an attempt to promote verifiable stories over unverified ones.
As a rule, the report claims, stories were removed from the “trending” bar if they didn’t have at least three “traditional” news outlets behind them. Gizmodo also appears to lambast Facebook for curating what it displayed at all, drawing this supposedly damning conclusion: “In other words, Facebook’s news section operates like a traditional newsroom.”
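To make that reported threshold concrete, here is a minimal sketch of how such a filter might work. The outlet list, function names and data shapes are hypothetical illustrations based on Gizmodo’s description, not Facebook’s actual code or criteria.

```python
# Hypothetical sketch of the rule Gizmodo describes: a candidate topic
# only "trends" if at least three traditional outlets carry the story.
# Everything here (outlet list, names, threshold plumbing) is invented
# for illustration; it is not Facebook's implementation.

TRADITIONAL_OUTLETS = {
    "nytimes.com", "bbc.co.uk", "washingtonpost.com",
    "theguardian.com", "wsj.com",  # assumed examples of "traditional" outlets
}

def should_trend(topic_sources: set[str], min_outlets: int = 3) -> bool:
    """Return True if enough traditional outlets cover the story."""
    return len(topic_sources & TRADITIONAL_OUTLETS) >= min_outlets

# A popular but unverified story fails the check...
print(should_trend({"celebnewsnow.example", "viralbuzz.example"}))    # False
# ...while one picked up by several mainstream outlets passes.
print(should_trend({"nytimes.com", "bbc.co.uk", "theguardian.com"}))  # True
```

On this reading, the check is less an act of editorial judgment than a crude verification gate, which is why the “traditional newsroom” conclusion does less damage than Gizmodo seems to think.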
The political allegation notwithstanding, Gizmodo’s other accusations imply not that Facebook is an irresponsible news source, but that it is striving to become a better one. News sources and stories have a duty to prove to readers, and to arbiters like Facebook, that they are trustworthy and verifiable. The fact that some haven’t demonstrated this doesn’t necessarily mean Facebook has a bias – it shows that Facebook cares, at least a little, about the spread of misinformation and false reports online.
A survey released this week by the pollster Morning Consult confirmed my suspicion that the “scandal” is far less damaging than many journalists would have you believe. Republican voters were slightly less likely than average to distrust news from social media sites, and overall more than half were comfortable with the news provided.
Only 11 per cent, meanwhile, thought that the government should intervene on social media sites (tell that to the US senators who have taken Facebook to task over the allegations). And in terms of curation within these social media companies, 31 per cent thought news stories should be determined by reader interest alone, 11 per cent thought editors at the company should decide, while 29 per cent wanted a mix.
That breakdown also roughly matched how the respondents believed companies were already conducting their news operations (20 per cent, 23 per cent and 24 per cent respectively). The fact that they were keen on slightly less human editing than the perceived status quo is interesting, but it’s possible that they wouldn’t count the deletion of false or unverified stories as editing at all.
An interest-driven algorithm, with humans checking and tweaking its output, seems to be what customers want. Can we really blame Facebook for providing it?