“Are we in a post-Brexit PUB BOOM?” asked the Express last week, before declaring we Brits have been returning to pubs in our “droves” since the referendum.
It’s a heartwarming tale of post-Brexit redemption: our divided nation coming together to drown its sorrows and celebrate at the pub. Unfortunately, it’s also nonsense. In fact, bartenders pulled 19m fewer pints between April and June, while consumers bought more beer from supermarkets. By the time anyone could point out the facts, the claim had already spread across social media and become old news.
This is a classic example of our “post-truth” era, ie an era in which porky pies dominate public debate without consequence. But it’s wrong to blame newspapers like the Express and politicians like The Donald for the new state of affairs. While they are some of the worst offenders, the underlying problem is that we’ve lost any ability to contain the spread of misinformation. To adapt Mark Twain: “a tweet travels halfway around the world before the truth can get its boots on”.
The tech world has recognised this and is now reframing the post-truth era as a technological problem as much as a political one. Twitter and Facebook are force multipliers for misinformation, allowing content to mutate across different media within seconds, from stump speech to viral video to meme.
But if false claims that circulate on social media can be debunked straight away and at source, computer scientists and factcheckers say, then lying will become a far less lucrative business.
Computers are the only way of keeping up with the scale and scope of content online. As Full Fact, a factchecking organisation, says: “A computer can read the entire content of a newspaper in less time than it takes a person to read this sentence.”
This is why organisations across the world, including Full Fact, Le Monde and PolitiFact in the US, are building tools capable of analysing vast amounts of information in real time. The underlying technology is called “natural language processing” – the field of computer science and artificial intelligence concerned with the interaction between computers and human language.
The aim is to develop algorithms that monitor TV, social media, news websites and parliamentary debates around the clock – not only spotting when something important is said, but also providing an automatic verdict on it.
So when a politician makes a spurious claim in a speech, an app instantly alerts users, discouraging them from sharing it and journalists from uncritically reporting it.
Experimental products are already delivering impressive results. ClaimBuster, designed by the University of Texas at Arlington, is using machine learning to spot important claims in this year’s presidential debates. Pheme tracks the propagation of online rumours. The key is integrating existing experimental natural language processing and statistical analysis machinery into a working product.
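To give a flavour of what “spotting important claims” involves: here is a toy sketch, not ClaimBuster’s actual method (which relies on a trained machine-learning model), that scores sentences by surface cues – numbers and statistical vocabulary – which tend to mark a sentence as a checkable factual claim. All names and word lists here are invented for illustration.

```python
import re

# Invented cue list: words that often signal a quantified, checkable claim.
CUE_WORDS = {"more", "fewer", "less", "percent", "million", "billion",
             "rose", "fell", "increased", "decreased", "majority"}

def checkworthiness(sentence: str) -> float:
    """Crude proxy score: fraction of tokens that are numbers or cue words."""
    tokens = re.findall(r"[a-z0-9]+", sentence.lower())
    if not tokens:
        return 0.0
    cues = sum(1 for t in tokens if t in CUE_WORDS)
    numbers = sum(1 for t in tokens if any(c.isdigit() for c in t))
    return (cues + numbers) / len(tokens)

def rank_claims(sentences):
    """Order sentences from most to least check-worthy."""
    return sorted(sentences, key=checkworthiness, reverse=True)
```

A real system would replace this heuristic with a classifier trained on thousands of labelled sentences, but the pipeline shape – score every sentence, surface the checkable ones – is the same.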
“We are months, not years away” from delivering functional products, says a report on automated factchecking released by Full Fact this week. Early models will catalogue data on the source and spread of misinformation, and debunk simple false claims. But automated factchecking apps will soon be capable of correcting more conceptually and statistically sophisticated material.
The aim is not to replace humans. We are better than computers at analysing context-laden or vague statements and value judgements. But computers have the competitive advantage in scouring the web, checking simple claims and spotting things we’ve analysed before. In other words, they can do black and white, allowing humans to focus on shades of grey.
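“Spotting things we’ve analysed before” is the most tractable of these tasks, and the idea can be sketched in a few lines. The mini-database and similarity threshold below are hypothetical, and real factcheckers use far more sophisticated matching, but the principle – compare a new claim against a library of already-checked ones – is as shown:

```python
import re

def words(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def jaccard(a: set, b: set) -> float:
    """Word-overlap similarity between two sets of tokens."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical library of previously factchecked (claim, verdict) pairs.
CHECKED = [
    ("Britons have returned to pubs in droves since the referendum", "False"),
    ("Bartenders pulled 19m fewer pints between April and June", "True"),
]

def lookup(claim: str, threshold: float = 0.4):
    """Return the verdict of the closest previously checked claim, if any."""
    best_claim, verdict = max(
        CHECKED, key=lambda pair: jaccard(words(claim), words(pair[0]))
    )
    if jaccard(words(claim), words(best_claim)) >= threshold:
        return verdict
    return None
```

Anything that clears the threshold gets an instant verdict from the archive; anything that doesn’t is genuinely new, and goes to a human.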
Nor is anyone saying tech alone can solve post-truth politics. Modern campaigning will still be about conveying a simple and emotive message.
Consider, though, the power of an app that instantly tells journalists during PMQs or a press conference when a false claim is made, or a bar on BBC News flashing red if something inaccurate is said, or a social media bot notifying users when tweets contain misinformation. This doesn’t stop politicians from lying. It just makes lying a less effective strategy.
This is about equipping the good guys with tools to scale up the fight against the post-fact era, and using tech to help truth get its boots on.
Gabriel Pogrund is a political writer and 2016 Google News Lab Fellow at Full Fact. This piece is written in a personal capacity. He tweets @Gabriel_Pogrund.