A psychologist at the University of Cambridge has developed a vaccine against fake news. It comes in the form of an online game called Bad News, which aims to inoculate its users against disinformation by giving them the opportunity to create some. You can make your own news site (mine reads “Honest Truth Online: what they don’t want you to read!”), design memes attacking climate change science, and post alarming tweets from a fake presidential account.
The game’s designer, Sander van der Linden, hopes that by showing people how disinformation is made, the game will generate “mental antibodies” in those who play it, which will then protect them against it.
It’s often assumed that if Twitter can delete all the Russian bots, or if Facebook can write algorithms that distinguish truth from lies, the fake news problem will be solved. But the uncomfortable truth is that, like drug addiction, fake news is a demand-side problem as much as a supply-side one. If we’re ever going to fix it, we will need to fix our brains. Specifically, we will need to learn, or relearn, how to think.
The dominant explanation for why people believe fake news is that they want to. The theory of “motivated cognition” proposes that people reason backwards: voters pick a side first, then believe whatever seems to support it and dismiss whatever does not. That makes sense to me, but then maybe I want it to. It sounds like something other people do.
David Rand, a professor at MIT, has a different theory, one that puts the emphasis on how we think, rather than what we want. In a paper on fake news, co-authored with Gordon Pennycook, Rand argues that people who fall for online lies are “lazy, not biased”. He suggests that the biggest factor distinguishing those who believe blatantly inaccurate information from those who don’t is not partisanship, but the care they take over reaching conclusions. Those who react intuitively and quickly to new information are more gullible than those who exercise analytical reasoning, or what Rand calls “cognitive control”. In one of Rand and Pennycook’s experiments, the same people who fell for fake news headlines were more likely to deem Deepak Chopra quotes profound.
Cognitive control is required to engage in reasoned argument, or make plans, or do maths. It is hard. Most of us, most of the time, avoid it. The psychologist Daniel Kahneman likens our capacity for analytical reason to the ability of cats to swim. We can do it, but only with effort and discomfort, and we’d rather not unless we have to.
The internet makes us less likely to think about the news we see, simply because it facilitates instantaneous reactions. Headlines, pictures and memes slide quickly across our visual field, spraying emotionally charged messages at us as they go. Technologists like to talk about creating “frictionless” user experiences. In the context of news, frictionless means thoughtless.
There is an interesting irony here, out of which Rand has constructed a grand theory of history. The internet itself is a product of cognitive control. It took many minds, applying effortful thinking over many years, to invent and implement the electronic infrastructure that now encourages us to abandon all critical faculties. I’m reminded of a famous Reddit answer to the question of what would be the most difficult thing to explain to someone who arrived here from the 1950s: “I possess a device in my pocket that is capable of accessing the entirety of information known to man. I use it to look at pictures of cats and get into arguments with strangers.”
Cognitive control underpins humanity’s greatest achievements, from irrigation systems to railways, the separation of powers to the symphony. Given the riches it generates, says Rand, you might imagine that we would do more and more of it, resulting in inexorable human progress – yet you do not need to be John Gray to see that this isn’t so. Societies that invented sophisticated technologies have collapsed into decay and ruin, often precisely because of what those technologies enabled: the machine gun being an obvious example.
Since you can’t run experiments in history, Rand constructed a mathematical model of a society in which agents of different cognitive styles interacted with each other. Some were high in cognitive control, while others were “automatic” processors that acted quickly, without thinking. He found that controlled agents succeeded only up to the point at which their success made it easier for automatic agents to flourish.
Rand’s hypothesis is that in an inhospitable, difficult world, controlled thinking has a high reward, because you need to think carefully about how to survive – for instance, how to plan your food consumption when food is scarce. Controlled thinking then produces technologies to solve such problems, like the fridge. These technologies benefit everyone, even those who do not act with control, which then erodes the advantage of controlled thinking – for instance by the over-consumption of food. Thus does history cycle erratically through periods of rationality and forethought followed by conflict and chaos.
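The cycle described above can be sketched as a toy simulation. To be clear, the parameters and functional forms here are my own illustrative assumptions, not Rand’s actual model: a population shifts between “controlled” and “automatic” thinkers depending on which pays better, controlled thinkers build technology that softens the environment for everyone, and that shared technology erodes the very advantage that produced it.

```python
def simulate(steps=500, p=0.5, tech=0.0,
             think_cost=0.2, harsh_benefit=0.5,
             tech_growth=0.05, tech_decay=0.02, lr=0.1):
    """Toy sketch of the controlled/automatic cycle.

    p    -- fraction of the population using controlled thinking
    tech -- accumulated technology, a shared good built by controlled agents
    All parameter values are arbitrary choices for illustration.
    """
    history = []
    for _ in range(steps):
        # The environment is harsh, but accumulated technology softens it.
        harshness = max(0.0, 1.0 - tech)
        # Controlled thinkers pay a cost but cope better with harshness;
        # automatic thinkers free-ride on shared technology.
        payoff_controlled = harsh_benefit * harshness - think_cost
        payoff_automatic = 0.0
        # Replicator-style update: the better-paying style spreads.
        p += lr * p * (1 - p) * (payoff_controlled - payoff_automatic)
        p = min(max(p, 0.01), 0.99)
        # Technology is built in proportion to controlled thinkers,
        # and decays without maintenance.
        tech += tech_growth * p - tech_decay * tech
        history.append((p, tech))
    return history
```

Run with these numbers, controlled thinking spreads while the world is harsh, technology accumulates, and the payoff to careful thought then falls away – the fridge eroding the advantage of planning one’s food consumption.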
You can’t help but wonder where we are in the cycle. After the Second World War, nations built a complex system of international treaties and trading agreements that formed the basis of an unprecedented period of prosperity, allowing billions of us to live in relative comfort. It’s hard, these days, to imagine that war or mass chaos might return. Voters and politicians seem increasingly happy to be led by what they feel. A world built on the hard mental labour of cognitive control may be freeing us from the need to think at all.
So what comes next?
This article appears in the 10 Oct 2018 issue of the New Statesman, How austerity broke Britain