If you want to understand the great psychological manipulation experiment that is Facebook, try leaving it. A couple of years ago, I briefly suspended my account in case a particularly nasty group of internet nutters I’d just written about hacked into it and stole all the photos of me guzzling Smirnoff Ice from the mid-2000s. Before being allowed to deactivate, I was shown a gallery of my friends’ faces. “Are you sure? Schoolfriend 1 will miss you,” the website trilled, with the same algorithmic guilt trip the Ocado checkout uses to foist another punnet of microscopically discounted raspberries on me. It was startlingly effective.
This time, though, I was cleverer. Or so I thought. What if I just… didn’t use Facebook? Turns out this makes Facebook almost pathetically needy, sending you near-daily emails saying “Friend 1 left a comment on Friend 2’s status!” but declining to tell you what that might be.
This is weaponised Fomo – fear of missing out – and for social animals like us, it’s an exceptionally powerful force. Every time I’ve given in and looked at the status, it’s turned out to be either a) people discussing the Labour Party, or b) a picture of someone’s child dressed as a pirate. My life is not diminished by a minuscule reduction in the frequency of experiencing either of these things.
Still, at least I tried to quit – and maybe next time I will succeed. When Facebook’s chief executive, Mark Zuckerberg, testified before the US Congress after the Cambridge Analytica scandal, I didn’t expect much fallout, because politicians are useless at interrogating internet titans. The 44 lawmakers who questioned Zuckerberg had an average age of 62; one asked him about “emailing over WhatsApp”, which is like enquiring if you’ve ever texted on a fax machine.
However, the hearing had one important consequence. It planted in people’s minds the idea that Facebook is “creepy” – and because Facebook tries to act like your friend, that is incredibly dangerous for its continued success. We all know what it’s like when a person is creepy. Now, to me and millions of others, Facebook is a creepy person. And so it will have to embrace change – or at least appear to do so.
Regulation of tech giants is long overdue, now that we are aware of their incredible power and influence. (Former Google “design ethicist” Tristan Harris has shown how tech subtly manipulates us by, for example, controlling the menu from which we make a “free” choice, or by turning our smartphones into virtual slot machines, constantly offering us tiny, variable rewards through notifications.)
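The slot-machine comparison is not just a metaphor: it describes a variable-reward schedule, the reinforcement pattern behavioural psychologists identify as the most habit-forming. A minimal sketch of the idea in Python – purely illustrative, invented for this article rather than taken from any platform’s actual code:

```python
import random

def check_phone(reward_probability=0.3, rng=None):
    """Simulate one 'pull of the lever': checking a phone for notifications.

    On a variable-reward schedule, each check pays off unpredictably --
    sometimes a new like or comment, usually nothing at all. It is the
    unpredictability, not the reward itself, that keeps us checking.
    """
    rng = rng or random.Random()
    return "new notification!" if rng.random() < reward_probability else "nothing"

# Over many checks only a fraction pay off, yet the mere possibility
# of a reward is enough to bring the user back.
rng = random.Random(42)  # fixed seed so the simulation is repeatable
results = [check_phone(0.3, rng) for _ in range(100)]
hits = results.count("new notification!")
print(f"{hits} rewards in 100 checks")
```

The design choice that matters here is the probability: a guaranteed reward would quickly become boring, and a zero reward would extinguish the habit; it is the intermittent payoff in between that the “virtual slot machine” exploits.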
Tech is now where junk food was in the early 2000s, around the time of Morgan Spurlock’s Super Size Me and Eric Schlosser’s Fast Food Nation. We love our nuggets of ground-up information, because they’re delicious, but we hate them too, because we know they’re not a proper meal. We also know that we can’t resist junk information on our own – we need the manufacturers to put the digital equivalent of the calorie count on the packaging.
A timely new book by Jamie Bartlett of Demos, The People vs Tech, spells out why it’s vital that politicians seize this moment to reform our attitudes to technology and its most ardent boosters. Following Cambridge University’s David Runciman, he argues that Western liberal democracy is under grave strain, but that its erosion will not take a form we recognise from history. Bartlett suggests that the rise of automation could create a more divided society – as with immigration, the rich could benefit from the cheap labour of robots, while the traditional working class see their job security and wages further eroded by them.
This higher inequality would be toxic if combined with a social media-driven climate of inflammatory partisan shouting, conspiracy theories and casual abuse. What then if “polarised, divided and angry citizens” lose their trust in our institutions? We might see the rise of demagogues who promise order and stability – new-style dictators who ask citizens to trade their privacy and freedom for algorithms and AIs controlling their lives: what health treatment they receive, who goes to university, how severely criminals are punished.
“In the hands of a techno-authoritarian, all the digital tools of liberation could easily become powerful tools of subtle coercion that might make society run more smoothly but wouldn’t make us more free or hold the powerful to account,” writes Bartlett. “The idea of democracy won’t disappear… but it would be little more than a shell system, where real power and authority was increasingly centralised and run by a small group of techno-wizards.”
There are grounds for optimism. On 6 April, Zuckerberg announced that all political and “issue” adverts on Facebook would now have to be authenticated. Identifying and vetting buyers “will make it a lot harder for anyone to do what the Russians did during the 2016 election and use fake accounts and pages to run ads,” he wrote. All political adverts will be clearly labelled, and by June there should be a public database of all of them, recording the amount spent and the targeted demographic.
All of this is sensible. It also follows my law of regulating the internet: the best practice is usually to treat it like “real life”, because now, it is real life. In Britain, electoral law dictates the labelling of party political broadcasts, imposes spending limits and demands transparency from donors. We should recreate that climate online.
Bartlett’s prediction is a bleak one, and it can only be resisted by first acknowledging the scale of the challenge. Facebook isn’t evil – but it is delicious, and I do keep creeping back to look at yet more of my friends’ toddlers dressed as pirates. I need help. We need help. That’s what politicians are supposed to be for.
This article appears in the 25 Apr 2018 issue of the New Statesman, The Corbyn ultimatum