Last month, Apple unveiled the latest version of its watch, featuring new health-monitoring features such as alerts for unusually low or high heart rates, and a way to sense when the wearer has fallen over and, if so, call the emergency services. In itself, that sounds pretty cool, and might even help save lives. But it’s also another nail in the coffin of social solidarity.
Why? Because shortly after the Apple announcement, one of America’s biggest insurance companies, John Hancock, announced it would stop selling traditional life insurance, and would now offer only “interactive” policies that required customers to wear a health-monitoring device – such as an Apple Watch or Fitbit. But such personalised insurance plans undermine the social spreading of risk that makes insurance a public good. Knowing every dirty little secret about our lifestyles, such an insurer will be heavily incentivised to make the riskier customers pay more in premiums than the healthy-livers. Eventually, the fortunate will subsidise the less fortunate to a far smaller degree than they do under traditional insurance models. For those who get sick, this will literally add insult to injury.
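A toy worked example makes the arithmetic of that lost cross-subsidy concrete. The sketch below uses entirely invented figures and a deliberately crude two-group model; it illustrates the principle, not any insurer’s actual pricing:

```python
# Toy illustration (all figures invented): how risk-stratified pricing
# erodes the cross-subsidy at the heart of pooled insurance.

expected_claim = {"low_risk": 200, "high_risk": 1800}  # expected annual payout per person
population = {"low_risk": 90, "high_risk": 10}         # customers in each group

# Traditional pooled insurance: everyone pays the average expected cost.
total = sum(expected_claim[g] * population[g] for g in population)
pooled_premium = total / sum(population.values())
print(f"Pooled premium, same for all: {pooled_premium:.0f}")  # 360

# "Interactive" insurance: the wearable reveals who is who, so each
# group pays its own expected cost and the cross-subsidy vanishes.
for group, cost in expected_claim.items():
    print(f"{group} premium: {cost}")  # 200 versus 1800
```

In the pooled model, the 90 healthy customers quietly cover half the expected claims of the unlucky ten; once the device sorts customers into risk bands, that transfer disappears.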
This happened too late to be mentioned in Martin Moore’s excellent new book, but he wouldn’t be surprised, having devoted an alarming section to the race into data-mining health applications by Apple, Google and Amazon. As he explains, “the big tech platforms – and many of their investors – can imagine a future in which each of them becomes our main gateway to health care.” This will, of course, undermine the NHS and cost us our biomedical privacy.
Silicon Valley’s dream of “disrupting” or “reimagining” health care is just one example of the way the tech giants long to muscle their way into, and extract large profits from, social institutions they don’t understand. Tech CEOs know nothing in particular about education, to take another example, but are canny enough to see that it is a huge potential revenue centre, if only they could persuade schools to use their software and computers.
Actually, Google is already doing a very good job of that. By mid-2017, the majority of schoolchildren in America were using Google’s education apps, which of course track the activity of every child, creating a store of data that – who knows? – might come in useful when those children grow up to be attractive targets for advertising. In the near future, Moore points out, we might no longer have a choice: “It will be a brave parent who chooses to opt out of a data-driven system, if by opting out it means their child has less chance of gaining entry to the college of their choice, or of entering the career they aspire to.”
If, practically speaking, you can’t opt out of a health care platform, or switch from the education platform your local school uses, then unaccountable corporate monopolies have usurped the functions of government. Moore calls this “platform democracy”. You might equally suggest it as a new meaning for “technocracy”, which up till now has meant rule by experts. Soon, technocracy might mean rule by people who don’t understand anything, but think that data alone constitutes expertise; people who glory in the “engineering ethos” of rapid prototyping and deployment; or, as Facebook’s old motto had it, “move fast and break things”. This is fine when you are building a trivial app; it’s not so fine if the things you are breaking are people and social institutions.
It already seems a long time ago that people were hailing the so-called Facebook and Twitter revolutions in the Middle East, and that hacker-pranksters such as the Anonymous collective chose targets such as Scientology. Now these have been replaced by Russian bot-farms and total surveillance. Moore’s book is an investigation of how we got here from there, and a troubling warning about how the future might unfold.
He begins by bringing the reader up to speed, in lucid detail, on Steve Bannon and the Breitbart website, as well as the story of Cambridge Analytica. He explains what we know about Russian interference in the 2016 US presidential election, while making the important point that such operations are not at all new. During the Cold War, the USSR and its puppet regimes ran energetic fake-news operations against the West. The only difference now is that modern technology makes disinformation operations much more effective, as falsehoods can go viral around the globe in a matter of minutes. Putin now has his own social-media sock-puppet farm, hidden in plain sight under the bland name of the “Internet Research Agency”. (It does about as much research as Jacob Rees-Mogg’s “European Research Group” for hard Brexiteers.)
This leads directly into Moore’s larger argument, which is that for reasons of profit the tech platforms actively turned themselves into machines perfectly suited to the dissemination of anarcho-nationalist hatred and untruth. Until recently, Moore notes, Facebook rarely thought about politics, and if it did “it tended to assume the platform was by its nature democratising”. But ahead of its 2012 stock-market flotation, it went “all out to create an intelligent, scalable, global, targeted advertising machine” that gave advertisers granular access to users. And so it created the most efficient delivery system for targeted political propaganda the world had ever seen.
It wasn’t just the bad guys who noticed this. In 2012, Barack Obama’s blog director Sam Graham-Felsen enthused: “If you can figure out how to leverage the power of friendship, that opens up incredible possibilities.” The possibilities that Facebook has since opened up would have seemed incredible six years ago. A member of the Trump campaign team openly described one aspect of their Facebook campaign as “voter suppression operations” aimed at Democrats, using something called “dark posts”. These allowed operators to run sophisticated tests comparing the effects of different kinds of adverts, creating, as Moore puts it, “a remarkably sophisticated behavioural response propaganda system”.
For its part, Google contributed to the global miasma of virtual bullshit through its innovations in advertising, which created what is now known as “ad tech”. Moore calls this “the poison at the heart of our digital democracy”, because “it cannot function without behavioural tracking, it does not work unless done at a gargantuan scale, and it is chronically and inherently opaque”. Famously, the Google founders, Larry Page and Sergey Brin, noted in 1998 that any search engine that depended on advertising revenue would be biased and would not serve its users well. But then they, too, realised that they wanted to make tons of money, and advertising would be how. It was Google’s innovations in selling online advertising, Moore argues, that created the obsession with clicks that came to dominate the internet and drive the commissioning of ever more trivial clickbait by terrified publishers. Apart from anything else, this represents a terrible waste of formidable talent: as a former Facebook engineer, Jeff Hammerbacher, said in 2011: “The best minds of my generation are thinking about how to make people click ads. That sucks.”
Because Google’s “theology of engineering” placed a premium on removing friction – “friction for the most part meaning people”, Moore observes sharply – the system was designed to be automated and accessible to everyone. Google didn’t care whether you were a hawker of vacuum cleaners or a neo-Nazi. But you’d get the personal touch if you had a lot of money to spend. Remarkably, Moore reports, Google as well as Facebook sent employees to work with the Trump campaign in 2016, to help them optimise and create “engagement” with their propaganda (Facebook offered to do the same for the Clinton campaign). The metric of engagement (meaning clicks) also created an inbuilt bias even in the standard automated system, Moore points out: “Thanks to the way the ad tech model prioritised ads that were engaging, incendiary political advertisements were cheaper to post than more measured ones.”
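The pricing effect Moore describes follows from how engagement-weighted ad auctions generally work. Here is a minimal sketch, assuming a simplified auction in which ads are ranked by bid multiplied by predicted engagement; the numbers and the scoring rule are invented for illustration and are not Facebook’s actual formula:

```python
# Simplified engagement-weighted ad auction (invented numbers).
# Platforms rank ads roughly by bid * predicted engagement, so an ad
# that provokes more clicks needs a lower bid to win the same slot.

def bid_needed(target_score: float, engagement_rate: float) -> float:
    """Bid required to reach a given auction score at a given engagement rate."""
    return target_score / engagement_rate

target = 10.0  # score needed to win the slot (arbitrary units)
print(f"Measured ad (2% engagement):   bid {bid_needed(target, 0.02):.0f}")  # 500
print(f"Incendiary ad (5% engagement): bid {bid_needed(target, 0.05):.0f}")  # 200
```

Because the provocative ad is predicted to generate more clicks, it wins the same slot at a fraction of the bid; in auction terms, that is all “cheaper to post” means.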
Moore’s chapter about Twitter is really about the death of local journalism and the decline of national newsrooms, and the void of political accountability that has opened up because of it. Twitter has its own well-documented problems with toxic trolls and bots, but the slow death of news isn’t its fault. More clearly culpable is Google. On 9/11, Google employees were instructed simply to copy the text and code of news websites and display it on Google’s homepage. As Google’s former communications man Douglas Edwards relates in his memoir, I’m Feeling Lucky: “No one asked whether it was within our legal rights to appropriate others’ content.” That innovation became Google News. Now, in the US, there are four PR people for every journalist.
Moore also limns an ever-more-intense “surveillance democracy”, to be enabled by new forms of compulsory computerised ID and the shiny networked gewgaws of what is sold as the “smart city”. By 2020, Moore notes, every car in Singapore will have to have a built-in GPS that communicates location and speed not just to the driver but to the authorities, while in one housing development officials already have access to real-time data about energy, water and waste usage. “In layman’s terms,” Moore explains, “this translates to the local authority knowing when you have just flushed the toilet.” The Black Mirror-style “social credit” scheme already under way in China, meanwhile, gives citizens a trust score based on their communication and purchasing behaviour. If you have a low score you might not be able to book a train ticket. In Moore’s view, such advances amount to “reimagining the state as a digital platform”, and this is even more dangerous than giving pieces of the state over to the existing tech platforms.
So what can we do? There are some green shoots of resistance, and they all share the general idea that our creaking institutions of democracy need to be brought into the modern age, partly so as to resist the threat of “for-profit platform democracy”, and partly so as to renew public trust. (In one Journal of Democracy study, only 40 per cent of millennials in the UK and the US were wholly committed to living in a democracy.) Emmanuel Macron’s much-vaunted “citizens’ consultations” have not as yet amounted to much, but at least, Moore says, he “acknowledged the scale of the challenge”. In 2017, Paris mayor Anne Hidalgo let schoolchildren vote on how their budget should be allocated: this and other experiments in direct mass consultation show that it’s now much easier to know exactly what the people want, if you sincerely care to find out.
The best example of a dynamic democracy that is technologically literate enough not to be in danger of a takeover by the corporate giants is Estonia. There, the digital infrastructure was built with democracy and public accountability in mind. ID is electronic, but the data the state holds on each citizen is kept in separate subject-area “silos” that can’t be amalgamated, and the citizen has the right not only to see it all, but to be notified whenever the state looks at it. It is a transparent system that Estonians themselves are rightly proud of. And its example ought to remind us that if we don’t follow their lead and design digital democracy ourselves, there is no shortage of rapacious corporations that will line up to do it for us.
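Moore’s description of the Estonian model maps onto a simple architecture: per-domain data stores plus a citizen-visible audit trail. The following is a minimal hypothetical sketch of that idea, with all names invented; it is not Estonia’s actual X-Road system:

```python
# Hypothetical sketch of "siloed data plus citizen notification".
# Each silo is a separate store; every state access is logged and
# surfaced to the citizen concerned.

from dataclasses import dataclass, field

@dataclass
class Silo:
    name: str
    records: dict = field(default_factory=dict)
    access_log: list = field(default_factory=list)

    def read(self, citizen_id: str, official: str):
        # Every access is recorded and the citizen is told about it.
        self.access_log.append((citizen_id, official))
        notify_citizen(citizen_id, f"{official} viewed your {self.name} record")
        return self.records.get(citizen_id)

def notify_citizen(citizen_id: str, message: str) -> None:
    print(f"[to {citizen_id}] {message}")

health = Silo("health", {"c1": {"blood_type": "O+"}})
tax = Silo("tax", {"c1": {"year": 2018}})

# The silos share no join operation: amalgamating them would require
# explicit, separately logged access to each one.
health.read("c1", "Dr Tamm")
```

The design choice worth noticing is that accountability is structural, not a policy bolted on afterwards: the notification fires inside the read path itself, so the state cannot look without the citizen knowing.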
Democracy Hacked: Political Turmoil and Information Warfare in the Digital Age
Martin Moore
Oneworld, 320pp, £16.99