We should be told, clearly and comprehensively, the rules that tech firms use to regulate political debate.
Twitter’s CEO Jack Dorsey told Congress this week that Twitter’s algorithms had been “unfairly filtering 600,000 accounts, including some members of Congress” from users’ timelines and search results. This followed last month’s news that the conspiracy theorist Alex Jones had been banned from YouTube and Facebook – a decision Mr Jones greeted with his customary balance and poise.
How should we react when certain voices are shut out of public debate, whether inadvertently or deliberately? In short, by asking for transparency. We should be told, clearly and comprehensively, the rules that tech firms use to regulate political debate.
Some historical context is helpful.
In ancient Athens, it’s said, citizens deliberated together publicly and in the same place. Debates were raucous, but there were certain rules and conventions known to all. It was forbidden, for instance, to attend in disguise. Only one person was supposed to speak at a time. No one had more speaking rights than anyone else.
Nowadays, political speech increasingly takes place not in physical spaces but on private online platforms owned and moderated by tech firms. Dorsey yesterday described Twitter as a “digital public square”. As he accepted, tech firms’ algorithms now determine the audience for everything that is said on their platforms, promoting and demoting contributions according to their timing, relevance, popularity, or other criteria. Tech firms also dictate the form of what may be said – for instance, that a tweet may run to no more than 280 characters. And as Jones found out, they even determine the content of what may be said, banning speech deemed to breach their guidelines.
It’s as if the Athenian forum has been parcelled up and sold off in private auction. In place of a single public forum, we now have multiple private debating clubs, each with their own rules and standards. (Twitter, for instance, chose not to exclude Jones.)
It’s obviously necessary for platforms to impose basic rules of conduct on users. And it’s also fine in principle for them to use algorithms that try to improve the quality of debate. But participants ought to be told the rules. Given the importance of public deliberation to democracy, we shouldn’t have to rely on after-the-event admissions by the likes of Mr Dorsey – and before him, Mr Zuckerberg in relation to fake news, foreign meddling, and hate speech on Facebook – that things have gone badly wrong. As with any form of power, those subject to it should not have to rely entirely on the good faith of those in charge.
Dorsey, for instance, says that “impartiality” and “objective criteria” are at the heart of Twitter’s algorithm. Great. But there is no easy way for the public to verify whether these principles are actually reflected in Twitter’s algorithms (and as Dorsey accepted yesterday, mistakes happen).
And without doubting Dorsey’s good intentions, any political philosopher would tell him that impartiality and objectivity are fiendishly difficult concepts that can cause all sorts of injustices even when properly implemented. For instance, Twitter’s website says that “tweets you are likely to care about most will show up first in your timeline. We choose them based on accounts you interact with most, tweets you engage with, and much more.” It’s possible to criticise this approach on its substance: it encourages us into bubbles of people we know and like, while blinding us to different perspectives. But the deeper problem lies in the words “…and much more”. We are only told some of the basic principles, and we can’t see the algorithm itself. That makes it hard for citizens to analyse the system sensibly or fairly.
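To see why “…and much more” matters, consider a purely hypothetical sketch of an engagement-based ranking function. This is not Twitter’s actual code, which is not public; every feature name and weight below is invented. The point is structural: if even one undisclosed signal feeds the score, an outside observer cannot explain the resulting ordering.

```python
# Hypothetical illustration of engagement-based timeline ranking.
# None of these features or weights come from Twitter; they are invented
# to show why undisclosed criteria ("...and much more") resist scrutiny.

def rank_score(tweet):
    score = 0.0
    # Disclosed criteria: interaction history and engagement.
    score += 3.0 * tweet["interactions_with_author"]
    score += 2.0 * tweet["likes"] + 1.5 * tweet["retweets"]
    # The "...and much more": a signal users never see.
    score += tweet.get("undisclosed_signal", 0.0)
    return score

timeline = [
    {"id": 1, "interactions_with_author": 5, "likes": 2, "retweets": 0},
    {"id": 2, "interactions_with_author": 0, "likes": 9, "retweets": 3,
     "undisclosed_signal": 20.0},
]

# Sort by score, highest first. The hidden signal decides the order,
# and nothing visible to the user explains why.
timeline.sort(key=rank_score, reverse=True)
print([t["id"] for t in timeline])  # → [2, 1]
```

On the disclosed criteria alone, tweet 1 would win (19 points to 22.5 minus the hidden 20 — i.e. 2.5); the invented hidden signal reverses the outcome. That asymmetry between what is stated and what is computed is exactly what the article argues citizens cannot audit.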
Tech industry insiders say that transparency isn’t necessary because we have a marketplace for ideas in which tech firms compete freely for users through the quality of the debating experience they offer. Forums for debate are just like supermarkets or gyms. We choose between them according to our taste.
However, this way of thinking has four limitations. First, we often don’t know the rules that govern us, and it can be hard to discern them. I don’t know which tweets I am not shown, so my choice of platform can never be a fully informed one.
Secondly, network effects mean that if a certain platform becomes dominant, then size can trump quality. There’s little point defecting from Twitter to a perfectly-moderated platform if the latter only has a handful of like-minded users. What you gain in civility or enlightenment, you lose in marginalisation or irrelevance.
Then there is fragmentation: our biases are reinforced when we debate only with others who share our views, which tends to happen when different platforms appear more hospitable to some groups than others.
Finally, what is seductive in the information economy – lurid fake news, conspiracy theories, vicious partisanship – is not the same as what is productive for rational deliberation as part of the political process.
For these reasons, major platforms have a duty not just to compete with each other, but to show us the underlying basis of that competition. The rules of political debate are too important to be kept in the dark.
Jamie Susskind is a barrister and author based in London. His book Future Politics: Living Together in a World Transformed by Tech (Oxford University Press) is published in the UK on 20 September 2018 and in the US on 1 October 2018.