Twitter’s decision to ban political adverts last week hasn’t delivered the sort of quick PR win its chief executive Jack Dorsey might have expected. The move, as Sarah Manavis wrote at the time, does little to address more pressing issues surrounding hate speech, fake news and the amplification of extremist political ideologies using bots. But it has succeeded in one crucial way: intensifying the pressure on Facebook and Google.
Almost as soon as Twitter announced the move, campaigners launched calls for its two rivals to either drop political ads altogether or create stricter rules around micro-targeting: the sort of tailored advertising at the centre of last year’s Cambridge Analytica scandal. But in a media briefing on Thursday afternoon, Facebook made it clear that, in the case of the UK election at least, it plans to do neither.
“As Mark Zuckerberg has said, we have considered whether we should ban political ads altogether,” Rebecca Stimson, Facebook’s head of UK public policy, told reporters on a call. “They account for just 0.5 per cent of our revenue and they’re always destined to be controversial. But we believe it’s important that candidates and politicians can communicate with their constituents and would-be constituents.”
She added: “Online political ads are also important for both new challengers and campaigning groups to get their message out. Our approach is therefore to make political messages on our platforms as transparent as possible, not to remove them altogether.”
During the briefing, Facebook’s executives explained how they have tightened rules around the transparency of political ads in light of the Cambridge Analytica scandal. The incident raised questions at the time not only about the company’s approach to data privacy, but also about how political parties were able to target messages at specific groups with almost no transparency.
Under the revised rules, which were revealed last October, advertisers are obliged to register with Facebook and verify their identity if they want to run political campaigns, so that users know who has paid for an advert and members of the public can trawl through campaigns in ad libraries. This, Stimson claims, makes advertising across Facebook and Instagram “more transparent than other forms of election campaigning, whether that’s billboards, newspaper ads, direct mail, leaflets or targeted emails”.
But rogue political adverts are still falling through Facebook’s net. Earlier this week, the BBC reported that the site had been forced to remove an advert criticising Labour’s tax policy that failed to disclose it had been paid for by the Fair Tax Campaign, a group run by a former aide to Boris Johnson.
It’s difficult to see how the measures Facebook outlined yesterday would have stopped other such ads from appearing on the site. The system relies primarily on advertisers following its rules – the tax ad, for example, was only taken down after the BBC tipped off the tech giant – and if an existing advertiser wants to promote political messages, they could still do so without registering as a political campaign.
In a separate interview yesterday, Facebook communications chief Nick Clegg signalled that the company may consider revising its rules around micro-targeting ahead of next year’s US presidential election. But in the UK-focused briefing, Facebook announced only one genuine policy shift: a new rule to crack down on the rising number of Facebook pages set up in a way that conceals the identity of the people behind them. Pages found to have done so will have to be verified using a business phone number or email address. The move will be seen by many as a step in the right direction. But it won’t prevent political advertisers from using a page to conceal their identity in the first place.
The social media firm could, if it wished to, strengthen these rules. It could provide more detail in its ad library about the nature of micro-targeted campaigns, ask its existing fact-checking partners to assess political ads and force organisations to submit to its verification process before launching a page. It could also, from a financial perspective at least, follow in Dorsey’s footsteps and ban political advertising altogether.
But Facebook knows that doing so would be characterised as a political act in its own right. Many, on both the left and the right, believe that the critical distinction between the data harvesting regimes of the 2016 US presidential election and the Brexit referendum, and those that preceded them, was not Cambridge Analytica, psychographic profiling or Facebook’s lax approach to data privacy – concerning though they were – but the political events they supported. Removing political ads would be seen as a victory for Democrats at a time when Facebook’s market dominance is being questioned on both the US left and right in a way that Twitter’s is not. Donald Trump, meanwhile, has shown a willingness to punish tech companies run by those he sees as personal opponents.
Above all else, Facebook’s greatest fear is obsolescence. To maintain its status as one of the world’s largest advertising companies, its executives must create new policies that draw attention to the purported power of its advertising machine, attempt to show that they are taking their responsibilities to users more seriously, and ensure that they keep a close working and financial relationship with politicians on both sides of the political divide. Could the company go further in protecting users from the spread of disinformation? Undoubtedly, but not without damaging some of those partnerships, hurting its bottom line and stoking the ire of free speech campaigners.