
Facebook could decide an election without anyone ever finding out

The scary future of digital gerrymandering – and how to prevent it.

On 2 November 2010, Facebook’s American users were subject to an ambitious experiment in civic engineering: could a social network get otherwise indolent people to cast a ballot in that day’s congressional midterm elections?

The answer was yes.

The prod to nudge bystanders to the voting booths was simple. It consisted of a graphic containing a link for looking up polling places, a button to click to announce that you had voted, and the profile photos of up to six Facebook friends who had indicated they’d already done the same. With Facebook’s cooperation, the political scientists who dreamed up the study planted that graphic in the newsfeeds of tens of millions of users. (Other groups of Facebook users were shown a generic get-out-the-vote message or received no voting reminder at all.) Then, in an awesome feat of data-crunching, the researchers cross-referenced their subjects’ names with the day’s actual voting records from precincts across the country to measure how much their voting prompt increased turnout.

Overall, users notified of their friends’ voting were 0.39 per cent more likely to vote than those in the control group, and any resulting decisions to cast a ballot also appeared to ripple to the behaviour of close Facebook friends, even if those people hadn’t received the original message. That small increase in turnout rates amounted to a lot of new votes. The researchers concluded that their Facebook graphic directly mobilised 60,000 voters, and, thanks to the ripple effect, ultimately caused an additional 340,000 votes to be cast that day. As they point out, George W Bush won Florida, and thus the presidency, by 537 votes – fewer than 0.01 per cent of the votes cast in that state.
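The figures quoted above are easy to sanity-check. The sketch below works only from the numbers in this article, plus one assumption not stated here: the official 2000 Florida presidential ballot total, which I take to be roughly 5.96 million.

```python
# Back-of-the-envelope check of the figures quoted above.
direct_votes = 60_000        # voters the Facebook graphic mobilised directly
total_votes = 340_000        # including the friend-to-friend ripple effect
ripple_multiplier = total_votes / direct_votes

bush_margin = 537            # Bush's winning margin in Florida, 2000
florida_ballots = 5_963_110  # official 2000 Florida total (assumption)
margin_share = bush_margin / florida_ballots * 100

print(f"ripple multiplier: {ripple_multiplier:.2f}x")
print(f"Florida margin: {margin_share:.4f} per cent of ballots cast")
```

The ripple effect multiplies the direct mobilisation more than fivefold, and the Florida margin does indeed come out below 0.01 per cent, as the researchers note.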

Now consider a hypothetical, hotly contested future election. Suppose that Mark Zuckerberg personally favours whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users – but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, Zuckerberg makes use of the fact that Facebook “likes” can predict political views and party affiliation, even beyond the many users who proudly advertise those affiliations directly. With that knowledge, our hypothetical Zuck chooses not to spice the feeds of users unsympathetic to his views. Such machinations then flip the outcome of our hypothetical election. Should the law constrain this kind of behaviour?

The scenario imagined above is an example of digital gerrymandering. All sorts of factors contribute to what Facebook or Twitter present in a feed, or what Google or Bing show us in search results. Our expectation is that those intermediaries will provide open conduits to others’ content and that the variables in their processes just help yield the information we find most relevant. (In that spirit, we expect that advertiser-sponsored links and posts will be clearly labelled so as to make them easy to distinguish from the regular ones.) Digital gerrymandering occurs when a site instead distributes information in a manner that serves its own ideological agenda. This is possible on any service that personalises what users see or the order in which they see it, and it’s increasingly easy to effect.

There are plenty of reasons to regard digital gerrymandering as such a toxic exercise that no right-thinking company would attempt it. But none of these businesses actually promises neutrality in its proprietary algorithms, whatever that would mean in practical terms. And they have already shown themselves willing to leverage their awesome platforms to attempt to influence policy. In January 2012, for example, Google blacked out its home page “doodle” as a protest against the pending Stop Online Piracy Act (SOPA), said by its opponents (myself among them) to facilitate censorship. The altered logo linked to an official blog entry importuning Google users to petition Congress; SOPA was ultimately shelved, just as Google and many others had wanted. A social-media or search company looking to take the next step and attempt to create a favourable outcome in an election would certainly have the means.

So what’s stopping that from happening? The most important fail-safe is the threat that a significant number of users, outraged by a betrayal of trust, would adopt alternative services, hurting the responsible company’s revenue and reputation. But while a propagandistic Google doodle or similarly ideological alteration to a common home page lies in plain view, newsfeeds and search results have no baseline. They can be subtly tweaked without hazarding the same backlash. Indeed, in our get-out-the-vote hypothetical, the people with the most cause for complaint are those who won’t be fed the prompt and may never know it existed. Not only that, but the disclosure policies of social networks and search engines already state that the companies reserve the right to season their newsfeeds and search results however they like. An effort to sway turnout could be construed as being covered by the existing agreements and require no special notice to users.

At the same time, passing new laws to prevent digital gerrymandering would be ill advised. People may be due the benefits of a democratic electoral process. But in the United States, content curators appropriately have a First Amendment right to present their content as they see fit. Meddling with how a company gives information to its users, especially when no one’s arguing that the information in question is false, is asking for trouble. (That’s one reason why the European Court of Justice got it wrong when it opened the door to people censoring the search-engine results for their names, validating a so-called “right to be forgotten.”)

There’s a better solution available: enticing web companies entrusted with personal data and preferences to act as “information fiduciaries”. Champions of the concept include Jack Balkin of Yale Law School, who sees a precedent in the way that lawyers and doctors obtain sensitive information about their clients and patients – and are then not allowed to use that knowledge for outside purposes. Balkin asks: “Should we treat certain online businesses, because of their importance to people’s lives, and the degree of trust and confidence that people inevitably must place in these businesses, in the same way?”

As things stand, web companies are simply bound to follow their own privacy policies, however flimsy. Information fiduciaries would have to do more. For example, they might be required to keep automatic audit trails reflecting when the personal data of their users is shared with another company, or is used in a new way. (Interestingly, the kind of ledger that crypto-currencies like Bitcoin use to track the movement of money could be adapted to this function.) They would provide a way for users to toggle search results or newsfeeds to see how that content would appear without the influence of reams of personal data – that is, non-personalised. And, most important, information fiduciaries would forswear any formulas of personalisation derived from their own ideological goals. Such a system could be voluntary, in the way that businesspeople who make suggestions on buying and selling stocks and bonds can elect between careers as investment advisers or brokers: the “advisers” owe duties not to put their own interests above those of their clients, while the “brokers” have no such duty, even as they – confusingly – can go by such titles as financial adviser, financial consultant, wealth manager, and registered representative. (If someone’s telling you how to handle your nest egg, you might ask flat out whether he or she is your fiduciary and walk swiftly to the exit if the answer is no.)
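One way to make such an audit trail tamper-evident, without adopting a full crypto-currency ledger, is a simple hash chain: each entry commits to the one before it, so a quiet after-the-fact edit breaks every subsequent link. The sketch below is illustrative only; the class and field names are my own, not any company’s actual system.

```python
import hashlib
import json


class AuditTrail:
    """A minimal append-only log. Each entry's digest covers both the
    event and the previous entry's digest, so retroactive edits are
    detectable - the property the article borrows from Bitcoin-style
    ledgers."""

    GENESIS = "0" * 64  # placeholder "previous digest" for the first entry

    def __init__(self):
        self.entries = []

    def record(self, event: dict) -> str:
        """Append an event (e.g. a data-sharing disclosure) and return its digest."""
        prev = self.entries[-1]["digest"] if self.entries else self.GENESIS
        payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"event": event, "prev": prev, "digest": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain from the start; False if any entry was altered."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps({"event": entry["event"], "prev": prev},
                                 sort_keys=True)
            if hashlib.sha256(payload.encode()).hexdigest() != entry["digest"]:
                return False
            prev = entry["digest"]
        return True
```

A regulator or auditor holding only the latest digest could later confirm that no disclosure had been silently rewritten, which is the enforcement hook an information-fiduciary regime would need.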

Constructed correctly, the duties of the information fiduciary would be limited enough for the Facebooks and Googles of the world, while meaningful enough to the people who rely on the services, that the intermediaries could be induced to opt into them. To provide further incentive, the government could offer tax breaks or certain legal immunities for those willing to step up toward an enhanced duty to their users. My search results and newsfeed might still end up different from yours based on our political leanings, but only because the algorithm is trying to give me what I want – the way that an investment adviser may recommend stocks to the reckless and bonds to the sedate – and never because the search engine or social network is trying to covertly pick election winners.

Four decades ago, another emerging technology had Americans worried about how it might be manipulating them. In 1974, amid a panic over the possibility of subliminal messages in TV advertisements, the Federal Communications Commission strictly forbade that kind of communication. There was a foundation for the move; historically, broadcasters have accepted a burden of evenhandedness in exchange for licences to use the public airwaves. The same duty of audience protection ought to be brought to today’s dominant medium. As more and more of what shapes our views and behaviours comes from inscrutable, artificial-intelligence-driven processes, the worst-case scenarios should be placed off limits in ways that don’t trip over into restrictions on free speech. Our information intermediaries can keep their sauces secret, inevitably advantaging some sources of content and disadvantaging others, while still agreeing that some ingredients are poison – and must be off the table.

Jonathan Zittrain is a professor of law and computer science at Harvard University, and author of “The Future of the Internet – and How to Stop It”. This article is adapted from remarks given at the 2014 Harvard Law Review Symposium on Freedom of the Press. A version will appear this month in the Harvard Law Review Forum.

This article first appeared on newrepublic.com


The tale of Battersea power station shows how affordable housing is lost

Initially, the developers promised 636 affordable homes. Now, they have reduced the number to 386. 

It’s the most predictable trick in the big book of property development. A developer signs an agreement with a local council promising to provide a barely acceptable level of barely affordable housing, then slashes these commitments at the first, second and third signs of trouble. It’s happened all over the country, from Hastings to Cumbria. But it happens most often in London, and most recently of all at Battersea power station, the Thames landmark and long-time London ruin which I wrote about in my 2016 book, Up In Smoke: The Failed Dreams of Battersea Power Station.

For decades, the power station was one of London’s most popular buildings, but now it represents some of the most depressing aspects of the capital’s attempts at regeneration. Almost in shame, the building itself has started to disappear from view behind a curtain of ugly gold-and-glass apartments aimed squarely at the international rich. The Battersea power station development is costing around £9bn. There will be around 4,200 flats, an office for Apple and a new Tube station. But only 386 of the new flats will be considered affordable.

What makes the Battersea power station development worse is the developer’s argument for why there are so few affordable homes, which runs something like this. The bottom is falling out of the luxury homes market because too many are being built, which means developers can no longer afford to build the sort of homes that people actually want. It’s yet another sign of the failure of the housing market to provide what is most needed. But it also highlights the delusion of politicians who still seem to believe that property developers are going to provide the answers to one of the most pressing problems in politics.

A Malaysian consortium acquired Battersea power station in 2012. Initially, it promised to build 636 affordable units. This was pretty meagre, but with four developers already having failed to develop the site, it was still enough for Wandsworth council to give planning consent. By the time I wrote Up In Smoke, this had been reduced to 565 units – around 15 per cent of the total number of new flats. Now the developers want to build only 386 affordable homes – around 9 per cent of the final residential offering, which includes expensive flats bought by the likes of Sting and Bear Grylls.

The developers say this is because of escalating costs and the technical challenges of restoring the power station – but it’s also the case that the entire Nine Elms area between Battersea and Vauxhall is experiencing a glut of similar property, which is driving down prices. They want to focus instead on paying for the new Northern Line extension that joins the power station to Kennington. The slashing of affordable housing can be done without the need for a new planning application or public consultation by using a “deed of variation”. It also means Mayor Sadiq Khan can’t do much more than write to Wandsworth urging the council to reject the new scheme. There’s little chance of that. Conservative Wandsworth has been committed to a developer-led solution to the power station for three decades and in that time has perfected the art of rolling over, despite several excruciating, and occasionally hilarious, disappointments.

The Battersea power station situation also highlights the sophistry developers will use to excuse any decision. When I interviewed Rob Tincknell, the developer’s chief executive, in 2014, he boasted it was the developer’s commitment to paying for the Northern Line extension (NLE) that was allowing the already limited amount of affordable housing to be built in the first place. Without the NLE, he insisted, they would never be able to build this number of affordable units. “The important point to note is that the NLE project allows the development density in the district of Nine Elms to nearly double,” he said. “Therefore, without the NLE the density at Battersea would be about half and even if there was a higher level of affordable, say 30 per cent, it would be a percentage of a lower figure and therefore the city wouldn’t get any more affordable than they do now.”

Now the argument is reversed. Because the developer has to pay for the transport infrastructure, they can’t afford to build as much affordable housing. Smart, hey?

It’s not entirely hopeless. Wandsworth may yet reject the plan, while the developers say they hope to restore the missing 250 units at the end of the build.

But I wouldn’t hold your breath.

This is a version of a blog post which originally appeared here.
