
Facebook could decide an election without anyone ever finding out

The scary future of digital gerrymandering – and how to prevent it.

On 2 November 2010, Facebook’s American users were subject to an ambitious experiment in civic engineering: could a social network get otherwise-indolent people to cast a ballot in that day’s congressional midterm elections?

The answer was yes.

The prod to nudge bystanders to the voting booths was simple. It consisted of a graphic containing a link for looking up polling places, a button to click to announce that you had voted, and the profile photos of up to six Facebook friends who had indicated they’d already done the same. With Facebook’s cooperation, the political scientists who dreamed up the study planted that graphic in the newsfeeds of tens of millions of users. (Other groups of Facebook users were shown a generic get-out-the-vote message or received no voting reminder at all.) Then, in an awesome feat of data-crunching, the researchers cross-referenced their subjects’ names with the day’s actual voting records from precincts across the country to measure how much their voting prompt increased turnout.

Overall, users notified of their friends’ voting were 0.39 per cent more likely to vote than those in the control group, and any resulting decisions to cast a ballot also appeared to ripple to the behaviour of close Facebook friends, even if those people hadn’t received the original message. That small increase in turnout rates amounted to a lot of new votes. The researchers concluded that their Facebook graphic directly mobilised 60,000 voters and, thanks to the ripple effect, another 280,000 – a total of 340,000 additional votes cast that day. As they point out, George W Bush won Florida, and thus the 2000 presidential election, by 537 votes – fewer than 0.01 per cent of the votes cast in that state.
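To see how a sliver of a percentage point becomes hundreds of thousands of votes, here is a minimal sketch of the study’s core comparison, with invented toy numbers standing in for the real data (the actual study matched roughly 61 million users against public voting records):

```python
# Toy sketch of the 2010 study's core comparison: turnout among users shown
# the social voting prompt versus a control group, matched against public
# voting records. All numbers below are invented for illustration.

def turnout_rate(voted: int, shown: int) -> float:
    """Fraction of a group whose names matched a record of having voted."""
    return voted / shown

# Hypothetical group sizes and matched-vote counts.
treatment_shown, treatment_voted = 60_000_000, 24_234_000
control_shown, control_voted = 600_000, 240_000

# A lift of a fraction of a percentage point, spread across tens of
# millions of users, translates into hundreds of thousands of votes.
lift = turnout_rate(treatment_voted, treatment_shown) - turnout_rate(control_voted, control_shown)
extra_votes = lift * treatment_shown

print(f"turnout lift: {lift:.2%}")                  # 0.39%
print(f"implied extra votes: {extra_votes:,.0f}")   # 234,000
```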

Now consider a hypothetical, hotly contested future election. Suppose that Mark Zuckerberg personally favours whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users – but unlike in the 2010 experiment, the group that will not receive the message is not chosen at random. Rather, Zuckerberg makes use of the fact that Facebook “likes” can predict political views and party affiliation, even beyond the many users who proudly advertise those affiliations directly. With that knowledge, our hypothetical Zuck simply declines to seed the feeds of users unsympathetic to his views. Such machinations then flip the outcome of our hypothetical election. Should the law constrain this kind of behaviour?
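For concreteness, here is a hypothetical sketch of the targeting step in that scenario. The user records, page names and scoring rule are all invented, and predict_party_alignment is a toy stand-in for the kind of “likes”-based model described above; nothing here reflects any real Facebook system:

```python
# Hypothetical sketch of selective prompt delivery - the mechanism of the
# digital-gerrymandering scenario. Everything here is invented for
# illustration.

from typing import Iterable

def predict_party_alignment(user: dict) -> float:
    """Toy stand-in for a 'likes'-based model: score > 0 means the user
    appears to lean toward the favoured candidate, score < 0 against."""
    FAVOURED_PAGES = {"candidate_a", "cause_x"}  # invented page names
    likes = set(user["likes"])
    return len(likes & FAVOURED_PAGES) - len(likes - FAVOURED_PAGES)

def feeds_to_prompt(users: Iterable[dict]) -> list[dict]:
    """The gerrymander: show the get-out-the-vote prompt only to users
    predicted to be sympathetic. Everyone else sees nothing - and never
    learns that a prompt existed."""
    return [u for u in users if predict_party_alignment(u) > 0]

users = [
    {"id": 1, "likes": ["candidate_a"]},
    {"id": 2, "likes": ["candidate_b"]},
]
print([u["id"] for u in feeds_to_prompt(users)])  # -> [1]
```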

The scenario imagined above is an example of digital gerrymandering. All sorts of factors contribute to what Facebook or Twitter present in a feed, or what Google or Bing show us in search results. Our expectation is that those intermediaries will provide open conduits to others’ content and that the variables in their processes just help yield the information we find most relevant. (In that spirit, we expect that advertiser-sponsored links and posts will be clearly labelled so as to make them easy to distinguish from the regular ones.) Digital gerrymandering occurs when a site instead distributes information in a manner that serves its own ideological agenda. This is possible on any service that personalises what users see or the order in which they see it, and it’s increasingly easy to effect.

There are plenty of reasons to regard digital gerrymandering as such a toxic exercise that no right-thinking company would attempt it. But none of these businesses actually promises neutrality in its proprietary algorithms, whatever that would mean in practical terms. And they have already shown themselves willing to leverage their awesome platforms to attempt to influence policy. In January 2012, for example, Google blacked out its home page “doodle” as a protest against the pending Stop Online Piracy Act (SOPA), said by its opponents (myself among them) to facilitate censorship. The altered logo linked to an official blog entry importuning Google users to petition Congress; SOPA was ultimately tabled, just as Google and many others had wanted. A social-media or search company looking to take the next step and attempt to create a favourable outcome in an election would certainly have the means.

So what’s stopping that from happening? The most important fail-safe is the threat that a significant number of users, outraged by a betrayal of trust, would adopt alternative services, hurting the revenue and reputation of the company responsible. But while a propagandistic Google doodle or similarly ideological alteration to a common home page lies in plain view, newsfeeds and search results have no baseline. They can be subtly tweaked without hazarding the same backlash. Indeed, in our get-out-the-vote hypothetical, the people with the most cause for complaint are those who won’t be fed the prompt and may never know it existed. Not only that, but the disclosure policies of social networks and search engines already state that the companies reserve the right to season their newsfeeds and search results however they like. An effort to sway turnout could be construed as being covered by the existing agreements and require no special notice to users.

At the same time, passing new laws to prevent digital gerrymandering would be ill advised. People may be due the benefits of a democratic electoral process. But in the United States, content curators appropriately have a First Amendment right to present their content as they see fit. Meddling with how a company gives information to its users, especially when no one’s arguing that the information in question is false, is asking for trouble. (That’s one reason why the European Court of Justice got it wrong when it opened the door to people censoring the search-engine results for their names, validating a so-called “right to be forgotten.”)

There’s a better solution available: enticing web companies entrusted with personal data and preferences to act as “information fiduciaries”. Champions of the concept include Jack Balkin of Yale Law School, who sees a precedent in the way that lawyers and doctors obtain sensitive information about their clients and patients – and are then not allowed to use that knowledge for outside purposes. Balkin asks: “Should we treat certain online businesses, because of their importance to people’s lives, and the degree of trust and confidence that people inevitably must place in these businesses, in the same way?”

As things stand, web companies are simply bound to follow their own privacy policies, however flimsy. Information fiduciaries would have to do more. For example, they might be required to keep automatic audit trails reflecting when the personal data of their users is shared with another company, or is used in a new way. (Interestingly, the kind of ledger that crypto-currencies like Bitcoin use to track the movement of money could be adapted to this function – see the sketch below.) They would provide a way for users to toggle search results or newsfeeds to see how that content would appear without the influence of reams of personal data – that is, non-personalised. And, most important, information fiduciaries would forswear any formulas of personalisation derived from their own ideological goals.

Such a system could be voluntary, in the way that businesspeople who make suggestions on buying and selling stocks and bonds can choose between careers as investment advisers or brokers: the “advisers” owe duties not to put their own interests above those of their clients, while the “brokers” have no such duty, even as they – confusingly – can go by such titles as financial adviser, financial consultant, wealth manager, and registered representative. (If someone’s telling you how to handle your nest egg, you might ask flat out whether he or she is your fiduciary and walk swiftly to the exit if the answer is no.)
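As an illustration of the audit-trail idea above, here is a minimal sketch of a tamper-evident, hash-chained log of data-sharing events, loosely in the spirit of the Bitcoin-style ledger just mentioned. The field names and structure are assumptions for illustration, not any proposed standard:

```python
# Minimal sketch of an append-only, hash-chained audit trail for recording
# when a user's data is shared or repurposed. Field names and structure are
# illustrative assumptions only.

import hashlib
import json
import time

class AuditTrail:
    def __init__(self) -> None:
        self.entries: list[dict] = []

    def record(self, user_id: str, data_used: str, purpose: str) -> dict:
        """Append a tamper-evident entry: each entry commits to the hash
        of its predecessor, so past entries cannot be quietly rewritten."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "user_id": user_id,
            "data_used": data_used,   # e.g. "likes", "search history"
            "purpose": purpose,       # e.g. "shared with advertiser X"
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the whole chain; any retroactive edit breaks it."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

trail = AuditTrail()
trail.record("user-42", "likes", "shared with advertiser")
print(trail.verify())  # True; altering any past entry makes this False
```

Because each entry commits to the hash of the one before it, a fiduciary could not quietly rewrite history: a user or regulator could recompute the chain at any time and spot the break.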

Constructed correctly, the duties of the information fiduciary would be limited enough for the Facebooks and Googles of the world, while meaningful enough to the people who rely on the services, that the intermediaries could be induced to opt into them. To provide further incentive, the government could offer tax breaks or certain legal immunities for those willing to step up toward an enhanced duty to their users. My search results and newsfeed might still end up different from yours based on our political leanings, but only because the algorithm is trying to give me what I want – the way that an investment adviser may recommend stocks to the reckless and bonds to the sedate – and never because the search engine or social network is trying to covertly pick election winners.

Four decades ago, another emerging technology had Americans worried about how it might be manipulating them. In 1974, amid a panic over the possibility of subliminal messages in TV advertisements, the Federal Communications Commission strictly forbade that kind of communication. There was a foundation for the move; historically, broadcasters have accepted a burden of evenhandedness in exchange for licences to use the public airwaves. The same duty of audience protection ought to be brought to today’s dominant medium. As more and more of what shapes our views and behaviours comes from inscrutable, artificial-intelligence-driven processes, the worst-case scenarios should be placed off limits in ways that don’t trip over into restrictions on free speech. Our information intermediaries can keep their sauces secret, inevitably advantaging some sources of content and disadvantaging others, while still agreeing that some ingredients are poison – and must be off the table.

Jonathan Zittrain is a professor of law and computer science at Harvard University, and author of “The Future of the Internet – and How to Stop It”. This article is adapted from remarks given at the 2014 Harvard Law Review Symposium on Freedom of the Press. A version will appear this month in the Harvard Law Review Forum.

This article first appeared on newrepublic.com


Notes from a crime scene: what Seymour Hersh knows

Xan Rice meets the tireless Seymour Hersh to talk My Lai, pricey coffee and Bin Laden.

It’s late on a lazy Wednesday afternoon when Seymour Hersh comes bounding down the stairs. “Let’s find somewhere to sit,” the American investigative journalist says, striding over to the café area of the hotel in Bloomsbury where we meet.

Not quiet enough, Hersh decides, and he marches into an adjoining branch of Steak & Lobster, past a startled waiter who tries to explain that the restaurant isn’t open yet. “He’ll have a coffee,” Hersh tells the man laying the tables, gesturing in my direction. When the drink arrives, he remarks that, at £4.39, it’s the most expensive coffee he has bought in some time.

“I’m older and crankier than [Bernie] Sanders,” the 79-year-old says with a smile, leaning back in his seat, his tie loose and his top button undone. Hersh’s many notable stories include the My Lai Massacre and cover-up in Vietnam, which he exposed in 1969, and the Abu Ghraib prison scandal during the Iraq War. He’s in good health, relishing his speaking tour of London to promote his new book, The Killing of Osama Bin Laden, and hearing “how wonderful I am”.

“I come home from a trip like this,” he says, “and my wife can’t stand me. She says, ‘Get away, I don’t want to talk to you because you want everybody to bow and scrape.’”

Hersh never planned to be a journalist. After he was thrown out of law school for poor grades in 1959, he heard about an opening for a police reporter at a small news agency in Chicago. “I was reasonably coherent and could walk in a straight line, so they hired me,” he explains. Hersh learned on the job, covering his beat with a zeal that did not always impress his editors, one of whom liked to address him, without fondness, as “my good, dear, energetic Mr Hersh”.

“He saw me as a bleeding heart,” Hersh says, “who cared about people ‘of the Negro persuasion’ dying.”

Half a century later, he cannot say exactly what drove him to become an investigative reporter. “What defect did I have in my life that made me want to make everyone else look bad?” he wonders. “I almost viewed myself like a public defender: my job was to be there on the scene of a crime and to write about it in such a way that the police could not have the only call.”

Later, as his range widened, Hersh came to see his role as keeping in check “the nincompoops and criminals and fools running the world”.

He had been a journalist for ten years when he received a tip-off about an army officer being court-martialled for killing civilians in Vietnam. After investigating, he broke the story of the massacre at My Lai, in which a group of US soldiers murdered at least 347 people. The work earned him a Pulitzer Prize and soon afterwards he wrote his first piece for the prestigious New Yorker magazine. After sending in a draft, he was told that it would be read by the editor, William Shawn, and that he would receive a proof copy in the mail.

“Seven days later, the envelope comes and I’m terrified,” he recalls. “It was a writer’s magazine and any change they wanted, they asked you about. On the third page, I had some cliché or figure of speech. It was circled and in the margin Mr Shawn had written: ‘Mr Hersh. Pls use words.’ I had a one-year course, a Master’s degree in journalism, in one sentence!”

Hersh has written regularly for the New Yorker over the years, though the relationship has recently come under strain. After researching the death of Osama Bin Laden, he became convinced that the Obama administration’s account of what happened before, during and after the raid in which Bin Laden was killed was a lie. He argued that the al-Qaeda leader had been captured by Pakistani intelligence in 2006 and held in Abbottabad until the US navy Seals operation five years later, which, Hersh claimed, was conducted with Pakistan’s assistance – rather than being a daring mission into hostile territory.

The New Yorker declined to run the story, so Hersh wrote it for the London Review of Books, which published it last year. The piece was read widely but attracted criticism from some American journalists who argued that it relied too heavily on a single, unnamed source and veered dangerously in the direction of conspiracy theories. Hersh is convinced that his version is correct and makes no apologies.

“I remember saying to my wife, ‘Don’t [these journalists] have mothers that tell them what to do better?’ . . . They insisted what they knew, what they wrote, had to be the story.”

Hersh’s mistrust of the official line is undiminished. His new book also questions whether it really was the Assad regime that carried out the chemical attacks in Ghouta, Syria, in 2013. Even the culprits of the recent Paris and Brussels massacres are not beyond doubt. “I don’t think Isis had a goddam thing to do with these kids,” he says. “The truth is, I don’t have any idea. I’m just telling you, heuristically, it’s an idea I would pursue if I was still a reporter.”

There is more to tell but Hersh has another interview. “Talk to me tomorrow,” he says, running back upstairs to collect his coat. “I’ll be around. I still have a lot of energy.” 

Xan Rice is Features Editor at the New Statesman.

This article first appeared in the 28 April 2016 issue of the New Statesman, The new fascism