
Silicon Valley sexism: why it matters that the internet is made by men, for men

From revenge porn to online harassment, online spaces are recreating the misogyny of the wider world.

During the past year there has been an explosion of research about, and public interest in, the tech industry’s persistent diversity problems. A story this week in Newsweek, for example, describes the industry as “savagely misogynistic”. At the same time, there has been increased awareness regarding the reality and effects of online abuse. The two issues are not separate, but gendered and dynamically related. Tech’s institutionalised male dominance, and the sex segregation and hierarchies of its workforce, have serious and harmful effects globally on women’s safety and free expression. Consider, for example, what is generally called online “harassment”.

Men and women are having very different online experiences. For women, the spectrum of what we call “harassment” is much broader, more multifaceted and more sustained. One of the primary reasons so many social media companies struggle with responding to online abuse on their platforms is that reporting and complaint systems fail to appreciate these differences. Their founders, managers and engineers are not only not well-versed in these experiences but, as stories that now regularly punctuate the news cycle show, are sometimes perpetrating the abuses. That systems reinforce the stereotypes and implicit biases of the people who design them is old news. What is new news, however, is that the internet makes the effects much more evident.

First, some baseline demographics. The industry is overwhelmingly male and labour is sex-segregated. Some examples: Twitter’s staff is 70 per cent male, with men making up 79 per cent of leadership and a whopping 90 per cent of the engineering staff. Fifty-nine per cent of employees are white. There is a similar gender gap at Facebook, where 85 per cent of the tech staff are men. Overall, the company is 69 per cent male and 63 per cent white. At Google, men make up 70 per cent of the staff, but 83 per cent of the tech departments. Only 2 per cent of Google employees are black. At 40 per cent, Asians make up a large and growing percentage of people in the industry; however, this is primarily Asian-American men who, as industry expert Anil Dash explained last October, “are benefitting from tech’s systematic exclusion of women and non-Asian minorities”.

These statistics inform a profound epistemological imbalance that results in inadequate tech solutions to women’s user problems. This in turn affects the ways that men and women participate in the public sphere. Online harassment of men is not as severe or sustained as that of women. It’s also less likely to be focused on their gender. It is most frequently name-calling and designed to embarrass. A Pew Research study identifying these differences recently described this kind of harassment as “a layer of annoyance so common that those who see or experience it say they often ignore it”. Women, on the other hand, cannot ignore their online abuse: they are more than three times as likely to report having been stalked online and more than twice as likely to be sexually harassed. They make up more than 90 per cent of victims of revenge porn and are overwhelmingly the subjects of rape videos. A report from Bytes for All in Pakistan last year documented the ways in which technology-driven violence against women on social media is exacerbating real-world violence. In India, police are grappling with what they call a “revenge porn economy”, fuelled by gang-rape videos shared on social media and used to extort women. In the United States, a law firm today announced a cyber civil rights project designed to help women whose partners abusively share photographs without their consent. According to a survey conducted by the National Network to End Domestic Violence in the United States, 89 per cent of shelters report that victims are experiencing intimidation and threats by abusers via technology, including through cell phones, texts and email. That women have these experiences online in disproportion to men mirrors the offline reality of women’s daily calibrations to pervasive harm, a fact that consistently surprises their male counterparts.

Women are far more likely to report electronic harassment as part of ongoing intimate partner violence and are more likely to report that their online harassment is sustained over longer periods of time. People who experience more sustained, invasive and physically threatening abuse online report higher levels of stress and emotional disturbance. In the Pew study, 38 per cent of women report being very upset by their most recent incident of online abuse, compared with 17 per cent of men. Necessarily more attuned to having to avoid violence or to living with it, women also incur greater costs dealing with harassment. The Pew research found that women are more than twice as likely to take multiple steps to try to address abuse. The toll on their lives can be steep and the actions necessary to address the problem take time, energy and money. Blithely unaware of these differences, male-dominated corporate bodies tend to assume that women are exaggerating their concerns, or are oversensitive “drama queens” who should either “grow up or get out of the kitchen if they can’t stand the heat”.

A reporting system that was designed to appreciate women’s experiences with harassment and discrimination would provide reporting tools that do, at the very least, six things: one, make it easy to report multiple incidents at the same time; two, provide a way for users to explain context or cross-platform harassment; three, have moderators who are trained to understand the reality of women’s safety needs; four, have guidelines that define “legitimate threat” to cover not only the kind of “imminent violence” usually perpetrated by a stranger and most often experienced by a man, but also the less visible, more pervasive harms suffered by women at the hands of people they know; five, give users maximum privacy controls; and, lastly, provide options that would allow users to designate surrogates or proxies who can step in to track and report incidents.
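What such a system might look like in practice can be suggested with a minimal sketch. The Python below is purely illustrative: every class, field and value name is invented for this example, and nothing in it describes any real platform’s reporting API. It simply shows how a single report could bundle multiple incidents, carry cross-platform context, default to privacy-protective settings and record a designated proxy who can track the case.

```python
# Hypothetical sketch only: invented names, not any real platform's API.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Incident:
    platform: str          # e.g. "twitter" or "facebook", so cross-platform abuse stays in one case
    url: str               # link to the abusive content
    note: str = ""         # free-text context the reporter attaches to this incident


@dataclass
class AbuseReport:
    reporter_id: str
    incidents: list[Incident] = field(default_factory=list)  # several incidents filed together
    context: str = ""                         # narrative describing the wider pattern of abuse
    proxy_reporter_id: Optional[str] = None   # a designated surrogate who may file and track reports
    restrict_visibility: bool = True          # privacy-protective default

    def add_incident(self, incident: Incident) -> None:
        """Append a further incident so sustained abuse is reviewed as one case, not in isolation."""
        self.incidents.append(incident)


# One report bundling related incidents from two platforms, filed with a trusted proxy attached.
report = AbuseReport(reporter_id="user-123",
                     context="Ongoing harassment by an ex-partner across several sites.",
                     proxy_reporter_id="friend-456")
report.add_incident(Incident(platform="twitter", url="https://example.com/post/1"))
report.add_incident(Incident(platform="facebook", url="https://example.com/post/2"))
```

The point is not the code itself but the shape of the data: incidents are grouped rather than reviewed one by one, context travels with them, and a trusted third party can act on the target’s behalf.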

Instead, most current systems, almost without fail, do the opposite. Moderators responsible for content and complaints, regardless of gender, are making decisions based not just on the information they are reviewing, but on the way in which the information flows – linear, acontextual and isolated from other incidents. They are reliant, despite their best efforts, on technical systems that provide insufficient context, scale, frequency or scope. In addition, they lack specific training in trauma (their own or users’) and in understanding gender-based violence. It’s no surprise that they appear tone-deaf to women’s needs when interpreting guidelines that are themselves similarly, structurally, problematic.

Guidelines speak to a salient issue: many companies are spending a great deal of time employing people, most frequently women, to work on community management and customer service, divorced – functionally, spatially, culturally, hierarchically – from systems engineers and senior team management. Moderation systems are overtaxed because of inadequately informed technology tools and business cultures. More egalitarian and empathetic systems architectures would probably obviate the need for a profusion of ever-changing and frequently problematic guidelines.

Unfortunately, gender and racial imbalances are shared by the venture capitalists that fund tech, which means that women and minorities are also shut out of the resources that would enable them to innovate alternative solutions. Fewer than 3 per cent of companies that get capitalised have women CEOs.

There is nothing particularly unique about this situation. We live in a world that, until very recently, was designed entirely by men. It affects everything from the way cars are built, jobs are chosen and bathrooms are designed, to how medicine is researched and implemented and laws are written and enforced. In tech, new products routinely reveal the invisibility of women to designers. However, this isn’t about one-off apps that can be tweaked and relaunched, and the potential outcomes of tech sexism, implicit or not, shouldn’t be underestimated and can’t be rapidly fixed. Women, we are often told, tend to use social media sites slightly more than their male peers. However, today it is estimated that there are 200 million fewer women online than men. There are many reasons for that gap, and the construction of internet platforms is not solely to blame. However, if these systemic biases are not addressed, that gender gap will continue to grow, with long-lasting global economic and social effects. Funding summer tech camps for girls is a great idea, but, ultimately, it’s just scratching the surface.

Last year may have been a turning point in terms of public awareness and women coming forward with their experiences. Google has been training its staff to understand implicit bias. Intel announced a $300m initiative focused on increasing overall diversity and, specifically, the number of women in computer science (currently at a 39-year low). Facebook, Twitter and YouTube are working with advocacy groups to address harassment. These are positive signs that there is greater understanding of the idea that technology is socially constructed and can be socially de- and reconstructed. In the meantime, however, we have lost a generation of women’s innovative potential to a fully integrated, socially cultivated, self-perpetuating misogyny all suited up in progressive ingenuity.


Forget fake news on Facebook – the real filter bubble is you

If people want to receive all their news from a single feed that reinforces their beliefs, there is little that can be done.

It’s Google that vaunts the absurdly optimistic motto “Don’t be evil”, but there are others of Silicon Valley’s techno-nabobs who have equally high-flown moral agendas. Step forward, Mark Zuckerberg of Facebook, who responded this week to the brouhaha surrounding his social media platform’s influence on the US presidential election thus: “We are all blessed to have the ability to make the world better, and we have the responsibility to do it. Let’s go work even harder.”

To which the only possible response – if you’re me – is: “No we aren’t, no we don’t, and I’m going back to my flowery bed to cultivate my garden of inanition.” I mean, where does this guy get off? It’s estimated that a single message from Facebook caused about 340,000 extra voters to pitch up at the polls for the 2010 US congressional elections – while the tech giant actually performed an “experiment”: showing either positive or negative news stories to hundreds of thousands of their members, and so rendering them happier or sadder.

In the past, Facebook employees curating the site’s “trending news” section were apparently told to squash stories that right-wingers might “like”, but in the run-up to the US election the brakes came off and all sorts of fraudulent clickbait was fed to the denizens of the virtual underworld, much – but not all of it – generated by spurious alt-right “news sites”.

Why? Because Facebook doesn’t view itself as a conventional news provider and has no rubric for fact-checking its news content: it can take up to 13 hours for stories about Hillary Clinton eating babies barbecued for her by Barack Obama to be taken down – and in that time Christ knows how many people will have not only given them credence, but also liked or shared them, so passing on the contagion. The result has been something digital analysts describe as a “filter bubble”, a sort of virtual helmet that drops down over your head and ensures that you receive only the sort of news you’re already fit to be imprinted with. Back in the days when everyone read the print edition of the New York Times this sort of manipulation was, it is argued, quite impossible; after all, the US media historically made a fetish of fact-checking, an editorial process that is pretty much unknown in our own press. Why, I’ve published short stories in American magazines and newspapers and had fact-checkers call me up to confirm the veracity of my flights of fancy. No, really.

In psychology, the process by which any given individual colludes in the creation of a personalised “filter bubble” is known as confirmation bias: we’re more inclined to believe the sort of things that validate what we want to believe – and by extension, surely, these are likely to be the sorts of beliefs we want to share with others. It seems to me that the big social media sites, while perhaps blowing up more and bigger filter bubbles, can scarcely be blamed for the confirmation bias. Nor – as yet – have they wreaked the sort of destruction on the world that has burst from the filter bubble known as “Western civilisation” – one that was blown into being by the New York Times, the BBC and all sorts of highly respected media outlets over many decades.

Societies that are both dominant and in the ascendant always imagine their belief systems and the values they enshrine are the best ones. You have only to switch on the radio and hear our politicians blithering on about how they’re going to get both bloodthirsty sides in the Syrian Civil War to behave like pacifist vegetarians in order to see the confirmation bias hard at work.

The Western belief – which has its roots in imperialism, but has bodied forth in the form of liberal humanism – that all is for the best in the world best described by the New York Times’s fact-checkers, is also a sort of filter bubble, haloing almost all of us in its shiny and translucent truth.

Religion? Obviously a good-news feed that many billions of the credulous rely on entirely. Science? Possibly the biggest filter bubble there is in the universe, and one that – if you believe Stephen Hawking – has been inflating since shortly before the Big Bang. After all, any scientific theory is just that: a series of observable (and potentially repeatable) regularities, a bubble of consistency we wander around in, perfectly at ease despite its obvious vulnerability to those little pricks, the unforeseen and the contingent. Let’s face it, what lies behind most people’s beliefs is not facts, but prejudices, and all this carping about algorithms is really the howling of a liberal elite whose own filter bubble has indeed been popped.

A television producer I know once joked that she was considering pitching a reality show to the networks to be called Daily Mail Hate Island. The conceit was that a group of ordinary Britons would be marooned on a desert island where the only news they’d have of the outside world would come in the form of the Daily Mail; viewers would find themselves riveted by watching these benighted folk descend into the barbarism of bigotry as they absorbed ever more factitious twaddle. But as I pointed out to this media innovator, we’re already marooned on Daily Mail Hate Island: it’s called Britain.

If people want to receive all their news from a single feed that constantly and consistently reinforces their beliefs, what are you going to do about it? The current argument is that Facebook’s algorithms reinforce political polarisation, but does anyone really believe better editing on the site will return our troubled present to some prelapsarian past, let alone carry us forward into a brave new factual future? No, we’re all condemned to collude in the inflation of our own filter bubbles unless we actively seek to challenge every piece of received information, theory, or opinion. And what an exhausting business that would be . . . without the internet.

Will Self is an author and journalist. His books include Umbrella, Shark, The Book of Dave and The Butt. He writes the Madness of Crowds and Real Meals columns for the New Statesman.

This article first appeared in the 24 November 2016 issue of the New Statesman, Blair: out of exile