
Silicon Valley sexism: why it matters that the internet is made by men, for men

From revenge porn to online harassment, online spaces are recreating the misogyny of the wider world.

During the past year there has been an explosion of research about, and public interest in, the tech industry’s persistent diversity problems. A story this week in Newsweek, for example, describes the industry as “savagely misogynistic”. At the same time, there has been increased awareness of the reality and effects of online abuse. The two issues are not separate: both are gendered, and each feeds the other. Tech’s institutionalised male dominance, and the sex segregation and hierarchies of its workforce, have serious and harmful effects globally on women’s safety and free expression. Consider, for example, what is generally called online “harassment”.

Men and women are having very different online experiences. For women, the spectrum of what we call “harassment” is much broader, more multifaceted and more sustained. One of the primary reasons so many social media companies struggle to respond to online abuse on their platforms is that their reporting and complaint systems fail to appreciate these differences. Their founders, managers and engineers are not only not well-versed in these experiences but, as stories that now regularly punctuate the news cycle show, are sometimes perpetrating the abuses themselves. That systems reinforce the stereotypes and implicit biases of the people who design them is old news. What is new is that the internet makes the effects much more evident.

First, some baseline demographics. The industry is overwhelmingly male and its labour is sex-segregated. Some examples: Twitter’s staff is 70 per cent male, with men making up 79 per cent of leadership and a whopping 90 per cent of the engineering staff. Fifty-nine per cent of employees are white. There is a similar gender gap at Facebook, where 85 per cent of the tech staff are men. Overall, the company is 69 per cent male and 63 per cent white. At Google, men make up 70 per cent of the staff, but 83 per cent of the tech departments. Only 2 per cent of Google employees are black. At 40 per cent, Asians make up a large and growing percentage of people in the industry; however, these are primarily Asian-American men who, as industry expert Anil Dash explained last October, “are benefitting from tech’s systematic exclusion of women and non-Asian minorities”.

These statistics inform a profound epistemological imbalance that results in inadequate tech solutions to women’s problems as users. This in turn affects the ways that men and women participate in the public sphere. Online harassment of men is not as severe or sustained as that of women. It is also less likely to be focused on their gender, and most frequently takes the form of name-calling designed to embarrass. A Pew Research study identifying these differences recently described this kind of harassment as “a layer of annoyance so common that those who see or experience it say they often ignore it”. Women, on the other hand, cannot ignore their online abuse: they are more than three times as likely to report having been stalked online and more than twice as likely to be sexually harassed. They make up more than 90 per cent of victims of revenge porn and are overwhelmingly the subjects of rape videos.

A report from Bytes for All in Pakistan last year documented the ways in which technology-driven violence against women on social media is exacerbating real-world violence. In India, police are grappling with what they call a “revenge porn economy”, fuelled by gang-rape videos shared on social media and used to extort women. In the United States, a law firm today announced a cyber civil rights project designed to help women whose partners abusively share photographs of them without their consent. According to a survey conducted by the National Network to End Domestic Violence in the United States, 89 per cent of shelters report that victims are experiencing intimidation and threats by abusers via technology, including through cell phones, texts and email. That women have these experiences online in disproportion to men mirrors the offline reality of women’s daily calibrations to pervasive harm, a fact which consistently surprises their male counterparts.

Women are far more likely to report electronic harassment as part of ongoing intimate partner violence, and more likely to report that their online harassment is sustained over long periods of time. People who experience more sustained, invasive and physically threatening abuse online report higher levels of stress and emotional disturbance. In the Pew study, 38 per cent of women reported being very upset by their most recent incident of online abuse, compared with 17 per cent of men. Necessarily more attuned to avoiding violence, or to living with it, women also incur greater costs dealing with harassment. The Pew research found that women are more than twice as likely to take multiple steps to try to address abuse. The toll on their lives can be steep, and the actions necessary to address the problem take time, energy and money. Blithely unaware of these differences, male-dominated corporate bodies tend to assume that women are exaggerating their concerns, or are oversensitive “drama queens” who should, as many people think, either “grow up” or “get out of the kitchen if they can’t stand the heat”.

A reporting system designed to appreciate women’s experiences of harassment and discrimination would provide reporting tools that do, at the very least, six things: one, make it easy to report multiple incidents at the same time; two, provide a way for users to explain context or cross-platform harassment; three, have moderators who are trained to understand the reality of women’s safety needs; four, have guidelines that define “legitimate threat” not only as the kind of “imminent violence” usually perpetrated by a stranger and most often experienced by a man, but also as the less visible, more pervasive harms suffered by women at the hands of people they know; five, give users maximum privacy controls; and six, provide options that would allow users to designate surrogates or proxies who can step in to track and report incidents.
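To make those six requirements concrete, here is a minimal sketch, in Python, of what a report data model built around them might look like. Everything in it is hypothetical, invented for illustration: no platform’s actual schema or code is being described.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

# Hypothetical model of an abuse report that treats sustained,
# cross-platform harassment as the normal case, not the exception.

@dataclass
class Incident:
    platform: str                       # where it happened, not assumed to be "here"
    occurred_at: datetime
    url: Optional[str] = None           # link to the abusive content, if it still exists
    screenshot: Optional[bytes] = None  # evidence survives even if the post is deleted

@dataclass
class AbuseReport:
    reporter_id: str                    # (6) a designated proxy may file on the target's behalf
    target_id: str
    filed_by_proxy: bool = False
    incidents: List[Incident] = field(default_factory=list)  # (1) many incidents, one report
    context: str = ""                   # (2) free-text history, e.g. "this is my ex-partner"
    abuser_known_to_target: bool = False  # (4) harm from someone known, not only
                                          #     "imminent" violence from a stranger
    visibility: str = "private"         # (5) maximum privacy by default

    def add_incident(self, incident: Incident) -> None:
        """Append a related incident so a moderator sees the pattern, not a single post."""
        self.incidents.append(incident)
```

Requirement three, trained moderators, is organisational rather than structural, but the shape of the data matters there too: a report that arrives as a pattern of incidents with context attached is far easier to assess accurately than one isolated post.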

Instead, most current systems, almost without fail, do the opposite. Moderators responsible for content and complaints, regardless of gender, are making decisions based not just on the information they are reviewing, but on the way in which that information flows: linear, acontextual and isolated from other incidents. They are reliant, despite their best efforts, on technical systems that convey insufficient context, scale, frequency or scope. In addition, they lack specific training in trauma (their own or users’) and in understanding gender-based violence. It is no surprise that they appear tone-deaf to women’s needs when interpreting guidelines that are themselves similarly, structurally, problematic.

Guidelines speak to a salient issue: many companies are spending a great deal of time employing people, most frequently women, to work on community management and customer service, divorced – functionally, spatially, culturally, hierarchically – from systems engineers and senior management. Moderation systems are overtaxed because of inadequately informed technology tools and business cultures. More egalitarian and empathetic systems architectures would probably obviate the need for a profusion of ever-changing and frequently problematic guidelines.

Unfortunately, the same gender and racial imbalances are shared by the venture capitalists who fund tech, which means that women and minorities are also denied access to the resources that would enable them to innovate alternative solutions. Fewer than 3 per cent of companies that receive venture funding have women CEOs.

There is nothing particularly unique about this situation. We live in a world that, until very recently, was designed entirely by men. That affects everything from the way cars are built, jobs are chosen and bathrooms are designed, to how medicine is researched and implemented and laws are written and enforced. In tech, new products routinely reveal the invisibility of women to designers. But this isn’t about one-off apps that can be tweaked and relaunched, and the potential consequences of tech sexism, implicit or not, shouldn’t be underestimated or treated as quickly fixable. Women, we are often told, tend to use social media sites slightly more than their male peers. Yet today it is estimated that there are 200 million fewer women online than men. There are many reasons for that gap, and the construction of internet platforms is not solely to blame. However, if these systemic biases are not addressed, that gender gap will continue to grow, with long-lasting global economic and social effects. Funding summer tech camps for girls is a great idea but, ultimately, it’s just scratching the surface.

Last year may have been a turning point in terms of public awareness and women coming forward with their experiences. Google has been training its staff to understand implicit bias. Intel announced a $300m initiative focused on increasing overall diversity and, specifically, the number of women in computer science (currently at a 39-year low). Facebook, Twitter and YouTube are working with advocacy groups to address harassment. These are positive signs of a growing understanding that technology is socially constructed and can be socially de- and reconstructed. In the meantime, however, we have lost a generation of women’s innovative potential to a fully integrated, socially cultivated, self-perpetuating misogyny all suited up in progressive ingenuity.


“Stinking Googles should be killed”: why 4chan is using a search engine as a racist slur

Users of the anonymous forum are targeting Google after the company introduced a program for censoring abusive language.

Contains examples of racist language and memes.

“You were born a Google, and you are going to die a Google.”

Despite the lack of obscenity and profanity in this sentence, you have probably realised that it was intended to be offensive. It is just one of hundreds of similar messages posted by users of 4chan’s /pol/ board – an anonymous forum where people go to be politically incorrect. But they haven’t suddenly seen the error of their ways about using the n-word to demean their fellow human beings – instead, they are trying to turn the word “Google” itself into a racist slur.

In an undertaking known as “Operation Google”, some 4chan users are resisting Google’s latest artificial intelligence program, Conversation AI, by swapping slurs for the names of Google products. Conversation AI aims to spot and flag offensive language online, with the eventual possibility that it could automatically delete abusive comments. The famously outspoken forum 4chan, and the similar website 8chan, didn’t like this, and began a campaign that sees them refer to Jews as “Skypes”, Muslims as “Skittles”, and black people as “Googles”.

If it weren’t for the utterly abhorrent racism – which includes users juxtaposing Google’s chat tool “Hangouts” with pictures of lynched African-Americans – it would be a genius idea. The group’s aim is to force Google to censor its own name, making its AI redundant. Yet some have acknowledged that this might not ultimately work – the AI should be able to use contextual clues to distinguish when “Google” is used innocently from when it is used pejoratively – and their ultimate aim is now simply to make “Google” a racist slur as revenge.
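For illustration only, here is a toy Python sketch of what “using contextual clues” means in the simplest possible terms. The word list and matching logic are invented for this article and have nothing to do with Conversation AI; a real system would use a trained classifier. The point is just that the signal lives in the words around “Google”, not in the word itself.

```python
# Toy illustration of context-based disambiguation (not a real moderation system).
# The keyword set and matching rules below are invented for demonstration only.
PEJORATIVE_CONTEXT = {"stinking", "filthy", "hate", "kill", "killed"}

def is_pejorative_use(sentence: str, target: str = "google") -> bool:
    """Guess whether `target` is being used as a slur, from surrounding words."""
    words = [w.strip('.,!?"').lower() for w in sentence.split()]
    if not any(w.startswith(target) for w in words):
        return False  # target word absent; nothing to classify
    return any(w in PEJORATIVE_CONTEXT for w in words)

print(is_pejorative_use("I searched Google for flights"))      # False: benign context
print(is_pejorative_use("Stinking Googles should be killed"))  # True: hostile context
```

A hand-written rule like this would be trivially easy to evade, which is why real systems learn such contexts statistically, but it shows why renaming a slur does not, by itself, defeat context-aware filtering.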



“If you're posting anything on social media, just casually replace n****rs/blacks with googles. Act as if it's already a thing,” wrote one anonymous user. “Ignore the company, just focus on the word. Casually is the important word here – don't force it. In a month or two, Google will find themselves running a company which is effectively called ‘n****r’. And their entire brand is built on that name, so they can't just change it.”

There is no doubt that Conversation AI is questionable to anyone who values free speech. Although most people desire a nicer internet, it is hard to agree that this should be achieved by blocking out large swathes of people, and by putting the power to do so in the hands of one company. Additionally, algorithms can’t yet accurately detect sarcasm and humour, so false positives are highly likely when a bot tries to identify whether something is offensive. Indeed, the Wired journalist Andy Greenberg tested Conversation AI and discovered that it gave “I shit you not” 98 out of 100 on its personal attack scale.
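As a thought experiment, here is a minimal Python sketch of how a client might query a Conversation AI-style scoring service and reproduce the kind of test Greenberg ran. The endpoint URL, key and response field are entirely hypothetical: Conversation AI’s interface was not public at the time, so nothing here describes Google’s actual API.

```python
import requests

# Hypothetical toxicity-scoring client. The endpoint, parameters and
# response shape are invented for illustration; they are NOT Google's API.
SCORE_URL = "https://example.com/v1/score-comment"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential

def attack_score(text: str) -> int:
    """Return a 0-100 'personal attack' score for a piece of text."""
    resp = requests.post(
        SCORE_URL,
        params={"key": API_KEY},
        json={"comment": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["attack_score"]  # assumed response field

if __name__ == "__main__":
    # Sarcastic idiom vs genuine insult: a model with no grasp of idiom
    # can score both as attacks - exactly the false-positive problem.
    for comment in ["I shit you not", "You are a moron"]:
        print(f"{comment!r} -> {attack_score(comment)}")
```

The failure mode this gestures at – a harmless idiom scoring almost as high as a genuine insult – is what makes automated deletion, as opposed to flagging for human review, so risky.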

Yet these 4chan users have made it impossible to agree with their fight against Google by combining it with their racism. Google scores the word “moron” 99 out of 100 on its offensiveness scale. Had protesters decided to replace this – or possibly even more offensive words like “bitch” or “motherfucker” – with “Google”, pretty much everyone would be on board.

Some 4chan users are aware of this – and indeed it is important not to treat the site as a single, unanimous entity. “You’re just making yourselves look like idiots and ruining any legitimate effort to actually do this properly,” wrote one user, while others discussed their concern that “normies” – i.e. normal people – would never join in. Other 4chan users are against Operation Google because they see it as self-censorship, or simply as stupid.



But anyone who disregards these efforts as the work of morons (or should that be Bings?) clearly does not understand the power of 4chan. The site brought down Microsoft’s AI Tay in a single day, brought the Unicode swastika (卐) to the top of Google’s trends list in 2008, hacked Sarah Palin’s email account, and leaked a large number of celebrity nudes in 2014. If the Ten Commandments were rewritten for the modern age and Moses took to Mount Sinai to wave two 16GB Tablets in the air, then the number one rule would be short and sweet: Thou shalt not mess with 4chan.

It is not yet clear how Google will respond to the attack, or whether it will ultimately affect the AI. Yet despite what ten years of Disney conditioning taught us as children, the world isn’t split into goodies and baddies. While 4chan’s methods are deplorable, their aim of questioning whether one company should have the power to censor the internet is not.

Google also hit the headlines this week for its new “YouTube Heroes” program, a system that rewards YouTube users with points when they flag offensive videos. It’s not hard to see why this kind of crowdsourced censorship is undesirable, particularly as, again, the chance of content being incorrectly flagged is huge. A few weeks ago, popular YouTubers also hit back at censorship that saw them lose advertising money from the site, leading #YouTubeIsOverParty to trend on Twitter. Perhaps, ultimately, 4chan didn’t need to go on a campaign to damage Google’s name. The company might already have been doing a good enough job of that itself.

Google has been contacted for comment.

Amelia Tait is a technology and digital culture writer at the New Statesman.