Tech has a white dude problem, and it doesn't get better by not talking about it

The organisers of the British Ruby Conference have cancelled the event after their failure to invite a diverse speaker line-up sparked a backlash.

The British Ruby Conference announced last night that the 2013 event would be cancelled, because of a furore stemming from one developer's reaction to its all-white, all-male speaker line-up.

Ruby is a programming language, developed in the mid-1990s, which has gained a lot of popularity in recent years as the basis of a framework used for building web applications. As with programming in general, the Ruby community undoubtedly skews heavily male, and the conference – known as "BritRuby" – cites that in its defence.

In their official explanation for why the decision was made to not put on the 2013 event, the BritRuby organisers write:

We wanted innovative ideas and we whole-heartedly pushed everyone that submitted a proposal to think outside the box. Our selection process was the content and nothing more. Not the individuals gender, race, age or nationality. It’s about community…

The Ruby community has been battling with issues of race and gender equality. We at Brit Ruby were well aware of this fundamental and important issue. This was one of the reasons why we encouraged everyone to submit a speaker proposal.

In situations like this, those under attack often insist that they picked the line-up based entirely on quality. It remains true, for instance, that orchestras are dominated by men, and for years the explanations given were that only men had the strength, or control, or innate musicality to play certain instruments, and so on.

Yet as orchestras gradually introduced blind auditions – actually picking the line-up based purely on quality – the gender balance shifted. And it appears much the same may be true of technology. Josh Susser, the developer whose tweet sparked the discussion that led to the conference being pulled, ran his own Ruby conference in San Francisco, GoGaRuCo, with a completely blind selection process.

As a result of that process, and of explicit outreach to women's programming groups, a quarter of the speakers were women. It may be easier in a city like San Francisco, but it is possible.

Sadly, according to the statement, the debate around BritRuby's monoculture spooked the conference's sponsors, who feared that accusations of sexism and racism would poison the brand. With sponsorship uncertain and the organisers personally liable, they were forced to cancel.

They did not go out in a blaze of glory.

Sean Handley, who has run previous conferences with the BritRuby team but was not involved in this one, posted his own take on the situation, one slightly more self-pitying than the official statement:

Yes, gender equality and racial equality are important. But the team's motives were to get the best speakers who were able to make it to Manchester. Turns out, a lot of the famous Rubyists are white guys and all of the ones who said they'd like to come were, indeed, white guys.

Making an issue out of that is, frankly, misguided. Adding a token minority speaker is offensive to that speaker, it says "You're here because you tick a box - not because you're skilled." It doesn't matter who speaks at a conference, as long as they're capable, interesting and relevant. That's what matters: content, not style.

Even that defence gets uncomfortable in the end. If you defend your all-white, all-male speaker line-up by saying that you only wanted the "best speakers", it is hard for non-white, non-male people not to infer that they are considered sub-par. Saying that the only way to fix the problem would be to add "token" speakers makes it sound like there are no non-token speakers worth inviting.

And saying that "it doesn't matter who speaks at a conference, as long as they're capable, interesting and relevant" is plainly untrue: it does matter, to a hell of a lot of people, and if you set out to be a leading voice in your community, you owe it to yourself and to that community to try to make it a better group to be in.

Some – not all – elements of that community sorely need help, judging by the comments beneath Handley's post.

The whole event ruined for everyone but a few narrow minded individuals.

Yes. The people who want not all-white-male-speakers are narrow minded.

Next thing would be people complaining about the lack of Unicorns on the conferences.

Women in tech: Literally Imaginary, apparently.

[Quoting an earlier commenter] I feel this needs to happen more and more so Conference organizers are forced to start considering diversity from the beginning and initiate programs or reach out to more non-white-males to speak

While we're at it, let's make sure to throw in a few over-50s, a disabled woman and a couple of homosexuals. We need to focus on diversity after-all.

Where is the line?

Oh no! Gay people might be at the conference?!

Seriously, this whole equality crap is… crap! One thing is when there are cases where women are not treated fairly (not good) or abused (very bad), but equality is a non-issue for most of us in the Western world. In cases where exploitation or abuse are confirmed, society should act for sure, but the reality is men and women are not equal in many ways. It's not that one is better and the other is worse is that, quite simply, we're different. I see plenty of "Women Seminars" (not very "Men Seminars" I should add) and I don't see anyone rushing those asking for "equality" or "lack of men on these".

I'm done here.

Update: Changed the headline slightly, and corrected the reference to Sean Handley

Photograph: 2013.britruby.com

Alex Hern is a technology reporter for the Guardian. He was formerly staff writer at the New Statesman. You should follow Alex on Twitter.

Photograph: Getty

The internet makes writing as innovative as speech

When a medium acquires new functions, it needs to be adapted, and that means creating new forms.

Many articles on how the internet has changed language are like linguistic versions of the old Innovations catalogue, showcasing the latest strange and exciting products of our brave new digital culture: new words (“rickroll”); new uses of existing words (“trend” as a verb); abbreviations (smh, or “shaking my head”); and graphic devices (such as the much-hyped “new language” of emojis). Yet these formal innovations are merely surface (and in most cases ephemeral) manifestations of a deeper change – a change in our relationship with the written word.

I first started to think about this at some point during the Noughties, after I noticed the odd behaviour of a friend’s teenage daughter. She was watching TV, alone and in silence, while her thumbs moved rapidly over the keys of her mobile phone. My friend explained that she was chatting with a classmate: they weren’t in the same physical space, but they were watching the same programme, and discussing it in a continuous exchange of text messages. What I found strange wasn’t the activity itself. As a teenage girl in the 1970s, I, too, was capable of chatting on the phone for hours to someone I’d spent all day with at school. The strange part was the medium: not spoken language, but written text.

In 1997, research conducted for British Telecom found that face-to-face speech accounted for 86 per cent of the average Briton’s communications, and telephone speech for 12 per cent. Outside education and the (white-collar or professional) workplace, most adults did little writing. Two decades later, it’s probably still true that most of us talk more than we write. But there’s no doubt we are making more use of writing, because so many of us now use it in our social interactions. We text, we tweet, we message, we Facebook; we have intense conversations and meaningful relationships with people we’ve never spoken to.

Writing was not designed to serve this purpose. Its original function was to store information in a form that did not depend on memory for its transmission and preservation. It later acquired other functions, social ones among them; but even in the days when “snail mail” was less snail-like (in large cities in the early 1900s there were five postal deliveries a day), “conversations” conducted by letter or postcard fell far short of the rapid back-and-forth that today’s technology makes possible.

When a medium acquires new functions, it needs to be adapted, and that means creating new forms. Many online innovations are motivated by the need to make written language do a better job of two things in particular: communicating tone, and expressing individual or group identity. The rich resources speech offers for these purposes (such as accent, intonation, voice quality and, in face-to-face contexts, body language) are not reproducible in text-based communication. But users of digital media have found ways to exploit the resources that are specific to text, such as spelling, punctuation, font and spacing.

The creative use of textual resources started early on, with conventions such as capital letters to indicate shouting and the addition of smiley-face emoticons (the ancestors of emojis) to signal humorous or sarcastic intent, but over time it has become more nuanced and differentiated. To those in the know, a certain respelling (as in “smol” for “small”) or the omission of standard punctuation (such as the full stop at the end of a message) can say as much about the writer’s place in the virtual world as her accent would say about her location in the real one.

These newer conventions have gained traction in part because of the way the internet has developed. As older readers may recall, the internet was once conceptualised as an “information superhighway”, a vast and instantly accessible repository of useful stuff. But the highway was a one-way street: its users were imagined as consumers rather than producers. Web 2.0 changed that. Writers no longer needed permission to publish: they could start a blog, or write fan fiction, without having to get past the established gatekeepers, editors and publishers. And this also freed them to deviate from the linguistic norms that were strictly enforced in print – to experiment or play with grammar, spelling and punctuation.

Inevitably, this has prompted complaints that new digital media have caused literacy standards to plummet. That is wide of the mark: it’s not that standards have fallen, it’s more that in the past we rarely saw writing in the public domain that hadn’t been edited to meet certain standards. In the past, almost all linguistic innovation (the main exception being formal or technical vocabulary) originated in speech and appeared in print much later. But now we are seeing traffic in the opposite direction.

Might all this be a passing phase? It has been suggested that as the technology improves, many text-based forms of online communication will revert to their more “natural” medium: speech. In some cases this seems plausible (in a few it’s already happening). But there are reasons to think that speech will not supplant text in all the new domains that writing has conquered.

Consider my friend’s daughter and her classmate, who chose to text when they could have used their phones to talk. This choice reflected their desire for privacy: your mother can’t listen to a text-based conversation. Or consider the use of texting to perform what politeness theorists call “face-threatening acts”, such as sacking an employee or ending an intimate relationship. This used to be seen as insensitive, but my university students now tell me they prefer it – again, because a text is read in private. Your reaction to being dumped will not be witnessed by the dumper: it allows you to retain your dignity, and gives you time to craft your reply.

Students also tell me that they rarely speak on the phone to anyone other than their parents without prearranging it. They see unsolicited voice calls as an imposition; text-based communication is preferable (even if it’s less efficient) because it doesn’t demand the recipient’s immediate and undivided attention. Their guiding principle seems to be: “I communicate with whom I want, when I want, and I respect others’ right to do the same.”

I’ll confess to finding this new etiquette off-putting: it seems ungenerous, unspontaneous and self-centred. But I can also see how it might help people cope with the overwhelming and intrusive demands of a world where you’re “always on”. (In her book Always On: Language in an Online and Mobile World, Naomi Baron calls it “volume control”, a way of turning down the incessant noise.) As with the other new practices I’ve mentioned, it’s a strategic adaptation, exploiting the inbuilt capabilities of technology, but in ways that owe more to our own desires and needs than to the conscious intentions of its designers. Or, to put it another way (and forgive me if I adapt a National Rifle Association slogan): technologies don’t change language, people do.

Deborah Cameron is Professor of Language and Communication at the University of Oxford and a fellow of Worcester College

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times