
Study shows people prefer pain to their own thoughts – except it doesn’t

"A few bored students gave themselves an unpleasant tingle, but most preferred to sit around instead." Snappy or what?

Take a few dozen students, stick them alone in empty rooms and ask them to do nothing for fifteen minutes. Wait! First connect up electrodes to their ankles and give them the power to zap themselves when bored. This is what researchers in America did, in a study that’s been widely reported – because zap themselves those students did. It looks, at first glance, like proof that people would prefer anything – even pain – to boredom.

Writing in the journal Science, the authors concluded:

"What is striking is that simply being alone with their own thoughts for 15 minutes was apparently so aversive that it drove many participants to self-administer an electric shock that they had earlier said they would pay to avoid."

Sounds bad, right? Students were so bored by their thoughts that they decided to electrocute themselves, with a shock so painful they’d previously said they’d actually pay money (money!) not to receive it. They couldn’t even last 15 minutes inside their own heads. It makes for a bleak conclusion – except it’s not really true. Let’s take a look at what actually happened.

The experiment had two stages. In Part 1, the 42 students rated a series of external stimuli from one to nine on how pleasant they were. These ranged from gentle guitar music and a photo of a river scene to a cockroach picture and a mild electric shock. In Part 2, they were told to sit alone in a room and entertain themselves with their thoughts as best they could. They weren’t allowed to fall asleep or leave the chair, but they had the option of experiencing one of the stimuli they had rated earlier.

Over the next 15 minutes, 18 of the 42 students gave themselves at least one shock. The psychologists from Harvard and the University of Virginia didn’t publish any data on how the electric shock – or any of the other stimuli – fared on the ‘pleasant’ scale in Part 1.

Let’s make this clear: 57 per cent of the students – 24 of the 42 – did not press the button at all. And even those who did press it didn’t do it often – excluding the one outlier who managed to squeeze in 190 shocks within the quarter of an hour. The average number of shocks was 1.5 for men and just one for women.

In addition, the intensity of the shock was pretty weak: 4 milliamperes (mA) for the men and 2.3 mA for the women. Participants were told the shock was designed to be “unpleasant but not painful”. This chart from the Centers for Disease Control and Prevention gives a bit of perspective:

With a current lodged somewhere between a "faint tingle" and a "slight shock", you can see it's a bit of a stretch to claim painful electrocutions. And as for the claim that the volunteers would pay to avoid that pain – exaggeration again. After the participants had experienced the shock, researchers asked how much of an imaginary $5 they’d spend not to receive it again, to which most people answered about a dollar. The pain was valued at a meagre 58p.

So what does this all mean if you're locked in an empty room with just a zapper for entertainment? If we're going to extrapolate generic conclusions from a really small study, let's at least stick to the results. Chances are, you're not going to shock yourself. And if you did, once would be quite enough. Not because you're scared of your thoughts or you're unhappy in your own company, but because when you've got nothing else to do that big button screaming 'shock me' is just too tantalising to resist – and when is anyone ever in a situation like this in real life? If anything, it's a surprise so few people did actually press it.

The whole thing might seem like a huge non-issue, but in fairness to the researchers there's a lot of interesting stuff going on here. For instance, take a look at the gender aspect: two-thirds of the men shocked themselves but just a quarter of the women did – even though the women received the weaker current. It's certainly worth further investigation. But don't be fooled by the attention-snatching headlines.


The internet makes writing as innovative as speech

When a medium acquires new functions, it needs to be adapted by creating new forms.

Many articles on how the internet has changed language are like linguistic versions of the old Innovations catalogue, showcasing the latest strange and exciting products of our brave new digital culture: new words (“rickroll”); new uses of existing words (“trend” as a verb); abbreviations (smh, or “shaking my head”); and graphic devices (such as the much-hyped “new language” of emojis). Yet these formal innovations are merely surface (and in most cases ephemeral) manifestations of a deeper change – a change in our relationship with the written word.

I first started to think about this at some point during the Noughties, after I noticed the odd behaviour of a friend’s teenage daughter. She was watching TV, alone and in silence, while her thumbs moved rapidly over the keys of her mobile phone. My friend explained that she was chatting with a classmate: they weren’t in the same physical space, but they were watching the same programme, and discussing it in a continuous exchange of text messages. What I found strange wasn’t the activity itself. As a teenage girl in the 1970s, I, too, was capable of chatting on the phone for hours to someone I’d spent all day with at school. The strange part was the medium: not spoken language, but written text.

In 1997, research conducted for British Telecom found that face-to-face speech accounted for 86 per cent of the average Briton’s communications, and telephone speech for 12 per cent. Outside education and the (white-collar or professional) workplace, most adults did little writing. Two decades later, it’s probably still true that most of us talk more than we write. But there’s no doubt we are making more use of writing, because so many of us now use it in our social interactions. We text, we tweet, we message, we Facebook; we have intense conversations and meaningful relationships with people we’ve never spoken to.

Writing was not designed to serve this purpose. Its original function was to store information in a form that did not depend on memory for its transmission and preservation. It went on to acquire other functions, social ones among them; but even in the days when “snail mail” was less snail-like (in large cities in the early 1900s there were five postal deliveries a day), “conversations” conducted by letter or postcard fell far short of the rapid back-and-forth that today’s technology makes possible.

When a medium acquires new functions, it needs to be adapted by creating new forms. Many online innovations are motivated by the need to make written language do a better job of two things in particular: communicating tone, and expressing individual or group identity. The rich resources speech offers for these purposes (such as accent, intonation, voice quality and, in face-to-face contexts, body language) are not reproducible in text-based communication. But users of digital media have found ways to exploit the resources that are specific to text, such as spelling, punctuation, font and spacing.

The creative use of textual resources started early on, with conventions such as capital letters to indicate shouting and the addition of smiley-face emoticons (the ancestors of emojis) to signal humorous or sarcastic intent, but over time it has become more nuanced and differentiated. To those in the know, a certain respelling (as in “smol” for “small”) or the omission of standard punctuation (such as the full stop at the end of a message) can say as much about the writer’s place in the virtual world as her accent would say about her location in the real one.

These newer conventions have gained traction in part because of the way the internet has developed. As older readers may recall, the internet was once conceptualised as an “information superhighway”, a vast and instantly accessible repository of useful stuff. But the highway was a one-way street: its users were imagined as consumers rather than producers. Web 2.0 changed that. Writers no longer needed permission to publish: they could start a blog, or write fan fiction, without having to get past the established gatekeepers, editors and publishers. And this also freed them to deviate from the linguistic norms that were strictly enforced in print – to experiment or play with grammar, spelling and punctuation.

Inevitably, this has prompted complaints that new digital media have caused literacy standards to plummet. That is wide of the mark: it’s not that standards have fallen, it’s more that in the past we rarely saw writing in the public domain that hadn’t been edited to meet certain standards. In the past, almost all linguistic innovation (the main exception being formal or technical vocabulary) originated in speech and appeared in print much later. But now we are seeing traffic in the opposite direction.

Might all this be a passing phase? It has been suggested that as the technology improves, many text-based forms of online communication will revert to their more “natural” medium: speech. In some cases this seems plausible (in a few it’s already happening). But there are reasons to think that speech will not supplant text in all the new domains that writing has conquered.

Consider my friend’s daughter and her classmate, who chose to text when they could have used their phones to talk. This choice reflected their desire for privacy: your mother can’t listen to a text-based conversation. Or consider the use of texting to perform what politeness theorists call “face-threatening acts”, such as sacking an employee or ending an intimate relationship. This used to be seen as insensitive, but my university students now tell me they prefer it – again, because a text is read in private. Your reaction to being dumped will not be witnessed by the dumper: it allows you to retain your dignity, and gives you time to craft your reply.

Students also tell me that they rarely speak on the phone to anyone other than their parents without prearranging it. They see unsolicited voice calls as an imposition; text-based communication is preferable (even if it’s less efficient) because it doesn’t demand the recipient’s immediate and undivided attention. Their guiding principle seems to be: “I communicate with whom I want, when I want, and I respect others’ right to do the same.”

I’ll confess to finding this new etiquette off-putting: it seems ungenerous, unspontaneous and self-centred. But I can also see how it might help people cope with the overwhelming and intrusive demands of a world where you’re “always on”. (In her book Always On: Language in an Online and Mobile World, Naomi Baron calls it “volume control”, a way of turning down the incessant noise.) As with the other new practices I’ve mentioned, it’s a strategic adaptation, exploiting the inbuilt capabilities of technology, but in ways that owe more to our own desires and needs than to the conscious intentions of its designers. Or, to put it another way (and forgive me if I adapt a National Rifle Association slogan): technologies don’t change language, people do.

Deborah Cameron is Professor of Language and Communication at the University of Oxford and a fellow of Worcester College

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times