Arthurian aliens in A Message From Mars. Photo courtesy of BFI Images

Beware air pirates, be nice to Martians: lessons from the dawn of British sci-fi

Critics’ Notes by Mark Lawson.

In 1989, Martin Amis published a novel, London Fields, set ten years in the future in a world on the brink of a nuclear war. But the Berlin Wall fell as the book appeared, lessening the terror of millennium Armageddon, while another aspect of Amis’s 1999 – the restriction of mobile phones to a small super-cadre – also suggested an anti-Cassandra. While all art gambles on being overtaken by time, science fiction is most likely to lose the bet. Yet there is a fascination in predictive stories that have become historical period pieces, such as the two futuristic movies, more than a century old, screening in the BFI Southbank’s “The Birth of British Sci-Fi” event this month: Pirates of 1920 by David Aylott and A E Coleby, released nine years before its title date, and Wallett Waller’s A Message from Mars (1913).

Although, in the term “science fiction”, the second word qualifies the first, it’s tempting to tot up the success rate of guesses, and Pirates of 1920 scores well. The silent, black-and-white short imagines “air pirates” who use balloon-driven vessels to bomb ships, with the lofty brigands then sliding down ropes to take hostages. Within three years of the release date, there would be a world war in which the Germans used airships against ships, although this prophecy was not entirely the film-makers’ – H G Wells, the begetter of so much in this genre, had published a novel, The War in the Air, in 1908, anticipating the elevation of the battlefield.

The movie did show its own prescience, though with a longer perspective. The attackers from the earth’s atmosphere are a kind of hijacker and, in this sense, the film foresees a tactic of terrorists between the 1960s and, with a mass-suicidal-homicidal twist, 9/11. Modern viewers may also reflect that, with tighter aviation security in the 21st century, sea piracy and hostage-taking were revived as weapons of terror. The scenes in which the invaders threaten the captain eerily resemble those in a movie released more than a century later, Captain Phillips, with the exception that, whereas Paul Greengrass’s camera rarely stops moving, Aylott’s and Coleby’s hardly starts.

More substantial, at about an hour, A Message from Mars has also drawn on Wells, most obviously his 1897 Martian drama The War of the Worlds, although oddly combining that fantastical line with the social comedy of his earthbound books such as Kipps. Apart from a prologue and a coda set on Mars, where aliens dressed like Arthurian knights scrutinise events on earth through a goldfish bowl, the film takes place almost entirely in Edwardian London, where a Martian, having somehow broken the etiquette of the red planet, has been despatched to redeem himself by persuading Horace, an obnoxious, selfish boor, to be nicer to people.

With this element – an extraterrestrial on a mission of redemption – the film combines the tenets of sci-fi and Christianity in an early example of a genre that would later include Erich von Däniken’s Chariots of the Gods?; Chris de Burgh’s song “A Spaceman Came Travelling”; Steven Spielberg’s ET; L Ron Hubbard’s Church of Scientology; and, according to recent reports, some modern school nativity plays in which aliens and angels are apparently largely interchangeable.

Though few scientists now believe that, if life exists on Mars, it will wear chain mail, capes and veils and be prone to camp hand gestures, A Message from Mars proves – as does Pirates of 1920 – that crystal-ball fiction can still be worth watching once it’s a dot in the rear-view mirror. Both films will be shown at the BFI, as part of their Days of Fear and Wonder sci-fi season, on 7 December with a live piano accompaniment, and A Message from Mars will be available to stream from 12 December on the BFI Player and BBC Arts Online.

Curators’ eggs

In most sports, the 30th birthday is a sign that the best years are over. Some have suggested that the same measure might apply to the Turner Prize. Many of the earlier winners – Grayson Perry, Damien Hirst, Gilbert and George – and even one runner-up, Tracey Emin, have a name or an artwork known even to those with little interest in art. But recent recipients – Susan Philipsz, Martin Boyce – are more of what you might call curators’ eggs, their impact contained within gallery walls.

This is again the case with the 2014 winner, Duncan Campbell. The Turner’s high profile was created by media debate; it helped to have an image (Hirst’s shark, Gormley’s Angel of the North) that was easily reducible to headlines. Campbell’s winning entry is a 54-minute film reworking a 1950s French documentary, with sequences co-created with the choreographer Michael Clark. Few visitors to the Tate Britain exhibition (until 4 January 2015) can be expected to watch it in full.

Like Hollywood, the Turner has favoured showbiz-savvy creators with a grabby pitch but struggles to get recognition for art-house films. Channel 4’s live coverage suffered from sound problems but, even if it gets the microphones right next year, the Turner is having trouble being heard. There’s no obligation on artists to become popular but, having gone from a period in which its winners did to one in which they don’t, the trophy named after Mike Leigh’s latest protagonist is in a difficult transition.

Mark Lawson is a journalist and broadcaster, best known for presenting Front Row on Radio 4 for 16 years. He writes a weekly column in the critics section of the New Statesman.

This article first appeared in the 04 December 2014 issue of the New Statesman, Deep trouble


The internet makes writing as innovative as speech

When a medium acquires new functions, new forms have to be created to serve them.

Many articles on how the internet has changed language are like linguistic versions of the old Innovations catalogue, showcasing the latest strange and exciting products of our brave new digital culture: new words (“rickroll”); new uses of existing words (“trend” as a verb); abbreviations (smh, or “shaking my head”); and graphic devices (such as the much-hyped “new language” of emojis). Yet these formal innovations are merely surface (and in most cases ephemeral) manifestations of a deeper change – a change in our relationship with the written word.

I first started to think about this at some point during the Noughties, after I noticed the odd behaviour of a friend’s teenage daughter. She was watching TV, alone and in silence, while her thumbs moved rapidly over the keys of her mobile phone. My friend explained that she was chatting with a classmate: they weren’t in the same physical space, but they were watching the same programme, and discussing it in a continuous exchange of text messages. What I found strange wasn’t the activity itself. As a teenage girl in the 1970s, I, too, was capable of chatting on the phone for hours to someone I’d spent all day with at school. The strange part was the medium: not spoken language, but written text.

In 1997, research conducted for British Telecom found that face-to-face speech accounted for 86 per cent of the average Briton’s communications, and telephone speech for 12 per cent. Outside education and the (white-collar or professional) workplace, most adults did little writing. Two decades later, it’s probably still true that most of us talk more than we write. But there’s no doubt we are making more use of writing, because so many of us now use it in our social interactions. We text, we tweet, we message, we Facebook; we have intense conversations and meaningful relationships with people we’ve never spoken to.

Writing was not designed to serve this purpose. Its original function was to store information in a form that did not depend on memory for its transmission and preservation. It went on to acquire other functions, social ones among them; but even in the days when “snail mail” was less snail-like (in large cities in the early 1900s there were five postal deliveries a day), “conversations” conducted by letter or postcard fell far short of the rapid back-and-forth that today’s technology makes possible.

When a medium acquires new functions, new forms have to be created to serve them. Many online innovations are motivated by the need to make written language do a better job of two things in particular: communicating tone, and expressing individual or group identity. The rich resources speech offers for these purposes (such as accent, intonation, voice quality and, in face-to-face contexts, body language) are not reproducible in text-based communication. But users of digital media have found ways to exploit the resources that are specific to text, such as spelling, punctuation, font and spacing.

The creative use of textual resources started early on, with conventions such as capital letters to indicate shouting and the addition of smiley-face emoticons (the ancestors of emojis) to signal humorous or sarcastic intent, but over time it has become more nuanced and differentiated. To those in the know, a certain respelling (as in “smol” for “small”) or the omission of standard punctuation (such as the full stop at the end of a message) can say as much about the writer’s place in the virtual world as her accent would say about her location in the real one.

These newer conventions have gained traction in part because of the way the internet has developed. As older readers may recall, the internet was once conceptualised as an “information superhighway”, a vast and instantly accessible repository of useful stuff. But the highway was a one-way street: its users were imagined as consumers rather than producers. Web 2.0 changed that. Writers no longer needed permission to publish: they could start a blog, or write fan fiction, without having to get past the established gatekeepers, editors and publishers. And this also freed them to deviate from the linguistic norms that were strictly enforced in print – to experiment or play with grammar, spelling and punctuation.

Inevitably, this has prompted complaints that new digital media have caused literacy standards to plummet. That is wide of the mark: it’s not that standards have fallen, it’s that we rarely used to see writing in the public domain that hadn’t been edited to meet them. In the past, almost all linguistic innovation (the main exception being formal or technical vocabulary) originated in speech and appeared in print much later. But now we are seeing traffic in the opposite direction.

Might all this be a passing phase? It has been suggested that as the technology improves, many text-based forms of online communication will revert to their more “natural” medium: speech. In some cases this seems plausible (in a few it’s already happening). But there are reasons to think that speech will not supplant text in all the new domains that writing has conquered.

Consider my friend’s daughter and her classmate, who chose to text when they could have used their phones to talk. This choice reflected their desire for privacy: your mother can’t listen to a text-based conversation. Or consider the use of texting to perform what politeness theorists call “face-threatening acts”, such as sacking an employee or ending an intimate relationship. This used to be seen as insensitive, but my university students now tell me they prefer it – again, because a text is read in private. Your reaction to being dumped will not be witnessed by the dumper: it allows you to retain your dignity, and gives you time to craft your reply.

Students also tell me that they rarely speak on the phone to anyone other than their parents without prearranging it. They see unsolicited voice calls as an imposition; text-based communication is preferable (even if it’s less efficient) because it doesn’t demand the recipient’s immediate and undivided attention. Their guiding principle seems to be: “I communicate with whom I want, when I want, and I respect others’ right to do the same.”

I’ll confess to finding this new etiquette off-putting: it seems ungenerous, unspontaneous and self-centred. But I can also see how it might help people cope with the overwhelming and intrusive demands of a world where you’re “always on”. (In her book Always On: Language in an Online and Mobile World, Naomi Baron calls it “volume control”, a way of turning down the incessant noise.) As with the other new practices I’ve mentioned, it’s a strategic adaptation, exploiting the inbuilt capabilities of technology, but in ways that owe more to our own desires and needs than to the conscious intentions of its designers. Or, to put it another way (and forgive me if I adapt a National Rifle Association slogan): technologies don’t change language, people do.

Deborah Cameron is Professor of Language and Communication at the University of Oxford and a fellow of Worcester College.

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times