Google Glass - now available as shades. Photo: Ajit Niranjan / The New Statesman

Google Glass launches in the UK, but don't expect to be wearing them anytime soon

Google has just launched its prototype smartglasses in the UK, two years after they hit the US.

“Ok, glass.”

Two simple words, and a passable imitation of Benedict Cumberbatch’s public school tones – think Sherlock, not Smaug – start a frenzy of activity in the top right-hand corner of my eye. A list of voice commands appears on a screen that feels as if it's projected eight feet away; I scroll through it with the slightest tilt of my head.

I triple-tap my temple and suddenly I can see the solar system from within the showroom by Central Saint Martins on an overcast Monday evening. Constellations and planets are annotated in space, but the text is unnecessary. I turn slowly on the spot until I locate the sun hovering over St. Pancras, and a soft voice reads out a Wikipedia-style entry on the star.

This is Google Glass, the latest in high-tech gadgetry. Star Chart, just one of the apps in the prototype I’m playing about with during Glass's UK launch last night, is like a virtual planetarium that operates on a point-and-look model – no swiping or clicking needed. GPS and gyroscopes make it perfectly suited to Google’s hands-free headset.

The technology giant is selling the prototype of Google Glass for £1,000, but don’t write it off because of the price-tag: the final version will undoubtedly be much cheaper, and the current model is being released now to gather public feedback on the project. Just as it has done in the US, Google is looking for British “Explorers” to test the product out and report their experiences of it. Speaking to The Guardian, 'Head of Glass' Ivy Ross – the intellectual counterpart to Blondie – said:

“What you’re seeing now is that the people in businesses that acquired them are coming up with all these amazing use cases for it, but the same thing is happening with consumers – artists, mums, dads, school teachers, scientists – they’re doing amazing things with it too.”

Google's London video gives a little taster of how the company expects it to take off.

Set aside the technological jargon – one of the team describes it as an “optical head-mounted display optimised for augmented reality” – and it's hard to deny that Glass is actually quite nifty, and user-friendly too: within ten minutes I've got the hang of interacting with the headset through a combination of voice commands, swipes and head nods. The employee demonstrating Glass to me – whose Polish accent is just a touch too strong for the voice recognition software – even showcases the surreptitious "wink-for-a-photo" command.

Fun as the applications are, there's a strong mood in the room that Google is onto something bigger than a snazzy gadget. Global director of marketing Ed Sanders believes Glass might help us interact more with the real world by taking us away from smartphones and tablets:

“People are looking down; people are getting buried in technology. We have a deep, sort of philosophical desire to help people look back up. And one of the big things behind Glass is how you put people back in the moment.”

Supposedly, its functions can be called up without taking the user away from the action. The demonstrator puts this in perspective: imagine you’re on holiday. Want to find directions to a fancy restaurant? Translate the indecipherable Italian menu? Shazam the Pavarotti in the background? Google thinks Glass will let it embed technology in day-to-day life without detracting from the experience.

Sanders – who managed to use Glass to record the first time his son said ‘Dada’ – thinks the company really might be onto something. The smartglasses were developed by Google X, a “Charlie and the Chocolate Factory” division of Google responsible for projects like the driverless car. The guiding mantra at the semi-secret research facility is to make technology ten times better, not just ten per cent – hence the X in the name.

But Glass isn't without its shortcomings. The product has been plagued by bugs, and it looks to be a long, long while before a polished, glitch-free version is on the market. Unfortunately, the criticisms don't stop there. In the short time I used it, the demonstrator accidentally 'took control' of my glasses by saying commands a bit too loudly. In America it’s come under so much criticism for intruding on privacy that bars and restaurants in tech-hub San Francisco have banned it, and civil liberties groups have voiced concerns that the technology will enable stealthy spying.

Of course, there's the fashion angle as well. Despite Google's partnerships with Ray-Ban and other high-end fashion brands, many users remain reluctant to publicise their purchase. The company can make the design as streamlined and versatile as it likes, but something about the mini-computer sitting on the bridge of your nose just screams "dweeb".

So don't expect Glass to become a part of everyday life anytime soon. The technology might be getting there, but there's a whole marketing minefield that Google will have to navigate first. After all, who really wants to be a "Glasshole"?


The internet makes writing as innovative as speech

When a medium acquires new functions, it has to be adapted through the creation of new forms.

Many articles on how the internet has changed language are like linguistic versions of the old Innovations catalogue, showcasing the latest strange and exciting products of our brave new digital culture: new words (“rickroll”); new uses of existing words (“trend” as a verb); abbreviations (smh, or “shaking my head”); and graphic devices (such as the much-hyped “new language” of emojis). Yet these formal innovations are merely surface (and in most cases ephemeral) manifestations of a deeper change: a change in our relationship with the written word.

I first started to think about this at some point during the Noughties, after I noticed the odd behaviour of a friend’s teenage daughter. She was watching TV, alone and in silence, while her thumbs moved rapidly over the keys of her mobile phone. My friend explained that she was chatting with a classmate: they weren’t in the same physical space, but they were watching the same programme, and discussing it in a continuous exchange of text messages. What I found strange wasn’t the activity itself. As a teenage girl in the 1970s, I, too, was capable of chatting on the phone for hours to someone I’d spent all day with at school. The strange part was the medium: not spoken language, but written text.

In 1997, research conducted for British Telecom found that face-to-face speech accounted for 86 per cent of the average Briton’s communications, and telephone speech for 12 per cent. Outside education and the (white-collar or professional) workplace, most adults did little writing. Two decades later, it’s probably still true that most of us talk more than we write. But there’s no doubt we are making more use of writing, because so many of us now use it in our social interactions. We text, we tweet, we message, we Facebook; we have intense conversations and meaningful relationships with people we’ve never spoken to.

Writing was not designed to serve this purpose. Its original function was to store information in a form that did not depend on memory for its transmission and preservation. It went on to acquire other functions, social ones among them; but even in the days when “snail mail” was less snail-like (in large cities in the early 1900s there were five postal deliveries a day), “conversations” conducted by letter or postcard fell far short of the rapid back-and-forth that today’s technology makes possible.

When a medium acquires new functions, it has to be adapted through the creation of new forms. Many online innovations are motivated by the need to make written language do a better job of two things in particular: communicating tone, and expressing individual or group identity. The rich resources speech offers for these purposes (such as accent, intonation, voice quality and, in face-to-face contexts, body language) are not reproducible in text-based communication. But users of digital media have found ways to exploit the resources that are specific to text, such as spelling, punctuation, font and spacing.

The creative use of textual resources started early on, with conventions such as capital letters to indicate shouting and the addition of smiley-face emoticons (the ancestors of emojis) to signal humorous or sarcastic intent, but over time it has become more nuanced and differentiated. To those in the know, a certain respelling (as in “smol” for “small”) or the omission of standard punctuation (such as the full stop at the end of a message) can say as much about the writer’s place in the virtual world as her accent would say about her location in the real one.

These newer conventions have gained traction in part because of the way the internet has developed. As older readers may recall, the internet was once conceptualised as an “information superhighway”, a vast and instantly accessible repository of useful stuff. But the highway was a one-way street: its users were imagined as consumers rather than producers. Web 2.0 changed that. Writers no longer needed permission to publish: they could start a blog, or write fan fiction, without having to get past the established gatekeepers, editors and publishers. And this also freed them to deviate from the linguistic norms that were strictly enforced in print – to experiment or play with grammar, spelling and punctuation.

Inevitably, this has prompted complaints that new digital media have caused literacy standards to plummet. That is wide of the mark: it’s not that standards have fallen, it’s more that in the past we rarely saw writing in the public domain that hadn’t been edited to meet certain standards. In the past, almost all linguistic innovation (the main exception being formal or technical vocabulary) originated in speech and appeared in print much later. But now we are seeing traffic in the opposite direction.

Might all this be a passing phase? It has been suggested that as the technology improves, many text-based forms of online communication will revert to their more “natural” medium: speech. In some cases this seems plausible (in a few it’s already happening). But there are reasons to think that speech will not supplant text in all the new domains that writing has conquered.

Consider my friend’s daughter and her classmate, who chose to text when they could have used their phones to talk. This choice reflected their desire for privacy: your mother can’t listen to a text-based conversation. Or consider the use of texting to perform what politeness theorists call “face-threatening acts”, such as sacking an employee or ending an intimate relationship. This used to be seen as insensitive, but my university students now tell me they prefer it – again, because a text is read in private. Your reaction to being dumped will not be witnessed by the dumper: it allows you to retain your dignity, and gives you time to craft your reply.

Students also tell me that they rarely speak on the phone to anyone other than their parents without prearranging it. They see unsolicited voice calls as an imposition; text-based communication is preferable (even if it’s less efficient) because it doesn’t demand the recipient’s immediate and undivided attention. Their guiding principle seems to be: “I communicate with whom I want, when I want, and I respect others’ right to do the same.”

I’ll confess to finding this new etiquette off-putting: it seems ungenerous, unspontaneous and self-centred. But I can also see how it might help people cope with the overwhelming and intrusive demands of a world where you’re “always on”. (In her book Always On: Language in an Online and Mobile World, Naomi Baron calls it “volume control”, a way of turning down the incessant noise.) As with the other new practices I’ve mentioned, it’s a strategic adaptation, exploiting the inbuilt capabilities of technology, but in ways that owe more to our own desires and needs than to the conscious intentions of its designers. Or, to put it another way (and forgive me if I adapt a National Rifle Association slogan): technologies don’t change language, people do.

Deborah Cameron is Professor of Language and Communication at the University of Oxford and a fellow of Worcester College

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times