Screen test

Video games dominate Britain’s entertainment industry, yet we lack the critical vocabulary to understand them.

Cultural realities tend to lag behind economic ones. How else to explain that the UK’s biggest (worth £4.5bn-plus in annual sales) and fastest-growing (at close to 20 per cent annually) entertainment medium still barely registers on the nation’s more rarefied intellectual radar? I am talking, of course, about video games – as the field of interactive entertainment still rather quaintly tends to be known. And the reason for its neglect is not so much snobbery as a gaping absence in our critical vocabulary and sensibilities.

When, today, we ask a question such as “Is it art?” we are no longer looking for a yes or no answer. The 20th century decided that urinals, cans of soup, recorded silence, heaps of bricks and fake human excrement could all be art, of a certain kind. Under these circumstances, it would be more than a little perverse to deny the idea of art to objects as lovingly crafted, as considered and as creative as video games. The question that’s really at stake is something more specific. If video games are art, what kind of art are they? What are their particular attributes and potential? And, perhaps most importantly, just how good are they?

I recently posed similar questions to someone who is very definitely both an artist and a gamer: the writer Naomi Alderman. Alderman’s first novel, Disobedience, appeared in 2006 and won her the Orange Award for New Writers. In parallel to her work as a literary writer, however, she also spent three years pursuing a very different kind of career: that of lead writer on the experimental “alternate reality” game Perplex City. To many authors, such a venture might have felt like a period of time away from “real” writing. Yet, Alderman explained, for her it was more a discovery that these two modes of writing were not only compatible, but symbiotic. I asked her whether she had preferred working on her novel or on the game. “I couldn’t choose,” she said. “I feel that if I were to give up either the novel or the game, I wouldn’t be able to do the other.”

It’s a creative interconnection Alderman traces back to her childhood. “My first memory of playing a game was around 1981, when my mum took me to the Puffin Club exhibition, a kind of roadshow for kids who read books published by Puffin. I remember they had a bank of computers at this one where you could queue up to get ten minutes playing a text-based adventure game. And I thought, ‘This is absolutely brilliant.’ I was fascinated.” These games were some of the first things it was possible to play on a computer in which plot and character meant more than a handful of pixels dashing across the screen. For Alderman, as for many others, the experience was closely associated “with stories and with the idea of being able to walk into a story”. And the dizzying kind of thought experiment that the best fiction can undertake – its gleeful defiance of the rules of time and nature – lies close to the heart of what video games do best.

As a modern example, Alderman describes a game called Katamari. In it, for want of a better description, you roll stuff up. You control, she tells me, “a little ball, which is effectively sticky, and you’re rolling it around a landscape picking stuff up. As you do so, your ball gets bigger and bigger. It’s almost impossible to explain how much fun this is, the pleasure of growing your little ball, which starts off just big enough to pick up pins and sweets from a tabletop and ends up picking envelopes, then televisions, then tables, then houses, then streets; until in the end you can roll it across the whole world picking up clouds and continents.”

Katamari may sound like an oddity, but its pleasures are typical of a central kind of video-game experience, in that they are in part architectural: something one inhabits and encounters incrementally; a space designed to be occupied and experienced rather than viewed simply as a whole. Players in a well-made game will relish not just its appearance but also the feel of exploring and gradually mastering its unreal space. Yet, in what sense is any of this art, or even artistic? Just as every word within a novel has to be written, of course, every single element of any video game has to be crafted from scratch. To talk about the “art” element of games is, I would argue, to talk about the point at which this fantastically intricate undertaking achieves a particular concentration, complexity and resonance.

It’s worth remembering, too, just how young a medium video games are. Commercial games have existed for barely 30 years; the analogy with film, now more than a century old, is an illuminating one. In December 1895, the Lumière brothers, Auguste and Louis, showed the first films of real-life images to a paying audience, in Paris. This, clearly, was a medium, but not yet an art form; and for its first decade, film remained largely a novelty, a technology that astounded viewers with images such as trains rushing into a station, sending early audiences running out of cinemas in terror. It took several decades for film to master its own unique artistic language: cinematography. It took time, too, for audiences to expect more from it than raw wonder or exhilaration. Yet today you would be hard-pushed to find a single person who does not admire at least one film as a work of art.

If, however, you ask about video games, the chances are that you’ll find plenty of people who don’t play them at all, let alone consider them of any artistic interest. This is hardly surprising: at first glance it can seem that many games remain, in artistic terms, at the level of cinema’s train entering a station – occasions for technological shock and awe, rather than for the more densely refined emotions of art.

Yet the nature of games as a creative medium has changed profoundly in recent years – as I discovered when I spoke to Justin Villiers, an award-winning screenwriter and film-maker who since late 2007 has been plying his trade in the realm of video games. Even a few years ago, he explained, his career move would have been artistically unthinkable. “In the old days, the games industry fed on itself. You’d have designers who were brought up on video games writing games themselves, so they were entirely self-referential; all the characters sounded like refugees from weak Star Trek episodes or Lord of the Rings out-takes. But now there is new blood in the industry – people with backgrounds in cinema and theatre and comic books and television. In the area in which I work, writing and direction, games are just starting to offer genuine catharsis, or to bring about epiphanies; they’re becoming more than simple tools to sublimate our desires or our fight for survival.”

I suggest the film analogy, and wonder what stage of cinema games now correspond to. “It reminds me of the late 1960s and early 1970s, because there were no rules, or, as soon as there were some, someone would come along and break them. Kubrick needed a lens for 2001: a Space Odyssey that didn’t exist, so, together with the director of photography, he invented one.” How does this translate to the world of games? “It’s like that in the industry right now. Around a table you have the creative director, lead animator, game designer, sound designer and me, and we’re all trying to work out how to create a moment in a game or a sequence that has never been done before, ever.”

Villiers is, he admits, an unlikely evangelist: someone who was initially deeply sceptical of games’ claims as art. But it would be wrong, he concedes, simply to assume that the current explosion of talent within the gaming industry will allow it to overtake film or television as a storytelling medium. Today’s best games may be as good as some films in their scripts, performances, art direction and suchlike. But most are still much worse; and in any case, the most cinematic games are already splitting off into a hybrid subgenre that lies outside the mainstream of gaming. If we are to understand the future of games, as both a medium and an art form, we must look to what is unique about them. And that is their interactivity.

To explore this further, I spoke to a game designer who is responsible for some of the most visionary titles to appear in recent years – Jenova Chen. Chen is co-founder of the California-based games studio thatgamecompany, a young firm whose mission, as he explains it, is breathtakingly simple: to produce games that are “beneficial and relevant to adult life; that can touch you as books, films and music can”.

Chen’s latest game, Flower, is the partial fulfilment of these ambitions, a work whose genesis in many ways seems closer to that of a poem or painting than an interactive entertainment. “I grew up in Shanghai,” he explains. “A huge city, one of the world’s biggest and most polluted. Then I came to America and one day I was driving from Los Angeles to San Francisco and I saw endless fields of green grass, and rows and rows of windmill farms. And I was shocked, because up until then I had never seen a scene like this. So I started to think: wouldn’t it be nice for people living in a city to turn a games console into a portal, leading into these endless green fields?”

From this grew a game that is both incredibly simple and utterly compelling. You control a petal from a single flower, and must move it around a shimmering landscape of fields and a gradually approaching city by directing a wind to blow it along, gathering other petals from other flowers as you go. Touch a button on the control pad to make the wind blow harder; let go to soften it; gently shift the controller in the air to change direction. You can, as I did on my first play, simply trace eddies in the air, or gust between tens of thousands of blades of grass. Or you can press further into the world of the game and begin to learn how the landscape of both city and fields is altered by your touch, springing into light and life as you pass.

“We want the player to feel like they are healing,” Chen tells me, “that they are creating life and energy and spreading light and love.” If this sounds hopelessly naive, it is important to remember that the sophistication of a game experience depends not so much on its conceptual complexity as on the intricacy of its execution. In Flower, immense effort has gone into making something that appears simple and beautiful, but that is minutely reactive and adaptable. Here, the sensation of “flow” – of immersion in the task of illumination and exploration – connects to some of those fundamental emotions that are the basis of all enduring art: its ability to enthral and transport its audience, to stir in them a heightened sense of time and place.

Still, an important question remains. What can’t games do? On the one hand, work such as Chen’s points to a huge potential audience for whole new genres of game. On the other hand, there are certain limitations inherent in the very fabric of an interactive medium, perhaps the most important of which is also the most basic: its lack of inevitability. As the tech-savvy critic and author Steven Poole has argued, “great stories depend for their effect on irreversibility – and this is because life, too, is irreversible. The pity and terror that Aristotle says we feel as spectators to a tragedy are clearly dependent on our apprehension of circumstances that cannot be undone.” Games have only a limited, and often incidental, ability to convey such feelings.

Thus, the greatest pleasure of games is immersion: you move, explore and learn, sometimes in the company of thousands of other players. There is nothing inherently mindless about such an interaction; but nor should there be any question of games replacing books or films. Instead – just as the printed word, recorded music and moving images have already done – this interactive art will continue to develop along with its audience. It will, I believe, become one of the central ways in which we seek to understand (and distract, and delight) ourselves in the 21st century. And, for the coming generations – for whom the world before video games will seem as remote a past as one without cinema does to us – the best gift we can bequeath is a muscular and discerning critical engagement.

Tom Chatfield is the arts and books editor of Prospect magazine. His book on the culture of video games, “Gameland”, is forthcoming from Virgin Books (£18.99)

VIDEO GAMES: THE CANON

Pong (1972). The first commercially successful video game. Bounce a square white blob between two white bats. A software revolution.

Pac-Man (1980). A little yellow ball, in a maze, eating dots, being chased by ghosts. The beauty of interactive complexity arising from something simple and slightly crazy – and still fiendishly fun today.

Tetris (1989). This utterly abstract puzzle of falling blocks and vanishing lines was launched on the Nintendo Game Boy and single-handedly guaranteed the hand-held console’s triumph as a global phenomenon. Perhaps the purest logical play experience ever created.

Civilization (1991). View the world from the top down and guide a civilisation from hunter-gathering to landing on the moon. Hours, days and months of utterly absorbing micromanagement.

Doom (1993). Run around a scary maze, chased by demons, wielding a selection of big guns. Then chase your friends. Doom popularised the first-person shooter and defined a genre. For the first time, a computer had made grown men tremble.

Ultima Online (1997). Enter a living, breathing online world with thousands of other players; become a tradesman, buy your own house, chat, make and betray new friends. One of the first massively multiplayer online role-playing games, it is still, for many, the purest and greatest of them all.

The Sims (2000). Simulated daily activities for virtual people; help them and watch them live. For those who think games are all violent and mindless, note that this began the best-selling PC game series in history – more than 100 million copies sold, and counting.

Bejeweled (2001). A simple, pretty puzzle game that changed the games industry simply because it could be downloaded in minutes by any computer attached to the internet. Digital distribution is the future, and this title first proved it.

Guitar Hero (2005). Live out your dreams of rock deification with friends gathered round to watch you pummel a plastic guitar. A revolution in cross-media: cool, sociable fun, and a licence to print money for its creators.

Wii Sports (2006). Wave your arms around while holding a white controller. Now anyone could play tennis and go bowling with family and friends in the living room. Nintendo delivered another revolution in gaming with this debut title for its Wii console.


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart-rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep-breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

Now comes a brand-new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology offers to help with precisely that, supplying the kind of externally applied “motivation” the Apple Watch promises. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***


The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

By associating certain actions with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools – computer software that disables your internet connection, word processors that imitate an old-fashioned DOS screen with nothing but green text on a black background, the brain-measuring Muse headband – are just the latest versions of what seems to be an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are, similarly, treating ourselves as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But it is also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled Whatever Happened to Psychology as the Science of Behaviour?, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.
