Nobody Remembers Their First Kill: the importance of video game violence

Violence isn't unique to cinema or games – video games are just the latest recruit to the aftermath-blame tradition.

Nobody remembers their first kill. It’s not like the high-security prison yards, where the killers pace, dream-haunted, just to forget. When it comes to video games, nobody remembers their first kill. If you can recall your first video game, well, then you’ve a chance of pinpointing the setting (over a blackened Space Invaders killing field? Atop a Sonic the Hedgehog green hill? Deep within a Pac-Man labyrinth?). But a name, a date and a face? Not likely.

It’s not just the troubling number of digital skeletons in the players’ closet that prevents recollection – although from Super Mario to Call of Duty, the trail of dead we game-killers leave behind is of genocidal proportions. It’s that these slayings are inconsequential. Remember the first pawn or knight you "took" in chess – the moment you callously toppled its body from the board? Hardly. Even if the piece had a name and backstory – a wife and children waiting on news back home, a star-crossed romance with a rival pawn – such details would have been forgotten the moment you packed away the board.

Most game murder (and its moments-older twin, game violence) leaves no imprint on the memory because it lacks meaning outside of the game context. Unlike depictions of death in cinema, which can trigger keen memories of the viewer’s own past pains and sorrows, game violence is principally systemic in nature; its purpose is to move the player either towards a state of victory or of defeat, rarely to tears or reflection. Likewise, there is no remorse for the game murder not only because the crime is fictional but also because, unless you’re playing for money or a hand in marriage, there is no consequence beyond the border of the game’s own fleeting reality.

Video games were deadly from the get-go. Spacewar! – the proto-game of the MIT labs, played on $120,000 mainframe computers in the early 1960s – set the tone: a combative space game in which two players attempted to be the first to gun the other down. From this moment onwards, violence was the medium’s defining quiddity. This is no great surprise. Most sports are metaphors for combat. The team games – soccer, rugby and so on – are sprawling battles in which attackers and defenders ebb and flow up and down the field in a clash of will and power, led by their military-titled "captains". American football is a series of frantic First World War-style scrambles for territory, measured in 10-yard increments. Tennis is a pistol duel, squinting shots lined up in the glare of a high-noon sun; running races are breakneck chases between predator and prey, triggered by the firing of a gun. That video games would extend the combat metaphor that defines most human play was natural.

The arcades concentrated the metaphor into sixty-second clashes between player and computer, dealing as they invariably did in the violence of sudden failure. This was a financial decision more than an artistic one: their designers needed to kill off the player after a minute or so in order to squeeze another quarter out of them. Violence was part of the business model: in the battle between human and machine, the machine must always overwhelm the player. In such games, as the author David Mitchell wrote, we play to postpone the inevitable – that moment when our own capacity for meting out playful death is overcome by our opponent’s. This is the DNA of all games, handed down from the playground to the board and, finally, to the screen.

The problem of game violence, then – the problem that’s inspired a liberal president to call for Congress to fund another clutch of studies into its potential effects on the player – cannot derive from its existence or even its ubiquity. Violence is a necessary function of the video game. The problem must be to do with the aesthetic of the violence – the way in which it is rendered on the screen. It is a question of form, not function – a style concern, one that moves the conversation into the realm of all screen violence.

The date at which cinematic violence became truly violent can be set with some accuracy at 1966, the year that the Hays Production Code (which moderated on-screen "brutality and possible gruesomeness") was abandoned and film edged closer to becoming a director’s medium. Arthur Penn’s Bonnie and Clyde (1967) and Sam Peckinpah’s The Wild Bunch (1969) took the cartoonish invulnerability of old movie violence (the "ox-stunning fisticuffs", as Vladimir Nabokov put it) and splattered the screen with blood and gore instead. Soon movie directors were ordering blood pouches by the thousand, crimson-washing every fight scene, exploring the boundaries of this newfound visual freedom.

Depictions of video game violence chart a similar trajectory from the staid to the outlandish, but it's a journey whose pace was set by technology, not censorship. Early game designers couldn’t spare the graphical processing power needed to render a tubular spout of blood or a glistening wound. They made do with guttural screams to bring the collapsing pixels to more vivid life.

Devoid of censorship and drawn to the marketing potency of being dubbed a "nasty", some developers courted controversy with violent subject matter (notably 1982’s Custer’s Revenge, an Atari 2600 game in which players assume the role of a scrawny settler dodging arrows in a bid to rape a bound Native American woman). But even the most vulgar scene is robbed of its power when rendered in tubby pixels, like a lewd scrawl in a tittering teenage boy’s exercise book.

When the technology caught up and games could begin to present violence and murder in a truer-to-life form, the uncanny-valley effect continued to render the depictions ineffective. 1997’s Carmageddon, a game in which players attempt to mow down policemen and the elderly in a car, was the first game banned from sale in the UK – but this was due to a backfired marketing stunt (the developer unnecessarily sent the game to the censors hoping for an 18 rating to increase its notoriety, and found its sale prohibited instead) rather than sober deliberation or genuine public outcry.

Real violence, the non-violent among us suppose, is unlike Hollywood’s screen violence (pre- or post-1966), being less dramatic, less graceful and quicker in character. Few video games, even today with their obsession with a sort of "realism", attempt anything approaching a realistic depiction of violence. It’s all comic-book, high-contrast spectacle, designed for maximum feedback, maximum excitement: a multiverse of Michael Bay overstatement. It’s all stylised in the extreme.

That’s not to say that video games don’t have the capacity to depict violence in its grim, real-world horror. Indeed, they are the optimum medium for it, with their unreal actors and easily fabricated tools and effects of violence. But few game-makers currently appear interested in exploring this space. In part this is because the independent game movement – the equivalent of the independent cinema that drove Hollywood’s interest in truer violence post-1966 – is more interested in non-violent games. When violence is the staple of the mainstream, the subversive creative space lies in making games devoid of the stuff. One of 2012’s most highly regarded indie titles, Fez, was designed specifically without a single on-screen death. Not even Mario – gaming’s Mickey Mouse – with his Goomba-defeating head stomps can claim as much. In a medium soaked in inconsequential violence, the counter-culture exists away from the metaphorical battlegrounds with their headshots and KOs.

The concern about game violence recently became America’s concern-du-jour, a suspect addendum to the post-Sandy Hook gun-control debate. In December 2012 Wayne LaPierre, executive vice-president of the National Rifle Association, accused the games industry of being “a callous, corrupt and corrupting shadow industry that sells and sows violence against its own people”. Then, in January 2013, representatives from Electronic Arts and Activision – the publishers behind the Medal of Honor and Call of Duty series respectively – were called into a conference with vice-president Joe Biden to discuss the relationship between games and real-life violence. Subsequently, President Obama called for more studies to investigate what links tie game violence to real violence, while the US senator Lamar Alexander provided the extremist perspective in claiming on television that “video games is a bigger problem than guns”.

Overstated depictions of violence are not unique to video games and cinema. Shakespeare’s theatres were awash with blood, and directors routinely used goat entrails to add verisimilitude to a gory scene. If the realistic (or exaggerated) depiction of violence in art leads to real-world mimicry, then it’s been happening for centuries. As the British comedian Peter Cook drolly put it, when referring to the supposed copycat effect of screen violence: "Michael Moriarty was very good as that Nazi on the television. As soon as I switched off the third episode, I got on the number eighteen bus and got up to Golders Green and... I must've slaughtered about eighteen thousand before I realised, you know, what I was doing. And I thought: it's the fucking television that's driven me to this."

Video games are the latest recruit to the aftermath-blame tradition. And, like all new media, they provide the right-looking sort of scapegoat, enjoyed as they are by a generally younger demographic (at least, in the cultural perception), from whose ranks America’s highest-profile public killers appear to step.

There is perhaps only one factor that separates games from other screen media: the interactivity. It’s here that the generational mistrust of the medium is allowed to blossom into full-throated critique. The games are killing simulators, they say. They allow the unstable to act out their murder fantasies – something the cinematic nasty could never do. This argument ignores the truth that violence in all games is primarily functional, always within the context of a broader aim: the conflict between the player and the designer. The interactivity may place the player in the role of a killer, but only in the same way that the chess-player is cast as the ruthless general.

And yet there is truth in the statement too. A disturbed mind could ignore the vital function of violence in a game and instead focus fully upon its form. The crucial ingredient is not the game itself but the disturbed mind, with its dreams of sadism, fantasies of mortal power and obsession with trauma, not to mention its brokenness and depravity. Even within this context, and with an inability to discern what is earnest and what is play, a lifetime of violent games is unlikely to affect anything but the style of a subsequent atrocity.

In the aftershock of an act of madness some seek prayer, others revenge – but most seek sense in the senseless moment. In the hours following the Sandy Hook massacre a news outlet erroneously reported that the shooter was Ryan Lanza, the brother of the gunman Adam Lanza. Poring over his Facebook profile, many noticed that Ryan had ‘liked’ the video game Mass Effect, a space RPG trilogy created by BioWare, the studio founded by Dr Ray Muzyka and Dr Greg Zeschuk. Emboldened by an expert on Fox News drawing an immediate link between the killing and video games, an angry mob descended on the developer’s Facebook page, declaring them "child killers".

Despite the absurdity of the logic, a chain reaction was set in motion, one that has rippled all the way up to the White House. Video games are the youngest creative medium. What literature learned in four millennia, cinema was forced to learn in a century and video games must now master in three decades. The question of game violence and its potential effects may seem abstract and esoteric, demanding scientific study to make clear what is opaque. But game violence has logic and precedent, and it is always an act of play, not of sincerity. The worry, then, is with those who cannot tell the difference, from the disturbed high-school student to the US senator.

Simon Parkin is a journalist and author who has written for The Guardian, Edge, Eurogamer - and now the New Statesman. He tweets @simonparkin

Do you remember the first chess piece you "took"? Violence doesn't just occur in digital games. Photograph: Potamos Photography on Flickr via Creative Commons

How nature created consciousness – and our brains became minds

In From Bacteria to Bach and Back, Daniel C Dennett investigates the evolution of consciousness.

In the preface to his new book, the philosopher Daniel Dennett announces proudly that what we are about to read is “the sketch, the backbone, of the best scientific theory to date of how our minds came into existence”. By the end, the reader may consider it more scribble than spine – at least as far as an account of the origins of human consciousness goes. But this is still a superb book about evolution, engineering, information and design. It ranges from neuroscience to nesting birds, from computing theory to jazz, and there is something fascinating on every page.

The term “design” has a bad reputation in biology because it has been co-opted by creationists disguised as theorists of “intelligent design”. Nature is the blind watchmaker (in Richard Dawkins’s phrase), dumbly building remarkable structures through a process of random accretion and winnowing over vast spans of time. Nonetheless, Dennett argues stylishly, asking “design” questions about evolution shouldn’t be taboo, because “biology is reverse engineering”: asking what some phenomenon or structure is for is an excellent way to understand how it might have arisen.

Just as in nature there is design without a designer, so in many natural phenomena we can observe what Dennett calls “competence without comprehension”. Evolution does not understand nightingales, but it builds them; your immune system does not understand disease. Termites do not build their mounds according to blueprints, and yet the results are remarkably complex: reminiscent in one case, as Dennett notes, of Gaudí’s church the Sagrada Família. In general, evolution and its living products are saturated with competence without comprehension, with “unintelligent design”.

The question, therefore, is twofold. Why did “intelligent design” of the kind human beings exhibit – by building robotic cars or writing books – come about at all, if unintelligent design yields such impressive results? And how did the unintelligent-design process of evolution ever build intelligent designers like us in the first place? In sum, how did nature get from bacteria to Bach?

Dennett’s answer depends on memes – self-replicating units of cultural evolution, metaphorical viruses of the mind. Today we mostly use “meme” to mean something that is shared on social media, but in Richard Dawkins’s original formulation of the idea, a meme can be anything that is culturally transmitted and undergoes change: melodies, ideas, clothing fashions, ways of building pots, and so forth. Some might say that the only good example of a meme is the very idea of a meme, given that it has replicated efficiently over the years despite being of no use whatsoever to its hosts. (The biologist Stephen Jay Gould, for one, didn’t believe in memes.) But Dennett thinks that memes add something important to discussions of “cultural evolution” (a contested idea in its own right) that is not captured by established disciplines such as history or sociology.

The memes Dennett has in mind here are words: after all, they reproduce, with variation, in a changing environment (the mind of a host). Somehow, early vocalisations in our species became standardised as words. They acquired usefulness and meaning, and so, gradually, their use spread. Eventually, words became the tools that enabled our brains to reflect on what they were doing, thus bootstrapping themselves into full consciousness. The “meme invasion”, as Dennett puts it, “turned our brains into minds”. The idea that language had a critical role to play in the development of human consciousness is very plausible and not, in broad outline, new. The question is how much Dennett’s version leaves to explain.

Before the reader arrives at that crux, there are many useful philosophical interludes: on different senses of “why” (why as in “how come?” against why as in “what for?”), or on the “strange inversions of reasoning” offered by Darwin (the notion that competence does not require comprehension), Alan Turing (that a perfect computing machine need not know what arithmetic is) and David Hume (that causation is a projection of our minds and not something we perceive directly). Dennett suggests that the era of intelligent design may be coming to an end; after all, our best AIs, such as the AlphaGo program (which beat the human European champion of the boardgame Go 5-0 in a 2015 match), are these days created as learning systems that will teach themselves what to do. But our sunny and convivial host is not as worried as some about an imminent takeover by intelligent machines; the more pressing problem, he argues persuasively, is that we usually trust computerised systems to an extent they don’t deserve. His final call for critical thinking tools to be made widely available is timely and admirable. What remains puzzlingly vague to the end, however, is whether Dennett actually thinks human consciousness – the entire book’s explanandum – is real; and even what exactly he means by the term.

Dennett’s 1991 book, Consciousness Explained, seemed to some people to deny the existence of consciousness at all, so waggish critics retitled it Consciousness Explained Away. Yet it was never quite clear just what Dennett was claiming didn’t exist. In this new book, confusion persists, owing to his reluctance to define his terms. When he says “consciousness” he appears to mean reflective self-consciousness (I am aware that I am aware), whereas many other philosophers use “consciousness” to mean ordinary awareness, or experience. There ensues much sparring with straw men, as when he ridicules thinkers who assume that gorillas, say, have consciousness. They almost certainly don’t in his sense, and they almost certainly do in his opponents’ sense. (A gorilla, we may be pretty confident, has experience in the way that a volcano or a cloud does not.)

More unnecessary confusion, in which one begins to suspect Dennett takes a polemical delight, arises from his continued use of the term “illusion”. Consciousness, he has long said, is an illusion: we think we have it, but we don’t. But what is it that we are fooled into believing in? It can’t be experience itself: as the philosopher Galen Strawson has pointed out, the claim that I only seem to have experience presupposes that I really am having experience – the experience of there seeming to be something. And throughout this book, Dennett’s language implies that he thinks consciousness is real: he refers to “conscious thinking in H[omo] sapiens”, to people’s “private thoughts and experiences”, to our “proper minds, enculturated minds full of thinking tools”, and to “a ‘rich mental life’ in the sense of a conscious life like ours”.

The way in which this conscious life is allegedly illusory is finally explained in terms of a “user illusion”, such as the desktop on a computer operating system. We move files around on our screen desktop, but the way the computer works under the hood bears no relation to these pictorial metaphors. Similarly, Dennett writes, we think we are consistent “selves”, able to perceive the world as it is directly, and acting for rational reasons. But by far the bulk of what is going on in the brain is unconscious, low-level processing by neurons, to which we have no access. Therefore we are stuck at an “illusory” level, incapable of experiencing how our brains work.

This picture of our conscious mind is rather like Freud’s ego, precariously balanced atop a seething unconscious with an entirely different agenda. Dennett explains wonderfully what we now know, or at least compellingly theorise, about how much unconscious guessing, prediction and logical inference is done by our brains to produce even a very simple experience such as seeing a table. Still, to call our normal experience of things an “illusion” is, arguably, to privilege one level of explanation arbitrarily over another. If you ask me what is happening on my computer at the moment, I shall reply that I am writing a book review on a word processor. If I embarked instead on a description of electrical impulses running through the CPU, you would think I was being sarcastically obtuse. The normal answer is perfectly true. It’s also true that I am currently seeing my laptop screen even as this experience depends on innumerable neural processes of guessing and reconstruction.

The upshot is that, by the end of this brilliant book, the one thing that hasn’t been explained is consciousness. How does first-person experience – the experience you are having now, reading these words – arise from the electrochemical interactions of neurons? No one has even the beginnings of a plausible theory, which is why the question has been called the “Hard Problem”. Dennett’s story is that human consciousness arose because our brains were colonised by word-memes; but how did that do the trick? No explanation is forthcoming. Dennett likes to say the Hard Problem just doesn’t exist, but ignoring it won’t make it go away – even if, as his own book demonstrates, you can ignore it and still do a lot of deep and fascinating thinking about human beings and our place in nature.

Steven Poole’s books include “Rethink: the Surprising History of New Ideas” (Random House Books)

This article first appeared in the 16 February 2017 issue of the New Statesman, The New Times