Nobody Remembers Their First Kill: the importance of video game violence

Violence isn't unique to cinema or games – video games are just the latest recruit to the aftermath blame tradition.

Nobody remembers their first kill. It's not like the high-security prison yards, where they pace just to forget, dream-haunted. When it comes to video games, nobody remembers their first kill. If you can recall your first video game, well, then you've a chance of pinpointing the setting (over a blackened Space Invaders' killing field? Atop a Sonic the Hedgehog green hill? Deep within a Pac-Man labyrinth?). But a name, date and face? Not likely.

It's not just the troubling number of digital skeletons in the players' closet that prevents recollection – although from Super Mario to Call of Duty, the trail of dead we game-killers leave behind is of genocidal proportions. It's that these slayings are inconsequential. Remember the first pawn or knight you "took" in chess – the moment you callously toppled its body from the board? Hardly. Even if the piece had a name and backstory – a wife and children waiting on news back home, a star-crossed romance with a rival pawn – such details would have been forgotten the moment you packed away the board.

Most game murder (and its moments-older twin, game violence) leaves no imprint on the memory because it lacks meaning outside of the game context. Unlike depictions of death in cinema, which can trigger keen memories of the viewer’s own past pains and sorrows, game violence is principally systemic in nature; its purpose is to move the player either towards a state of victory or of defeat, rarely to tears or reflection. Likewise, there is no remorse for the game murder not only because the crime is fictional but also because, unless you’re playing for money or a hand in marriage, there is no consequence beyond the border of the game’s own fleeting reality.

Video games were deadly from the get-go. Spacewar! – the proto-game of the MIT labs, played on a $120,000 mainframe computer in the early 1960s – set the tone: a combative space game in which two players attempted to be the first to gun the other down. From this moment onwards violence was the medium's defining quiddity. This is no great surprise. Most sports are metaphors for combat. The team games – soccer, rugby and so on – are sprawling battles in which attackers and defenders ebb and flow up and down the field in a clash of will and power, led by their military-titled "captains". American Football is a series of frantic First World War-style scrambles for territory measured in 10-yard increments. Tennis is a pistol duel, squinting shots lined up in the glare of a high-noon sun; running races are breakneck chases between predator and prey, triggered by the firing of a gun. That video games would extend the combat metaphor that defines most human play was natural.

The arcades concentrated the metaphor into sixty-second clashes between player and computer, dealing as they invariably did in the violence of sudden failure. This was a financial decision more than it was an artistic one: their designers needed to kill off the player after a minute or so in order to squeeze another quarter out of them. Violence was part of the business model: in the battle between human and machine, the machine must always overwhelm the player. In such games, as the author David Mitchell wrote, we play to postpone the inevitable, that moment when our own capacity for meting out playful death is overcome by our opponent's. This is the DNA of all games, handed down from the playground to the board and, finally, to the screen.

The problem of game violence then – the problem that's inspired a liberal president to call for Congress to fund another clutch of studies into its potential effects on the player – cannot derive from its existence or even its ubiquity. Violence is a necessary function of the video game. The problem must be to do with the aesthetic of the violence – the way in which it's rendered on the screen. It is a question of form, not function – something that moves the conversation into the realm of all screen violence, a concern of style.

The date at which cinematic violence began to become violent can be accurately set at 1966, the year that the Hays Production Code (which moderated on-screen "brutality and possible gruesomeness") was relaxed and film edged closer to becoming a director's medium. Arthur Penn's Bonnie and Clyde (1967) and Sam Peckinpah's The Wild Bunch (1969) took the cartoonish invulnerability of old movie violence (the "ox-stunning fisticuffs", as Vladimir Nabokov put it) and splattered the screen with blood and gore instead. Soon movie directors were ordering blood pouches in the thousands, crimson-washing every fight scene, exploring the boundaries of this newfound visual freedom.

Depictions of video game violence chart a similar trajectory from the staid to the outlandish, but it's a journey whose pace was set by technology, not censorship. Early game designers couldn’t spare the graphical processing power needed to render a tubular spout of blood or a glistening wound. They made do with guttural screams to bring the collapsing pixels to more vivid life.

Devoid of censorship and drawn to the marketing potency of being dubbed a "nasty", some developers courted controversy with violent subject matter (notably 1982's Custer's Revenge, an Atari 2600 game in which players assume the role of a scrawny settler dodging arrows in a bid to rape a bound Native American girl). But even the most vulgar scene is robbed of its power when rendered in tubby pixels, like a lewd scrawl in a tittering teenage boy's exercise book.

When the technology caught up and games could begin to present violence and murder in a truer-to-life form, the uncanny valley effect continued to render them ineffective. 1997's Carmageddon, a game in which players attempt to mow down policemen and the elderly in a car, was the first game banned from sale in the UK, but this was due to a backfired marketing stunt (the developer unnecessarily sent the game to the censors hoping for an 18 rating to increase the game's notoriety, and found its sale prohibited) rather than sober deliberation or genuine public outcry.

Real violence, the non-violent among us suppose, is unlike Hollywood's screen violence (pre- or post-1966), being less dramatic, less graceful and quicker in character. Few video games, even today with their obsession with a sort of "realism", attempt to present anything approaching a realistic depiction of violence. It's all comic-book, high-contrast spectacle, designed for maximum feedback, maximum excitement: a multiverse of Michael Bay overstatement, stylised in the extreme.

That's not to say that video games don't have the capacity to depict violence in its grim, real-world horror. Indeed, they are the optimum medium for it, with their unreal actors and easily fabricated tools and effects of violence. But few game-makers currently appear interested in exploring this space. In part this is because the independent game movement, unlike the independent cinema that drove Hollywood's interest in truer violence post-1966, is more interested in non-violent games. When violence is the staple of the mainstream, the subversive creative space lies in making games devoid of the stuff. One of 2012's most highly regarded indie titles, Fez, was created specifically without a single on-screen death. Not even Mario – gaming's Mickey Mouse – with his Goomba-defeating head stomps can claim as much. In a medium soaked with inconsequential violence, the counter-culture occupies the creative space away from the metaphorical battlegrounds with their headshots and KOs.

The concern about game violence recently became America's concern-du-jour, an addendum (a suspect one, perhaps) to the post-Sandy Hook gun control debate. In December 2012 Wayne LaPierre, executive vice president of the National Rifle Association, accused the games industry of being "a callous, corrupt and corrupting shadow industry that sells and sows violence against its own people." Then, in January 2013, representatives from Electronic Arts and Activision – the publishers behind the Call of Duty and Medal of Honor series – were called into a conference with vice-president Joe Biden to discuss the relationship between games and real-life violence. Subsequently President Obama has called for more studies to investigate what links tie game violence to real violence, while US senator Lamar Alexander provided the extremist perspective by claiming on television that "video games is a bigger problem than guns".

Overstated depictions of violence are not unique to video games and cinema. Shakespeare's theatres were awash with blood, with directors routinely using goat's entrails to add verisimilitude to a gory scene. If the realistic (or exaggerated) depiction of violence in art leads to real-world mimicry, then it's been happening for centuries. As the British comedian Peter Cook drolly put it, when referring to the supposed copycat effect of screen violence: "Michael Moriarty was very good as that Nazi on the television. As soon as I switched off the third episode, I got on the number eighteen bus and got up to Golders Green and... I must've slaughtered about eighteen thousand before I realised, you know, what I was doing. And I thought: it's the fucking television that's driven me to this."

Video games are the latest recruit to the aftermath blame tradition. And, like all new mediums, they make for the right-looking sort of scapegoat, enjoyed as they are by a generally younger demographic (at least, in the cultural perception), from whose ranks America's highest-profile public killers appear to step.

There is perhaps only one factor that separates games from other screen media: the interactivity. It’s here that the generational mistrust of the medium is allowed to blossom into full-throated critique. The games are killing simulators, they say. They allow the unstable to act out their murder fantasies – something the cinematic nasty could never do. This argument ignores the truth that violence in all games is primarily functional, always within the context of a broader aim, the conflict between the player and the designer. The interactivity may place the player in the role of a killer, but only in the same way that the chess-player is cast as the ruthless general.

And yet there is truth in the statement too. A disturbed mind could ignore the vital function of violence in a game and instead focus fully upon its form. The crucial ingredient is not the game itself, but the disturbed mind with its dreams of sadism, fantasies of mortal power, obsession with trauma, not to mention its brokenness and depravity. Even within this context, and with an inability to discern what is earnest and what is play, a lifetime of violent games is unlikely to affect anything but the style of a subsequent atrocity.

In the aftershock of an act of madness some seek prayer, others revenge – but most seek sense in the senseless moment. In the hours following the Sandy Hook massacre a news outlet erroneously reported that the shooter was Ryan Lanza, the brother of gunman Adam Lanza. Poring over his Facebook profile, many noticed that Ryan had "liked" the video game Mass Effect, a space RPG trilogy created by BioWare, the studio founded by Dr Ray Muzyka and Dr Greg Zeschuk. Emboldened by an expert on Fox News drawing an immediate link between the killing and video games, an angry mob descended on the developer's Facebook page declaring them "child killers".

Despite the absurdity of the logic, a chain reaction was set in motion, one that has rippled all the way up to the White House. Video games are the youngest creative medium. What literature learned in four millennia, cinema was forced to learn in a century and video games must now master in three decades. The issue of game violence and its potential effects may seem an abstract, esoteric one, demanding of scientific study to make clear what is opaque. But game violence has logic and precedent and is always an act of play, not of sincerity. The worry, then, is with those who cannot tell the difference, from the disturbed high-school student to the US senator.

Simon Parkin is a journalist and author who has written for The Guardian, Edge, Eurogamer - and now the New Statesman. He tweets @simonparkin

Do you remember the first chess piece you "took"? Violence doesn't just occur in digital games. Photograph: Potamos Photography on Flickr via Creative Commons
