“Don’t Starve” is one recent game that encourages players to appreciate the real consequence of death. Image: Klei Entertainment

What can “permadeath” video games teach us about suicide?

Permanence and finality in video games can help us be better at understanding, and talking about, mental health issues.

“Happy, depressed, spiteful, manic or suicidal? More Mudokons... with real emotions,” reads the back cover of the 1998 PlayStation video game Oddworld: Abe’s Exoddus.

To the credit of developer Oddworld Inhabitants, the Oddworld series was revolutionary in its exploration of mature themes – slavery, captivity, capitalist greed, to name but a few – which in the Nineties marked a distinct departure from the whimsical cartoon mascots and fluorescent fantasy worlds prevalent at the time. Oddworld was darker, and thus more intriguing, than most of the competition, yet the way Exoddus treats suicide here reduces a deeply complex state of mind to a badge of honour, a commodity, a testament to the game’s advanced artificial intelligence. This blasé characterisation of the act is an unfortunate blemish on a game otherwise capable of tackling sophisticated ideas.

But again, Oddworld was ahead of its time. The fact that it considered suicide at all within its narrative went against the grain. Nowadays, video games are more culturally aware, and the burgeoning indie renaissance the medium has enjoyed over the last few years has facilitated a more refined discourse around personal themes, not least suicide. Zoe Quinn’s Depression Quest and Will O’Neill’s Actual Sunlight both examine mental health by placing the player in the shoes of characters suffering from depression and suicidal thoughts. It’s bleak, but that naturally reflects the subject matter. Most importantly, though, it’s informative – not only for those naive about or ignorant of these conditions, but also for those who may recognise themselves in them, although the latter should proceed with caution. Video games can never replace professional consultation, but they can show players that they’re not alone, and this can mark the first step towards seeking treatment.

The rise of permadeath in video games – whereby player characters die permanently, or a game must restart from the beginning should the player character die, with no extra lives or continues – has changed the way players approach games. In these instances emotion is often the driving force behind decision-making: under permadeath, mental state, rather than cold logic, governs player action.

It’s worth noting here that self-sacrifice – when players kill themselves to respawn or restart a level, or when non-playable characters give their lives for the greater good or to save their companions – is different from suicide as portrayed in the examples above. What permadeath does is force players to consider consequence, permanence and finality within the bounds of digital landscapes.

But what about outwith virtual settings – are these themes and ideas transferable to reality? A 2014 academic paper entitled “Being Bad in a Video Game Can Make Us More Morally Sensitive”, co-authored by Dr Matthew Grizzard, discusses how certain virtual behaviours can elicit feelings of guilt and thereby encourage prosocial consequences in real life. Grizzard et al hypothesise that committing immoral acts in video games can increase players’ moral sensitivity. This suggests a heightened sense of consequence on the part of the player, so I ask Dr Grizzard whether this line of thought could extend to a better comprehension of suicide – both in-game and in real life.

“I think [the] question really has two parts: (1) Do video games encourage a better understanding of the finality of suicide in real life? Versus, (2) Could video games encourage a better understanding of the finality of suicide in real life?” he says. “With regard to the first question, I don’t think games necessarily encourage ideas of permanence and finality. Video games are designed to be played multiple times with death being a temporary inconvenience rather than permanent. In fact, players will sometimes even kill themselves in games when they encounter obstacles or become stuck in a game to ‘respawn’ at an earlier time point in the game. So video games, particularly popular press video games, encourage a view of death as temporary. Death is portrayed as detrimental in games, but it is not a one-way door.

“With regard to the second question – I do think games could encourage a better understanding of the finality of suicide.”

For many players, video games represent a safe place and facilitate a certain level of escapism. Reality is suspended, and the in-game cycle of dying, respawning and restarting is part of the deal. But even in games that utilise permadeath, death is often more a hindrance than an absolute end, as Grizzard suggests. The games that use permadeath as their primary mechanic, such as Klei Entertainment’s 2013 hit Don’t Starve, seem to be the ones that best represent finality, encouraging players to appreciate the real consequence of death.

“As designers we work really hard to give players agency,” says Klei Entertainment founder Jamie Cheng. “Permadeath is almost ‘free’ agency, in that suddenly every action matters a whole lot more. I think players appreciate that, and as a designer it gives us a chance to show them similar scenarios over time, and how their actions can drastically change outcomes.

“Obviously emotion plays a part, but in my experience the emotion happens after the finality, not before. That is: when the player dies or a catastrophic event happens, that’s when the weight of consequence hits – but beforehand, players are simply more attuned to their actions and less frivolous.”

Although Cheng admits suicide was never considered during Don’t Starve’s design process, he does point out that the active adventure matters more than its end. I suggest that in a game which places so much emphasis on preserving life, comparisons between virtual and actual reality are bound to follow.

“I’m unclear that there’s much correlation between in-game and reality,” offers Cheng. “Instead of affecting how players perceive reality, our goal is the other way around – to have a video game that mimics reality in its finality and consequence. In addition, we want players to enjoy the journey. Since the player knows that eventually they’ll lose it all, it makes more sense that the process is the interesting part, and not what you get at the end of the journey.”

It could be argued that whichever way around these ideas are depicted, in light of what Cheng says, the end result illustrates an intrinsic link between game worlds and the real world. This plays neatly into Grizzard’s view that video games could do more to encourage a better understanding of the finality of suicide. He points to other transformative media that tackle similar themes, such as film, citing Frank Capra’s It’s a Wonderful Life as a pertinent example of how powerful it can be for viewers to see the world through someone else’s eyes – in this case those of a fictitious character. There’s no reason why video games can’t deliver something similar.

“Evolutionarily, play represents a safe place to practice or experience skills that we generally don’t or can’t have direct access to in the real-world for many reasons,” adds Grizzard. “Both predatory and prey animals play to learn how to survive in the wild. Human play serves similar roles, with the skills that we learn being not only related to physical attributes but also social attributes. For example, in medical schools in the US, doctors-in-training practice giving bad news to patients in ‘play’ scenarios with actors.

“These scenarios help doctors practice, in a safe environment where making mistakes has few consequences, skills that they rarely have the opportunity to practice in the real world. Video games can be particularly adept at allowing the same type of play for several reasons. Primarily, the human brain doesn’t firmly distinguish between real and mediated stimuli. Our brain reacts to mediated images in a similar fashion as it does to real images. This is why scary movies can make us jump or tearjerkers can make us cry. Video games have the potential to provide players with a rich virtual environment filled with characters and stimuli that they respond to as if they were real.

“As such, games could provide a glimpse into the severe negative consequences of suicide on family members and friends. This glimpse is obviously impossible in the real world, but games have the ability to simulate it.”

Video games are in the auspicious position of being not only a persuasive, cogent and expressive medium but, unlike any other form of media, an interactive one. Engaging players in two-way stories arguably puts the platform in the best position to challenge perceptions and to explore personal, more sophisticated themes. In the grand scheme of things, video games are a relatively young medium, and there is no reason why this capacity can’t or won’t continue to grow.

“Video games do have the potential,” adds Grizzard. “However, questions still remain as to whether a single play experience that associates strong consequences with suicide could overcome the more traditional ‘death is temporary’ play experiences that are generally seen in video games. These are fascinating questions, and I would be hesitant to conclude one way or the other. As always, more research must be done.”

Research such as Grizzard’s, coupled with the rising number of video games tackling social issues, must continue. As a society, we have stigmatised suicide to the point where we seem almost scared to discuss it for fear of admitting failure or weakness. This is, of course, ridiculous, and British culture is particularly guilty of endorsing the “stiff upper lip” mentality that perpetuates such warped machismo. I’m from Glasgow, and in 2012 the suicide rate in Scotland was 73 per cent higher than that of England and Wales. No one is suggesting video games can single-handedly drive these statistics down, but if the medium can help encourage healthier, more enlightened conversation around the issue, then it’s heading in the right direction.

If you are affected by any of the issues discussed here, you can call the Samaritans free in the UK on 08457 909090, or in the US contact the National Suicide Prevention Lifeline on 1-800-273-8255.


The City of London was never the same after the "Big Bang"

Michael Howard reviews Iain Martin's new book on the legacy of the financial revolution 30 years on.

We are inundated with books that are, in effect, inquests on episodes of past failure, grievous mistakes in policy decisions and shortcomings of leadership. So it is refreshing to read this lively account of a series of actions that add up to one of the undoubted, if not undisputed, successes of modern government action.

Iain Martin has marked the 30th anniversary of the City’s Big Bang, which took place on 27 October 1986, by writing what he bills as the inside story of a financial revolution that changed the world. Yet his book ranges far and wide. He places Big Bang in its proper context in the history of the City of London, explaining, for example, and in some detail, the development of the financial panics of 1857 and 1873, as well as more recent crises with which we are more familiar.

Big Bang is the term commonly applied to the changes in the London Stock Exchange that followed an agreement reached between Cecil Parkinson, the then secretary of state for trade and industry, and Nicholas Goodison, the chairman of the exchange, shortly after the 1983 election. The agreement provided for the dismantling of many of the restrictive practices that had suited the cosy club of those who had made a comfortable living on the exchange for decades. It was undoubtedly one of the most important of the changes made in the early 1980s that equipped the City of London to become the world’s pre-eminent centre of international capital that it is today.

But it was not the only one. There was the decision early in the life of the Thatcher government to dismantle foreign-exchange restrictions, as well as the redevelopment of Docklands, which provided room for the physical expansion of the City (which was so necessary for the influx of foreign banks that followed the other changes).

For the first change, Geoffrey Howe and Nigel Lawson, at the Treasury at the time, deserve full credit, particularly as Margaret Thatcher was rather hesitant about the radical nature of the change. The second was a result of Michael Heseltine setting up the London Docklands Development Corporation, which assumed planning powers that were previously in the hands of the local authorities in the area. Canary Wharf surely would not exist today had that decision not been made – and even though the book gives a great deal of well-deserved credit to the officials and developers who took up the baton, Heseltine’s role is barely mentioned. Rarely is a politician able to see the physical signs of his legacy so clearly. Heseltine would be fully entitled to appropriate Christopher Wren’s epitaph: “Si monumentum requiris, circumspice.”

These changes are often criticised for having opened the gates to unbridled capitalism and greed, and Martin, while acknowledging the lasting achievements of the new regime, also explores its downside. Arguably, he sometimes goes too far. Are the disparities in pay that we now have a consequence of Big Bang? Can it be blamed for the increase in the pay of footballers? This is doubtful. Surely these effects owe more to market forces, in the case of footballers, and shortcomings in corporate governance, in the case of executive pay. (It will be interesting to see whether the attempts by the current government to address the latter achieve the desired results.)

Martin deals with the allegation that the changes brought in a new world in which moneymaking could be given full rein without the need to abide by any significant regulation. This is far from the truth. My limited part in bringing about these changes was the responsibility I was handed, in my first job in government, for steering through parliament what became the Financial Services Act 1986. This was intended to provide statutory underpinning for a system of self-regulation by the various sectors of the financial industry. It didn’t work out exactly as I had intended but, paradoxically, one of the main criticisms of the regulatory system made in the book is that we now have a system that is too legalistic. Rather dubious comparisons are made with a largely mythical golden age, when higher standards of conduct were the order of the day without any need for legal constraints. The history of insider dealing (and the all-too-recently recognised need to legislate to make this unlawful) gives the lie to this rose-tinted picture of life in the pre-Big Bang City.

As Martin rightly stresses, compliance with the law is not enough. People also need to take into account the moral implications of their conduct. However, there are limits to the extent to which governments can legislate on this basis. The law can provide the basic parameters within which legal behaviour is to be constrained. Anything above and beyond that must be a matter for individual conscience, constrained by generally accepted standards of morality.

The book concludes with an attempt at an even-handed assessment of the likely future for the City in the post-Brexit world. There are risks and uncertainties. Mercifully, Martin largely avoids a detailed discussion of the Markets in Financial Instruments Directive and its effect on “passporting”, which allows UK financial services easy access to the European Economic Area. But surely the City will hold on to its pre-eminence as long as it retains its advantages as a place to conduct business? The European banks and other institutions that do business in London at present don’t do so out of love or affection. They do so because they are able to operate there with maximum efficiency.

The often rehearsed advantages of London – the time zone, the English language, the incomparable professional infrastructure – will not go away. It is not as if there is an abundance of capital available in the banks of the EU: Europe’s business and financial institutions cannot afford to dispense with the services that London has to offer. As Martin puts it in the last sentences of the book, “All one can say is: the City will survive, and prosper. It usually does.”

Crash Bang Wallop is not flawless. (One of its amusing errors is to refer, in the context of a discussion of the difficulties faced by the firm Slater Walker, to one of its founders as Jim Walker, a name that neither Jim Slater nor Peter Walker, the actual founders, would be likely to recognise.) Yet it is a thoroughly readable account of one of the most important and far-reaching decisions of modern government, and a timely reminder of how the City of London got to where it is now.

Michael Howard is a former leader of the Conservative Party

This article first appeared in the 20 October 2016 issue of the New Statesman, Brothers in blood