Why I won't be buying an Xbox One or a PS4

As a veteran of many Console Wars, Alan Williamson believes that the best console is the one you have with you.

This year brings the most boring games console launch in history. I don’t mean that in a hyperbolic, share-this-incendiary-link-with-your-friends way: having lived through and been an active combatant in four generations of console wars, like many former soldiers I have now become an advocate for peace.

The First Great World Console War broke out in the early Nineties between Sega’s Mega Drive and the Super Nintendo Entertainment System (Nintendo’s progress in the 1980s was more a swift annexation). Both manufacturers were then broadsided by the introduction of the Sony PlayStation in 1995. Since then we have descended into an effective Cold War, an ever-escalating technological arms race between equally weighted armies with few casualties. While there was isolated fighting in smaller Handheld Console Wars, a gaming Vietnam where Pokémon waged a guerrilla war for children’s minds, the Fourth World Console War ended with over eighty million consoles sold for each belligerent. I’d make an analogy about PCs and the United Nations, but I think the metaphor is already stretched to breaking point.

Each generation of the Console Wars had its own innovations, each console its own personality and fan base. The second saw the birth of affordable 3D graphics and some of the most critically acclaimed games of all time, such as The Legend of Zelda: Ocarina of Time, Metal Gear Solid and Final Fantasy VII. The third was dominated by Sony’s PlayStation 2 (don’t blame me, I had a Dreamcast) but also saw the beginnings of online play and the rise of Xbox megabrand Halo. We’re now at the tail end of the fourth generation, which has brought consoles mostly up to speed with PCs through high-definition graphics and digital distribution of games.

The war has reached a stalemate: Sony and Microsoft still rule in their home countries, while a Nintendo Wii gathers dust in every living room of the developed world. As a consequence of the global recession, game publishers have tried to extract even more revenue from a squeezed market. Witness the rise and demise of the loathed ‘online pass’, now replaced by the euphemistic ‘season pass’; paid downloads on day one that unlock content already on the disc; pre-order bonuses; and free-to-play games - or, as I like to call them, pay-to-play games. History will show this generation as one that expanded the monetisation of games as much as the experiences themselves, often to the detriment of fun and artistic merit.

So here we go again with two new omnipotent wonderboxes, the PlayStation 4 and Xbox One. The games look much the same as the old ones: of course, similar criticisms were levelled at the Xbox 360, but even a layman could appreciate the beauty of Project Gotham Racing 3 compared to its predecessor. Perhaps it was the somewhat-sweaty razzmatazz of Earl’s Court at Eurogamer Expo in September, but I just couldn’t tell the difference between Forza Motorsport 5 and Forza 4, and I’ve played over fifty hours of Forza 4. With big franchises like Call of Duty and Assassin’s Creed launching on both new consoles as well as their predecessors, the choice is based more on ideology and available funds than on real, quantifiable differences. In fact, with so few games available, it makes less sense to buy a PS4 than a PS3.

Games journalists are in a difficult position. Those who haven’t been invited to New York for a complimentary gold-plated PS4 (I’m writing on a train travelling through Slough, but thanks for asking) will have pre-ordered the expensive new consoles to support their jobs, adding buyer’s remorse to an increasingly dominant business model of offering their words online for free, funded by advertising. This model relies on hyperbole: every month a new ‘best game ever’, every day an announcement of something on the horizon, every minute a constant stream of rumour posts. It focuses on sensationalising games and the machines that play them, rather than criticising them. It encourages critics to follow the zeitgeist rather than dwelling on the games that linger in our minds. It blurs the lines between editorial and advertorial: after all, what is a news post about a PlayStation TV commercial if not advertising?

The YouTubeisation of publishing elevates every berk with a webcam and an opinion to the same level as the seasoned journalist. The Twitterisation of news encourages news-breaking, but not fact-checking. This isn’t unique to coverage of videogames, but the medium is entangled with technology and therefore at the forefront of innovations in publishing. Meanwhile, channels like PlayStation Access and Nintendo Direct show that publishers can successfully skip the middleman and advertise directly to customers. A new generation of games consoles deserves newer, deeper ways of looking at them - something outlets like Press Select, Boss Fight Books and my own Five out of Ten are trying to address. But we may be at the stage where readers are better served by the groupthink of their peers than the proclamations of journalists, where real decisions are made and discussions are had below the line.

While we’re keen to proclaim that videogames now generate more revenue than cinema, few have asked whether that is sustainable or even desirable. While digital distribution has led to a bigger market for indie developers, especially on the PC and smartphones, the biggest successes like Call of Duty and Battlefield are better for their publishers than for the people who make them. In the UK we’ve seen the closure of much-loved studios like Blitz Games and Sony Liverpool, while other British studios like Rare have lost their lustre; the team that made the charming Banjo-Kazooie and Viva Piñata now produce bland sports titles, a reflection of the wishes of their corporate overlord Microsoft. The industry undervalues its creators and programmers, encouraging a ‘crunch’ culture of unpaid overtime and ridiculous hours. This distributor-takes-all system reminds me of the Hollywood visual effects studio Rhythm & Hues, which declared bankruptcy days before winning an Oscar for Life of Pi. If videogames really are such an important industry, they can do a lot better than emulating Hollywood in content and working culture.

Some pundits believe this may be the last console generation. I’d like to believe otherwise. I have fond memories of consoles and continue to make more: they provide a cheaper entry point into the fantastic worlds of fiction that games offer, without the expense or complexities of a PC. Yet perhaps games have outgrown the traditional model of consoles: the exponential growth of indie games is better suited to the less restrictive system of a personal computer, mobile phone or even the Ouya ‘microconsole’. Valve’s SteamOS promises the power of Linux married with their friendly distribution platform. While Sony and Microsoft are taking steps to open development on their consoles, their revenue model is built on strict control of the system: they focus on making money off the games they sell, not the platform itself.

Even more exciting are devices like the Oculus Rift, a virtual reality headset that offers a sea change in the way we play games. However, according to its creators it requires tremendous computer horsepower to be convincing - more than even the Xbox One and PS4 can provide. For years, consoles offered the best way to play games, but with that advantage gone they’re like digital cameras in a world where everyone has a camera built into their phone. I choose that analogy carefully, because I think portable consoles like the Nintendo 3DS are much better than an iPhone for games, but there’s a trade-off between quality and the utility of having an all-in-one device. The best console is the one you have with you.

“War never changes,” mused Ron Perlman in the introduction to 2008’s Fallout 3. But this is a war that needs to change if games consoles are to expand, or merely retain their cultural relevance. Consoles used to represent inclusivity and the comforts of socialising with friends, but now they are targeted at an audience - and a medium - which is growing up and leaving them behind.

What should I buy?

I don’t play games, but I’d like to start

Nintendo’s latest console, the Wii U, is an underrated box. It’s cheaper than the competition and can play older Wii games as well as its new, shinier ones. Nintendo still make the best games, appealing to both children and adults like the videogame equivalent of Pixar. Unfortunately, also like Pixar, they release one great product every two years. Super Mario 3D World and The Legend of Zelda: The Wind Waker HD are better than anything PlayStation and Xbox can offer this year.

Not only is the iPad a great computer, it’s also a great way to play games. But please avoid the mainstream tosh like Candy Crush Saga and instead try innovative titles like The Room, Year Walk and Device 6.

As an alternative, the website Forest Ambassador lists free five-minute games that work on most computers, and has the feel of a hippie art gallery. Hopefully, that last sentence will tell you whether you’ll like it.

I play games on my phone and want something better

The Nintendo 3DS goes from strength to strength with life absorbers like Animal Crossing and Pokémon X, plus the usual Mario, Mario Kart and Zelda. Since you probably won’t use the retina-bursting 3D functionality, you may as well buy the cheaper 2DS. It can also play games from the vast library of DS titles.

Steam is a free download for any computer running Windows, OS X or Linux and has an unrivalled library of games, from the biggest new releases to smaller (but no less compelling) games like Spelunky, FTL: Faster Than Light and Redshirt.

I want the best gaming experience available

PS4, Xbox One or a monster gaming PC. Choose a side, then spend the next five years of your life attacking the option you didn’t pick in internet comment threads.

Alan Williamson is Editor-in-Chief of the videogame culture magazine Five out of Ten

The new controller for the Xbox One. Photo: Getty

“Like a giant metal baby”: whether you like it or not, robots are already part of our world

For centuries, we have built replacements for ourselves. But are we ready to understand the implications?

There were no fireworks to dazzle the crowd lining the streets of Alexandria to celebrate Cleopatra’s triumphant return to the city in 47BC. Rather, there was a four-and-a-half-metre-tall robotic effigy of the queen, which squirted milk from mechanical bosoms on to the heads of onlookers. Cleopatra, so the figure was meant to symbolise, was a mother to her people.

It turns out that robots go back a long way. At the “Robots” exhibition now on at the Science Museum in London, a clockwork monk from 1560 walks across a table while raising a rosary and crucifix, its lips murmuring in devotion. It is just one of more than 100 exhibits, drawn from humankind’s half-millennium-long obsession with creating mechanical tools to serve us.

“We defined a robot as a machine which looks lifelike, or behaves in lifelike ways,” Ben Russell, the lead curator of the exhibition, told me. This definition extends beyond the mechanisms of the body to include those of the mind. This accounts for the inclusion of robots such as “Cog”, a mash-up of screws, motors and scrap metal that is, the accompanying blurb assures visitors, able to learn about the world by poking at colourful toys, “like a giant metal baby”.

The exhibits show that there has long existed in our species a deep desire to rebuild ourselves from scratch. That impulse to understand and replicate the systems of the body can be seen in some of the earliest surviving examples of robotics. In the 16th century, the Catholic Church commissioned some of the first anthropomorphic mechanical machines, suggesting that the human body had clockwork-like properties. Models of Jesus bled and automatons of Satan roared.

Robots have never been mere anatomical models, however. In the modern era, they are typically employed to work on the so-called 4D tasks: those that are dull, dumb, dirty, or dangerous. A few, such as Elektro, a robot built in Ohio in the late 1930s, which could smoke a cigarette and blow up balloons, were showmen. Elektro toured the US in 1950 and had a cameo in an adult movie, playing a mechanical fortune-teller picking lottery numbers and racehorses.

Nevertheless, the idea of work is fundamental to the term “robot”. Karel Čapek’s 1920s science-fiction play R.U.R., credited with introducing the word to the English language, depicts an android labour force that rebels against its human masters. The Czech word robota means “forced labour”. It is derived from rab, which means “slave”.

This exhibition has proved timely. A few weeks before it opened in February, a European Parliament commission demanded that a set of regulations be drawn up to govern the use and creation of robots. In early January, Reid Hoffman and Pierre Omidyar, the founders of LinkedIn and eBay respectively, contributed $10m each to a fund intended to prevent the development of artificial intelligence applications that could harm society. Human activity is increasingly facilitated, monitored and analysed by AI and robotics.

Developments in AI and cybernetics are converging on the creation of robots that are free from direct human oversight and whose impact on human well-being has been, until now, the stuff of science fiction. Engineers have outpaced philosophers and lawmakers, who are still grappling with the implications as autonomous cars roll on to our roads.

“Is the world truly ready for a vehicle that can drive itself?” asked a recent television advert for a semi-autonomous Mercedes car (the film was pulled soon afterwards). For Mercedes, our answer to the question didn’t matter much. “Ready or not, the future is here,” the ad concluded.

There have been calls to halt or reverse advances in robot and AI development. Stephen Hawking has warned that advanced AI “could spell the end of the human race”. The entrepreneur Elon Musk agreed, stating that AI presents the greatest existential threat to mankind. The German philosopher Thomas Metzinger has argued that the prospect of increasing suffering in the world through this new technology is so morally awful that we should cease to build artificially intelligent robots immediately.

Others counter that it is impossible to talk sensibly about robots and AI. After all, we have never properly settled on the definitions. Is an inkjet printer a robot? Does Apple’s Siri have AI? Today’s tech miracle is tomorrow’s routine tool. It can be difficult to know whether to take up a hermit-like existence in a wifi-less cave, or to hire a Japanese robo-nurse to swaddle our ageing parents.

As well as the fear of what these machines might do to us if their circuits gain sentience, there is the pressing worry of, as Russell puts it, “what we’re going to do with all these people”. Autonomous vehicles, say, could wipe out the driving jobs that have historically been the preserve of workers displaced from elsewhere.

“How do we plan ahead and put in place the necessary political, economic and social infrastructure so that robots’ potentially negative effects on society are mitigated?” Russell asks. “It all needs to be thrashed out before it becomes too pressing.”

Such questions loom but, in looking to the past, this exhibition shows how robots have acted as society’s mirrors, reflecting how our hopes, dreams and fears have changed over the centuries. Beyond that, we can perceive our ever-present desires to ease labour’s burden, to understand what makes us human and, perhaps, to achieve a form of divinity by becoming our own creators. 

This article first appeared in the 23 March 2017 issue of the New Statesman, Trump's permanent revolution