
Your brain on pseudoscience: the rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.

Cherry-picking

Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

This article was updated on 18 September 2012.

This article first appeared in the 10 September 2012 issue of the New Statesman, Autumn politics special


The life of Pi

How the gaming prodigy David Braben and his friends invented a tiny £15 device that became the biggest-selling British computer.

If you had visited David Braben’s room at Jesus College, Cambridge in 1983 you would have found an unusual scene. Sure, it was just as cramped, muddled and tinged with the fragrance of generations of undergraduates as that of any other student. But while Braben’s neighbours lined their walls with textbooks and Hollywood posters, the shelves in his room supported cascades of cabling and copper wire. And there in the centre of the desk, amid a shanty town of screws and pliers, an Acorn Atom computer hummed.

Braben knew its insides better than his own. Such was the extent of his frequent and intrusive tinkering that he left the machine’s casing permanently off, leaving the circuitry exposed, like that of a battle-wrecked android. One winter’s day that year, he and a friend, Ian Bell, stood in front of the Atom’s chunky monitor. Braben moved his hand towards the keyboard and, with a tap, executed a Big Bang.

Elite, as Braben and Bell’s universe would later be named, was an ambitious computer simulation of endless rolling galaxies, waiting to be explored via a digital spaceship. To grow such vastness from such rudimentary technology, Braben had to pull off the equivalent of a numerical conjuring trick. Rather than manually plotting cosmic systems by typing star and planet co-ordinates into a database, he used the Fibonacci sequence, which starts with “0” and “1”, and continues the sequence by adding the two preceding numbers. This mathematical curiosity governs a variety of natural phenomena, such as the arrangement of leaves on a tree or the pattern of the florets in a flower, making it the ideal formula to spawn a seed from which virtual galaxies could be generated.
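Braben and Bell’s actual code was hand-tuned assembly for 8-bit hardware, but the principle is simple enough to sketch. In the illustrative Python below (the function names, word size and planet attributes are my assumptions, not theirs), a Fibonacci-style recurrence wrapped to a fixed word size serves as a cheap, repeatable pseudo-random number generator: given the same seed, it regenerates exactly the same galaxy on demand, so nothing needs to be stored.

```python
# Illustrative sketch only, not the original Elite code: a Fibonacci-style
# recurrence, wrapped to 16 bits, used as a cheap repeatable pseudo-random
# source from which star systems are derived on the fly.

def fibonacci_stream(seed_a=0, seed_b=1, modulus=2**16):
    """Yield an endless stream of pseudo-random 16-bit values."""
    a, b = seed_a, seed_b
    while True:
        a, b = b, (a + b) % modulus   # Fibonacci step, kept to 16 bits
        yield b

def generate_system(values):
    """Derive one star system's (hypothetical) properties from the stream."""
    return {
        "x": next(values) % 256,          # galactic x co-ordinate
        "y": next(values) % 256,          # galactic y co-ordinate
        "economy": next(values) % 8,      # crude economy type
        "tech_level": next(values) % 15,  # crude technology level
    }

if __name__ == "__main__":
    stream = fibonacci_stream()
    galaxy = [generate_system(stream) for _ in range(5)]
    for system in galaxy:
        print(system)   # the same seed reproduces the same galaxy every run
```

The point of such a design is that a couple of seed values stand in for an entire database of co-ordinates, which is how a universe could be squeezed into the handful of kilobytes available on the machines of the day.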

The game offered breadth and depth. You toured the universe in a spaceship, represented on screen by a few scant white lines, free to mine resources, dogfight with pirates or even become a galactic marauder yourself, preying on the cargo ships that sailed along trade routes. While most arcade games of the time brought players into their reality for a few brief minutes before kicking them out again, penniless and defeated, Elite worked at a different pace. Players could spend hours touring its innumerable systems. Braben’s contemporaries were astonished. “We stood around wide-eyed; these were feats of coding we had thought impossible on the low-powered machines of the day,” Jack Lang, a university friend of Braben’s, told me.

Braben and Bell’s invention became a sensation. Elite sold out of its initial run of 50,000 copies in less than two weeks, and went on to sell 600,000 copies across 17 different computer formats, making millionaires of its young creators. The game also inspired a generation of so-called Britsoft programmers who, over the next decade, would make Britain a leading hub for computer-game development, and produce, in Tomb Raider, Grand Theft Auto and Championship Manager, a clutch of enviable and world-renowned names.

***

Twenty years later, when he was running Frontier Developments, one of the most successful games companies in the UK, Braben noticed a trend. Each time his company advertised a job in programming, ­fewer candidates would apply. “I was expecting the number of applicants to rise because we’d had some positive press,” he told me when I visited him at the Frontier offices in Cambridge.

Braben, who, in his black hoodie, looks significantly younger than his 53 years, runs Frontier from a spacious, glass-fronted office. Nearby, scores of artists, designers and programmers tap and toil in orderly phalanxes of computers. The company, which in 2016 turned over £21.4m, employs more than 300 staff.

“But at that time we found that we were having to hire from abroad,” Braben told me. He called some directors at other British games companies and found that they had the same problem. Then he called the University of Birmingham, where he sat on the advisory board. “They, too, were in crisis: applicants to the computer science course had dropped off a cliff,” he said. “It made no sense to me.”

At the time, Braben was running focus tests with children on one of the company’s games, and he sneaked an additional question into his survey: “What is the most boring lesson at school?” The response left him bewildered – ICT (information and communications technology). “You would think computing would be the most exciting lesson for a child at school, wouldn’t you?” he said.

He called a local schoolteacher. “The issue became immediately obvious: the curriculum was teaching children nothing more than how to use Word and Excel. Programming had been removed from lessons and, in most cases, ICT was being taught by people who were computer-illiterate.” The teacher told him that students would run riot in class. Some children had discovered that by deleting a few critical files from Windows they could ensure that the computer would fail to switch on the next time the machine was rebooted.

“Schools were having to employ people just to repair this vandalism,” Braben said. The drop-off in applicants to computer science courses at universities and for positions in development studios was, he concluded, a result of years of classroom neglect. The Britsoft industry, it seemed, was in danger of collapsing from the bottom up.

Braben wrote to Margaret Hodge, then an education minister in Tony Blair’s Labour government. “I thought they were keen on education,” he recalled. “But when we met, Hodge told me that they were already teaching computer studies. She accused me of special pleading for my industry.” (Hodge has said, through a spokeswoman, that she “does not recall this meeting”.)

Braben told Hodge that she didn’t need to take his word for it; she could simply speak to a few teachers. “It was so frustrating,” he said. “Government was pouring all of this money into things that weren’t necessarily making a difference to getting kids into computer science. I was just trying to point out that the games industry was a huge asset that could be used to inspire kids. Kids like to learn to program if it’s framed around making games.”

This was Braben’s own childhood experience. His father worked for the Cabinet Office researching nuclear physics, and the family moved around, living in Stockton Heath in Cheshire, near Warrington, then briefly in Italy and finally in Epping, in the eastern suburbs of London. All the while Braben was designing games for himself and his two younger siblings to play. One of the first was a modified version of battleships, played in the back garden using pieces pilfered from other board games, and based on nautical battles from the Second World War that he had read about in history books.

After he persuaded his parents to buy him the Acorn Atom, Braben progressed to designing computer games. For one of them, he drew a map of the northern hemisphere as viewed from space. He then taped the map to the computer screen and traced the outline of the countries in code. In the resulting game, players assumed either the role of the Americans or the Russians, tasked with sending nuclear bombs arcing across the screen in an attempt to destroy their opponent’s main cities. The winner was rewarded with a rudimentary computer version of their side’s national anthem.

Braben, who attended Buckhurst Hill County High, a grammar school in Chigwell, Essex, was a natural programmer, talented at maths and physics. But the computer on which he learned his basic programming skills, the Acorn Atom – the precursor of the BBC Micro, which would soon be found in many school ICT rooms – made it easy for him.

“It came with everything you needed in the box,” he said. “People say these days that design software costs only around £100, but that’s a huge amount for a kid. The amazing thing was that, with the Acorn and the BBC Micro and many of those other early machines, you had everything you needed to learn how to program anything you could imagine right from the get-go.”

Braben’s talent extended to entrepreneurship. When he was 17, he wrote to a games publisher saying that he believed his games to be as good as theirs. A week later three men in suits showed up at his parents’ house; he was worried about taking his computer to their office on public transport, so they offered to come to him. Astonished at what the boy had managed to achieve with the hardware, they offered him a job on the spot. Braben pretended to mull the offer over for a few days, before refusing the position in favour of studying natural sciences at Cambridge.

It was the memory of these formative experiences to which he returned when he was cold-shouldered by the government. He called Lang, by then an entrepreneur in Cambridge, who said the university there was also struggling to attract computer science applicants. The pair discussed ways to get the subject taught in the classroom, and a plan formed. If they could find a way to teach programming outside the school system, perhaps the schools would follow.

Initially Lang and Braben considered designing a programming course using bespoke software. The problem was that schools and libraries around the country used different versions of Windows. Finding a one-size-fits-all solution for students to compile and run their games proved impossible. Instead, Lang suggested the idea of a budget computer, one that would allow children the freedom to tinker, customise and break things, and then restore it all at the touch of a button.

“It struck me that probably the best way these days for a young student to learn how to program is to buy an old BBC Micro off eBay,” Braben said. “That’s a bit of an admission, isn’t it? It’s also fundamentally capped by the number of BBC Micros that are still working in the world, so it’s not a general solution. But it’s such a good way of learning. It encourages you to experiment. Rebooting a PC can easily damage the software. With the BBC Micro you could do all kinds of outrageous things and then just reset it. The hardware was tough, too.”

It is possible to destroy a BBC Micro, Braben said, but very difficult. So the idea was to build a computer that reflected the Micro’s sturdiness and simplicity: a machine for all-comers, practically indestructible in form, and universal in function. In 2003 Braben, Lang and four of their friends – Pete Lomas, Alan Mycroft, Robert Mullins and Eben Upton (“slightly eccentric guys from Cambridge”, as Braben puts it) – met at a computer lab at the university and, from a shopping list of components, began to price up a microcomputer.

“We knew how cheap components were becoming because of the rise of mobile phones,” Braben said. “But when we came up with the final price we couldn’t believe how low it was.” The group estimated it would be possible to build a home computer with a single USB port and an HDMI (high-definition multimedia interface) connector – which enables the device to be connected to a compatible screen – for £15.

***

The six men named their invention the Raspberry Pi. “Fruit seemed good; Raspberry particularly good because it’s a bit of a thumb-nose at the convention. We added Pi to make it sound a bit mathematical,” said Braben. They formed the Raspberry Pi Foundation, a charity aiming to “promote the study of computer science and related topics . . . and put the fun back into learning computing”. It was almost a decade before their vision for the micro-budget microcomputer would become a reality.

“We decided that we needed support from a large organisation,” Braben said. “We started speaking to the BBC and spent a few years discussing the project with them as potential partners.” The group even offered to give the corporation the software design free of charge. But the strong initial interest led to a series of interminable meetings, where nobody from the BBC seemed willing to be the one to make the final decision.

“The final meeting I had with the BBC really annoyed me,” he said. “They told me that I needed to seek sign-off from a group that had already signed off on the project, simply because there had been a reorganisation in that group. We were going around in circles. That’s when I realised it wasn’t going to work.”

Immediately after the meeting, a furious Braben strode to the White City office of Rory Cellan-Jones, the BBC’s technology correspondent. Cellan-Jones knew of Braben from reading Francis Spufford’s 2003 book, Backroom Boys, a biography of various British inventors in which Braben and Bell featured prominently.

“When Braben contacted me under the illusion that I was somebody at the BBC with some semblance of power, rather than an infantryman, I was delighted,” Cellan-Jones told me. Yet he was at a loss as to what he could do to help the inventor standing in front of him with a Raspberry Pi in his hand. “I thought to myself: well, there’s nothing I can do with this. I can’t get a crew to film something like that.”

Sensing Braben’s despair, Cellan-Jones suggested that he film a short video on his phone there and then; he would post it to his BBC blog and announce the Raspberry Pi to the world. Doing so might, Cellan-Jones reasoned, force the BBC’s hand. At the very least it would help to gauge public interest in the device.

In a nearby corridor, Braben held the device up to the camera and explained what it was and why it might be important. “It was short and simple,” he recalled. At lunchtime on 5 May 2011, Cellan-Jones posted the video and a story about the computer to his blog. “It’s not much bigger than your finger, it looks like a leftover from an electronics factory, but its makers believe their £15 computer could help a new generation discover programming,” he wrote.

The story went viral, receiving a quarter of a million hits that day. “I was surprised and delighted,” Cellan-Jones said. “It was a great idea from the start. But I encounter lots of great ideas. You get to the stage where you start to believe that nothing will work. Then, every now and again, someone turns up with a rocket ship to Mars.”

Despite the interest, the BBC, as Braben puts it, kept coming up with reasons why the corporation shouldn’t back it. So the six members of the foundation decided to fund the first 10,000 units out of their own pockets. On 29 February 2012, at 5am, Braben began a day of media appearances, first on BBC Worldwide, then on Radio 4’s Today programme. An hour later, the website where the public could order one of the first Raspberry Pis went live. Within five seconds it had sold out.

Unable to keep up with the demand, the website sold far more units than the team had components for. “It went very well indeed,” Braben said.

***

Since then, the rise of Raspberry Pi has been inexorable, with more than seven million units sold. This fully customisable and programmable computer, no larger than a credit card and only slightly thicker, can be used for everything from controlling the lights in your garage to learning how to become a software developer. In Syria it has been used to create local radio transmitters, able to broadcast messages to towns within a range of up to six kilometres, disseminating information about nearby skirmishes and essential supplies.
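To give a flavour of the first of those uses, switching something physical from a Pi takes only a few lines. The sketch below uses the RPi.GPIO library that comes with the board’s standard operating system to toggle one general-purpose pin, which might drive a relay wired to a garage light; the pin number, timings and wiring are illustrative assumptions, and a real installation needs a relay or driver rated for the load.

```python
# Minimal sketch: blink whatever is wired (via a suitable relay or driver)
# to GPIO pin 18 of a Raspberry Pi. Pin number and timings are illustrative.
import time

import RPi.GPIO as GPIO  # available on the Pi's standard OS

LIGHT_PIN = 18           # BCM numbering; assumed wiring, adjust as needed

GPIO.setmode(GPIO.BCM)
GPIO.setup(LIGHT_PIN, GPIO.OUT)

try:
    for _ in range(5):
        GPIO.output(LIGHT_PIN, GPIO.HIGH)   # light on
        time.sleep(1)
        GPIO.output(LIGHT_PIN, GPIO.LOW)    # light off
        time.sleep(1)
finally:
    GPIO.cleanup()       # release the pin for other programs
```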

The Pi computer has been used to take weather balloons to the edge of space – it draws just enough current from its four AA batteries to stop itself from freezing – enabling schoolchildren to send teddy bears into the stratosphere to take photographs of the curvature of the planet. It can even broadcast its position by GPS, enabling those children to locate the device when it floats back to Earth. It doesn’t matter too much if it is lost, because it costs as little as £5 in its most basic form. This year, the foundation gave away a basic Raspberry Pi on the front of the MagPi, an affiliated magazine that teaches readers how, among other things, to program a football game from scratch.

Hundreds of thousands of young people have attended the foundation’s educational programmes. In 2015 Raspberry Pi entered into a collaboration with Code Club, an organisation created as a response to “the collective failure to prepare young people for life and work in a world that is shaped by digital technologies”. Code Club now runs more than 3,800 clubs in the UK and over 1,000 more in 70 other countries. Staffed by volunteers, the clubs provide nine-to-11-year-olds with the opportunity to make things using computers. Roughly 44,000 young people regularly attend Code Clubs in the UK alone; some 40 per cent of these youngsters are girls.

Braben’s plan to get British schoolchildren learning how to program has been even more fruitful. Since Raspberry Pi’s launch, applications for computer science degrees have increased by a factor of six. Data from Cambridge Assessment, the exams and research group, shows a significant increase in numbers of children choosing to study ICT at GCSE level, with a 17 per cent year-on-year rise in 2015.

There have been other beneficial side effects. Thanks to the buzz generated by the Raspberry Pi, and pressure from the foundation as well as Google, Microsoft and others, the government has put computer science back on the national curriculum.

“We’re seeing a huge growth in engagement with computer science in the UK, and Raspberry Pi has been a big part of that movement,” said Philip Colligan, the chief executive of the Raspberry Pi Foundation. “It came along at just the right moment and provided a physical manifestation of the idea that kids should be learning how to make things with computers, not just how to consume.”

Cellan-Jones agrees that the timing of the device’s launch was perfect. “It was certainly part of a wide movement to change how ICT was taught in schools, but of all those efforts I think it played the most important part. By having a physical object it made it tangible.”

Braben believes that the Raspberry Pi and its many imitators are dispelling the mystique that has grown around technology, driven in part, he says, by Apple’s closed systems. It is almost impossible, for example, to remove the cover of an iPhone to see how it works.

“When I was growing up, if my hi-fi was buzzing I’d take the lid off and maybe put some Blu-Tack in to stop the buzzing,” he said. “At some point, this collective fear crept in.”

For Braben, who has two stepchildren, now going on 13 and 18, it’s important for children not to be afraid of the technology on which they rely. “You only need one person in ten to actually study computer science. But for everyone else, having some understanding about, say, what goes on in your phone is incredibly helpful.

“In so many walks of life, whether you’re a builder using power tools or an accountant using accounting software, you are forever being presented with and relying upon technology. Understanding a little about what’s going on, rather than being afraid and embarrassed, is crucial.”

So, too, is having fun along the way. Braben has since returned to the stars of his youth by way of Elite: Dangerous. This sequel to the game that made him his fortune was released in late 2015. Rather than turn to algorithms to scatter the universe with stars and planets, this time the Frontier team re-created our own galaxy.

The digital sky for the revamped game includes every known star in our own galaxy, their positions drawn from the numerous publicly available sky maps, and each can be visited in the game by spaceship. Altogether, the game comprises 400 billion stars, their planetary systems and moons all, like the insides of the computers on which it runs, waiting to be explored.

This article first appeared in the 02 February 2017 issue of the New Statesman, American carnage