
Your brain on pseudoscience: the rise of popular neurobollocks

The “neuroscience” shelves in bookshops are groaning. But are the works of authors such as Malcolm Gladwell and Jonah Lehrer just self-help books dressed up in a lab coat?

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

Shades of grey

The human brain, it is said, is the most complex object in the known universe. That a part of it “lights up” on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions. Nor do we have the faintest clue about the biggest mystery of all – how does a lump of wet grey matter produce the conscious experience you are having right now, reading this paragraph? How come the brain gives rise to the mind? No one knows.

So, instead, here is a recipe for writing a hit popular brain book. You start each chapter with a pat anecdote about an individual’s professional or entrepreneurial success, or narrow escape from peril. You then mine the neuroscientific research for an apparently relevant specific result and narrate the experiment, perhaps interviewing the scientist involved and describing his hair. You then climax in a fit of premature extrapolation, inferring from the scientific result a calming bromide about what it is to function optimally as a modern human being. Voilà, a laboratory-sanctioned Big Idea in digestible narrative form. This is what the psychologist Christopher Chabris has named the “story-study-lesson” model, perhaps first perfected by one Malcolm Gladwell. A series of these threesomes may be packaged into a book, and then resold again and again as a stand-up act on the wonderfully lucrative corporate lecture circuit.

Such is the rigid formula of Imagine: How Creativity Works, published in March this year by the American writer Jonah Lehrer. The book is a shatteringly glib mishmash of magazine yarn, bizarrely incompetent literary criticism, inspiring business stories about mops and dolls and zany overinterpretation of research findings in neuroscience and psychology. Lehrer responded to my hostile review of the book by claiming that I thought the science he was writing about was “useless”, but such garbage needs to be denounced precisely in defence of the achievements of science. (In a sense, as Paul Fletcher points out, such books are “anti-science, given that science is supposed to be our protection against believing whatever we find most convenient, comforting or compelling”.) More recently, Lehrer admitted fabricating quotes by Bob Dylan in Imagine, which was hastily withdrawn from sale, and he resigned from his post at the New Yorker. To invent things supposedly said by the most obsessively studied popular artist of our age is a surprising gambit. Perhaps Lehrer misunderstood his own advice about creativity.

Mastering one’s own brain is also the key to survival in a dog-eat-dog corporate world, as promised by the cognitive scientist Art Markman’s Smart Thinking: How to Think Big, Innovate and Outperform Your Rivals. Meanwhile, the field (or cult) of “neurolinguistic programming” (NLP) sells techniques not only of self-overcoming but of domination over others. (According to a recent NLP handbook, you can “create virtually any and all states” in other people by using “embedded commands”.) The employee using such arcane neurowisdom will get promoted over the heads of his colleagues; the executive will discover expert-sanctioned ways to render his underlings more docile and productive, harnessing “creativity” for profit.

Waterstones now even has a display section labelled “Smart Thinking”, stocked with pop brain tracts. The true function of such books, of course, is to free readers from the responsibility of thinking for themselves. This is made eerily explicit in the psychologist Jonathan Haidt’s The Righteous Mind, published last March, which claims to show that “moral knowledge” is best obtained through “intuition” (arising from unconscious brain processing) rather than by explicit reasoning. “Anyone who values truth should stop worshipping reason,” Haidt enthuses, in a perverse manifesto for autolobotomy. I made an Olympian effort to take his advice seriously, and found myself rejecting the reasoning of his entire book.

Modern neuro-self-help pictures the brain as a kind of recalcitrant Windows PC. You know there is obscure stuff going on under the hood, so you tinker delicately with what you can see to try to coax it into working the way you want. In an earlier age, thinkers pictured the brain as a marvellously subtle clockwork mechanism, that being the cutting-edge high technology of the day. Our own brain-as-computer metaphor has been around for decades: there is the “hardware”, made up of different physical parts (the brain), and the “software”, processing routines that use different neuronal “circuits”. Updating things a bit for the kids, the evolutionary psychologist Robert Kurzban, in Why Everyone (Else) Is a Hypocrite, explains that the brain is like an iPhone running a bunch of different apps.

Such metaphors are apt to a degree, as long as you remember to get them the right way round. (Gladwell, in Blink – whose motivational self-help slogan is that “we can control rapid cognition” – burblingly describes the fusiform gyrus as “an incredibly sophisticated piece of brain software”, though the fusiform gyrus is a physical area of the brain, and so analogous to “hardware” not “software”.) But these writers tend to reach for just one functional story about a brain subsystem – the story that fits with their Big Idea – while ignoring other roles the same system might play. This can lead to a comical inconsistency across different books, and even within the oeuvre of a single author.

Is dopamine “the molecule of intuition”, as Jonah Lehrer risibly suggested in The Decisive Moment (2009), or is it the basis of “the neural highway that’s responsible for generating the pleasurable emotions”, as he wrote in Imagine? (Meanwhile, Susan Cain’s Quiet: the Power of Introverts in a World That Can’t Stop Talking calls dopamine the “reward chemical” and postulates that extroverts are more responsive to it.) Other recurring stars of the pop literature are the hormone oxytocin (the “love chemical”) and mirror neurons, which allegedly explain empathy. Jonathan Haidt tells the weirdly unexplanatory micro-story that, in one experiment, “The subjects used their mirror neurons, empathised, and felt the other’s pain.” If I tell you to use your mirror neurons, do you know what to do? Alternatively, can you do as Lehrer advises and “listen to” your prefrontal cortex? Self-help can be a tricky business.

Cherry-picking

Distortion of what and how much we know is bound to occur, Paul Fletcher points out, if the literature is cherry-picked.

“Having outlined your theory,” he says, “you can then cite a finding from a neuroimaging study identifying, for example, activity in a brain region such as the insula . . . You then select from among the many theories of insula function, choosing the one that best fits with your overall hypothesis, but neglecting to mention that nobody really knows what the insula does or that there are many ideas about its possible function.”

But the great movie-monster of nearly all the pop brain literature is another region: the amygdala. It is routinely described as the “ancient” or “primitive” brain, scarily atavistic. There is strong evidence for the amygdala’s role in fear, but then fear is one of the most heavily studied emotions; popularisers downplay or ignore the amygdala’s associations with the cuddlier emotions and memory. The implicit picture is of our uneasy coexistence with a beast inside the head, which needs to be controlled if we are to be happy, or at least liberal. (In The Republican Brain, Mooney suggests that “conservatives and authoritarians” might be the nasty way they are because they have a “more active amygdala”.) René Descartes located the soul in the pineal gland; the moral of modern pop neuroscience is that original sin is physical – a bestial, demonic proto-brain lurking at the heart of darkness within our own skulls. It’s an angry ghost in the machine.

Indeed, despite their technical paraphernalia of neurotransmitters and anterior temporal gyruses, modern pop brain books are offering a spiritual topography. Such is the seductive appeal of fMRI brain scans, their splashes of red, yellow and green lighting up what looks like a black intracranial vacuum. In mass culture, the fMRI scan (usually merged from several individuals) has become a secular icon, the converse of a Hubble Space Telescope image. The latter shows us awe-inspiring vistas of distant nebulae, as though painstakingly airbrushed by a sci-fi book-jacket artist; the former peers the other way, into psychedelic inner space. And the pictures, like religious icons, inspire uncritical devotion: a 2008 study, Fletcher notes, showed that “people – even neuroscience undergrads – are more likely to believe a brain scan than a bar graph”.

In The Invisible Gorilla, Christopher Chabris and his collaborator Daniel Simons advise readers to be wary of such “brain porn”, but popular magazines, science websites and books are frenzied consumers and hypers of these scans. “This is your brain on music”, announces a caption to a set of fMRI images, and we are invited to conclude that we now understand more about the experience of listening to music. The “This is your brain on” meme, it seems, is indefinitely extensible: Google results offer “This is your brain on poker”, “This is your brain on metaphor”, “This is your brain on diet soda”, “This is your brain on God” and so on, ad nauseam. I hereby volunteer to submit to a functional magnetic-resonance imaging scan while reading a stack of pop neuroscience volumes, for an illuminating series of pictures entitled This Is Your Brain on Stupid Books About Your Brain.

None of the foregoing should be taken to imply that fMRI and other brain-investigation techniques are useless: there is beautiful and amazing science in how they work and what well-designed experiments can teach us. “One of my favourites,” Fletcher says, “is the observation that one can take measures of brain activity (either using fMRI or EEG) while someone is learning . . . a list of words, and that activity can actually predict whether particular words will be remembered when the person is tested later (even the next day). This to me demonstrates something important – that observing activity in the brain can tell us something about how somebody is processing stimuli in ways that the person themselves is unable to report. With measures like that, we can begin to see how valuable it is to measure brain activity – it is giving us information that would otherwise be hidden from us.”

In this light, one might humbly venture a preliminary diagnosis of the pop brain hacks’ chronic intellectual error. It is that they misleadingly assume we always know how to interpret such “hidden” information, and that it is always more reliably meaningful than what lies in plain view. The hucksters of neuroscientism are the conspiracy theorists of the human animal, the 9/11 Truthers of the life of the mind.

Steven Poole is the author of the forthcoming book “You Aren’t What You Eat”, which will be published by Union Books in October.

This article was updated on 18 September 2012.

This article first appeared in the 10 September 2012 issue of the New Statesman, Autumn politics special


What Marx got right

...and what he got wrong.

1. You’re probably a capitalist – among other things

Are you a capitalist? The first question to ask is: do you own shares? Even if you don’t own any directly (about half of Americans do but the proportion is far lower in most other countries) you may have a pension that is at least partly invested in the stock market; or you’ll have savings in a bank.

So you have some financial wealth: that is, you own capital. Equally, you are probably also a worker, or are dependent directly or indirectly on a worker’s salary; and you’re a consumer. Unless you live in an autonomous, self-sufficient commune – very unusual – you are likely to be a full participant in the capitalist system.

We interact with capitalism in multiple ways, by no means all economic. And this accounts for the conflicted relationship that most of us (including me) have with capitalism. Typically, we neither love it nor hate it, but we definitely live it.

2. Property rights are fundamental to capitalism . . . but they are not absolute

If owning something means having the right to do what you want with it, property rights are rarely unconstrained. I am free to buy any car I want – so long as it meets European pollution standards and is legally insured; and I can drive it anywhere I want, at least on public roads, as long as I have a driver’s licence and keep to the speed limit. If I no longer want the car, I can’t just dump it: I have to dispose of it in an approved manner. It’s mine, not yours or the state’s, and the state will protect my rights over it. But – generally for good reason – how I can use it is quite tightly constrained.

This web of rules and constraints, which both defines and restricts property rights, is characteristic of a complex economy and society. Most capitalist societies attempt to resolve these tensions in part by imposing restrictions, constitutional or political, on arbitrary or confiscatory actions by governments that “interfere” with property rights. But the idea that property rights are absolute is not philosophically or practically coherent in a modern society.

3. What Marx got right about capitalism

Marx had two fundamental insights. The first was the importance of economic forces in shaping human society. For Marx, it was the “mode of production” – how labour and capital were combined, and under what rules – that explained more or less everything about society, from politics to culture. So, as modes of production change, so too does society. And he correctly concluded that industrialisation and capitalism would lead to profound changes in the nature of society, affecting everything from the political system to morality.

The second insight was the dynamic nature of capitalism in its own right. Marx understood that capitalism could not be static: given the pursuit of profit in a competitive economy, there would be constant pressure to increase the capital stock and improve productivity. This in turn would lead to labour-saving, or capital-intensive, technological change.

Putting these two insights together gives a picture of capitalism as a radical force. Such are its own internal dynamics that the economy is constantly evolving, and this in turn results in changes in the wider society.

4. And what he got wrong . . .

Though Marx was correct that competition would lead the owners of capital to invest in productivity-enhancing and labour-saving machinery, he was wrong that this would lead to wages being driven down to subsistence level, as had largely been the case under feudalism. Classical economics, which argued that new, higher-productivity jobs would emerge, and that workers would see their wages rise more or less in line with productivity, got this one right. And so, in turn, Marx’s most important prediction – that an inevitable conflict between workers and capitalists would lead ultimately to the victory of the former and the end of capitalism – was wrong.

Marx was right that as the number of industrial workers rose, they would demand their share of the wealth; and that, in contrast to the situation under feudalism, their number and geographical concentration in factories and cities would make it impossible to deny these demands indefinitely. But thanks to increased productivity, workers’ demands in most advanced capitalist economies could be satisfied without the system collapsing. So far, it seems that increased productivity, increased wages and increased consumption go hand in hand, not only in individual countries but worldwide.

5. All societies are unequal. But some are more unequal than others

In the late 19th and early 20th centuries, an increasing proportion of an economy’s output was captured by a small class of capitalists who owned and controlled the means of production. Not only did this trend stop in the 20th century, it was sharply reversed. Inherited fortunes, often dating back to the pre-industrial era, were eroded by taxes and inflation, and some were destroyed by the Great Depression. Most of all, after the Second World War the welfare state redistributed income and wealth within the framework of a capitalist economy.

Inequality rose again after the mid-1970s. Under Margaret Thatcher and Ronald Reagan, the welfare state was cut back. Tax and social security systems became less progressive. Deregulation, the decline of heavy industry and reduction of trade union power increased the wage differential between workers. Globally the chief story of the past quarter-century has been the rise of the “middle class”: people in emerging economies who have incomes of up to $5,000 a year. But at the same time lower-income groups in richer countries have done badly.

Should we now worry about inequality within countries, or within the world as a whole? And how much does an increasing concentration of income and wealth among a small number of people – and the consequent distortions of the political system – matter when set against the rapid income growth for large numbers of people in the emerging economies?

Growing inequality is not an inevitable consequence of capitalism. But, unchecked, it could do severe economic damage. The question is whether our political systems, national and global, are up to the challenge.

6. China’s road to capitalism is unique

The day after Margaret Thatcher died, I said on Radio 4’s Today programme: “In 1979, a quarter of a century ago, a politician came to power with a radical agenda of market-oriented reform; a plan to reduce state control and release the country’s pent-up economic dynamism. That changed the world, and we’re still feeling the impact. His name, of course, was Deng Xiaoping.”

The transition from state to market in China kick-started the move towards truly globalised capitalism. But the Chinese road to capitalism has been unique. First agriculture was liberalised, then entrepreneurs were allowed to set up small businesses, while at the same time state-owned enterprises reduced their workforces; yet there has been no free-for-all, either for labour or for capital. The movement of workers from rural to urban areas, and from large, unproductive, state-owned enterprises to more productive private businesses, though vast, has been controlled. Access to capital still remains largely under state control. Moreover, though its programme is not exactly “Keynesian”, China has used all the tools of macroeconomic management to keep growth high and relatively stable.

That means China is still far from a “normal” capitalist economy. The two main engines of growth have been investment and the movement of labour from the countryside to the cities. This in itself was enough, because China had so much catching-up to do. However, if the Chinese are to close the huge gap between themselves and the advanced economies, more growth will need to come from innovation and technological progress. No one doubts that China has the human resources to deliver this, but its system will have to change.

7. How much is enough?

The human instinct to improve our material position is deeply rooted: control over resources, especially food and shelter, made early human beings more able to reproduce. That is intrinsic to capitalism; the desire to acquire income and wealth motivates individuals to work, save, invent and invest. As Adam Smith showed, this benefits us all. But if we can produce more than enough for everybody, what will motivate people? Growth would stop. Not that this would necessarily be a bad thing: yet our economy and society would be very different.

Although we are at least twice as rich as we were half a century ago, the urge to consume more seems no less strong. Relative incomes matter. We compare ourselves not to our impoverished ancestors but to other people in similar situations: we strive to “keep up with the Joneses”. The Daily Telegraph once described a London couple earning £190,000 per year (in the top 0.1 per cent of world income) as follows: “The pair are worried about becoming financially broken as the sheer cost of middle-class life in London means they are stretched to the brink.” Talk about First World problems.

Is there any limit? Those who don’t like the excesses of consumerism might hope that as our material needs are satisfied, we will worry less about keeping up with the Joneses and more about our satisfaction and enjoyment of non-material things. It is equally possible, of course, that we’ll just spend more time keeping up with the Kardashians instead . . .

8. No more boom and bust

Are financial crises and their economic consequences part of the natural (capitalist) order of things? Politicians and economists prefer to think otherwise. No longer does anyone believe that “light-touch” regulation of the banking sector is enough. New rules have been introduced, designed to restrict leverage and ensure that failure in one or two financial institutions does not lead to systemic failure. Many would prefer a more wholesale approach to reining in the financial system; this would have gained the approval of Keynes, who thought that while finance was necessary, its role in capitalism should be strictly limited.

But maybe there is a more fundamental problem: that recurrent crises are baked into the system. The “financial instability” hypothesis says that the more governments and regulators stabilise the system, the more this will breed overconfidence, leading to more debt and higher leverage. And sooner or later the music stops. If that is the case, then financial capitalism plus human nature equals inevitable financial crises; and we should make sure that we have better contingency plans next time round.

9. Will robots take our jobs?

With increasing mechanisation (from factories to supermarket checkouts) and computerisation (from call centres to tax returns), is it becoming difficult for human beings to make or produce anything at less cost than a machine can?

Not yet – more Britons have jobs than at any other point in history. That we can produce more food and manufactured products with fewer people means that we are richer overall, leaving us to do other things, from economic research to performance art to professional football.

However, the big worry is that automation could shift the balance of power between capital and labour in favour of the former. Workers would still work; but many or most would be in relatively low-value, peripheral jobs, not central to the functioning of the economy and not particularly well paid. Either the distribution of income and wealth would widen further, or society would rely more on welfare payments and charity to reduce unacceptable disparities between the top and the bottom.

That is a dismal prospect. Yet these broader economic forces pushing against the interests of workers will not, on their own, determine the course of history. The Luddites were doomed to fail; but their successors – trade unionists who sought to improve working conditions and Chartists who demanded the vote so that they could restructure the economy and the state – mostly succeeded. The test will be whether our political and social institutions are up to the challenge.

10. What’s the alternative?

There is no viable economic alternative to capitalism at the moment but that does not mean one won’t emerge. It is economics that determines the nature of our society, and we are at the beginning of a profound set of economic changes, based on three critical developments.

Physical human input into production will become increasingly rare as robots take over. Thanks to advances in computing power and artificial intelligence, much of the analytic work that we now do in the workplace will be carried out by machines. And an increasing ability to manipulate our own genes will extend our lifespan and allow us to determine our offspring’s characteristics.

Control over “software” – information, data, and how it is stored, processed and manipulated – will be more important than control over physical capital, buildings and machines. The defining characteristic of the economy and society will be how that software is produced, owned and commanded: by the state, by individuals, by corporations, or in some way as yet undefined.

These developments will allow us, if we choose, to end poverty and expand our horizons, both materially and intellectually. But they could also lead to growing inequality, with the levers of the new economy controlled by a corporate and moneyed elite. As an optimist, I hope for the former. Yet just as it wasn’t the “free market” or individual capitalists who freed the slaves, gave votes to women and created the welfare state, it will be the collective efforts of us all that will enable humanity to turn economic advances into social progress. 

Jonathan Portes’s most recent book is “50 Ideas You Really Need to Know: Capitalism” (Quercus)

Jonathan Portes is a senior fellow at The UK in a Changing Europe and Professor of Economics and Public Policy, King’s College London.

This article first appeared in the 22 June 2017 issue of the New Statesman, The zombie PM
