
Living life by the book: why reading isn't always good for you

Somewhere along the line, an orthodoxy hardened: cigarettes will kill you and Bon Jovi will give you a migraine, but reading – the ideal diet being Shakespeare and 19th-century novels, plus the odd modernist – will make you healthier, stronger, kinder. But is that true?

The Unexpected Professor: an Oxford Life in Books
John Carey
Faber & Faber, 353pp, £18.99

Reading and the Reader
Philip Davis
Oxford University Press, 147pp, £12.99

Why I Read: the Serious Pleasure of Books
Wendy Lesser
Farrar, Straus & Giroux, 226pp, £17.99

The Road to Middlemarch: My Life With George Eliot
Rebecca Mead
Granta Books, 296pp, £16.99

There is a series of postcards by the Dutch cartoonist Joost Swarte that applies the alarmist tone usually reserved for smoking to scenes of people reading. A sunbathing woman is going purple and the caption, set in black on white with a black border, says: “Reading causes ageing of the skin.” In other scenarios a man ignores the naked woman lying beside him (“Reading may reduce the blood flow and cause impotence”) and a mother pours huge quantities of salt into a meal (“Reading seriously harms you and others around you”). What makes the cartoons so flat and pointless, for all Swarte’s winsome draughtsmanship, is their apparent belief that the benevolence of reading is a stable fact, ripe for comic inversion, rather than a social attitude that we are free to dispute. It is the same ostensive irony that underpins George Orwell’s exercise in amateur accountancy, “Books v Cigarettes”.

Still, you can see where Swarte’s confusion came from. Reading has the best PR team in the business. Or perhaps it’s just that devoted readers have better access to the language of advocacy and celebration than chain-smokers or, say, power-ballad enthusiasts. Either way, somewhere along the line, an orthodoxy hardened: cigarettes will kill you and Bon Jovi will give you a migraine, but reading – the ideal diet being Shakespeare and 19th-century novels, plus the odd modernist – will make you healthier, stronger, kinder. With the foundation of Sex and Love Addicts Anonymous in 1976, reading became the last thing you can never do too often. Even the much-made argument that works of literature – Northanger Abbey, Madame Bovary – insist on the dangers of literature redounds to literature’s benefit, and provides yet another reason for reading.

But a serious, non-circular opposition case has been made, if not against reading, then against the idea that the western canon is morally improving or good for the soul. Shakespeare, most canonical of all, became a magnet for 1980s iconoclasts, who disparaged him as an imperial stooge (post-colonial theory), a tool of national power (cultural materialism) and a product of the same social/ideological energies as such putatively non-literary texts as James I’s Counterblaste to Tobacco (new historicism). Conducted for the most part in postgraduate seminar rooms and the pages of academic texts (the collection Political Shakespeare being perhaps the best-known English example), the debate was finally settled in the public sphere, where the cultural warriors, keen to alter reputations and revise the agenda, were greeted with indifference or derision.

At the turn of the 21st century, with the debate dying off and the future uncertain, Harold Bloom, in How to Read and Why, and Frank Kermode, in Shakespeare’s Language, tried to reassert the old agenda by teaching lessons that had been standard in their youth but had faded amid the chatter.

The project has since split in two, with reading primers teaching us “how” to read and reading memoirs providing testimony as to “why”, both in positive rather than implicitly combative terms. There is no longer any need to write “in defence of” reading, or, if there is, the defence is against forces such as “distraction” and “technology” that are indifferent to reading literature, not actively ranged against it. Even those memoirs that hinge on grisly challenges – a book a day (Tolstoy and the Purple Chair) or all 51 volumes of the Harvard Classics (The Whole Five Feet) – make no reference to “book addiction” or “hyper-literacy”. If a downside emerges, it does so between the lines.

In the penultimate sentence of his new book, John Carey says that reading “is freedom”, yet he provides more than enough evidence to the contrary. The Unexpected Professor is an autobiography (postwar austerity, grammar school, national service, Oxford, Oxford, Oxford) that doubles as a “selective and opinionated” history of English literature, and a glories-of-reading memoir that doubles as an anti-reading memoir. Carey notes that people like him often prefer reading things to seeing them – typically, his example comes not from his own life but from a poem by Wordsworth – and reflects: “So living your truest life in books may deaden the real world for you as well as enliven it.” But how, judging by this account, does reading enliven things?

Carey confesses to feeling guilty that as an undergraduate he could read all day, while “out in the real world” (there it is again) people were “slogging away”. But it doesn’t seem all that different from his life in the non-real world: “I secured a copy from Hammersmith Public Library . . . and slogged through all sixteen thousand lines of it. It was unspeakably boring” (Layamon’s Brut). “I slogged through it of course, because my aim was to learn, not to have fun” (Johnson’s Lives of the Poets). Even Wordsworth, who showed that reading can spoil you for experience, is read “as a kind of atonement”, in a “microscopically printed” edition that proves “not exactly an On-First-Looking-into-Chapman’s-Homer experience”. Once he had squinted his way through English literature, Carey was free to gorge on European novels, yet even that sounds like a mixed experience. Dostoevsky he found “hard going” and though there were other writers he enjoyed a good deal more – Zola, Tolstoy, Thomas Mann – he still “forced myself to make notes on the endpapers”. If there’s any enlivening going on, it’s not being enacted on life by literature but the other way around: playing cricket at other schools “made me understand better that bit in the Book of Numbers where the Israelites send out spies to size up the opposition . . .”

In What Good Are the Arts?, Carey wrote that the non-literary arts are “locked in inarticulacy”. But literature, in his version, is locked in articulacy, forever making pronouncements and cases and claims. His lifetime of reading, as recounted in this book, has given him nothing, other than the occasional ringing phrase, that he could not have found in some form of pamphlet. In Carey’s account, reading provides an opportunity to engage with writers who share your convictions and to reject the ones who don’t: Milton’s anti-royalism “put me on his side”, “what I liked most fiercely was Jonson’s exposure of rampaging luxury”, “What The Faerie Queene does is mythicise political power, attributing supernatural status to a dictatorial regime, and this makes it, at heart, crass and false”. A telling example of Carey’s picture of literature-as-logic comes when he quotes a well-known passage from George Eliot’s novel Middlemarch, a reflection on “that element of tragedy which lies in the very fact of frequency”:

If we had a keen vision and feeling of all ordinary human life, it would be like hearing the grass grow and the squirrel’s heart beat, and we should die of that roar which lies on the other side of silence. As it is, the quickest of us walk about well wadded with stupidity.

Although this is the passage Carey uses to support his view of Eliot as “the most intelligent of English novelists”, all he says is that she “is unusual in using poetry in the service of thinking . . . The tenderness of the heartbeat and the shock of the roar would be marvellous simply as a poetic moment. But it is also part of an argument.”

It comes down to a vision of language and how it relates to ideas. Carey writes that D H Lawrence “tries to make us see that, if he could, he’d communicate in some other way, freed from the limitations of thought”. But for Philip Davis, in his treatise-like Reading and the Reader, literature allows just such freedom. According to Davis, Eliot is not putting poetry to the service of “thinking”, in Carey’s op-ed sense of the word, but doing the kind of not-quite-thinking enabled by literary language. “Try counting the thoughts in a powerful paragraph in a realist novel,” he writes, after quoting the same passage from Middlemarch: “they are no longer separate units.” Earlier in the book he asserts that, “at its deepest”, an idea possesses more than “just a statable content”.

Carey is blithely confident about the meaning of literary texts but in the past has dismissed efforts to bring aesthetic response into the realm of scientific knowledge. Davis, by contrast, surrenders to literature’s indeterminacy but believes that its impact shows up on a brain scan. He quotes the example of cognitive scientists, his collaborators at the centre for reading research that he runs at the University of Liverpool, who have demonstrated “how a dramatically compressed Shakespearean coinage such as ‘this old man godded me’ excites the brain in a way that ‘this old man deified me’ . . . does not”. Davis claims that science shows “how” a Shakespearean coinage does this – but how literature achieves the effect is exactly what resists not just scientific decoding, but verbal description. “I cannot just talk about reading,” he writes, “when that is precisely not what I shall claim to be a literary way of thinking” (as if a vet used only man-made tools).

One result of Davis’s aversion to the general is a certain overexuberance with regard to quotations. He is constantly offering “a different instance”. When he writes “I can think of a hundred examples . . .” you are justified in fearing he will list them. Shakespeare is likened to “existential physics” and “process philosophy”, and a Shakespearean allusion renders a nonsensical proposition more nonsensical still: “In the readiness of all, the words themselves seem ready when they do come.” Equally forbidding though no more instructive is the sentence that begins: “It is fashionable to talk, after Csikszentmihalyi, of being ‘in the flow’ . . .” Though Davis has none of Carey’s semi-conscious misgivings about reading, he unwittingly exposes one of its greatest dangers. Biron, attacking study at the start of Love’s Labour’s Lost, claims that “light seeking light doth light of light beguile” (in which “light” means respectively the mind, enlightenment, sight and eyes). It might be said that Davis has read too much to write a readable book about reading.

However, Davis’s idea of what literature uniquely offers to the reader is a powerful one, and is shared to some extent by Wendy Lesser, the essayist and literary editor, in her warmer but no less erudite or sophisticated Why I Read, a tribute to what she calls “the serious pleasure of books”. Just as Davis likes writing in which language is used “as a sign of approximation to point to more than itself”, so Lesser admires writers who meet our desire for order “only halfway” (Eça de Queiroz) or give us “only a small part of what is really there” (Penelope Fitzgerald). But Lesser differs from Davis and also from Carey in taking a degree of responsibility: literature is grounded in the capricious reader, not in the permanent present of the text. Carey first read War and Peace in the 1960s but if his feelings about it have changed, he doesn’t tell us, whereas Lesser explains how it overtook Anna Karenina in her affections. And the reader’s shimmying perspective – the reader as human being – is treated as a topic in its own right by the journalist Rebecca Mead in The Road to Middlemarch, in which she traces how a novel that once gratified her teenage “aspirations to maturity and learnedness” has become “a melancholy dissection of the resignations that attend middle age, the paths untrodden and the choices unmade”.

Lesser and Mead treat the reader to a more attractive vision of reading, no less valuable for being far less dutiful, no less “salutary” for accommodating the kinds of pleasures that Lesser describes as “cellulose-based”. Carey’s distinctions between learning and having fun, between life and literature, cleanly dissolve. Just as reading the classics is not slog-work, so the library is not the unreal or anti-real world. “The library had been a place for studying,” Mead writes, of her rather jollier time at Oxford, “but it had also been a place for everything else: seeing friends, watching strangers, flirting and falling in love. Life happened in the library.” Without making the connection, she promotes a similarly unhermetic vision of her engagement with literature, which is not, she writes, just “a form of escapism” but a first-hand mode of existence – as Dickens more than implied when he wrote that reading Eliot’s Adam Bede had taken its place “among the actual experiences and endurances of my life”. When you are “grasped” by a book, Mead argues, “reading . . . feels like an urgent, crucial dimension of life itself”. And you can do it while you smoke.

Leo Robson is the lead fiction reviewer for the New Statesman.

This article first appeared in the 12 March 2014 issue of the New Statesman, 4 years of austerity


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart-rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

So comes a brand new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology offers to help with precisely that, supplying the kind of externally applied “motivation” promised by the Apple Watch. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***

The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

By associating certain actions with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools such as computer software that disables your internet connection, or word processors that imitate an old-fashioned DOS screen, with nothing but green text on a black background, as well as the brain-measuring Muse headband – these are just the latest versions of what seems an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are similarly treating ourselves as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But they are also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled Whatever Happened to Psychology as the Science of Behaviour?, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

This article first appeared in the 18 August 2016 issue of the New Statesman, Corbyn’s revenge