
Eternal vigilance

Throughout the 1940s, George Orwell was formulating the ideas about language and politics that found their fullest expression in his late essays and in Nineteen Eighty-Four.

By 1940, George Orwell had behind him four conventional “social” novels and, more significantly, three books of documentary reportage, each one better than the last, culminating in his classic account of the Spanish Civil War, Homage to Catalonia.

Gradually across those earlier books, and culminating in Homage, Orwell perfected his signature “plain” style, which so resembles someone speaking honestly, without pretence, directly to you. By then, too, he had more or less settled on his political opinions: “Every line of serious work that I have written since 1936 has been written, directly or indirectly, against totalitarianism and for democratic socialism, as I understand it.” So he said in 1946.

But while this may have been settled, there were other matters Orwell was still working out in his mind. The subjects of the essays he wrote in the 1940s are almost all, in one way or another, things Orwell doesn’t like. The essays are incessantly self-contradictory. First, Orwell declares that no great novel could now be written from a Catholic (or communist) perspective; later he allows that a novel could be written from such a perspective, in a pinch; and then, in his essay on Graham Greene, he comes very near to suggesting that only Catholics can now write novels.

In his essay on T S Eliot, he writes that it is “fashionable to say that in verse only the words count and ‘meaning’ is irrelevant, but in fact every poem contains a prose-meaning, and when the poem is any good it is a meaning which the poet urgently wishes to express. All art is to some extent propaganda.” Several years later, in “The Prevention of Literature”, in arguing for the idea that poetry might survive totalitarianism while prose would not, he writes that “what the poet is saying – that is, what his poem ‘means’ if translated into prose – is relatively unimportant even to himself”.

What is particularly frustrating about these contradictions is that at each successive moment Orwell presents them in his great style, his wonderful sharp-edged plain-spoken style, which makes you feel that there is no way on earth you could possibly disagree with him, unless you’re part of the pansy left, or a sandal-wearer and fruit-juice drinker, or maybe just a crank.

In a way I’m exaggerating, because the rightness of Orwell on a number of topics has been an albatross around his neck for 60 years. In truth, Orwell was wrong about all sorts of things, not least the inner logic of totalitarianism: he thought a mature totalitarian system would so deform its citizenry that they would not be able to overthrow it. This was the nightmare vision of Nineteen Eighty-Four. In fact, as it turned out in Russia, even the ruling elite was not willing to maintain mature totalitarianism after Stalin’s death.

Other totalitarian regimes have repeated the pattern. Orwell was wrong and Orwell contradicted himself. He was more insightful about the distant dangers of communist thought-control, in the Soviet Union, than about the more pressing thought-control of western consumerism. Nor did he see the sexual revolution coming, not by a long shot; one wonders what the too-frequent taunter of the “pansy left” would have made of the fact that the gay movement was one of the most successful, because most militant, of the post-1960s liberation struggles.

But there is a deeper logic in Orwell’s essays, beneath the contradictions and inevitable oversights. The crisis that he was writing himself through in the 1940s was the crisis of the war and, even more confusingly, the postwar. It involved a kind of projection into the future of certain tendencies latent in the present. Orwell worries about the potential Sovietisation of Europe, but also the infection by totalitarian thinking of life outside the Soviet sphere – not just specific threats to specific freedoms, but to deeper structures of feeling. As the philologist Syme says to Winston Smith in Nineteen Eighty-Four: “Don’t you see that the whole aim of Newspeak is to narrow the range of thought? . . . Every year fewer and fewer words, and the range of consciousness is smaller.”

If Orwell was wrong in some sense about the long-term development of totalitarianism, he was right about its deepest intellectual intentions, about the rot it wished to create at the centre of thinking itself. And he was right that this rot could spread.

One solution would be to cordon off literature from life and politics entirely: this was, in some sense, the solution adopted by the writers of the previous generation – Eliot, James Joyce, D H Lawrence, Ezra Pound – whom Orwell calls the writers of the 1920s and we now call the high modernists. And yet he did not want to make a special plea for literature; in fact, of all the writers of his time, Orwell was constitutionally the least capable of making this separation. His own writing and politics were the fruit of his specific experience – of imperialism in Burma, of the conditions in the English coal mines, of the war in Spain. He insists on several occasions that “all art is propaganda” – the expression of a particular world-view. In Dickens’s case, for example, this is the world-view of a classic 19th-century bourgeois liberal, a world-view Orwell admires even as he sees its limitations.

For the Orwell of the early essays, the case of Henry Miller is the tough one. Because while Dickens’s politics are in the end congenial enough, Miller’s quietism is less so. “I first met Miller at the end of 1936, when I was passing through Paris on my way to Spain,” writes Orwell. “What most intrigued me about him was to find that he felt no interest in the Spanish war whatever. He merely told me in forcible terms that to go to Spain at that moment was the act of an idiot.” Orwell nonetheless went to Spain, and fought there. He was a writer who felt it was vital to let politics animate his work; Miller was the opposite.

And yet Orwell contrasts Miller favourably to W H Auden, who at this time in the poem “Spain” was miming the thoughts of the good party man about the “necessary murder”. Miller is so far removed from this sort of sentiment, so profound is his individualism and his conviction, that Orwell comes close to endorsing it: “Seemingly there is nothing left but quietism – robbing reality of its terrors by simply submitting to it. Get inside the whale – or rather, admit that you are inside the whale (for you are, of course).” Except Orwell doesn’t really mean this. He may be inside the whale but he does not intend to stop disturbing its digestion, he does not intend to be any more quietistic.

What he admired above all in Miller was his willingness to go against the grain of the time. While all art is propaganda, it needn’t necessarily propagandise something correct. The important thing is that the writer himself believe it.

But there are certain things that you simply can’t believe. “No one ever wrote a great novel in praise of the Inquisition,” he asserts. Is that true? At almost the exact same moment, Jean-Paul Sartre (a writer who, Orwell incorrectly thought, was “full of air”) was writing in What Is Literature?: “Nobody can suppose for a moment that it is possible to write a good novel in praise of anti-Semitism.” Is that true? It seems to have been a problem that leftist writers of the 1940s met with sheer bluff assertion.

For Orwell the number of beliefs hostile to literary production seemed to expand and expand. Eliot’s “Four Quartets” is labelled “Pétainist” – a fairly strong term to hurl at a long experimental poem that doesn’t even rhyme. And Salvador Dalí, in “Benefit of Clergy”, is a “rat”.

As the war goes on, then ends, Orwell’s sense of peril grows sharper, and he looks at literature in a different way. He comes to think that no matter who wins, the world will find itself split again into armed camps, each of them threatening the others, none of them truly free – and literature will simply not survive. This is the landscape of Nineteen Eighty-Four and it is also the landscape of his later essays – “The Prevention of Literature”, “Politics and the English Language”, “Writers and Leviathan”.

There is even, momentarily, a kind of hallucination, in the curious short piece “Confessions of a Book Reviewer”, where some of Orwell’s old interest in the starving writer crops up, now mixed with the wintry gloominess of his later years: “In a cold but stuffy bed-sitting room littered with cigarette ends and half-empty cups of tea, a man in a moth-eaten dressing gown sits at a rickety table, trying to find room for his typewriter among the piles of dusty papers that surround it . . . He is a man of 35, but looks 50. He is bald, has varicose veins and wears spectacles, or would wear them if only his pair were not chronically lost.”

Who is this but Winston Smith, the failed hero of Nineteen Eighty-Four, figured as a book reviewer? Or who, conversely, is Winston Smith, but a book reviewer figured as the prisoner of a futuristic totalitarian regime?

With great doggedness, Orwell keeps delving into the question of literature’s position in society, and what might be done to keep it alive in a time of total politics. In “Writers and Leviathan”, dated 1948, he argues that writers must ultimately separate themselves from their political work. It’s a depressing essay and it ends – one wonders whether Orwell was aware of this – with an echo of the line of Auden’s he so reviled: the writer capable of separating himself from his political activity will be the one who “stands aside, records the things that are done and admits their necessity, but refuses to be deceived as to their true nature”.

Orwell was always a realist who knew that politics was a dirty business – but he was never quite such a realist as here. The realm of freedom had finally shrunk to a small, small point, and it had to be defended. As Winston Smith says in Nineteen Eighty-Four, “Nothing was your own except the few cubic centimetres inside your skull.”

It is hard not to wonder whether the pessimism of this conclusion was partly a response to the art (or propaganda) Orwell was himself creating in those years. He had published Animal Farm in 1945; weakened by the tuberculosis that would kill him, he was writing Nineteen Eighty-Four in 1947-48. After the reception of Animal Farm, and with the direction Nineteen Eighty-Four was taking, it must have been clear to him on some level that the world was going to use these books in a certain way. And it did use them that way.

The socialist critique of Orwell’s late works seems essentially correct – they were not only anti-Stalinist but anti-revolutionary, and were read as such by millions of ordinary people (a fact that Orwell, who was always curious to know what ordinary people thought, would have had to respect). Out of “necessity” he had chosen a position, and a way of stating that position, that would be used for years to come to bludgeon the anti-war, anti-imperialist left.

That he had chosen honestly what seemed to him the least bad of a set of bad political options did not make them, in the long view of history, any better.

But what a wonderful writer he had become! That voice – once you’ve heard it, how do you get it out of your head? It feels like the truth, even when it’s not telling the truth. It is clear and sharp but unhurried; Orwell is not afraid to be boring, which means that he is never boring.

His voice as a writer had been formed before Spain, but Spain gave him a jolt – not the fighting nor his injury (a sniper had shot him through the throat in 1937), though these had their effects, but the calculated campaign of deception he saw in the press when he got back, waged by people who knew better. “Early in life I had noticed that no event is ever correctly reported in a newspaper,” Orwell recalled, “but in Spain, for the first time, I saw newspaper reports which did not bear any relation to the facts, not even the relationship which is implied in an ordinary lie. I saw great battles reported where there had been no fighting, and complete silence where hundreds of men had been killed . . . This kind of thing is frightening to me, because it often gives me the feeling that the very concept of objective truth is fading out of the world. After all, the chances are that those lies, or at any rate similar lies, will pass into history.”

This insight reverberates through Orwell’s work for the rest of his life. The answer to lies is to tell the truth. But how? How do you even know what the truth is, and how do you create a style in which to tell it? Orwell’s answer is laid out in “Politics and the English Language”: you avoid ready phrases, you purge your language of dead metaphors, you do not claim to know what you do not know. Far from being a relaxed prose (which is how it seems), Orwell’s is a supremely vigilant one.

It is interesting that Orwell did not go to university. He went to Eton, but loafed around there and, afterwards, went off to Burma as a police officer. University is where you sometimes get loaded up with fancy terms whose meaning you’re not quite sure of. Orwell was an intellectual and a highbrow who thought Joyce, Eliot and Lawrence were the greatest writers of his age, but he never uses fancy terms.

You could say that Orwell was not essentially a literary critic, or that he was the only kind of literary critic worth reading. He was most interested in the way that literature intersects with life, with the world, with groups of actual people. Some of his more enjoyable essays deal with things that a lot of people read and consume – postcards, detective fiction, “good bad books” (and poetry) – simply because a lot of people consume them.

Postwar intellectuals would celebrate (or bemoan) the “rise of mass culture”. Orwell never saw it as a novel phenomenon. He was one of the first critics to take popular culture seriously, because he believed it had always been around and had simply lacked attention. These essays are part of a deeply democratic commitment to culture in general and reading in particular.

His reading of writers who were more traditionally “literary” is shot through with the same commitment. Orwell had read a great deal, and his favourite writers were by many standards difficult writers, but he refused to appeal to the occult mechanisms of literary theory. “One’s real reaction to a book, when one has a reaction at all, is usually ‘I like this book’ or ‘I don’t like it,’ and what follows is a rationalisation. But ‘I like this book’ is not, I think, a non-literary reaction.” And the “rationalisation”, he saw, was going to involve your background, your expectations, the historical period you’re living through.

If we compare Orwell to his near-contemporary Edmund Wilson, who was in many senses a more sensitive critic, we see Orwell’s peculiar strength. At almost the exact same moment as Orwell, in early 1940, Wilson published a psychobiographical essay on Dickens in which he traced much of Dickens’s later development to his brush with poverty as a young man.

Orwell’s treatment is much more sociological and political, and in a way less dramatic than Wilson’s. Yet at one point Orwell encapsulates Wilson’s argument with remarkable concision: “Dickens had grown up near enough to poverty to be terrified of it, and in spite of his generosity of mind, he is not free from the special prejudices of the shabby-genteel.” This is stark, and fair, and that “terrified” is unforgettable.

You can tie yourself in knots – many leftist intellectuals have done this over the years – trying to prove that Orwell’s style is a façade, an invention, a mask he put on when he changed his name from Eric Blair to “George Orwell”; that by seeming to tell the whole story in plain and honest terms, it actually makes it more difficult to see, it obfuscates, the part of the story that’s necessarily left out; that ultimately it rubber-stamps the status quo.

In some sense, intellectually, all this is true enough; you can spend a day, a week, a semester proving it. There really are things in the world that Orwell’s style would never be able to capture. But there are very few such things.

Orwell did not want to become a saint, but he became a saint anyway. For most of his career a struggling writer, eking out a living reviewing books at an astonishing rate, he was gradually acknowledged, especially after the appearance of Homage to Catalonia in 1938, to be a great practitioner of English prose. With the publication of Animal Farm – a book turned down by several of England’s pre-eminent houses because they did not want to offend Britain’s ally the Soviet Union – Orwell became a household name.

Then his influence grew and grew, so that shortly after his death he was already a phenomenon. “In the Britain of the 1950s,” the great cultural critic Raymond Williams once lamented, “along every road that you moved, the figure of Orwell seemed to be waiting. If you tried to develop a new kind of popular cultural analysis, there was Orwell; if you wanted to report on work or ordinary life, there was Orwell; if you engaged in any kind of socialist argument, there was an enormously inflated statue of Orwell warning you to go back.” In a way the incredible posthumous success of Orwell has seemed one of the more peculiar episodes in the cultural life of the west.

He was not, as Lionel Trilling once pointed out, a genius; he was not mysterious; he had served in Burma, washed dishes in a Parisian hotel, and fought for a few months in Spain, but this hardly added up to a life of adventure; for the most part he lived in London and reviewed books. So odd, in fact, has the success of Orwell seemed to some that there is even a book, George Orwell: the Politics of Literary Reputation, devoted to getting to the bottom of it.

When you return to his essays of the 1940s, the mystery evaporates. You would probably not be able to write this way now, even if you learned the craft: the voice would seem put-on, after Orwell. But there is nothing put-on about it here, and it seems to speak, despite the specificity of the issues discussed, directly to the present. In Orwell’s clear, strong voice we hear a warning. Because we, too, live in a time when truth is disappearing from the world, and doing so in just the way Orwell worried it would: through language. We move through the world by naming things in it, and we explain the world through sentences and stories. The lesson of these essays is clear: Look around you.

Describe the things you see as an ordinary observer – for you are one, you know – would see them. Take things seriously.

And tell the truth. Tell the truth.

Keith Gessen is a novelist and critic

This article first appeared in the 01 June 2009 issue of the New Statesman, Big Brother


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

So comes a brand new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology offers to help with precisely that, supplying such externally applied “motivation” as the Apple Watch provides. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***

The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

By associating certain actions with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools such as computer software that disables your internet connection, or word processors that imitate an old-fashioned DOS screen, with nothing but green text on a black background, as well as the brain-measuring Muse headband – these are just the latest versions of what seems an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are treating ourselves, similarly, as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But it is also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled “Whatever Happened to Psychology as the Science of Behaviour?”, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

This article first appeared in the 18 August 2016 issue of the New Statesman, Corbyn’s revenge