Double your cuts: the coalition is threatening to make a second round of cuts. Picture: Daniel Malka/Gallery Stock

The economic consequences of George Osborne: covering up the austerity mistake

How did the coalition government manage to transform the media debate on macroeconomics so comprehensively – and what will happen now they have?

The coalition defined itself as a government of austerity or, as its members preferred, as a government with the courage to take the hard decisions necessary to deal with the deficit. In its first two years it did what it had promised to do – and more – and as a result inflicted palpable harm on the economy. The recovery was delayed, costing the average household the equivalent of at least £4,000. In 2012 the government departed from its earlier plans and eased up on austerity, but pretended it had not.

The numbers are stark. GDP per head, a far better indicator of prosperity than GDP alone, grew on average by just 1 per cent a year between 2010 and 2014. The average growth rate from 1950 to 2010 was close to 2.25 per cent. Even under the last Labour government, average growth was 1.5 per cent, and that period included the global financial crisis. The past few years, as we recovered from the crash, should have been a time of above-average, not below-average growth. Even growth in the past two years has been only average by historical standards.

A government entering an election with that kind of performance should be trying to avoid talking about its economic record at all costs. Yet the opposite is the case. Indeed, the Conservative Party has an election platform that promises to repeat exactly the same mistake it made in 2010. As a macroeconomist, I find it very easy to explain the impact the government’s mistakes had on the economy. I find it much more difficult to understand how it might, in three weeks’ time, get away with them, let alone promise to make the same mistake again.

The first important point to note is that austerity was not forced on the coalition. There was no market pressure that required it to embark on rapid fiscal tightening. There was a government debt crisis in 2010 but it was confined to a few eurozone countries, for one simple reason: none of those countries has a central bank of its own. If the markets refused to fund their governments, they could not ask their own central bank to do so instead. From 2010 until September 2012, the European Central Bank refused to play the role that economists call “lender of last resort” and as a result interest rates on Irish, Portuguese and Spanish government debt increased substantially. In September 2012, the ECB changed its mind and promised (with conditions) to act as a lender of last resort. Interest rates fell and the eurozone debt funding crisis came to an end.

Outside the eurozone, governments had no problem funding their deficits. Interest rates on UK debt and that of other countries fell steadily. Yet to listen to many City economists is to be told that we should not take the markets for granted. Had austerity not been imposed, these markets could have turned on us at any time, and therefore it was right to reduce the deficit sharply as a precautionary measure. There is, unfortunately, a good deal of self-interest in this advice. If we have to fashion our economic policy to appease an unpredictable market, it adds to the influence of those who profess to be able to interpret its mood.

So let us imagine what might have happened, had the UK not undertaken austerity in 2010 and if the markets had started to worry that it might default. That would have put upward pressure on interest rates, as markets required some compensation for the possibility of default. However, the Bank of England was at the same time buying large quantities of UK government debt under its quantitative easing (QE) programme, which was designed to keep rates low. Any market panic would have been quickly offset by the Bank’s actions as it bought more debt. Unlike eurozone countries, the UK can never “run out of money” and so is not at risk of default.

Embarking on austerity was a choice for the coalition, not something it was forced to do. But large deficits cannot be sustained permanently. At some point they need to be reduced. And yet, since the time of Keynes, standard economics has recognised that cutting government spending or raising taxes reduces aggregate demand. So is there ever a good time to reduce the deficit?

There is a simple answer to that question. Although cutting the deficit will reduce demand, this can be offset by the central bank cutting interest rates. Fiscal austerity need not damage the aggregate economy as long as monetary policy is able to push in the other direction. The big problem in 2010 was that this was impossible because interest rates were already as low as the Bank thought prudent. So there is one set of circumstances in which it is unwise to cut the deficit and these circumstances were exactly those that prevailed in 2010.
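That offset logic can be put in stylised textbook terms. What follows is a minimal sketch, not the Bank’s or the OBR’s model; the linear demand function and the symbols A, γ and β are illustrative assumptions:

\[
Y = A + \gamma G - \beta r, \qquad \gamma,\ \beta > 0,
\]

where Y is aggregate demand, G is government spending and r is the interest rate. Holding Y constant after a spending cut \(\Delta G < 0\) requires an offsetting rate cut \(\Delta r = (\gamma/\beta)\,\Delta G < 0\). In 2010, with r already at its floor, no such cut was available, so the full impact \(\Delta Y = \gamma\,\Delta G < 0\) fell on output.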

Although the Bank felt it could not cut interest rates any further, it did have the policy of QE. Could this substitute for the inability to cut short-term interest rates? The answer is that economists had very little idea, essentially because QE had not been tried before. To embark on austerity, and hope that QE would offset its effects, was therefore a large risk to take.

What happened was that the recovery in output that seemed to be about to occur in 2010 did not materialise. George Osborne would say that this poor performance was the result of things outside his control, such as the eurozone crisis. However, here we can turn to the Office for Budget Responsibility for guidance. The OBR calculates that austerity reduced GDP growth by 1 percentage point in each of the first two years of the coalition government: therefore, the level of GDP was 2 points lower in the second year. As growth did not return until 2013, at the very least that indicates that austerity led to a cumulative output loss of 5 per cent of GDP, which is about £4,000 per household.
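The cumulation behind that figure is easy to reconstruct. As a back-of-envelope sketch, taking the OBR’s 1-point estimates at face value and assuming no catch-up before the 2013 recovery:

\[
\underbrace{1\%}_{\text{year one}} \;+\; \underbrace{2\%}_{\text{year two}} \;+\; \underbrace{2\%}_{\text{year three, pre-recovery}} \;\approx\; 5\% \ \text{of a year's GDP.}
\]

Five per cent of annual GDP, divided among the UK’s roughly 27 million households, is where a per-household figure of about £4,000 comes from.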

How firmly based is the OBR analysis? There are very good reasons for thinking that its numbers are rather conservative. They look at the average effect of austerity over the past but, as has been noted, monetary policy is often able to offset the impact of fiscal consolidation on output, whereas on this occasion monetary policy’s hands were tied. We also have good econometric evidence that austerity has a larger-than-average impact in periods of recession. So, you could easily double the £4,000 number.

Osborne originally intended to eliminate the deficit within five years. However, in 2012, with the recovery nowhere in sight and tax revenues lower than expected, he changed the plan. Since 2012 there has been much less deficit reduction and, partly as a result, the recovery began – three years late – in 2013.

 

***

 

This is all straightforward economics of the kind taught to every economics undergraduate around the world. The government chose a policy that many economists said in advance would do considerable harm. When that harm materialised, it had to change its policy. That should have meant the government suffered a large blow to its reputation. The delayed recovery is one reason why living standards have suffered, so this is hardly an academic issue. A government with this woeful record should not be campaigning on economic competence. So, how has it managed to turn complete failure into the appearance of success?

There are four critical steps in how this was achieved. The first was to equate government budgets with household budgets. A consequence of recession is that many individuals and firms have to tighten their belts, so it seems intuitive that governments should do the same. This will be painful but individuals know that putting off their own adjustment can make things worse. It is part of every economics student’s initial education to learn why this analogy between individuals and governments is wrong – but most people have not studied economics.

A second key step was to blame the deficit on Labour profligacy. You do not need an economist to tell you that the main reason for the increase in the deficit was the recession created by the financial crisis. It is the case that the later years of the Brown chancellorship were not as fiscally prudent as his earlier years. But just before the recession the government debt-to-GDP ratio was lower than in 1997, which hardly indicates profligacy. Some have tried to suggest in hindsight that 2007 was a massive boom year (implying the need to run a budget surplus) but most evidence suggests otherwise and that certainly was not what most people thought at the time. There is enough here, however, to make the profligacy charge vaguely credible to people who do not look at the numbers.

The third stage in the austerity deception was to pretend that the policy change in 2012 was not a change in policy. The truth is plain to see in the data, but it was vital for Osborne not to admit that he was easing up on austerity. If he had admitted to changing his policy, he would have had to say why: austerity was delaying the recovery. All this stuff about a “long-term economic plan” can be seen as part of the effort to cover up the reversal and, therefore, the austerity mistake.

Pretending there had been no change in policy also allowed the fourth and final stage of turning failure into success, which was the most audacious deception of all. This was to claim that the recovery in 2013 vindicated the austerity policy. To see how absurd this claim is, imagine that a government on a whim decided to close down half the economy for a year. That would be a crazy thing to do, and with only half as much produced, everyone would be much poorer. However, a year later when that half of the economy started up again, economic growth would be around 100 per cent. The government could claim that this miraculous recovery vindicated its decision to close half the economy down the previous year. That would be absurd, but it is a pretty good analogy to claiming that the recovery of 2013 vindicated the austerity of 2010.
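The arithmetic of that thought experiment is worth spelling out (a toy calculation, with Y standing for normal annual output):

\[
\text{measured growth in the reopening year} \;=\; \frac{Y - Y/2}{Y/2} \;=\; 100\%,
\]

yet output has only returned to where it began, and a full year of half the economy’s production has been lost for good. Rapid growth after a policy-induced slump measures the size of the earlier loss, not the wisdom of the policy.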

This was how the government could turn economic failure into apparent political success. The strategy also had one further consequence. It redefined the meaning of what good macroeconomic policy was. If you asked any economist what the aim of government policy should be, he or she would probably say it was to increase the welfare of the public, or, more specifically, to raise standards of living. A government that had presided over the longest fall in real wages in modern UK history would be in deep trouble. However, for much of the media, the goal of macroeconomic policy has been redefined as how effective the government has been at reducing the deficit. Macroeconomics as portrayed by the media is so different from the macroeconomics of the textbooks that I call it “mediamacro”.

Nothing illustrates mediamacro better than Ed Miliband’s 2014 Labour conference speech, in which he forgot to mention the deficit. In terms of what influences national prosperity, the real news over the past five years has been the stagnation in UK productivity. Yet when David Cameron failed to mention the productivity slowdown in his conference speech, hardly any journalist bothered to highlight this huge omission. When Miliband forgot to mention the deficit, even Jon Snow lambasted him.

How did the coalition government manage to transform the media debate on macroeconomic policy so comprehensively? I have some idea of the ingredients involved but much less idea of how important each is. Of course having a partisan press is important, if only because it is capable of setting agendas. It also helps that the BBC can be easily intimidated. When its former economics editor Stephanie Flanders dared suggest that a lack of productivity growth might be a problem, Iain Duncan Smith made a formal complaint.

There is a further problem with how the media generally get their economic expertise. The economists you are most likely to see in the media are those who work in the City. It is, after all, part of their job to get media exposure; they’re always on hand to give a reaction. To be fair, when it comes to the daily ups and downs of the market, they are also best qualified to play this role, though in fact no one knows why markets move from day to day. But on issues of macroeconomic policy, City economists can present a biased and distorted view.

At the beginning of 2014, the Financial Times conducted a survey of economists; one of the questions it asked was: “Has George Osborne’s ‘plan A’ been vindicated by the recovery?” As I have already suggested, this question has an obvious answer. The 2013 recovery could not possibly vindicate the 2010 austerity because it is exactly what you would have expected to happen after austerity initially reduced GDP growth and was eased as a result. Among the academics answering this question, there were ten clear nos and only two clear yeses. However, among the many City economists who answered the FT survey, the numbers of yes and no replies were more evenly balanced.

Granted, it is regrettable that academic economists cannot speak with complete unanimity on the matter, but a 2/10 split is as close to a consensus as these things go. It is also the case that almost all academic macroeconomists would argue that the cuts in public investment that occurred in 2010 were a grave mistake. As the New Statesman reported in 2012, many of the minority of economists who originally supported immediate austerity have since acknowledged that cutting public investment in 2010 and 2011 was an error. It was these cuts, such as halting repairs to schools or reducing spending on flood defences, that most damaged GDP.

The austerity mistake involves basic macroeconomics. Cutting spending will reduce demand and is not to be undertaken when interest rates cannot be cut to offset its impact. The Conservatives, if elected, plan further sharp austerity in the early years of the next parliament, at a time when interest rates are still expected to be at or near their floor. Whatever your views about the desirable size of the state in the long run, to cut spending when the economy is still vulnerable in this way is to take a huge risk. It is exactly the risk that materialised from 2010, except today there is not even a hint of market pressure to cut the deficit quickly. Being able to cover up the earlier mistake is bad enough. Planning to repeat it is pure folly.

Simon Wren-Lewis is Professor of Economic Policy in the Blavatnik School of Government at Oxford University, and a fellow of Merton College. He blogs at mainlymacro.

This article first appeared in the 17 April 2015 issue of the New Statesman, The Election Special


Fitter, dumber, more productive

How the craze for Apple Watches, Fitbits and other wearable tech devices revives the old and discredited science of behaviourism.

When Tim Cook unveiled the latest operating system for the Apple Watch in June, he described the product in a remarkable way. This is no longer just a wrist-mounted gadget for checking your email and social media notifications; it is now “the ultimate device for a healthy life”.

With the watch’s fitness-tracking and heart-rate-sensor features to the fore, Cook explained how its Activity and Workout apps have been retooled to provide greater “motivation”. A new Breathe app encourages the user to take time out during the day for deep breathing sessions. Oh yes, this watch has an app that notifies you when it’s time to breathe. The paradox is that if you have zero motivation and don’t know when to breathe in the first place, you probably won’t survive long enough to buy an Apple Watch.

The watch and its marketing are emblematic of how the tech trend is moving beyond mere fitness tracking into what one might call quality-of-life tracking and algorithmic hacking of the quality of consciousness. A couple of years ago I road-tested a brainwave-sensing headband, called the Muse, which promises to help you quiet your mind and achieve “focus” by concentrating on your breathing as it provides aural feedback over earphones, in the form of the sound of wind at a beach. I found it turned me, for a while, into a kind of placid zombie with no useful “focus” at all.

A newer product even aims to hack sleep – that productivity wasteland, which, according to the art historian and essayist Jonathan Crary’s book 24/7: Late Capitalism and the Ends of Sleep, is an affront to the foundations of capitalism. So buy an “intelligent sleep mask” called the Neuroon to analyse the quality of your sleep at night and help you perform more productively come morning. “Knowledge is power!” it promises. “Sleep analytics gathers your body’s sleep data and uses it to help you sleep smarter!” (But isn’t one of the great things about sleep that, while you’re asleep, you are perfectly stupid?)

The Neuroon will also help you enjoy technologically assisted “power naps” during the day to combat “lack of energy”, “fatigue”, “mental exhaustion” and “insomnia”. When it comes to quality of sleep, of course, numerous studies suggest that late-night smartphone use is very bad, but if you can’t stop yourself using your phone, at least you can now connect it to a sleep-enhancing gadget.

So comes a brand-new wave of devices that encourage users to outsource not only their basic bodily functions but – as with the Apple Watch’s emphasis on providing “motivation” – their very willpower. These are thrillingly innovative technologies and yet, in the way they encourage us to think about ourselves, they implicitly revive an old and discarded school of thinking in psychology. Are we all neo-behaviourists now?

***

The school of behaviourism arose in the early 20th century out of a virtuous scientific caution. Experimenters wished to avoid anthropomorphising animals such as rats and pigeons by attributing to them mental capacities for belief, reasoning, and so forth. This kind of description seemed woolly and impossible to verify.

The behaviourists discovered that the actions of laboratory animals could, in effect, be predicted and guided by careful “conditioning”, involving stimulus and reinforcement. They then applied Ockham’s razor: there was no reason, they argued, to believe in elaborate mental equipment in a small mammal or bird; at bottom, all behaviour was just a response to external stimulus. The idea that a rat had a complex mentality was an unnecessary hypothesis and so could be discarded. The psychologist John B Watson declared in 1913 that behaviour, and behaviour alone, should be the whole subject matter of psychology: to project “psychical” attributes on to animals, he and his followers thought, was not permissible.

The problem with Ockham’s razor, though, is that sometimes it is difficult to know when to stop cutting. And so more radical behaviourists sought to apply the same lesson to human beings. What you and I think of as thinking was, for radical behaviourists such as the Yale psychologist Clark L Hull, just another pattern of conditioned reflexes. A human being was merely a more complex knot of stimulus responses than a pigeon. Once perfected, some scientists believed, behaviourist science would supply a reliable method to “predict and control” the behaviour of human beings, and thus all social problems would be overcome.

It was a kind of optimistic, progressive version of Nineteen Eighty-Four. But it fell sharply from favour after the 1960s, and the subsequent “cognitive revolution” in psychology emphasised the causal role of conscious thinking. What became cognitive behavioural therapy, for instance, owed its impressive clinical success to focusing on a person’s cognition – the thoughts and the beliefs that radical behaviourism treated as mythical. As CBT’s name suggests, however, it mixes cognitive strategies (analyse one’s thoughts in order to break destructive patterns) with behavioural techniques (act a certain way so as to affect one’s feelings). And the deliberate conditioning of behaviour is still a valuable technique outside the therapy room.

The effective “behavioural modification programme” first publicised by Weight Watchers in the 1970s is based on reinforcement and support techniques suggested by the behaviourist school. Recent research suggests that clever conditioning – associating the taking of a medicine with a certain smell – can boost the body’s immune response later when a patient detects the smell, even without a dose of medicine.

Radical behaviourism that denies a subject’s consciousness and agency, however, is now completely dead as a science. Yet it is being smuggled back into the mainstream by the latest life-enhancing gadgets from Silicon Valley. The difference is that, now, we are encouraged to outsource the “prediction and control” of our own behaviour not to a benign team of psychological experts, but to algorithms.

It begins with measurement and analysis of bodily data using wearable instruments such as Fitbit wristbands, the first wave of which came under the rubric of the “quantified self”. (The Victorian polymath and founder of eugenics, Francis Galton, asked: “When shall we have anthropometric laboratories, where a man may, when he pleases, get himself and his children weighed, measured, and rightly photographed, and have their bodily faculties tested by the best methods known to modern science?” He has his answer: one may now wear such laboratories about one’s person.) But simply recording and hoarding data is of limited use. To adapt what Marx said about philosophers: the sensors only interpret the body, in various ways; the point is to change it.

And the new technology offers to help with precisely that, supplying externally applied “motivation” of the sort the Apple Watch provides. So the reasoning, striving mind is vacated (perhaps with the help of a mindfulness app) and usurped by a cybernetic system to optimise the organism’s functioning. Electronic stimulus produces a physiological response, as in the behaviourist laboratory. The human being herself just needs to get out of the way. The customer of such devices is merely an opaquely functioning machine to be tinkered with. The desired outputs can be invoked by the correct inputs from a technological prosthesis. Our physical behaviour and even our moods are manipulated by algorithmic number-crunching in corporate data farms, and, as a result, we may dream of becoming fitter, happier and more productive.

***

 

The broad current of behaviourism was not homogeneous in its theories, and nor are its modern technological avatars. The physiologist Ivan Pavlov induced dogs to salivate at the sound of a bell, which they had learned to associate with food. Here, stimulus (the bell) produces an involuntary response (salivation). This is called “classical conditioning”, and it is advertised as the scientific mechanism behind a new device called the Pavlok, a wristband that delivers mild electric shocks to the user in order, so it promises, to help break bad habits such as overeating or smoking.

The explicit behaviourist-revival sell here is interesting, though it is arguably predicated on the wrong kind of conditioning. In classical conditioning, the stimulus evokes the response; but the Pavlok’s painful electric shock is a stimulus that comes after a (voluntary) action. This is what the psychologist who became the best-known behaviourist theoretician, B F Skinner, called “operant conditioning”.

When certain actions are associated with positive or negative reinforcement, an animal is led to change its behaviour. The user of a Pavlok treats herself, too, just like an animal, helplessly suffering the gadget’s painful negative reinforcement. “Pavlok associates a mild zap with your bad habit,” its marketing material promises, “training your brain to stop liking the habit.” The use of the word “brain” instead of “mind” here is revealing. The Pavlok user is encouraged to bypass her reflective faculties and perform pain-led conditioning directly on her grey matter, in order to get from it the behaviour that she prefers. And so modern behaviourist technologies act as though the cognitive revolution in psychology never happened, encouraging us to believe that thinking just gets in the way.

Technologically assisted attempts to defeat weakness of will or concentration are not new. In 1925 the inventor Hugo Gernsback announced, in the pages of his magazine Science and Invention, an invention called the Isolator. It was a metal, full-face hood, somewhat like a diving helmet, connected by a rubber hose to an oxygen tank. The Isolator, too, was designed to defeat distractions and assist mental focus.

The problem with modern life, Gernsback wrote, was that the ringing of a telephone or a doorbell “is sufficient, in nearly all cases, to stop the flow of thoughts”. Inside the Isolator, however, sounds are muffled, and the small eyeholes prevent you from seeing anything except what is directly in front of you. Gernsback provided a salutary photograph of himself wearing the Isolator while sitting at his desk, looking like one of the Cybermen from Doctor Who. “The author at work in his private study aided by the Isolator,” the caption reads. “Outside noises being eliminated, the worker can concentrate with ease upon the subject at hand.”

Modern anti-distraction tools such as computer software that disables your internet connection, or word processors that imitate an old-fashioned DOS screen, with nothing but green text on a black background, as well as the brain-measuring Muse headband – these are just the latest versions of what seems an age-old desire for technologically imposed calm. But what do we lose if we come to rely on such gadgets, unable to impose calm on ourselves? What do we become when we need machines to motivate us?

***

It was B F Skinner who supplied what became the paradigmatic image of behaviourist science with his “Skinner Box”, formally known as an “operant conditioning chamber”. Skinner Boxes come in different flavours but a classic example is a box with an electrified floor and two levers. A rat is trapped in the box and must press the correct lever when a certain light comes on. If the rat gets it right, food is delivered. If the rat presses the wrong lever, it receives a painful electric shock through the booby-trapped floor. The rat soon learns to press the right lever all the time. But if the levers’ functions are changed unpredictably by the experimenters, the rat becomes confused, withdrawn and depressed.

Skinner Boxes have been used with success not only on rats but on birds and primates, too. So what, after all, are we doing if we sign up to technologically enhanced self-improvement through gadgets and apps? As we manipulate our screens for reassurance and encouragement, or wince at a painful failure to be better today than we were yesterday, we are treating ourselves, similarly, as objects to be improved through operant conditioning. We are climbing willingly into a virtual Skinner Box.

As Carl Cederström and André Spicer point out in their book The Wellness Syndrome, published last year: “Surrendering to an authoritarian agency, which is not just telling you what to do, but also handing out rewards and punishments to shape your behaviour more effectively, seems like undermining your own agency and autonomy.” What’s worse is that, increasingly, we will have no choice in the matter anyway. Gernsback’s Isolator was explicitly designed to improve the concentration of the “worker”, and so are its digital-age descendants. Corporate employee “wellness” programmes increasingly encourage or even mandate the use of fitness trackers and other behavioural gadgets in order to ensure an ideally efficient and compliant workforce.

There are many political reasons to resist the pitiless transfer of responsibility for well-being on to the individual in this way. And, in such cases, it is important to point out that the new idea is a repackaging of a controversial old idea, because that challenges its proponents to defend it explicitly. The Apple Watch and its cousins promise an utterly novel form of technologically enhanced self-mastery. But it is also merely the latest way in which modernity invites us to perform operant conditioning on ourselves, to cleanse away anxiety and dissatisfaction and become more streamlined citizen-consumers. Perhaps we will decide, after all, that tech-powered behaviourism is good. But we should know what we are arguing about. The rethinking should take place out in the open.

In 1987, three years before he died, B F Skinner published a scholarly paper entitled “Whatever Happened to Psychology as the Science of Behaviour?”, reiterating his now-unfashionable arguments against psychological talk about states of mind. For him, the “prediction and control” of behaviour was not merely a theoretical preference; it was a necessity for global social justice. “To feed the hungry and clothe the naked are remedial acts,” he wrote. “We can easily see what is wrong and what needs to be done. It is much harder to see and do something about the fact that world agriculture must feed and clothe billions of people, most of them yet unborn. It is not enough to advise people how to behave in ways that will make a future possible; they must be given effective reasons for behaving in those ways, and that means effective contingencies of reinforcement now.” In other words, mere arguments won’t equip the world to support an increasing population; strategies of behavioural control must be designed for the good of all.

Arguably, this authoritarian strand of behaviourist thinking is what morphed into the subtly reinforcing “choice architecture” of nudge politics, which seeks gently to compel citizens to do the right thing (eat healthy foods, sign up for pension plans) by altering the ways in which such alternatives are presented.

By contrast, the Apple Watch, the Pavlok and their ilk revive a behaviourism evacuated of all social concern and designed solely to optimise the individual customer. By using such devices, we voluntarily offer ourselves up to a denial of our voluntary selves, becoming atomised lab rats, to be manipulated electronically through the corporate cloud. It is perhaps no surprise that when the founder of American behaviourism, John B Watson, left academia in 1920, he went into a field that would come to profit very handsomely indeed from his skills of manipulation – advertising. Today’s neo-behaviourist technologies promise to usher in a world that is one giant Skinner Box in its own right: a world where thinking just gets in the way, and we all mechanically press levers for food pellets.

This article first appeared in the 18 August 2016 issue of the New Statesman, Corbyn’s revenge