Special Report - Are we better off as laggards?

The Chancellor is obsessed with the productivity gap between Britain and other leading industrial nations

As a sales pitch to woo those all-important inward investors to our shores, it was not one of Gordon Brown's best efforts. More Cassandra than Del Boy, he proclaimed: "Productivity levels in the US are 40 per cent higher than in Britain, and 20 per cent higher in Germany than here".

Worse, he didn't exactly keep quiet about this claim. Instead, alongside his Calvinist zeal for the "work ethic", Brown made the need for higher productivity into a mantra late last year. In November he launched a series of "productivity roadshows" and hosted confessional seminars at the Department of Trade and Industry and the Treasury to spread the unhappy message. Rectifying Britain's laggardly performance will be a core theme of the March Budget.

But does Britain really need to pull its socks up as much as Brown says? Is there really still a huge productivity gap between us and our competitors, even after Thatcher's self-proclaimed "productivity miracle"?

Not everyone thinks so. Since Brown made this a big theme in his pre-Budget report last year - devoting 30 pages to it - a number of people have stepped forward to question his gloomy analysis. In a recent research paper, Lies, Damned Lies and Productivity Statistics, Graeme Leach, senior economist at the Institute of Directors, wrote: "Statistical deficiencies do not preclude the possibility that UK productivity levels are second only to the US. This seems unlikely, but is this caution merely the product of decades of doing Britain down in the media?"

Leach is not alone in questioning Brown's figures. The Institute for Fiscal Studies (IFS), the Trades Union Congress (TUC) and the Institute for Public Policy Research (IPPR) have all produced papers doing so. The TUC report, Productivity and Partnership, for example, points out that the most recent studies of the productivity gap with the US offer estimates of between 7 and 40 per cent.

The basic squabble is over what kind of definition should be used. This may sound an arcane topic, suitable only for economic pointy-heads. But it matters because, as Leach says, "if the gap is more of a mirage, then the opportunity for a productivity miracle is undermined".

What exactly do we mean by productivity? Statistical definitions are a bit like balloon animals. You've either got your basic, crude shape - not very sophisticated. Or, with a bit of imagination and sleight of hand, you can turn your material into a far more impressive puppy.

And so it goes with productivity figures. The standard definition of productivity is output per worker. This is the figure the government uses. What it ignores, however, is exactly how many hours the worker was beavering away to produce those widgets. If, for example, it took one worker ten hours to produce a car and another worker five hours, output per worker would be the same, but the second worker was clearly twice as productive. Tweak the equation by sprinkling "hours worked" into it and the gap with the US magically shrinks from 40 to 20 per cent, because the average American worker puts in longer hours.
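
A stylised calculation shows why the choice of denominator matters so much. The figures that follow are illustrative, not drawn from any of the studies cited here. Suppose the average American worker turns out 140 widgets a year over 2,000 hours, and the average British worker 100 widgets over 1,700 hours. Per worker, the American leads by 40 per cent (140 against 100). Per hour, the American manages 0.07 widgets (140 divided by 2,000) against the Briton's 0.059 (100 divided by 1,700), a lead of only about 19 per cent. Roughly half the headline gap, in other words, is nothing more than longer American hours.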

And with a bit more clever tweaking, such as that employed by Rachel Griffith and Helen Simpson of the IFS in their paper Productivity and the Role of Government, the gap shrinks to nearly zero. This is because they also factor in both the age of Britain's physical capital and its dodgy quality. Workers, it seems, can sometimes legitimately blame their tools for their poor productivity levels.

Why then does Brown persist in putting the worst possible spin on British performance? There are good and bad reasons for his doing so. Poor productivity offers a handy scapegoat for the recession now hitting manufacturing. Ministers were particularly keen last year to blame poor productivity rather than the high pound for the problems at Rover's Longbridge factory. Far better, too, to talk, as Peter Mandelson did, about linking potential government subsidy to assist the ailing factory to "productivity improvements", rather than convey the impression that the government was engaged in old-style bailouts to prop up inefficient industries.

Making an issue of poor productivity shows Brown in a macho light for focusing on a tough long-term agenda. "To caricature Brown, you could say, 'In my first Budget I solved the unemployment problem, in my second I dealt with work incentives and now, in my third, I will lick the productivity problem'," observes Peter Robinson, senior economist at the IPPR.

Highlighting a productivity problem helps to get the issue higher up the political agenda. Indeed, the Treasury itself implicitly acknowledges that the figure it cites is inflated. "We know and you know there are a number of ways of looking at this," said John Kingman, head of the Treasury's productivity panel, at an IFS conference last year. He added that the high figure acts as "a way of focusing minds on the need for further supply-side reform which seems to us to be desirable". The seminars and the roadshows all helped to "build a public perception that this was an important priority".

And so it should be. Although it is possible to explain away why Britain has worse levels of productivity (shorter average hours, or ageing equipment and so on) this does not mean that we should ignore Britain's relative decline. Higher productivity does matter because, as Brown puts it, it is the "key to the stronger economy - to higher growth, more jobs and opportunities, and better living standards for us all".

How do we get to this economic utopia? There are lots of different routes. Margaret Thatcher thought she had found one and talked boldly of a "productivity miracle" in the 1980s. And although Peter Mandelson has claimed that "productivity relative to our main competitors did not improve during the Tory years", there is a body of evidence that contradicts him. Britain did see a marked reduction in the productivity gap, particularly in manufacturing, after a time of slow productivity growth, and even some decline, during the 1970s.

"It is hard to say Britain didn't improve," says Nicholas Crafts, professor of economics at the London School of Economics. However, some of that improvement came at a price. "The revival in manufacturing productivity growth stemmed mostly from reductions in employment; output rose at only 1.2 per cent per year during 1979-89," wrote Crafts in Britain's Relative Economic Decline 1850-1995,published by the Social Market Foundation.

Despite these improvements, Britain's relative economic decline was stalled, not reversed. Worse, official estimates of manufacturing productivity growth suggest that even this progress petered out in 1994, with only spartan growth since. Adair Turner, the director general of the Confederation of British Industry, argues: "The process of liberalising the economy in key ways, the productivity improvements in the privatised industries, labour market reforms and the relegitimisation of enterprise and business as key social values were all beneficial. So this government has to preserve them." But, he adds, "Thatcher's changes were probably a necessary but not sufficient condition for us to close the gap".

So what is still missing? For Turner there are two important elements. "Between 1979 and 1993 there were dramatic booms and busts. These are relevant to the productivity debate because in a more volatile macroeconomic environment it is more rational for businesses to focus on the short term."

The first way for Brown to help improve productivity, then, is to ensure that there is fiscal and monetary stability. "The more you strip out financial noise," argues Turner, "the more you can concentrate on the fundamentals." He suggests, however, that this stability may not be enough. "Does the pursuit of macroeconomic stability also require going into the euro? On balance I would say yes."

The second factor that Turner thinks significant is "the failure to grip the education system. We have a less skilled workforce compared with our major competitors. Therefore we need to focus on an education and skills agenda."

This kind of view, however, is dismissed by the McKinsey Global Institute, whose paper on British productivity helped jump-start the debate last year. It sees "low capital investment, poor skills and sub-scale operations" as merely secondary effects. Instead, McKinsey touts a neo-liberal agenda of ever more deregulation, arguing that the productivity gap is "the effect of regulations governing product markets and land use on competitive behaviour, investment and pricing".

Although this conclusion has been embraced by the Institute of Directors, there are not many others who subscribe to it. The Treasury's John Kingman says: "It [the McKinsey paper] was not something the government had commissioned and they have their own views. They are better on individual sectors than on the economy as a whole. They are more convincing at explaining the gap with the US than with continental Europe."

The TUC has also duffed up McKinsey's findings. It points out that it is bizarre to say we are less productive than France and Germany because we are over-regulated. Those two economies, after all, are far more snarled up in red tape. The TUC argues that the real causes of poor performance are "under-investment, skill shortages and poor workplace relationships". While Britain was better at cost-cutting in the 1980s, Germany's higher labour productivity comes from a history of higher investments in human and physical capital.

The TUC is right to emphasise the need for higher levels of investment. However, its focus is still very much on improving manufacturing industry. Increasingly, experts recognise the crucial role of productivity in the service sector, which matters more than manufacturing, since the latter makes up only around a fifth of the overall economy.

Nicholas Crafts agrees that "we focus far too much on manufacturing. The lever for improving it is not necessarily the same as for services. It requires a different sort of human capital."

Mary O'Mahony, an economist at the National Institute of Economic and Social Research, is one of the few people who have attempted to quantify rigorously how Britain compares in this area. It's not all doom and gloom. Based on her data, she found that in sectors such as mining, utilities and construction, British labour is more productive than its French, German, American and Japanese counterparts. Overall, however, she concludes that "Britain does not generally enjoy a comparative labour productivity advantage in service sectors".

Figuring out ways to measure, or even boost, productivity in services is tough because it is hard to pin down what counts as "output". School class sizes are a good example of this. You could argue that the government's commitment to smaller classes, made in its five pre-election pledges, is a commitment to lower productivity in education because there will be more teachers ("input") to each child ("output"). But shouldn't "output" in education be measured by results? And, if so, which results: more exam passes, higher literacy levels, lower delinquency rates?

Or take retailing, in which France is far more productive than Britain. This is partly because there is more land available to build whopping big hypermarkets on the edges of towns, and partly because they offer poorer service levels than British stores. There are no bag packers or people in unfashionable garb offering to show you where to find muesli. The French may be more productive in retailing, but are they really "better"?

A similar dilemma faces the hotel industry. Is a hotel "better" because guests have to make their own coffee and carry their own luggage? And again it is easier for a hotel to be more productive on a greenfield site than on an older one simply because you can design it so that, say, the kitchen is near to the dining room. Both France (which has the same population as Britain but twice the land) and America have a highly productive hotel sector, partly because they have the room for new edge-of-town development. One survey found that, for new hotels in Britain, the break-even occupancy rate was 80 per cent, compared with just 50 per cent in America.

Should we then rip up our planning laws as part of an effort to squeeze more productivity out of hotels? Most people would say not. But different issues are raised by efforts to forge industrial clusters, such as a mini-Silicon Valley in Cambridge. In this case, people like Adair Turner would argue that it may be worth compromising social welfare and allowing development to proceed in order to boost hi-tech productivity. What matters, Turner explains, is whether the industry is one which could have "a cumulative dynamic effect" on the rest of the economy.

The existence of these potential trade-offs suggests that the government needs to be careful about over-hyping the importance of higher productivity. A big part of the political problem is that long-run productivity performance can have nasty side-effects in the short term. There are often good reasons to prefer lower productivity. As Nicholas Crafts sees it, "productivity is a benchmark for performance, instead of an objective in itself".

The additional danger is that, as Britain faces an economic downturn, efforts to get short-term productivity improvements are more likely to involve job losses or tougher conditions for workers - as they did at Rover's Longbridge plant - than expansionary investment to sustain economic development.

Speaking at the launch of the CBI's Fit for the Future campaign, Turner acknowledged this prognosis when he said "we need to improve more than productivity. If we simply improve productivity we will create a high unemployment economy."

This article first appeared in the 15 January 1999 issue of the New Statesman, A slight and delicate minister?


Tweeting terror: what social media reveals about how we respond to tragedy

From sharing graphic images to posting a selfie, what compels online behaviours that can often outwardly seem improper?

Why did they post that? Why did they share a traumatising image? Why did they tell a joke? Why are they making this about themselves? Did they… just post a selfie? Why are they spreading fake news?

These are questions social media users almost inevitably ask themselves in the immediate aftermath of a tragedy such as Wednesday’s Westminster attack. Yet we ask not out of genuine curiosity, but out of shock and judgement provoked by what we see as the wrong way to respond online. But these are still questions worth answering. What drives the behaviours we see time and again on social media in the wake of a disaster?

The fake image

“I really didn't think it was going to become a big deal,” says Dr Ranj Singh. “I shared it just because I thought it was very pertinent, I didn't expect it to be picked up by so many people.”

Singh was one of the first people to share a fake Tube sign on Twitter that was later read out in Parliament and on BBC Radio 4. The TfL sign – a board in stations which normally provides service information but can often feature an inspiring quote – read: “All terrorists are politely reminded that THIS IS LONDON and whatever you do to us we will drink tea and jolly well carry on thank you.”

Singh found it on the Facebook page of a man called John (who later explained to me why he created the fake image) and posted it on his own Twitter account, which has over 40,000 followers. After it went viral, many began pointing out that the sign was faked.

“At a time like this, is it really helpful to point out that it's fake?” asks Singh – who believes it is the message, not the medium, that matters most. “The sentiment is real and that's what's important.”

Singh tells me that he first shared the sign because he found it to be profound and was then pleased with the initial “sense of solidarity” that the first retweets brought. “I don't think you can fact-check sentiments,” he says, explaining why he didn’t delete the tweet.

Dr Grainne Kirwan, a cyberpsychology lecturer and author, explains that much of the behaviour we see on social media in the aftermath of an attack can be explained by this desire for solidarity. “It is part of a mechanism called social processing,” she says. “By discussing a sudden event of such negative impact it helps the individual to come to terms with it… When shocked, scared, horrified, or appalled by an event we search for evidence that others have similar reactions so that our response is validated.”

The selfies and the self-involved

Yet often, the most maligned social media behaviour in these situations seems less about solidarity and more about selfishness. Why did YouTuber Jack Jones post a since-deleted selfie with the words “The outmost [sic] respect to our public services”? Why did your friend, who works nowhere near Westminster, mark themselves as “Safe” using Facebook’s Safety Check feature? Why did New Statesman writer Laurie Penny say in a tweet that her “atheist prayers” were with the victims?

“It was the thought of a moment, and not a considered statement,” says Penny. The rushed nature of social media posts during times of crisis can often lead to misunderstandings. “My atheism is not a political statement, or something I'm particularly proud of, it just is.”

Penny received backlash on the site for her tweet, with one user gaining 836 likes on a tweet that read: “No need to shout 'I'm an atheist!' while trying to offer solidarity”. She explains that she posted her tweet due to the “nonsensical” belief that holding others in her heart makes a difference at tragic times, and was “shocked” when people became angry at her.

“I was shouted at for making it all about me, which is hard to avoid at the best of times on your own Twitter feed,” she says. “Over the years I've learned that 'making it about you' and 'attention seeking' are familiar accusations for any woman who has any sort of public profile – the problem seems to be not with what we do but with who we are.”

Penny raises a valid point that social media is inherently self-involved, and Dr Kirwan explains that in emotionally-charged situations it is easy to say things that are unclear, or can in hindsight seem callous or insincere.

“Our online society may make it feel like we need to show a response to events quickly to demonstrate solidarity or disdain for the individuals or parties directly involved in the incident, and so we put into writing and make publicly available something which we wrote in haste and without full knowledge of the circumstances.”

The joke

Arguably the most condemned behaviour in the aftermath of a tragedy is the sharing of an ill-timed joke. Julia Fraustino, a research affiliate at the National Consortium for the Study of Terrorism and Responses to Terrorism (START), reflects on this often seemingly inexplicable behaviour. “There’s research dating back to the US 9/11 terror attacks that shows lower rates of disaster-related depression and anxiety for people who evoke positive emotions before, during and after tragic events,” she says, stating that humour can be a coping mechanism.

“The offensiveness or appropriateness of humor seems, at least in part, to be tied to people’s perceived severity of the crisis,” she adds. “An analysis of tweets during a health pandemic showed that humorous posts rose and fell along with the seriousness of the situation, with more perceived seriousness resulting in fewer humour-based posts.”

The silence

If you can’t say anything nice, why say anything at all? The advice of Thumper, Bambi's best friend, might be behind the silence we see from some social media users. But that silence is not necessarily uncaring: there are factors that can predict whether someone will be active or passive on social media after a disaster, notes Fraustino.

“A couple of areas that factor into whether a person will post on social media during a disaster are issue-involvement and self-involvement,” she says. “When people perceive that the disaster is important and they believe they can or should do something about it, they may be more likely to share others’ posts or create their own content. Combine issue-involvement with self-involvement, which in this context refers to a desire for self-confirmation such as through gaining attention by being perceived as a story pioneer or thought leader, and the likelihood goes up that this person will create or curate disaster-related content on social media.”

“I just don’t like to make it about me,” one anonymous social media user tells me when asked why he doesn’t post anything himself – but instead shares or retweets posts – during disasters. “I feel like people just want likes and retweets and aren’t really being sincere, and I would hate to do that. Instead I just share stuff from important people, or stuff that needs to be said – like reminders not to share graphic images.”

The graphic image

The sharing of graphic and explicit images is often widely condemned, as many see this as both pointless and potentially psychologically damaging. After the attack, BBC Newsbeat collated tens of tweets by people angry that passersby took pictures instead of helping, with multiple users branding it “absolutely disgusting”.

Dr Kirwan explains that those near the scene may feel a “social responsibility” to share their knowledge, particularly in situations where there is a fear of media bias. It is also important to remember that shock and panic can make us behave differently than we normally would.

Yet the reason this behaviour often jars is because we all know what motivates most of us to post on social media: attention. It is well-documented that Likes and Shares give us a psychological boost, so it is hard to feel that this disappears in tragic circumstances. If we imagine someone is somehow “profiting” from posting traumatic images, this can inspire disgust. Fraustino even notes that posts with an image are significantly more likely to be clicked on, liked, or shared.

Yet, as Dr Kirwan explains, Likes don’t simply make us happy on such occasions; they actually make us feel less alone. “In situations where people are sharing terrible information we may still appreciate likes, retweets, [and] shares as it helps to reinforce and validate our beliefs and position on the situation,” she says. “It tells us that others feel the same way, and so it is okay for us to feel this way.”

Fraustino also argues that these posts can be valuable, as they “can break through the noise and clutter and grab attention” and thereby bring awareness to a disaster issue. “As positive effects, emotion-evoking images can potentially increase empathy and motivation to contribute to relief efforts.”

The judgement

The common thread isn’t simply the accusation that such social media behaviours are “insensitive”, it is that there is an abundance of people ready to point the finger and criticise others, even – and especially – at a time when they should focus on their own grief. VICE writer Joel Golby sarcastically summed it up best in a single tweet: “please look out for my essay, 'Why Everyone's Reaction to the News is Imperfect (But My Own)', filed just now up this afternoon”.

“When already emotional other users see something which they don't perceive as quite right, they may use that opportunity to vent anger or frustration,” says Dr Kirwan, explaining that we are especially quick to judge the posts of people we don’t personally know. “We can be very quick to form opinions of others using very little information, and if our only information about a person is a post which we feel is inappropriate we will tend to form a stereotyped opinion of this individual as holding negative personality traits.

“This stereotype makes it easier to target them with hateful speech. When strong emotions are present, we frequently neglect to consider if we may have misinterpreted the content, or if the person's apparently negative tone was intentional or not.”

Fraustino agrees that people are attempting to reduce their own uncertainty or anxiety when assigning blame. “In a terror attack setting where emotions are high, uncertainty is high, and anxiety is high, blaming or scapegoating can relieve some of those negative emotions for some people.”

Amelia Tait is a technology and digital culture writer at the New Statesman.