Mariana Mazzucato, professor in the economics of innovation at Sussex University.

Public risks, private rewards: how an innovative state can tackle inequality

The winner of the inaugural New Statesman/SPERI Prize in political economy on how an innovative state can tackle inequality.

This autumn, the inaugural NS/SPERI Prize was awarded to Mariana Mazzucato of the Science Policy Research Unit at the University of Sussex. It rewards “the scholar who has succeeded most effectively over the preceding two or three years in disseminating original and critical ideas in political economy to a wider public audience” and includes the invitation to deliver a lecture. This is an edited extract from that speech.

What makes the iPhone so smart? Was it only the genius of Steve Jobs and his team and the visionary finance supplied from risk-loving venture capitalists? No. In my book The Entrepreneurial State: Debunking Public v Private Sector Myths, I tell the missing part of that story by analysing the public funds that allowed the smartphone to be created. The research programmes that made the internet, touch-screen displays, GPS and the Siri voice control possible all had government backing.

The point is not to belittle the work of Jobs and his team, which was both essential and transformational. But we must bring more balance to the historiography of Apple and its founders, in which not a word is said about the collective effort behind Silicon Valley. The question is this: who benefits from such a narrow description of the wealth-creation process in the hi-tech sector today?

Over the past year, inequality has risen up the political agenda, with the Organisation for Economic Co-operation and Development documenting just how bad inequality is for growth. But the current debate is often focused only on redistribution. If policymakers want to get serious about tackling inequality, they need to rethink not only measures such as the wealth tax that Thomas Piketty is calling for but also the received wisdom on how value and wealth are created in the first place. When we have a narrow theory of who creates value and wealth, we allow a greater share of that value to be captured by a small group of actors who call themselves wealth creators. This is our current predicament and the reason why progressive parties on both sides of the Atlantic are struggling to provide a clear story of what has gone wrong in recent decades and what to do about it.

Let’s start with some definitions. First, the market. The path-breaking work of the historian Karl Polanyi teaches us that talk of “state intervention” in “free markets” is a historical fallacy. In his 1944 book The Great Transformation, Polanyi argued: “The road to the free market was opened and kept open by an enormous increase in continuous, centrally organised and controlled interventionism . . . Administrators had to be constantly on the watch to ensure the free working of the system.”

The public sector’s active role in shaping and creating markets is even more relevant in today’s “knowledge economy”. Traditional economic theory, which guides policymaking worldwide, justifies state intervention only to solve market failures. But what the state has done in the few countries that have succeeded in producing innovation-led growth has been to create new markets. Sectors such as the internet, biotechnology, nanotechnology and the emerging green economy have depended on direct, “mission-oriented” public investments that created new technological landscapes rather than merely facilitating existing ones, with business following only after returns were clearly in sight. So why have we accepted such a biased story of the state’s role when, as the story of Apple shows, it has done so much more than “fix” market failures? What is the relationship between this false narrative of who the real risk-takers are and increasing inequality? Here are three areas we need to look at.


Socialising risks and rewards

The pretence that government only spends, regulates, administers and, at best, “de-risks” or “fixes” market failures prevents us from seeing that it has been a lead risk-taker and investor. As a result, government has socialised the risks but not the rewards. Some economists argue that the reward for the state comes through taxation. This, in theory, is right. Innovation-led growth should lead to an increase in tax revenue – but not if the companies that benefit the most from innovations don’t pay much tax compared to the income they generate, not only as a result of loopholes but also because of their continual lobbying for the tax incentives and tax cuts that they say they need to foster innovation. It’s not a coincidence that groups such as the National Venture Capital Association helped convince the US government to reduce capital gains tax by 50 per cent in only five years in the late 1970s – an “innovation policy” later copied by Tony Blair’s government, and one that even Warren Buffett has admitted had no effect on investment but plenty on inequality.

Similarly, in the name of promoting innovation, different types of tax “incentives” are constantly introduced – such as the “patent box” system, which allows companies to pay virtually no tax on profits generated from patented goods and services. By targeting the income generated from patents (which are, in effect, state-granted monopolies for 20 years), rather than the research that leads to them, such measures have little to no effect on innovation.


More symbiotic innovation ecosystems

Sharing risks and rewards also requires making sure that private-sector commitment to innovation increases. Of course businesses invest in research and development (R&D), but the emphasis is increasingly on the D, building on earlier public-sector investment in R.

As Bill Lazonick and I have argued in our recent work, in areas as different as pharma, IT and energy, large companies are spending an increasing proportion of profits on share buy-backs, to boost stock options and executive pay. Fortune 500 companies have spent a record $3trn in the past decade on share buy-backs – greatly outpacing R&D. Thus, a serious “life-sciences” strategy should not only be about government increasing its financing of pharma’s knowledge base but should involve government being confident enough to ask Big Pharma to invest more of its profits in research and human resources to address skills shortages.

When countries ask Google, Apple and Amazon to pay more tax, this should not only be because they use public roads and infrastructure but also because a significant part of the technologies that drive their record profits was publicly funded.

We hear a lot about how new technology hurts those without the skills required by the modern economy and that this is the key link between innovation and inequality. But where do skills come from? They are the result of investment – and today we have a massive crisis of investment.


A New Deal . . . and a more serious deal

What we need to kick-start investment is not only a new Keynesian deal, investing in areas such as infrastructure, but also more serious “deals” between business and government that benefit both sides. For example, how could the patent system better reflect the collective public-private contribution to innovations? In the US in 1980, the Bayh-Dole Act aimed to increase the commercialisation of science by allowing publicly funded research to be patented. Lawmakers were rightly wary that this could lead to taxpayers stumping up twice: first for the research (the US National Institutes of Health spends $32bn a year) and then again through the high prices of the resulting drugs. So they suggested that the government put a cap on the prices of drugs that were publicly funded. Yet the US government has never exercised this right.

We should also reform the tax system to reward long-run value creation over value extraction, opening up the debate about risks and rewards: are there other tools that might offer a better deal for publicly funded investments and innovations? This might come in the form of keeping a “golden share” of the patents, or retaining some equity in companies that receive early-stage financing from government, or giving businesses loans with income-contingent repayments, just as we do for students.

My point is not to argue for or against any one of these mechanisms but to start a broader discussion that begins with the view of the state as a market-maker, not only a fixer. There should be a recognition of the huge risks that this involves: for every successful government investment such as the internet, there are failures such as Concorde.

Some, including the think tank Nesta in the UK, have argued that any direct non-tax-based mechanisms for the state to recoup rewards for its risk-taking are “problematic” and suggest that corporate taxes are sufficient. This defence of the status quo, particularly in these times of austerity, seems unsustainable when what is at stake is the ability of business to capture a disproportionate share of value that was created collectively. In a world of big data – so celebrated by the innovation enthusiasts – surely we can create better “contracts” and deals between the public and private sectors, even if this means putting a dent in the profit-wage ratio that is rising to record levels (no, profits are not related to managerial performance).

So how can we change the narrative of the left from one of “redistribution” to one that champions value creation, in which both risks and rewards are shared more equally? Let’s first agree that the market is not a bogeyman forcing short-termism but a result of interactions and choices made by different types of public and private actors. We need to stop talking about the public sector “de-risking” and facilitating “partnerships” and talk more about the kind of public risk-taking that led to all the general-purpose technologies and great transformations of the past. That means a change of language: from generic “partnerships” to more detailed commitments about the kinds of partnerships that will lead to greater, not lower, private investment in long-run areas such as research and development and human capital formation.

Changing our understanding of how wealth is created, not only distributed, is the first step in building a more confident mission-oriented government – one that both fuels innovation and builds the right kind of “deal” with business, giving the word “partnership” real meaning again.

Mariana Mazzucato is RM Phillips Professor in the Economics of Innovation at SPRU, the University of Sussex, and the author of The Entrepreneurial State: Debunking Public v Private Sector Myths. You can watch the full 2014 New Statesman/SPERI Prize Lecture here.

This article first appeared in the 19 December 2014 issue of the New Statesman, Christmas Issue 2014

An artist's version of the Reichstag fire, which Hitler blamed on the communists. CREDIT: DEZAIN UNKIE/ ALAMY

The art of the big lie: the history of fake news

From the Reichstag fire to Stalin’s show trials, the craft of disinformation is nothing new.

We live, we’re told, in a post-truth era. The internet has hyped up postmodern relativism, and created a kind of gullible cynicism – “nothing is true, and who cares anyway?” But the thing that exploits this mindset is what the Russians call dezinformatsiya. Disinformation – strategic deceit – isn’t new, of course. It has played a part in the battle that has raged between mass democracy and its enemies since at least the First World War.

Letting ordinary people pick governments depends on shared trust in information, and this is vulnerable to attack – not just by politicians who want to manipulate democracy, but by those on the extremes who want to destroy it. In 1924, the first Labour government faced an election. With four days to go, the Daily Mail published a secret letter in which the leading Bolshevik Grigory Zinoviev heralded the government’s treaties with the Soviets as a way to help recruit British workers for Leninism. Labour’s vote actually went up, but the Liberal share collapsed, and the Conservatives returned to power.

We still don’t know exactly who forged the “Zinoviev Letter”, even after exhaustive investigations of British and Soviet intelligence archives in the late 1990s by the then chief historian of the Foreign Office, Gill Bennett. She concluded that the most likely culprits were White Russian anti-Bolsheviks, outraged at Labour’s treaties with Moscow, probably abetted by sympathetic individuals in British intelligence. But whatever the precise provenance, the case demonstrates a principle that has been in use ever since: cultivate your lie from a germ of truth. Zinoviev and the Comintern were actively engaged in trying to stir revolution – in Germany, for example. Those who handled the letter on its journey from the forger’s desk to the front pages – MI6 officers, Foreign Office officials, Fleet Street editors – were all too ready to believe it, because it articulated their fear that mass democracy might open the door to Bolshevism.

Another phantom communist insurrection opened the way to a more ferocious use of disinformation against democracy. On the night of 27 February 1933, Germany’s new part-Nazi coalition was not yet secure in power when news started to hum around Berlin that the Reichstag was on fire. A lone left-wing Dutchman, Marinus van der Lubbe, was caught at the scene and said he was solely responsible. But Hitler assumed it was a communist plot, and seized the opportunity to do what he wanted to do anyway: destroy them. The suppression of the communists was successful, but the claim it was based on rapidly collapsed. When the Comintern agent Georgi Dimitrov was tried for organising the fire, alongside fellow communists, he mocked the charges against him, which were dismissed for lack of evidence.

Because it involves venturing far from the truth, disinformation can slip from its authors’ control. The Nazis failed to pin blame on the communists – and then the communists pinned blame on the Nazis. Dimitrov’s comrade Willi Münzenberg swiftly organised propaganda suggesting that the fire was too convenient to be Nazi good luck. A “counter-trial” was convened in London; a volume called The Brown Book of the Reichstag Fire and Hitler Terror was rushed into print, mixing real accounts of Nazi persecution of communists – the germ of truth again – with dubious documentary evidence that they had started the fire. Unlike the Nazis’ disinformation, this version stuck, for decades.

Historians such as Richard Evans have argued that both stories about the fire were false, and it really was one man’s doing. But this case demonstrates another disinformation technique still at work today: hide your involvement behind others, as Münzenberg did with the British great and good who campaigned for the Reichstag prisoners. In the Cold War, the real source of disinformation was disguised with the help of front groups, journalistic “agents of influence”, and the trick of planting a fake story in an obscure foreign newspaper, then watching as the news agencies picked it up. (Today, you just wait for retweets.)

In power, the Nazis made much use of a fictitious plot that did, abominably, have traction: The Protocols of the Elders of Zion, a forged text first published in Russia in 1903, claimed to be a record of a secret Jewish conspiracy to take over the world – not least by means of its supposed control of everyone from bankers to revolutionaries. As Richard Evans observes, “If you subject people to a barrage of lies, in the end they’ll begin to think well maybe they’re not all true, but there must be something in it.” In Mein Kampf, Hitler argued that the “big lie” always carries credibility – an approach some see at work not only in the Nazis’ constant promotion of the Protocols but in the pretence that their Kristallnacht pogrom in 1938 was spontaneous. (It is ironic that Hitler coined the “big lie” as part of an attack on the Jews’ supposed talent for falsehood.) Today, the daring of the big lie retains its force: even if no one believes it, it makes smaller untruths less objectionable in comparison. It stuns opponents into silence.

Unlike the Nazis, the Bolshevik leaders were shaped by decades as hunted revolutionaries, dodging the Tsarist secret police, who themselves had had a hand in the confection of the Protocols. They occupied the paranoid world of life underground, governed by deceit and counter-deceit, where any friend could be an informer. By the time they finally won power, disinformation was the Bolsheviks’ natural response to the enemies they saw everywhere. And that instinct endures in Russia even now.

In a competitive field, perhaps the show trial is the Soviet exercise in upending the truth that is most instructive today. These sinister theatricals involved the defendants “confessing” their crimes with great sincerity and detail, even if the charges were ludicrous. By 1936, Stalin felt emboldened to drag his most senior rivals through this process – starting with Grigory Zinoviev.

The show trial is disinformation at its cruellest: coercing someone falsely to condemn themselves to death, in so convincing a way that the world’s press writes it up as truth. One technique involved was perfected by the main prosecutor, Andrey Vyshinsky, who bombarded the defendants with insults such as “scum”, “mad dogs” and “excrement”. Besides intimidating the victim, this helped to distract attention from the absurdity of the charges. Barrages of invective on Twitter are still useful for smearing and silencing enemies.


The show trials were effective partly because they deftly reversed the truth. To conspire to destroy the defendants, Stalin accused them of conspiring to destroy him. He imposed impossible targets on straining Soviet factories; when accidents followed, the managers were forced to confess to “sabotage”. Like Hitler, Stalin made a point of saying the opposite of what he did. In 1936, the first year of the Great Terror, he had a rather liberal new Soviet constitution published. Many in the West chose to believe it. As with the Nazis’ “big lie”, shameless audacity is a disinformation strategy in itself. It must have been hard to accept that any regime could compel such convincing false confessions, or fake an entire constitution.

No one has quite attempted that scale of deceit in the post-truth era, but reversing the truth remains a potent trick. Just think of how Donald Trump countered the accusation that he was spreading “fake news” by making the term his own – turning the charge on his accusers, and even claiming he’d coined it.

Post-truth describes a new abandonment of the very idea of objective truth. But George Orwell was already concerned that this concept was under attack in 1946, helped along by the complacency of dictatorship-friendly Western intellectuals. “What is new in totalitarianism,” he warned in his essay “The Prevention of Literature”, “is that its doctrines are not only unchallengeable but also unstable. They have to be accepted on pain of damnation, but on the other hand they are always liable to be altered on a moment’s notice.”

A few years later, the political theorist Hannah Arendt argued that Nazis and Stalinists, each immersed in their grand conspiratorial fictions, had already reached this point in the 1930s – and that they had exploited a similar sense of alienation and confusion in ordinary people. As she wrote in her 1951 book, The Origins of Totalitarianism: “In an ever-changing, incomprehensible world the masses had reached the point where they would, at the same time, believe everything and nothing, think that everything was possible and that nothing was true.” There is a reason that sales of Arendt’s masterwork – and Orwell’s Nineteen Eighty-Four – have spiked since November 2016.

During the Cold War, as the CIA got in on the act, disinformation became less dramatic, more surreptitious. But show trials and forced confessions continued. During the Korean War, the Chinese and North Koreans induced a series of captured US airmen to confess to dropping bacteriological weapons on North Korea. One lamented that he could barely face his family after what he’d done. The pilots were brought before an International Scientific Commission, led by the eminent Cambridge scientist Joseph Needham, which investigated the charges. A documentary film, Oppose Bacteriological Warfare, was made, showing the pilots confessing and Needham’s Commission peering at spiders in the snow. But the story was fake.

The germ warfare hoax was a brilliant exercise in turning democracy’s expectations against it. Scientists’ judgements, campaigning documentary, impassioned confession – if you couldn’t believe all that, what could you believe? For the genius of disinformation is that even exposure doesn’t disable it. All it really has to do is sow doubt and confusion. The story was finally shown to be fraudulent in 1998, through documents transcribed from Soviet archives. The transcripts were authenticated by the historian Kathryn Weathersby, an expert on the archives. But as Dr Weathersby laments, “People come back and say ‘Well, yeah, but, you know, they could have done it, it could have happened.’”

There’s an insidious problem here: the same language is used to express blanket cynicism as empirical scepticism. As Arendt argued, gullibility and cynicism can become one. If opponents of democracy can destroy the very idea of shared, trusted information, they can hope to destabilise democracy itself.

But there is a glimmer of hope here too. The fusion of cynicism and gullibility can also afflict the practitioners of disinformation. The most effective lie involves some self-deception. So the show trial victims seem to have internalised the accusations against them, at least for a while, but so did their tormentors. As the historian Robert Service has written, “Stalin frequently lied to the world when he was simultaneously lying to himself.”

Democracy might be vulnerable because of its reliance on the idea of shared truth – but authoritarianism has a way of undermining itself by getting lost in its own fictions. Disinformation is not only a danger to its targets. 

Phil Tinline’s documentary “Disinformation: A User’s Guide” will be broadcast on BBC Radio 4 at 8pm, 17 March
