9 June 2017, updated 9 September 2021, 4:45pm

Did the pollsters do a better job with this election, or did they just influence it?

Forecasts were closer to results than in recent votes, but we can't be sure why that's the case.

By Paul Goodwin

Political forecasters got a bad press following David Cameron’s shock election victory in 2015, Brexit and Donald Trump’s win in the US. But last night the prediction of John Curtice’s exit poll for the BBC, ITV and Sky News proved astonishingly accurate. So is it time to change our view of the political predictors? We need to be careful.

THAT exit poll…

Strictly speaking, the exit poll wasn’t a forecast. It was an exercise in estimating what was going on while people were voting – a bit like trying to estimate today’s average temperature for the whole of Britain by sampling thermometer readings at a few chosen locations. The true forecasts came from the polls taken in the days before the election. Pollsters argue their results are snapshots of opinion at a particular time, not forecasts, but most people treat them as forecasts. So, as forecasts, how did they do?

Four polls taken on 6 and 7 June put the Conservatives at between 41 and 46 per cent (the average was 44.2 per cent). For Labour the estimates ranged from 33 to 40 per cent (on average, 35.8 per cent). We now know that the final figures were 42.4 per cent and 40.2 per cent respectively.

So did the polls underestimate Labour’s performance?

Superficially, it looks as if the polls underestimated Labour’s performance. But most of us ignore the small print: polls have a margin of error, and even then they only claim to capture the right figure within that margin with a probability of 95 per cent. Margins of error vary with the number of people questioned and with the party’s share of the vote – they are greatest when the share is near 50 per cent, which is when predictability is at its lowest. Typically, the margin is two to three percentage points either way. Taking this into account, the polls did rather better. But margins of error and probabilities don’t make good headlines. We feel uncomfortable with uncertainty and yearn for the false certainty of single numbers.
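For the curious, here is a minimal sketch of where those margins come from, under the textbook assumption of a simple random sample (real polls use quotas and weighting, so this is an idealisation; the helper name is my own):

```python
import math

def margin_of_error(share, sample_size, z=1.96):
    """95 per cent margin of error for a vote share,
    assuming a simple random sample."""
    return z * math.sqrt(share * (1 - share) / sample_size)

# A typical poll of about 1,000 respondents:
print(round(100 * margin_of_error(0.40, 1000), 1))  # ~3.0 points for a 40 per cent share
print(round(100 * margin_of_error(0.50, 1000), 1))  # ~3.1 points at 50 per cent, the maximum
```

On that reckoning, a poll putting a party on 36 per cent is really saying “somewhere between about 33 and 39 per cent, probably”.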

Even if an election result falls outside the margin of error, you can’t say that a single poll was wrong. If a poll tells us there’s a 95 per cent chance that Labour will get 33 to 39 per cent of the vote, it is also telling us there’s a 5 per cent chance the result will fall outside this range. If Labour then achieves 40 per cent, this might simply be that one-in-twenty election we would expect to fall outside the range – we just can’t tell.
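That “one in twenty” is easy to check by simulation. The sketch below (again an idealised model: simple random sampling and a fixed true share of 40 per cent; the function name is my own) counts how often a 95 per cent interval misses the truth:

```python
import math
import random

def miss_rate(true_share=0.40, sample_size=1000, n_polls=10_000):
    """Fraction of simulated polls whose 95 per cent interval
    misses the true vote share."""
    # Margin computed at the true share; near-identical to each
    # poll's own margin when shares are in this region.
    moe = 1.96 * math.sqrt(true_share * (1 - true_share) / sample_size)
    misses = 0
    for _ in range(n_polls):
        # Simulate sample_size respondents, each backing the party
        # with probability true_share.
        votes = sum(random.random() < true_share for _ in range(sample_size))
        estimate = votes / sample_size
        if abs(estimate - true_share) > moe:  # interval fails to cover the truth
            misses += 1
    return misses / n_polls

print(miss_rate())  # roughly 0.05, i.e. about one poll in twenty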


The problem is that the pollsters want to make their forecasts palatable and saleable, so they suppress the known uncertainties. Their headline figures have a false scientific exactitude. But there’s a cost: their reputations suffer when these figures are badly askew. If their findings were presented more honestly, we’d have fewer surprises during those long sleepless election nights.


But then perhaps they influenced it…

However, election forecasters face another problem: the curse of the self-destructive prediction. In the 2001 election the polls suggested that Tony Blair would win by a landslide, with 45 to 48 per cent of the vote. One bookmaker even paid out to people who’d bet on a Labour victory before the day’s voting had been completed. Although Blair won clearly, he was disappointed to get only a 40.7 per cent share – oddly close to that achieved by Jeremy Corbyn. When political analysts scoured the data to find out what had gone wrong, they found that the predictions of a substantial Labour win had discouraged many Labour supporters from bothering to vote. This time we may find that Labour’s recent upward trajectory in the polls had the opposite effect, by encouraging many pro-Corbyn young people to vote for the first time. Polling still has a dynamic role to play in elections – and we would do well to educate ourselves better on how polls work.

Professor Paul Goodwin’s new book, Forewarned, is published next month by Biteback.