
Politics / The Staggers

9 June 2017, updated 9 Sep 2021 4:45pm

Did the pollsters do a better job with this election, or did they just influence it?

Forecasts were closer to results than in recent votes, but we can't be sure why that's the case.

By Paul Goodwin

Political forecasters got a bad press following David Cameron’s shock election victory in 2015, Brexit and Donald Trump’s win in the US. But last night John Curtice’s exit poll for the BBC, ITV and Sky News proved astonishingly accurate. So is it time to change our view of the political predictors? We need to be careful.

That exit poll…

Strictly speaking, the exit poll wasn’t a forecast. It was an exercise in estimating what was going on while people were voting – a bit like trying to estimate today’s average temperature for the whole of Britain by sampling thermometer readings at a few chosen locations. The true forecasts came from the polls taken in the days before the election. Pollsters argue that their results are snapshots of opinion at a particular time, not forecasts, but most people treat them as forecasts. So, as forecasts, how did they do?

Four polls taken on 6 and 7 June estimated the Conservative share at between 41 per cent and 46 per cent (the average was 44.2 per cent). For Labour the estimates ranged from 33 per cent to 40 per cent (on average: 35.8 per cent). We now know that the final figures were 42.4 per cent and 40.2 per cent, respectively.

So did the polls underestimate Labour’s performance?

Superficially, it looks like the polls underestimated Labour’s performance, but most of us ignore the small print: polls have a margin of error. Even then, they only claim to capture the right figure within that margin with a probability of 95 per cent. Margins of error vary with the number of people questioned and with the party’s share of the vote; they are greatest when the share is 50 per cent, which is where predictability is lowest. Typically, the margin is two to three percentage points either way. Taking this into account, the polls did rather better. But margins of error and probabilities don’t make good headlines. We feel uncomfortable with uncertainty and yearn for the false certainty of single numbers.
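For a simple random sample, the margin of error pollsters quote comes from the standard formula z·√(p(1−p)/n). A minimal sketch (assuming a 95 per cent confidence level and a hypothetical poll of 1,000 respondents – the sample size is an illustrative assumption, not a figure from the article) shows why the margin peaks at a 50 per cent share and why two to three points either way is typical:

```python
import math

def margin_of_error(share, n, z=1.96):
    """Approximate 95 per cent margin of error for a simple random sample.

    share: a party's estimated vote share, as a fraction (0-1)
    n:     number of respondents
    z:     critical value (1.96 for 95 per cent confidence)
    """
    return z * math.sqrt(share * (1 - share) / n)

# A hypothetical poll of 1,000 respondents: the margin grows as the
# share approaches 50 per cent, where uncertainty is greatest.
for p in (0.30, 0.42, 0.50):
    print(f"share {p:.0%}: +/- {margin_of_error(p, 1000):.1%}")
```

With 1,000 respondents the margin works out at roughly 2.8 to 3.1 percentage points across those shares, which is why the "two to three points either way" rule of thumb holds for typical national polls.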

Even if an election result falls outside the margin of error, you can’t say that a single poll was wrong. If a poll says there is a 95 per cent chance that Labour will get 33 to 39 per cent of the vote, it is also saying there is a 5 per cent chance the result will fall outside that range. If Labour then achieves 40 per cent, this might be that one-in-twenty election we would expect to fall outside the range; we just can’t tell.


The problem is that pollsters want to make their forecasts palatable and saleable, so they suppress the known uncertainties. Their headline figures have a false scientific exactitude. But there’s a cost: their reputations suffer when those figures are badly askew. If their findings were presented more honestly, we’d have fewer surprises during those long sleepless election nights.

But then perhaps they influenced it…

However, election forecasters face another problem: the curse of the self-destructive prediction. In the 2001 election the polls suggested that Tony Blair would win by a landslide with 45 per cent to 48 per cent of the vote. One bookmaker even paid out to people who’d bet on a Labour victory before the day’s voting had been completed. Although Blair won a clear victory, he was disappointed to get only a 40.7 per cent share – oddly close to that achieved by Jeremy Corbyn. When political analysts scoured the data to find out what had gone wrong, they found that the predictions of a substantial Labour win had discouraged many Labour supporters from bothering to vote. This time we may find that Labour’s recent upward trajectory in the polls had the opposite effect, encouraging many pro-Corbyn young people to vote for the first time. Polling still plays a dynamic role in elections – and we would do well to educate ourselves better on how polls work.

Professor Paul Goodwin’s new book, Forewarned, is published next month by Biteback
