1 June 2016

A shift towards Brexit? Don’t bet on it

There is reason to be sceptical of the surge to Leave, says Peter Kellner. 

By Peter Kellner

Here is a mid-course correction. Two weeks ago I argued that telephone polls are providing a better guide than online surveys to the state of play in the EU referendum. Overall, taking phone polls together, I still hold to this, and for the reasons I gave. However, individual phone polls, including the two latest, are giving cause for concern, for they are proving extremely erratic.

Within the past fortnight, online polls have been consistent: all have shown the contest close to level-pegging, as they have all year, with both remain and leave on 50 per cent, plus or minus two – hence somewhere between a four-point “remain” lead (52-48 per cent one way) and a four-point “leave” lead (52-48 per cent the other way). This amount of variation is what would be expected if the public’s views are not changing, given the laws of probability and the inevitable sampling fluctuations.

In contrast, recent telephone polls are all over the place. Excluding “don’t knows”, they range from a 60-40 per cent lead for “remain” (Ipsos-Mori) to a 52-48 per cent lead for “leave” (ICM). The ICM figures are especially striking. This week, as two weeks ago, ICM conducted simultaneous online and telephone surveys for the Guardian. The two online surveys suggest that nothing has changed: both report a four-point lead for “leave”. The two telephone polls tell a completely different story: a ten-point “remain” lead has been wiped out by a seven-point swing in a fortnight.

This is more than a matter of curious statistical interest. Ipsos-Mori’s figures led to a sharp rise in the value of sterling; ICM’s figures prompted an equally sharp fall. 

ONLINE SAMPLES ARE BIGGER THAN PHONE SAMPLES

Part of the problem with telephone polls is that they are more expensive to conduct than online surveys. As a result, their samples tend to be smaller. They are mostly either 800 (ORB’s polls for the Daily Telegraph) or 1,000 (everyone else). Conventional statistical theory indicates a margin of error of 3-4 points for each side – and therefore a margin of error of 6-8 points on the gap between remain and leave.
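
For readers who want to check those figures, here is a rough sketch using the standard simple-random-sampling formula (which, if anything, understates the uncertainty in a weighted quota sample):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 per cent margin of error, in percentage points, for a simple
    random sample of size n; p = 0.5 is the worst case."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (800, 1000):
    moe = margin_of_error(n)
    # For a near 50-50 split, the margin on the remain-leave gap is
    # roughly double the margin on each share.
    print(f"n={n}: ±{moe:.1f} points per side, ±{2 * moe:.1f} on the gap")
```

On those assumptions a sample of 800 carries a margin of about 3.5 points per side and seven points on the gap, and a sample of 1,000 about 3.1 and 6.2 – consistent with the 3-4 and 6-8 point ranges above once weighting effects are allowed for.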

In contrast, online surveys generally question twice as many people: 1,600-2,000. This does not eliminate random sampling error, but it does reduce it, and so reduces the risk of freak outliers (or “rogue” polls as they are sometimes called). This goes some way to explaining why the online polls have told a story of broad stability while the telephone surveys oscillate more, generating dramatic headlines and affecting the currency markets.
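
A quick simulation makes the point about outliers more concrete. Assuming, purely for illustration, that the true split is exactly 50-50 and that each poll is a simple random sample, the sketch below counts how often chance alone produces a lead of more than six points:

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000                     # number of simulated polls per sample size

for n in (1000, 2000):
    remain = rng.binomial(n, 0.5, trials)      # simulated "remain" counts
    lead = 100 * (2 * remain - n) / n          # remain lead in percentage points
    freak = np.mean(np.abs(lead) > 6)          # share of polls with a 6+ point lead
    print(f"n={n}: {100 * freak:.1f}% of simulated polls show a lead above six points")
```

Roughly one poll in twenty of 1,000 people produces a six-point-plus lead by chance alone; with 2,000 respondents the figure falls to well under one in a hundred.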


However, we can go a bit further to make sense of this week’s two telephone polls showing a marked move towards “leave”.

ORB’S EFFECTIVE SAMPLE SIZE: 433

First, ORB. Not only does it start with a smaller sample than other companies; it then reduces the effective sample even further, for it bases its headline figures on those who say they are certain to vote. In ORB’s latest poll, this turnout filter reduces the sample to just 433 (226 certain to vote “remain”, 207 to vote “leave”, leading to a headline figure of 52-48 per cent). The margin of error is more than five percentage points for each side, and more than ten points on the gap between them.
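
The headline figure and the margin can be reconstructed from ORB’s published counts. The sketch below uses the simple-random-sampling formula, which on its own gives a little under five points per side; allow for the design effect of weighting such a small filtered sample and “more than five” is a fair description.

```python
import math

remain, leave = 226, 207                 # ORB respondents certain to vote, as above
n = remain + leave                       # effective sample of 433

print(f"Headline shares: {100 * remain / n:.0f}-{100 * leave / n:.0f}")   # 52-48
moe = 100 * 1.96 * math.sqrt(0.25 / n)   # 95 per cent margin, simple-random assumptions
print(f"Margin of error: ±{moe:.1f} per side, ±{2 * moe:.1f} on the remain-leave gap")
```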

Is ORB right to use such an aggressive turnout filter? Polling companies disagree, and past evidence is inconclusive on the best way to identify which respondents in any given sample will actually take the trouble to vote. For the record, ORB’s latest figures indicate a 55-45 per cent lead for “remain” among all those who state a preference, and a 54-46 per cent lead if the turnout filter used by some other companies is applied. (ORB asked people how likely they were to vote on a scale of 1-10. Some 144 people who said they would vote “remain” responded with a score of 7, 8 or 9; just 104 “leave” supporters gave the same range of responses. ORB excluded these respondents from its final in-out figures; some other companies would take them into account and, in this instance, double the “remain” lead from four to eight points – and give the Telegraph a less dramatic story.)
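
The effect of the filter can be reproduced from the figures above, treating the 226 and 207 as those ORB counts as certain to vote, and the 144 and 104 as the 7-9 scorers that some other companies would include:

```python
# ORB's breakdown, as quoted above
certain_remain, certain_leave = 226, 207   # counted by ORB as certain to vote
likely_remain, likely_leave = 144, 104     # scored 7, 8 or 9 out of 10

def shares(remain, leave):
    """Rounded remain-leave shares among those counted."""
    total = remain + leave
    return round(100 * remain / total), round(100 * leave / total)

print("Certain voters only:  ", shares(certain_remain, certain_leave))   # (52, 48)
print("Including 7-9 scorers:", shares(certain_remain + likely_remain,
                                       certain_leave + likely_leave))    # (54, 46)
```

On the broader filter the “remain” lead roughly doubles, from four points to eight.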

ICM’S UNUSUALLY LARGE PARTY VOTING ADJUSTMENT

As for ICM’s latest figures, something looks odd about its telephone poll. Over the years, ICM has pioneered the use of adjustments to improve the accuracy of its projections – partly by its use of turnout filters, and partly by making judgements about “don’t knows”. ICM suspects that sometimes these conceal shy Tories, and help to explain the way polls, going back to 1992, have often understated Conservative support. ICM has usually been vindicated, outshining most other companies in the 1997, 2001 and 2010 general elections.

However, I wonder whether ICM’s adjustments this time point to problems with its latest telephone sample. Normally the voting adjustment moves the party lead by two to four points in the Conservatives’ direction. (Admission: this is not a precise calculation, but my impression from watching ICM surveys over the years.) What happened this time is different. After weighting its raw data to make its sample look like Britain’s electorate, it reports a six-point Labour lead over the Conservatives (37-31 per cent, after excluding don’t knows). But after adjusting for turnout and “don’t knows”, ICM reports a four-point Conservative lead: 36-32 per cent. Thus ICM’s adjustment alters the party lead by ten points. I cannot recall ICM making anything like this scale of adjustment before.

For comparison, its simultaneous online poll has a more normal adjustment, increasing the Conservative lead from two points (34-32 per cent) to five (36-31 per cent).
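
Purely as a cross-check on the figures quoted above, the size of each adjustment can be read straight off the published shares:

```python
# ICM's published party shares (per cent, don't knows excluded), as quoted above
phone_raw  = {"Lab": 37, "Con": 31}   # phone sample, weighted but unadjusted
phone_adj  = {"Lab": 32, "Con": 36}   # after the turnout and don't-know adjustments
online_raw = {"Lab": 32, "Con": 34}
online_adj = {"Lab": 31, "Con": 36}

def con_lead(shares):
    return shares["Con"] - shares["Lab"]

print("Phone adjustment moves the Conservative lead by",
      con_lead(phone_adj) - con_lead(phone_raw), "points")    # 10
print("Online adjustment moves the Conservative lead by",
      con_lead(online_adj) - con_lead(online_raw), "points")  # 3
```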

It is greatly to ICM’s credit that it hasn’t been tempted to massage its figures. Because it places its full data on its site, people like me can look under the bonnet of the headline numbers; so full marks for honesty and transparency. But a ten-point voting adjustment suggests that there is something – how shall we put this? – unusual about its phone sample this time. Some have suggested that phone polls have a problem obtaining a good sample over a bank holiday weekend, especially one that falls at the beginning of half term. I don’t know whether this is true, though I can think of one or two odd results from phone polls conducted over bank holiday weekends in the past.

DON’T PANIC

Bottom line: the evidence is not – at any rate, not yet – conclusive that there has been a shift to “leave” in the past fortnight. I believe that “remain” is still modestly ahead and that not much has changed. But if the next few days see a decline in the “remain” vote in a majority of both phone and online polls, then that judgement will need to be revised. Meanwhile, companies such as ORB that use aggressive turnout filters should start with much bigger samples – at least 1,500 in my view – and so reduce the risk of random fluctuations in their results that may have little or nothing to do with what is actually happening in the country.
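
To put a rough number on that recommendation: if a turnout filter passed about the same share of respondents as ORB’s latest poll did (433 out of 800), the arithmetic – on simple-random-sampling assumptions, and purely as a sketch – would look like this:

```python
import math

pass_rate = 433 / 800    # share of ORB's latest sample that cleared its turnout filter

for start in (800, 1500):
    effective = round(start * pass_rate)
    moe = 100 * 1.96 * math.sqrt(0.25 / effective)
    print(f"Start with {start}: about {effective} after the filter, ±{moe:.1f} points per side")
```

A starting sample of 1,500 would leave roughly 800 respondents after the filter – about the precision of a conventional full sample of 800, rather than the much looser 433 ORB ended up with.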

This article originally appeared on Peter Kellner’s blog, the Politics Counter.
