The 2017 “Youthquake” general election was a “myth”, according to a recent statement by the British Election Study (BES) team. This was a bold claim, and one that directly challenges the conventional wisdom. Responding quickly, the former YouGov president Peter Kellner warned that the BES evidence supporting the conclusion was rather thin. Kellner’s caveat was timely and correct, but the evidence is even thinner than he suggested.
In 2015, pollsters overestimated the likelihood that young people would vote, because the young people who participated in polls were more enthusiastic about and engaged with politics than average. As a result, during the 2017 election campaign, pollsters differed as to the importance they placed on the youth vote. YouGov, which focused on improving its sample of young people, turned out to be one of the most accurate in forecasting the result. With Labour winning university towns and inner city constituencies, commentators branded the party’s surge a “youthquake”.
The BES “no Youthquake” conclusion is based on analyses of data gathered in the BES 2017 national face-to-face survey of 2,194 respondents in 234 constituencies. The survey interviews respondents in their own homes, which is increasingly difficult to do because response rates have fallen dramatically over time: less than half (46 per cent) of those originally selected to participate completed an interview.
As pollster Anthony Wells stated in his reflections on the BES team’s claim, researchers like the face-to-face survey because it enables them to check if respondents actually voted at the election. The difficulty in 2017, however, was that only 1,475 of the 2,194 respondents – or 67 per cent – were checked, according to our analysis of the BES data. In other words, there is no validated report of the voting for nearly one-third of the overall sample. This missing data further compounds the problem of the low response rate.
Another problem concerns the number of young people in the study and how they are distributed across the country. There are only 157 respondents aged 18 to 24 in the entire survey, and nearly half (45 per cent) of the 234 constituencies sampled contain no respondents in this age bracket. Of the 197 constituencies for which validated voting data are available, 61 per cent do not include any under-25s.
Taken together, these sampling problems mean that statistical estimates of turnout by age group carry a very high level of uncertainty, and that we need to be careful about the claims we make. Indeed, as Kellner rightly observes, even if one were to make the decidedly unwarranted assumption that the BES survey is a perfect random sample, the small number of 18-24 year olds means that the survey’s estimate that 48 per cent of them voted in 2017 could easily be far off the mark. All pollsters have to attach what are called “confidence intervals” to their estimates of turnout and other measures from surveys. This interval captures the uncertainty that arises from using a sample to work out what the entire electorate is doing. The problem with the BES survey is that the confidence interval for the 18-24 year-old voting percentage is very wide, running from 33 per cent to 63 per cent at a minimum. In fact the situation is even worse, because besides having only a small number of young people drawn from a limited number of constituencies, the sample is biased by the failure to validate the votes of one-third of those interviewed.
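To make the arithmetic concrete, here is a minimal sketch of how such a confidence interval is computed. The plain formula assumes a perfect simple random sample; the `deff` parameter is an assumed, illustrative design effect, included only to show how clustered sampling widens the interval.

```python
import math

def proportion_ci(p, n, z=1.96, deff=1.0):
    """95 per cent confidence interval for a sample proportion.

    deff is a design effect: 1.0 for a simple random sample, larger
    for clustered samples such as a face-to-face constituency survey.
    """
    se = math.sqrt(deff * p * (1 - p) / n)
    return p - z * se, p + z * se

# The article's figures: 48 per cent turnout among the 157 under-25s.
lo, hi = proportion_ci(0.48, 157)
print(f"{lo:.0%} to {hi:.0%}")  # roughly 40% to 56% under the perfect-sample assumption

# A design effect of around 3 (a plausible but assumed value for a
# clustered survey) stretches the interval toward the 33-63 per cent
# range cited above.
lo3, hi3 = proportion_ci(0.48, 157, deff=3.0)
```

Even under the most charitable assumption, the margin of error is around eight points either way; any realistic allowance for clustering makes it far larger.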
The survey results also paradoxically imply that young people are both honest about their lack of voting and more likely than other age groups to pretend that they voted. The survey asked respondents whether they voted in the election and then, for some of them, checked this answer against official records. Comparing these two measures points toward problems with the sample. Some 63 per cent of the 18 to 24 year olds in the unweighted data say that they voted, but only 49.6 per cent actually did so according to the validated vote. This gap is huge compared with other age groups, and it is normally explained by a “social desirability” bias among respondents: people feel guilty about not voting and so end up claiming that they did when in fact they did not. The results imply that 18 to 24 year olds suffered more from this bias than older age groups did. Yet at the same time significantly more of them than any other age group admitted not voting, which indicates no such bias. The results therefore do not fit with what we know about how respondents answer these questions, suggesting serious problems with the sample.
When the data are weighted, this discrepancy between self-reported and validated voting is much reduced, but that is because the answers of a very small number of young respondents are given a big boost by the weighting exercise: their responses count for much more than the responses of older people. If they are unrepresentative of young people in general, the exercise downgrades the actual youth vote in the wider electorate. The moral is that you cannot weight your way out of a bad sample.
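The mechanics behind this objection can be shown with a toy calculation. The numbers below are invented for illustration, not the actual BES weights: post-stratification weighting scales an underrepresented group up to its population share, and any bias among those few respondents is scaled up with them.

```python
# Hypothetical illustration of post-stratification weighting (invented numbers).
sample_young, sample_total = 60, 1000   # 6% of the sample is under 25
pop_share_young = 0.11                  # but, say, 11% of the electorate is under 25

# Each young respondent is weighted up so the group matches its population share.
weight_young = pop_share_young / (sample_young / sample_total)
print(round(weight_young, 2))  # 1.83 -> each young respondent counts almost twice

# If those 60 respondents report 40% turnout while true youth turnout is 55%,
# the weighted estimate is still 40%: weighting rescales the group's size,
# it cannot correct a biased answer within the group.
```

Weighting fixes the *composition* of the sample, not the *representativeness* of the people in it, which is the point the paragraph above is making.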
Of course, increased turnout is only one part of Youthquake advocates’ claim. Another consideration concerns massive age differences in Labour (and Conservative) support in 2017. Here, we are on firmer ground. BES face-to-face survey data for 2015 and 2017 illustrate the pattern. Although all age groups were more likely to vote for Labour in 2017 than two years earlier, fully 72 per cent of the 18-24 age group reported that they did so, a 32 point increase on 2015. This is more than twice the increase for any other age category.
Other national surveys also indicate young people gave much stronger support for Labour in 2017 than in 2015. The percentage of 18-24 year olds voting Labour climbed from 43 to 62 per cent in the Ipsos-MORI election surveys, and from 41 to 64 per cent in our own Essex CMS election surveys.
Similarly, the 2015 and 2017 BES internet panel surveys (which have huge sample sizes of more than 30,000 respondents) show a 30 point jump in Labour support among voters under 25. Overall, the increase in youth support for Labour across these surveys averages a “quake-sized” 26 points.
Although most politicians care a great deal about winning votes, they care even more about winning seats in Parliament. Analyses of the 2017 constituency-level election results and census data reveal that the likelihood that Labour would win a seat was strongly correlated with the percentage of young people in that constituency. This relationship holds up in statistical analyses that control for several other relevant factors such as educational level, ethnic composition, home ownership and social class. The probability that Labour wins a constituency in 2017 climbs as the percentage of 18 to 29 year olds increases. The probability of a Conservative victory is a mirror image, with the likelihood of a Tory victory falling as the percentage of young people in a constituency grows. These results suggest that Labour’s surprising 2017 successes in constituencies like Canterbury and Norwich South owed much to the heavy concentrations of young people in these areas.
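The kind of analysis described above can be sketched with a toy model: a logistic regression of a Labour-win indicator on the constituency’s youth share. The data below are synthetic and purely illustrative (they are not the actual 2017 results, and a real analysis would include the control variables mentioned above); the model is fitted by plain gradient ascent so no external libraries are needed.

```python
import math

# Synthetic constituencies (invented for illustration):
# (share of 18-29 year olds as a proportion, 1 = Labour won the seat)
data = [(0.08, 0), (0.10, 0), (0.12, 0), (0.14, 0), (0.15, 1),
        (0.18, 0), (0.20, 1), (0.22, 1), (0.25, 1), (0.30, 1)]

def fit_logit(data, lr=0.5, steps=20000):
    """Fit P(win) = 1 / (1 + exp(-(b0 + b1 * x))) by gradient ascent
    on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p
            g1 += (y - p) * x
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

b0, b1 = fit_logit(data)
# With these synthetic data the youth-share coefficient b1 comes out positive:
# the fitted probability of a Labour win rises with the share of young residents.
```

A positive coefficient on youth share, surviving the addition of controls, is exactly the pattern the constituency-level results described above exhibit.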
Since national election studies began over a half-century ago, researchers have struggled to measure voter turnout accurately. Simply stated, a non-trivial number of survey respondents say they voted when they did not. Although it would be unfair to characterize the British electorate as “a nation of liars”, every national election study conducted since 1964 has overestimated turnout, with face-to-face surveys typically doing so by margins of 10 to 15 per cent. BES researchers have tried to address this problem by checking the reported voting behaviour of survey respondents against official records. But the effort fell well short in 2017 and, combined with other problems of survey design and execution, this means that efforts to accurately gauge the turnout rate of young people move analysts from a situation of calculable risk to the realm of wild uncertainty.
However, all is not lost. Researchers are on stronger ground when they analyse patterns of party support across age groups in 2017 in comparison with 2015 (or earlier years). Here, the BES face-to-face data tell the same story as data gathered in several other surveys — young people were much more likely to vote Labour in 2017 than was the case in 2015.
This finding is bolstered by other recent analyses we have conducted that show that there were very sharp age gradients in respondents’ attitudes towards party leaders, judgments about party performance on important issues and partisan identifications. These are major variables driving electoral choice and in every case young people were much more pro-Labour than were older voters. Analyses that do not use election survey data complement these findings by showing that the percentage of young people in a constituency was strongly related to the probability that Labour would win a seat in 2017.
In sum, the available evidence clearly indicates that there was a widely unexpected and politically consequential “Youthquake” in 2017. Psephologists would be wise to study its nature and causes and consider its possible consequences.
Professors Harold Clarke, Matthew Goodwin and Paul Whiteley are the authors of Brexit: Why Britain Voted to Leave the EU, and are working with Marianne Stewart on a new book on British politics.