5 November 2020

Did this election really ‘blow up’ the polling industry?

By Matt Singh

The votes in the US Presidential election are still being counted, but one talking point among many has been the accuracy of the polls.

The post-mortem can’t begin until the votes are counted and we know the ‘right’ answers, but we can already cut through the noise and hot takes, some of which border on – to touch on another talking point – misinformation.

In situations like these, pollsters are often accused of having set a narrative – in this case of “confidently” predicting a “landslide” for Joe Biden – which doesn’t really stack up. Pollsters on average suggested a Biden win in the popular vote (by 7-8 points, depending on the average) and most were – as usual – cautious with caveats.

Communication is important for pollsters these days, but if others (notably the commentariat) simply ignore the caveats – or spin a narrative based on outliers – there is little pollsters can do. Sure, there was one poll in Wisconsin putting Joe Biden 17 points ahead, and yes it got a lot of retweets, but there were dozens more that showed it much, much closer.

What do we know at this stage? The swings in vote share from 2016 in the county-level results that we do have suggest that Biden will win the popular vote nationally by a margin in the mid single digits. Depending on exactly where it lands, that may yet be in the ballpark of a routine polling error. Not ideal, but not terrible.
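As a rough illustration of that swing-based arithmetic, here is a minimal sketch in Python. The county figures are entirely invented; the only real number is Clinton’s roughly 2.1-point national popular-vote margin in 2016, and it assumes the swing in counted counties is broadly representative of the country.

```python
# Minimal sketch of a swing-based projection. County figures are invented;
# only the 2016 national margin is real (roughly Clinton +2.1 points).

# (county, dem_margin_2016, dem_margin_2020_so_far) in percentage points
counted = [
    ("County A", -10.0, -6.5),
    ("County B",   4.0,  7.0),
    ("County C", -22.0, -20.5),
]

# Average swing towards the Democrats across the counted counties.
avg_swing = sum(m2020 - m2016 for _, m2016, m2020 in counted) / len(counted)

clinton_margin_2016 = 2.1  # national popular-vote margin, percentage points
projected_biden_margin = clinton_margin_2016 + avg_swing

print(f"Average swing: {avg_swing:+.1f} points")
print(f"Projected national margin: Biden +{projected_biden_margin:.1f}")
```

In practice analysts weight such swings by county size and demographics rather than taking a simple average, but the basic logic is the same.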

The bigger concern is the performance of state polling, not just because (as in 2016) it appears to be much less accurate than national polling, but because the misses seem to follow a very similar pattern to the 2016 state-level errors. The obvious question is why the changes introduced since then haven’t fully fixed the evident problems of four years ago.

The ‘four errors’ of polling

Causes of polling errors generally fall into four categories: the samples are unrepresentative (sample bias); the people pollsters think are likely to vote are different from those who actually do (differential turnout); people aren’t honest (misreporting, or the “Shy Trump” theory); or they change their minds after being polled (late swing).

The state-level errors raise the possibility of the first of these – that the samples contain too few of Donald Trump’s supporters, even after controlling for the demographics (notably education level) that strongly correlate with his support. This is a difficult problem to fix, though not necessarily impossible.

The ‘Shy Trump’ theory would also fit here, but isn’t something for which any credible evidence was found in 2016 (and, for what it’s worth, has also been dismissed by Trump himself). The variation whereby some Trump voters simply avoid taking part in polls altogether – perhaps viewing pollsters as part of a hostile establishment – is much likelier, but would be a form of sample bias, not misreporting.

There is one complication, though. Some pollsters weight by past vote, meaning their samples should contain the right proportion of people who voted for Trump in 2016. If the raw samples had too few of them, the weighting ought to correct that, and the 2016 and 2020 errors ought to cancel. Since they didn’t, it may be that there was some unknown split within 2016 Trump voters – for instance between those willing to take polls and those who aren’t.
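To see why past-vote weighting ought, in principle, to cancel such an error, here is a minimal sketch with entirely made-up group shares and vote intentions (not from any actual poll), assuming respondents accurately recall their 2016 vote:

```python
# Minimal sketch of past-vote weighting with made-up numbers.
# Assumes respondents accurately recall their 2016 vote.

# Hypothetical raw sample: share of respondents by recalled 2016 vote,
# and the Biden share of 2020 vote intention within each group.
sample = {
    # recalled_2016_vote: (share_of_sample, biden_share_within_group)
    "clinton_2016": (0.52, 0.95),
    "trump_2016":   (0.40, 0.07),   # under-represented vs the real result
    "other_2016":   (0.08, 0.60),
}

# Actual 2016 national vote shares to weight towards (approximate).
targets = {"clinton_2016": 0.48, "trump_2016": 0.46, "other_2016": 0.06}

def biden_share(group_shares):
    """Overall Biden share implied by a set of group shares."""
    return sum(group_shares[g] * sample[g][1] for g in sample)

raw_shares = {g: sample[g][0] for g in sample}
weighted_shares = targets  # each group re-weighted to its real 2016 share

print(f"Unweighted Biden share:  {biden_share(raw_shares):.1%}")
print(f"Past-vote weighted:      {biden_share(weighted_shares):.1%}")
```

The weighting restores the right number of 2016 Trump voters, but only fixes the size of the group, not its composition – if the 2016 Trump voters who answer polls differ from those who don’t, the errors would not fully cancel.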

As such, it could also point to an unexpected turnout differential between Trump and Biden supporters, although turnout models are complicated, so it will really be down to individual pollsters to assess.

There are also context-specific factors that could be relevant to turnout. Given the unprecedented skew in vote choice between early and on-the-day votes, correctly estimating the relative turnout of people who say they have already voted and of those who say they haven’t yet voted but will really matters.
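A minimal sketch, with invented vote shares, of how sensitive a poll’s topline is to the assumed split between early and on-the-day voters:

```python
# Minimal sketch, with invented figures, of how misjudging the relative
# turnout of early vs on-the-day voters shifts a poll's topline when the
# two groups vote very differently.

def topline(early_share, biden_early=0.65, biden_on_day=0.40):
    """Overall Biden share given the assumed share of votes cast early."""
    return early_share * biden_early + (1 - early_share) * biden_on_day

# Pollster assumes 60% of votes are cast early; in reality only 50% are.
print(f"Assumed 60% early:  Biden {topline(0.60):.1%}")
print(f"Actual  50% early:  Biden {topline(0.50):.1%}")
```

With made-up numbers like these, a ten-point misjudgement of the early-vote share moves the topline by a couple of points on its own.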

Or it could be that Democrats doing less in-person campaigning than usual mattered.

There are also political science discussions, such as Trump’s improved performance among ethnic minorities, which aren’t causes of polling errors per se, but which are relevant if polls fail to pick them up sufficiently for any reason (which would matter in some areas).

Pollsters will no doubt look carefully at all of these things.

But, contrary to some suggestions, polling has not got any less accurate over time. It’s just that there has been a relatively high frequency of close, high-stakes contests, where even a normal-sized polling error is sufficient to change the story significantly.

The problem is that once a narrative takes hold, it’s hard to dislodge with facts. Public and media attitudes to polling often seem to see-saw between blind faith and blind distrust at times like these, neither of which really reflects the reality of polling accuracy.

Ideally we would reach a happy middle-ground whereby it was widely understood that the only way to measure what people think, or how they intend to vote, is to ask them scientifically, but that the science has its limits, and can never tell you an election result with certainty.

Just don’t hold your breath.

US election live events

Next week the Centre for Policy Studies welcomes two of the leading lights of American political analysis to chew over this extraordinary election cycle.

Andrew Sullivan is among America’s most experienced and influential political commentators and authors. Join CapX’s editor-in-chief Robert Colvile and Andrew as they review the 2020 US election and discuss how the result will shape domestic and global politics. (Monday November 9: 17.00 – 18.00)

Frank Luntz is one of his country’s most respected communications experts. Join Robert in conversation with Dr Luntz as they explore how America voted, and what it means for the country’s future. (Friday November 13: 17.00 – 18.00)



Matt Singh is the founder of NumberCruncher Analytics.