With conference season over, we’re beginning to get the first indications of how the annual party gatherings affected voting intention. Much has been said and written, but what did the public actually make of it all?
The three polls we have so far, with changes compared with the same pollster’s last poll before 15th September, are:
BMG: Con 38 (+1) Lab 39 (+1) LD 10 (-1) UKIP 4 (-3)
Opinium: Con 39 (=) Lab 39 (+1) LD 7 (=) UKIP 6 (=)
YouGov: Con 41 (+1) Lab 37 (+1) LD 9 (-2) UKIP 4 (=)
Looking at the changes, the most noticeable thing is how little movement there is in the main party vote shares, with both Labour and the Tories indicated within a point of where they started in all three polls.
In fact the only thing that even approaches a pattern in these numbers is that Labour and the Tories between them seem to have gained slightly at the expense of the Lib Dems and UKIP, though the main parties are still several points below the 84.5 per cent of the Great Britain vote they collectively polled at the general election.
These figures represent a statistically insignificant swing of less than 0.2 per cent from Conservative to Labour over the conference season.
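That figure can be checked with the standard two-party (Butler) swing: half the difference between the two parties' changes in share, averaged across the three polls. A minimal sketch, using only the changes tabulated above:

```python
# Changes since each pollster's last pre-15th September poll,
# as (Con change, Lab change) in percentage points.
changes = {
    "BMG": (1, 1),
    "Opinium": (0, 1),
    "YouGov": (1, 1),
}

# Butler swing to Labour = (Lab change - Con change) / 2 for each poll.
swings = [(lab - con) / 2 for con, lab in changes.values()]

# Average across the three polls: roughly 0.17 points to Labour,
# i.e. less than 0.2 per cent.
avg_swing = sum(swings) / len(swings)
print(f"Average Con-to-Lab swing: {avg_swing:.2f} points")
```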
This is not the impression you might have got from the much-retweeted polls conducted during conference season, a couple of which showed sharp swings in opposite directions, each of which was subsequently reversed. This was readily foreseeable, because as well as random error, polls are also subject to non-random error such as response bias, where one party’s supporters are differentially likely to participate in a poll, and which can be unstable around events such as conferences.
To get an idea of how much public opinion actually changes, we need to smooth through both types of volatility by taking an average of polls. There are different ways of doing this, but when looking at month-to-month changes, they tend to give similar results.
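One of the simplest ways of doing this is a moving average, where each reading is averaged with the readings immediately before it. A minimal sketch, using hypothetical vote shares rather than real polling data:

```python
def moving_average(series, window=3):
    """Smooth a series by averaging each value with up to
    (window - 1) preceding values."""
    smoothed = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Hypothetical, noisy Conservative shares from successive polls.
noisy = [39, 42, 38, 41, 40, 37, 40]
smoothed = moving_average(noisy)
```

A weighted or pollster-adjusted average would differ in detail, but as the text notes, over month-to-month changes the various methods tend to tell much the same story.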
Historical polling data compiled by Mark Pack shows that over the 35 years back to the 1983 election, there have been only 23 instances – about 5 per cent of the data points in that period – where the swing between the two main parties exceeded 3 per cent from one month to the next. Nearly half of monthly swings have been less than 1.5 per cent. In other words, although the last 12 months have seen some unusual stability in voting intention polls, that stability is less unusual than it might seem.
The three biggest month-to-month swings were all in the early Blair years, two of them post-election bounces after his landslides in 1997 and 2001, and a dramatic (though short-lived) swing against him at the time of the fuel crisis in 2000.
Conferences have sometimes had big impacts. When Neil Kinnock took over as opposition leader from Michael Foot in 1983, a 6 per cent swing to Labour followed, as did a 5 per cent swing to the Conservatives in 2007 when Gordon Brown U-turned on holding a general election. But on average, September and October are associated with only a very small increase in polling volatility compared with the rest of the year.
An important caveat here is that methodological changes, particularly in this century, have gradually reduced measured volatility, complicating longer-term comparisons. But however polling is done, a few things seem to hold – short-term moves are often noise, genuine moves often happen gradually, and large, sudden moves are quite rare.
In a way, these numbers surprise more than they should. A look back at swings between general elections since the war shows that only two – 1945 and 1997 – produced swings much above 5 per cent. The median swing over an entire parliament during that period was 2.5 per cent (and from 1983 onwards, 2.1 per cent). So while it’s perfectly possible for opinion to shift massively from month to month, it would be odd if it were to happen regularly.
That is why I’m instinctively wary of the suggestion that “voting intention won’t tell you much at this stage”. Small moves in voting intention early in a parliament won’t tell you much, but big ones usually will. If polls haven’t moved much from the last election, that is useful information. If they indicate big and sudden moves, they are right more often than they’re wrong.
For example, despite ultimately producing the least accurate voting intention polls in a generation, polling still provided an accurate early indicator of the Lib Dem collapse, the UKIP surge and the SNP surge in the 2010-2015 parliament. And before most pollsters had a chance to retool after the 1992 debacle, they were still able to show the scale of the damage the ERM crisis did to the Conservatives.
The same applies even with big moves that were reversed before the next election, such as the Corbyn slump, Cleggmania, the SDP surge in the early 80s and the Green surges in 2014-2015 and in 1989. In each case the polls also picked up the reversal, even if in some cases they didn’t quite pick it up fully.
Where people do tend to be misled is where they obsess over small or transient moves. This can happen at any time, but especially around set piece events, where the narrative is that the event matters, therefore it ought to move the dial one way or the other. Yet 2018 has so far been a reminder that often, in reality, the voters have other ideas.