7 December 2018

The problem with claiming Facebook ads or overspending caused Brexit

By Matt Singh

Why did that happen? When seeking to explain events, it may be tempting, even logical, to think in terms of cause and effect. Certainly to anyone with a background in the natural sciences, it is intuitive: the same experiment under the same conditions produces the same result.

But when it comes to social or political science, the conditions are never the same, so we often can’t easily discern (let alone prove) cause and effect. The discussion often arises in relation to election or referendum results.

For example, what “caused” Brexit?

First of all, we need to be clear about the question. We want to explain why 51.9 per cent of valid votes were for Leave, but as opposed to what? Are we asking why that number was higher than it might have been in another country, or at some other point in time? Or are we asking to what extent, if at all, the campaign period made a difference to the result?

Also, what does “cause” mean? Could any effect greater than the margin of victory be said to be a cause? Arguably so, but in that case, many outcomes will have multiple causes.

This week the Independent splashed with analysis suggesting that Facebook advertising in the final days “very likely” shifted 800,000 votes – just enough to flip the outcome from Remain to Leave. This analysis was rapidly taken apart on account of its heroic assumptions and apparent calculation errors.
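For what it’s worth, the “just enough to flip” framing is at least arithmetically checkable. A minimal sketch, using the official referendum totals (17,410,742 Leave, 16,141,241 Remain):

```python
# Check how many vote-switchers it would take to flip the 2016 EU
# referendum result, using the official totals.
leave = 17_410_742
remain = 16_141_241

margin = leave - remain  # 1,269,501 votes
# Each voter who switches sides moves the margin by two votes, so the
# minimum number of switchers needed to reverse the result is:
switchers_to_flip = margin // 2 + 1  # 634,751

print(f"Margin: {margin:,}")
print(f"Switchers needed to flip: {switchers_to_flip:,}")
print(f"Would 800,000 switchers flip it? {800_000 >= switchers_to_flip}")
```

So 800,000 switchers would indeed have been enough, which is precisely why the claim attracted so much attention; the arithmetic of the threshold is sound even if the analysis behind the 800,000 figure is not.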

But even setting those holes aside, there is a deeper problem when it comes to thinking about cause and effect in politics. Unlike the earlier example of a replicable experiment, here we are trying to determine what caused large numbers of people to behave in a certain way.

Asking people why they voted is problematic, because deciding how (or whether) to vote is an individual and often complex decision. Some people will always vote for the same party, and some will be able to explain their vote choice accurately, but many others won’t.

That often forces psephologists to infer behavioural causes from other evidence, such as timing or individual-level data. For example, if people had been asked at the time whether they had seen an ad, it would be possible to compare the behaviour of those who had with those who had not. Even that approach wouldn’t be watertight, because people may misreport or misremember which adverts they had seen, but at least you’d have a hypothesis and a way to test it.
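As a sketch of what that test might look like, here is a standard two-proportion z-test on entirely hypothetical survey numbers (the figures, and the choice of test, are illustrative assumptions, not anything from the Independent’s analysis):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical survey data: did respondents who recalled seeing the ad
# vote Leave at a different rate from those who did not?
saw_ad_leave, saw_ad_n = 560, 1_000       # made-up numbers
no_ad_leave, no_ad_n = 5_100, 10_000      # made-up numbers

p_saw = saw_ad_leave / saw_ad_n
p_not = no_ad_leave / no_ad_n

# Pooled two-proportion z-test of H0: the two Leave rates are equal.
pooled = (saw_ad_leave + no_ad_leave) / (saw_ad_n + no_ad_n)
se = sqrt(pooled * (1 - pooled) * (1 / saw_ad_n + 1 / no_ad_n))
z = (p_saw - p_not) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"Saw ad: {p_saw:.1%} Leave; did not: {p_not:.1%} Leave")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

Even a clearly significant difference here would only establish an association: people who chose to follow campaign pages, or whom the targeting selected, may have differed from everyone else to begin with.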

History, including recent history, offers examples of election outcomes and theories about what might have “swung” them. In the US, the question has often been raised of whether Russian interference contributed to the 2016 surprise. That theory is hard to test because it isn’t tied to a specific event; there is, however, evidence that the Comey letter (and its aftermath) helped Donald Trump.

Back in the UK, the largely unexpected Conservative majority at the 2015 election has often been attributed to English voters being spooked by the prospect of what the Conservatives relentlessly branded “Ed Miliband propped up by the SNP”. Likewise, the similar surprise in 1992 has been put down to Labour’s Sheffield rally – viewed by some as overconfident – or the Sun’s polling day front page attacking Neil Kinnock. But the data suggests that in both cases not much “happened”: the polls were simply wrong the whole time.

Even when something did happen at the last moment, such as in 1970, it can be misattributed. Harold Wilson’s surprise defeat was almost certainly due to bad economic data published in the final days, and not, as has been claimed, due to England’s defeat in the World Cup.

What these examples have in common is that the result was not widely expected. That often leads people to search for a cause of the surprise, particularly those on the losing side, whose sense of disappointment is greater when their hopes of victory have been dashed.

This, combined with the view among some Remain supporters that the Brexit vote wasn’t legitimate, means that theories seeking to rationalise the outcome can get a lot of attention. Yet the likelier reality is that the level of public opinion, and changes in it, simply weren’t fully reflected in the polling, rather than public opinion itself being irregular.

When something big happens, people will want to know why. Sometimes the resulting theories can be tested. But often they can’t, and the lack of definitive conclusions means debates that go on and on and on.

Matt Singh is the founder of Number Cruncher Analytics.