17 November 2016

Academics need to understand the countries in which they live

By Terence Casey

On 23 June, I watched as Britain's referendum on European Union membership defied the polling data and Brexit won. My political science colleagues in Britain flooded social media with a combination of woe and disbelief.

Clearly, the voters – deceived by the Leave campaign – did not understand Brexit's implications. How else could such a wrong-headed idea have won the referendum?

Flash forward to this fall, as I taught a course on American Politics & Government during the presidential election.

I guided my students through standard textbook fare about the relationship between demographics and voting behaviour, regions of historic Democratic or Republican strength, the key swing states, the role of money in elections, and the importance of a ground game in presidential campaigns.

Donald Trump’s campaign was, to say the least, novel in its approach, promising to change the political dynamic. Would this be enough to upset the fundamentals that political scientists have studied for so long? I didn’t think so.

So on the day before the election, having spent weeks scouring Real Clear Politics, 538.com and Larry Sabato's Crystal Ball, I drew up my own electoral map, which I (foolishly) shared with my students and Facebook friends.

In this scenario, Hillary Clinton would get 304 electoral votes to 234 for Trump. Wrong again. My prediction was almost a mirror image of the actual result.

As after the Brexit vote, social media has oozed with anger, frustration and disbelief. Both results have exposed an uncomfortable professional problem: we academics truly do not understand the countries in which we live.

In the aftermath of the election, pollsters and prognosticators are being denigrated. However, the polls were not, in the broadest sense, wrong.

Polls take samples of the population and provide margins of error within which we are 95 per cent confident the actual value resides. Probability and uncertainty remain. And even the best polls may produce a result outside the margin of error: after all, the pollsters are saying up front that there is a 1 in 20 chance of just that happening.
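To make that arithmetic concrete, here is a minimal sketch of the standard margin-of-error formula for a simple random sample (my own illustration; the poll numbers are hypothetical, not from any survey discussed here):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95 per cent margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 voters putting a candidate at 48 per cent:
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

In other words, even a flawlessly conducted poll of that size only pins the true figure down to within about three points, and one such poll in 20 will miss by more than that through sheer chance, before any systematic error enters the picture.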

With the US election, some polls, specifically the LA Times/USC tracking poll and Investor's Business Daily, consistently had Trump ahead. Yet even they were not entirely correct: these were national polls, and Hillary Clinton won the popular vote, after all.

But still, some scholars saw through the haze. The American Political Science Association published a batch of predictive models prior to the election: the Primary Model of Stony Brook University's Helmut Norpoth forecast "a near-certain Trump victory" back in February, and the "Time for Change" model of Emory University's Alan Abramowitz put Trump's odds of winning at 2-to-1.

Most other forecasts, both purely academic and from more popular sites (such as those put together by Nate Silver and The New York Times's Upshot), overwhelmingly – and incorrectly – predicted a Clinton win.

The problem is how one interprets all of this data, and what additional adjustments and assumptions are needed to move from raw numbers to a predicted political result.

The Brexit vote in the UK, and Trump’s victory in the US, show that the problems of interpretation are not isolated.

For the average voter not to understand the political preferences of their fellow citizens is one thing. For social scientists, it is professional malpractice.

Why are we getting it so wrong? Trump supporters might charge that this is liberal academia fixing it for Democrats.

Yet straightforward manipulation for ideological reasons does not strike me as a particularly compelling explanation.

Yes, academia is chock-a-block with liberals (I’m conservative, albeit of a #NeverTrump variety). But most political scientists have a vested professional interest in correct analysis, regardless of whether they support the outcome.

A more likely explanation for this plethora of false predictions is the corrosive effect of the Pauline Kael Syndrome, named after The New Yorker's longtime film critic.

Kael pointed out that Nixon's 1972 victory came as a particular shock because no one she knew had voted for him. The remark is often used to indict liberal remoteness – yet Kael was, in fact, making a self-aware admission that she lived in a Manhattan bubble.

Self-aware or unaware, an intellectual bubble is still just that. Too many political scientists and other academics circulate in professional and social circles where they, too, know no one who voted for Trump.

And rather than introspection about our failures of perception as scholars, the dominant reaction among these academics – over both Brexit and Trump – has been to blame the people for being so stupid.

Understanding the social, economic and political dynamics of our own country is the primary job of many academic social scientists. In the aftermath of Brexit and Trump, we need to recognize more openly that we are failing as scholars if our own societies remain unknown countries to us.

Terence Casey is Professor of Political Science at the Rose-Hulman Institute of Technology and former Executive Director of the British Politics Group at the American Political Science Association.