
Why should political scientists continue to (fail to) predict elections?

The results of last week's British elections have already claimed the heads of three party leaders. But together with Labour, the Liberal Democrats and UKIP, another group lost big time: pollsters and electoral prognosticators. Not only were polls and predictions way off the mark on the actual vote shares and seats won by the different parties; crucially, their central expectation of a hung parliament did not materialize, as the Conservatives cruised to a small but comfortable majority of the seats. Even more remarkably, all polls and predictions were wrong, and they were all wrong in pretty much the same way. Not pretty.

This calls for reflection on the exploding number of electoral forecasting models that sprang up in the build-up to the 2015 national elections in the UK. Many of these models were offered by political scientists and promoted by academic institutions (for example, here, here, and here). At some point, you could hardly be a major political science institution in the country without an electoral forecast of your own. The field became so crowded that the elections were branded ‘a nerd feast’ and the competition among predictions ‘the battle of the nerds’. The feast is over and everyone lost. It is the time of the scavengers.

The massive failure of British polls and predictions has already led to a frenzy of often vicious attacks on pollsters and prognosticators from politicians, journalists and pundits, in the UK and beyond. A formal inquiry has been launched. The unmistakable smell of schadenfreude hangs in the air. Most disturbingly, some respected political scientists have voiced the hope that the failure will put a stop to the game of predicting voting results altogether, and have dismissed electoral predictions as unscientific.

[Image: reactions from political scientists Mudde and Afonso]

This is wrong. Political scientists should continue to build predictive models of elections. This work has scientific merit and public value. Moreover, political scientists have a mission in the game of electoral forecasting: to emphasize the large uncertainties surrounding all kinds of electoral predictions. They should be in the game not to win, but to correct others' all-too-eager attempts to mislead the public with predictions offered with a false sense of precision and certainty.

The rising number of electoral forecasts by political scientists has more than a little to do with a certain jealousy of Nate Silver, the American forecaster who gained international fame and recognition with his successful predictions of the US presidential elections. (By the way, this time round Nate Silver got it just as wrong as the others.) For once, there was something sexy about political science work, but, ironically, political scientists were not part of it. And if Nate, who is not a professional political scientist, can do it, so can we: academic experts with life-long experience in the study of voting and elections and hard-earned mastery of sophisticated statistical techniques. So academia was drawn into the forecasting game.

And that's fine. Political scientists should be in the business of electoral forecasting because this business is important and because it is here to stay. News outlets have an insatiable appetite for election stories as voting day draws near, and the release of polls and forecasts provides a good excuse to indulge in punditry and, sometimes, even meaningful discussion. So predictions will continue to be offered, and if political scientists move away, somebody else will take their place. And the newcomers cannot be trusted to have the public interest at heart.

Election forecasts are important because they feed into the electoral campaign and into the strategic calculations of political parties and individual voters. Voting is rarely a naïve expression of political preferences. Especially in a highly non-proportional electoral system such as the UK's, voters and parties have a strong incentive to behave strategically in view of the information that polls and forecasts provide. (Ironically, the one prognosis that political scientists got relatively right, the exit poll, is the one that probably matters least, as it only spares us a few more hours of impatient waiting for the official results.)

Hence, political scientists, as servants of the public interest, have a mission to offer impartial and professional electoral forecasts based on state-of-the-art methodology and deep substantive knowledge. They must also discuss, correct and, when appropriate, trash the forecasts offered by others.

And they have one major point to make: all predictions carry a much larger degree of uncertainty than prognosticators want (us) to believe. It is a simple point that experience has proven right time and again. But it is one that still needs to be pounded over and over, as pollsters, forecasters and the media get easily carried away.

It is in this sense that the commentators are right: predictions, if not properly bracketed by valid estimates of uncertainty, are unscientific and pure charlatanry. And it is in this sense that most forecasts offered by political scientists at the latest British elections were a failure. They did not properly gauge the uncertainty of their estimates and, as a result, misled the public. That they did not predict the result is less damaging than the fact that they pretended they could.

Since the bulk of the data doing the heavy lifting in most electoral prediction models is poll data, the failure of prediction can be traced to a failure of polling. But pollsters cannot be blamed for the fact that prognosticators did not adjust the uncertainty estimates of their predictions. The tight sampling margins of error reported by pollsters might be appropriate for characterizing the uncertainty of polling estimates of public preferences at a point in time (under certain assumptions), but they are invariably too low when it comes to making predictions from those estimates. Predictions face important sources of uncertainty beyond sampling error, and by not taking these into account prognosticators fool themselves and others. Another point forecasters should have known: combining different polls reduces the sampling margin of error, but if all polls are biased in the same direction (as they proved to be in the British case), the pooled prediction can still be seriously off the mark.
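To make that last point concrete, here is a minimal simulation sketch of the difference between sampling noise and shared bias. All numbers are made up for illustration (they are not actual 2015 polling figures): assume the true vote share is 37% and every poll suffers the same 3-point systematic bias, say from an unrepresentative sampling frame.

```python
import random
import statistics

random.seed(42)

TRUE_SHARE = 0.37     # actual vote share (illustrative, not real 2015 data)
SHARED_BIAS = -0.03   # systematic error common to all polls (assumed)
SAMPLE_SIZE = 1000    # respondents per poll

def run_poll() -> float:
    """Simulate one poll: binomial sampling around the *biased* share."""
    biased_share = TRUE_SHARE + SHARED_BIAS
    hits = sum(random.random() < biased_share for _ in range(SAMPLE_SIZE))
    return hits / SAMPLE_SIZE

for n_polls in (1, 10, 100):
    estimates = [run_poll() for _ in range(n_polls)]
    pooled = statistics.mean(estimates)
    # Pooling shrinks the sampling noise, but the pooled average
    # still sits about 3 points below the truth.
    print(f"{n_polls:>3} polls: pooled estimate = {pooled:.3f}, "
          f"error = {pooled - TRUE_SHARE:+.3f}")
```

Pooling a hundred such polls makes the estimate look deceptively precise, yet it converges on the biased 34%, not the true 37%. A margin of error computed from sampling variation alone says nothing about that remaining 3-point gap.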

Offering predictions with wide margins of uncertainty is not sexy. Correcting others for the illusory precision of their forecasts is tedious and risks being viewed as pedantic. But this is the role political scientists need to play in the game of electoral forecasting, and being tedious, pedantic and decidedly unsexy is the price they have to pay.
