What We Got Wrong In Our 2015 U.K. General Election Model


Polls for the U.K. election were all well off the mark. Ben Lauderdale offers some possible reasons at FiveThirtyEight:

The only thing we can say on our behalf is that in comparative terms, our forecast was middle of the pack, as no one had a good pre-election forecast. Of course the national exit poll, while not as close to the target as in 2010, was far better than any pre-election forecast.

Steve Fisher at ElectionsEtc came closest to the seat result: his 95 percent prediction intervals nearly included the Conservative seat total, although the Liberal Democrat total fell well outside his interval, just as it did ours. Several other forecasts were further away than we were.

The most obvious problem for all forecasters was that the polling average had Labour and the Conservatives even on the night before the election. This was not just the average of the polls; it was the consensus: nearly every pollster’s final poll placed the two parties within 1 percentage point of each other. Because the polling average was level, we predicted the Conservatives to win by 1.6 percentage points, based on the historical tendency of polls to overstate changes from the last election. This kind of adjustment helps explain how the 2010 result deviated from the national polls on election day, as well as the infamous 1992 U.K. polling disaster, when the polls had the two parties even before the election and the Tories won by 7.5 percentage points. The Conservative margin over Labour will be smaller than that when the 2015 totals are finalized, but not a lot smaller (currently it is 6.4 with all but one constituency declared). So our adjustment was in the right direction, but it was not nearly large enough. Part of the reason Fisher did better is that he applied a similar adjustment but made it party-specific, leading to a larger swingback for the Tories than for other parties because of that 1992 result.
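To make the mechanics concrete, here is a minimal sketch of a swingback adjustment of this general kind: the current polling average is shrunk toward the previous election's result, so a level race gets nudged toward the party that led last time. The function and the blend weight are hypothetical illustrations, not the actual model; the weight below is chosen only so that a level race reproduces roughly the 1.6-point margin mentioned above. Fisher's party-specific variant amounts to using a different weight for each party, larger for the Tories.

```python
def swingback_adjust(poll_share, last_result_share, alpha=0.23):
    """Shrink a party's polling average back toward its vote share at
    the previous election. alpha = 0 trusts the polls entirely;
    alpha = 1 predicts a pure repeat of the last election. The weight
    0.23 is illustrative only, picked so a level race yields roughly
    the 1.6-point Conservative margin mentioned in the post."""
    return (1 - alpha) * poll_share + alpha * last_result_share

# 2010 result: Conservatives ~36.1%, Labour ~29.0%; the 2015 polling
# average had the two parties roughly level at ~33.5% each.
con = swingback_adjust(33.5, 36.1)  # pulled up toward 2010
lab = swingback_adjust(33.5, 29.0)  # pulled down toward 2010
print(f"Implied Con-Lab margin: {con - lab:+.1f} points")  # +1.6
```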

Before the election, we calculated expectations for three measures of performance — absolute seat error, individual seat error and Brier score — based on the uncertainty in our forecast. (More about how those three errors are defined can be found in this article.) We have now calculated each of these quantities for our forecasts, given the final results. We did not do as well as we expected by any of these measures. Our absolute seat error was 105. We incorrectly predicted 63 individual seats (out of 632 in England, Wales and Scotland). Our Brier score was 96 (the best possible score would have been 0, and the worst 632). Not good.
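For concreteness, here is one plausible way to compute the three measures from seat-level forecasts. These definitions are guesses consistent with the numbers above, not quoted from the linked article; in particular, the Brier score below uses the squared shortfall of the probability assigned to each eventual winner, a form chosen because it bounds each seat's contribution between 0 and 1 and so matches the 0-to-632 range quoted above.

```python
def absolute_seat_error(pred_seats, actual_seats):
    """Sum over parties of |predicted seat count - actual seat count|."""
    parties = pred_seats.keys() | actual_seats.keys()
    return sum(abs(pred_seats.get(p, 0) - actual_seats.get(p, 0))
               for p in parties)

def individual_seat_error(pred_winners, actual_winners):
    """Number of constituencies whose predicted winner was wrong."""
    return sum(pred_winners[seat] != actual_winners[seat]
               for seat in actual_winners)

def brier_score(win_probs, actual_winners):
    """Squared shortfall of the probability given to each actual winner:
    0 for a seat if the winner was given probability 1, and 1 if 0."""
    return sum((1.0 - win_probs[seat].get(actual_winners[seat], 0.0)) ** 2
               for seat in actual_winners)

# Tiny worked example with two hypothetical seats:
pred = {"Seat A": "Con", "Seat B": "Lab"}
actual = {"Seat A": "Con", "Seat B": "Con"}
probs = {"Seat A": {"Con": 0.9, "Lab": 0.1},
         "Seat B": {"Con": 0.4, "Lab": 0.6}}
print(individual_seat_error(pred, actual))   # 1 seat called wrong
print(round(brier_score(probs, actual), 2))  # 0.01 + 0.36 = 0.37
```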

More here.