US election pollsters weren't (very) wrong – statistically speaking
Media open season on prediction number crunchers
The media have spoken and they are unimpressed: the polls – this time the US presidential polls – got it woefully wrong, and therefore burn the witch! Or in the absence of witches, let's set fire to Nate Silver, who until around 2am (GMT) on November 9 was the doyen of US polling and go-to person for all US forecasts.
At which point, shares in Silver were found to be seriously over-valued. Wise in hindsight, the press turned on the pollsters, adding other failures – the UK General Election of 2015 as well as the more recent vote on the EU referendum – to the charge sheet. Is this a pattern?
Suddenly commentators discovered all manner of methodological inaccuracy: pollsters should have factored in the effect of social media, the fact that mobile phones make it harder to build representative phone-based samples, and the presence of "shy voters" – either just generally reluctant to respond to pollsters or, worse, skewed in their bashfulness. Not to mention the possibility of some Trump voters actively setting out to mislead pollsters.
Well, quite. All sorts of things might impact forecasts, which is why researchers compensate for known skews through the use of weighting. That is the origin of the ludicrous Donald Trump assertion that Democrats were over-sampling particular voting cohorts: as a statistician, you do compensate, and every time you analyse outcomes you discover new factors to compensate for.
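Since weighting does the heavy lifting here, a back-of-the-envelope sketch may help. This is not any particular pollster's method – just a minimal illustration of post-stratification weighting, with invented numbers throughout:

```python
# Minimal sketch of post-stratification weighting (all numbers invented).
# The raw sample over-represents one group, so each group is re-weighted
# to match known population shares before the vote share is computed.

# Hypothetical raw sample: each group's share of respondents and its
# support for candidate A.
sample = {
    "college":    {"sample_share": 0.60, "support_a": 0.55},
    "no_college": {"sample_share": 0.40, "support_a": 0.40},
}

# Known population shares, e.g. from census data (assumed figures).
population_share = {"college": 0.45, "no_college": 0.55}

unweighted = sum(g["support_a"] * g["sample_share"] for g in sample.values())
weighted = sum(
    g["support_a"] * population_share[name] for name, g in sample.items()
)

print(f"unweighted estimate: {unweighted:.1%}")  # 49.0%
print(f"weighted estimate:   {weighted:.1%}")    # 46.8% - a 2-point shift
```

Get the population shares wrong, or miss a skew entirely, and the "correction" quietly corrects for the wrong thing – which is exactly the bear trap described next.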
Although – and here there is an issue for pollsters – where the event you are forecasting is unique, or does not happen very often, or the key skews are changing, you won't know what you should have been compensating for until after you fall into the bear trap.
Whatever the underlying causes, in the days after the poll there was no shortage of journalists queuing up to put the boot into the forecasters. For the Wall Street Journal on 9 November, Joe Flint and Lukas Alpert were quick to claim (subscription needed) that only the two polls that had consistently shown a Trump lead had been "right".
But they weren't.
So where did the polls go wrong?
Let's start with percentages. Silver's FiveThirtyEight is a poll aggregator, which took the results of hundreds of polls across the US and throughout the campaign to come up with a forecast.
In the last day or so before polling, FiveThirtyEight had Clinton on 48.5 per cent, Trump on 44.9 per cent, which is not all that far from the actual outcome. Clinton is currently on 48.2 per cent, Trump on 46.4 per cent and, as the last few votes are totted up (we still do not have a final tally), it looks likely that Clinton's popular vote victory will widen to approximately 2.5 million, with the final vote percentages even closer to forecast.
In other words, a real Clinton margin of about +2 per cent, which – allowing for the routinely quoted statistical error of +/-3 per cent – is well in line with the majority of polls released in the days before the election.
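That quoted +/-3 per cent is no mystery, incidentally: it is roughly the 95 per cent sampling error on a poll of about 1,000 respondents – a sample size we assume here purely for illustration:

```python
import math

# 95% margin of error for a proportion p estimated from a simple random
# sample of n respondents: 1.96 * sqrt(p * (1 - p) / n).
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~1,000 respondents with support near 50 per cent:
print(f"{margin_of_error(0.5, 1000):.1%}")  # ~3.1% - the familiar 3 points
```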
So where, if at all, did the polls go wrong? Following the UK's 2015 election, concerns about polling inaccuracy were sufficiently widespread to trigger an investigation by the British Polling Council and the Market Research Society.
Dismissing a number of possible factors, including postal and overseas voting and a late swing, they decided polling had been unrepresentative and had failed adequately to compensate for age differences in voting and, perhaps to a lesser extent, for level of education – although their conclusions were hedged about with caveats.
The group wrote (PDF): "It is unlikely that bias in the vote intention estimate can be accounted for by a single direct cause. It is more plausible that the underlying causal model linking survey participation to vote choice is a complex multi-variable system, comprising interacting direct and indirect effects."
Education levels and voting patterns
It is too early to be anywhere near as specific about the US election. However, given this analysis, as well as subsequent analyses of the Brexit result, we should be prepared for the possibility of age and education playing an increasingly important role in influencing how individuals make political decisions.
This is particularly relevant to the US electoral system, which includes a "transformation process" – voting is for a college of electors, who in turn vote for the President. Each tranche of electors is decided at state level, so in most cases, if you win the state election by a single vote, you pocket every elector from that state.
A national poll result is therefore interesting, but largely irrelevant to the outcome.
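To see how brutal that transformation can be, consider a toy winner-take-all tally – three made-up states and invented vote counts, not real data:

```python
# Toy winner-take-all electoral college (invented states and vote counts).
# Win a state by even one vote and you pocket all of its electors.
states = {
    # name: (votes_a, votes_b, electors)
    "Smallstate": (130_000, 120_000, 3),
    "Bigstate":   (7_000_000, 6_999_999, 55),
    "Midstate":   (1_000_000, 2_000_000, 20),
}

electors = {"A": 0, "B": 0}
popular = {"A": 0, "B": 0}
for votes_a, votes_b, ev in states.values():
    popular["A"] += votes_a
    popular["B"] += votes_b
    electors["A" if votes_a > votes_b else "B"] += ev

print(popular)   # B wins the popular vote by almost a million ...
print(electors)  # ... while A takes the college, 58 electors to 20
```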
Despite Trump claims of anti-Republican "rigging", this system is significantly pro-Republican in its bias, with true-red states such as Wyoming providing three electors for 250,000 votes cast – while California returns 55 electors (for the Democrats) for 13.7 million votes. Go figure!
A similar effect is at work in UK elections, in which parties with a strong regional base survive an electoral reverse with a far higher level of representation than those whose vote is more widely distributed. Thus, at 26 per cent of the national vote, Labour would still return around 200 seats, while the Lib Dems hang on to somewhere between 20 and 50.
While the US polls got it more or less right nationally, they appear to have been less accurate at state level. In fact, they got it particularly wrong in the swing states – Ohio, Florida, Pennsylvania – where Trump's victories eventually assured his route to the White House.
At one level, failure to predict the winner in those states is not surprising. These were mostly tight contests, with the margin in Florida and Wisconsin at or below 1 per cent, which would be hard to predict under any circumstance.
However, something else appears to have been going on: almost without exception, state polling in these states significantly under-estimated levels of Trump support (elsewhere, state polling appears to have significantly under-estimated Clinton support).
The culprit? It is too early to say, but early theories are focusing – again – on education level. At any rate, significant under-estimates of the Trump vote appear to correlate closely with high proportions of voters with low educational achievement.
QED. The polls missed this one factor, but otherwise they got everything more or less right. Problem? What problem? There is much to commend this view, as well as its corollary: that Nate Silver was an easy scapegoat for pre-election media excess.
In addition to the vote forecast, FiveThirtyEight maintained a probability forecast, based on modelling thousands of potential outcomes from the polling data. In the 48 hours before the polls opened, that figure touched around 35 per cent for Trump.
This is a one-in-three chance of Trump winning. In plain English, better than the chance of tossing a coin twice and getting two heads. That's not bad for Clinton, but far from the slam dunk reported by many media.
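FiveThirtyEight's model is considerably more sophisticated than anything we could show here – it correlates polling errors across states, for one thing – but the basic mechanics of modelling thousands of potential outcomes look something like this sketch, with invented state probabilities:

```python
import random

# Naive Monte Carlo election forecast (all probabilities invented).
# Real models also correlate errors between states; here each state
# is simulated independently, purely for illustration.
state_odds = {
    # name: (probability candidate A wins the state, electors)
    "Smallstate": (0.20, 3),
    "Bigstate":   (0.90, 55),
    "Midstate":   (0.55, 20),
    "Swingstate": (0.50, 29),
}
TO_WIN = 54  # majority of the 107 electors on this toy map

def simulate_once() -> bool:
    won = sum(ev for p, ev in state_odds.values() if random.random() < p)
    return won >= TO_WIN

random.seed(538)
trials = 100_000
wins = sum(simulate_once() for _ in range(trials))
print(f"A wins in {wins / trials:.1%} of simulations")
```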
Nor could anyone reading Silver's on-site commentary have been in any doubt about his own caution. As he wrote, just four days before the election: statistically speaking, Trump was well within reach of victory.
Are polls bad for democracy?
Much blame, then, for the media, but the pollsters are not entirely innocent. First, because as forecasts approach 50/50, the standard error reaches its maximum. And that maximum, regularly quoted as 3 per cent, is only 3 per cent if no unexpected factors are at work – all those "interacting direct and indirect effects" cited by the British Polling Council. So the real standard error attached to individual polls may be closer to 4 or 5 per cent. We have no way of knowing.
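The first point is simple arithmetic: the sampling error sqrt(p(1-p)/n) peaks when support sits at 50 per cent – exactly where close elections live – and stacking even a modest unmodelled error on top pushes the honest uncertainty well past the quoted 3 per cent. The 2-point "house effect" below is our assumption, not a figure from any polling inquiry:

```python
import math

# Sampling error for a proportion peaks when the race is closest:
# se(p) = sqrt(p * (1 - p) / n) is largest at p = 0.5.
n = 1000
for p in (0.30, 0.40, 0.50):
    print(f"p={p:.0%}: 95% margin ~ {1.96 * math.sqrt(p * (1 - p) / n):.1%}")
# p=30%: 95% margin ~ 2.8%
# p=40%: 95% margin ~ 3.0%
# p=50%: 95% margin ~ 3.1%

# An extra, unmodelled ~2-point error (our assumption) adds in quadrature:
total = math.sqrt(0.031**2 + 0.02**2)
print(f"total error ~ {total:.1%}")  # ~3.7% - closer to 4 points than 3
```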
But second, forget the fine words about caveats and caution. Pollsters operate in the real world, and in the real world they should understand that there is a massive difference between forecasting a Presidential Election and forecasting sales of toothpaste. The former matters greatly, and will inevitably become an immediate part of the media circus, subject to spin and hype and misrepresentation: it will be used in ways that toothpaste data never will.
There remains a nagging concern that all this polling is not good for democracy – that polling itself impacts electoral outcomes. From the 2015 election to the referendum campaign to this latest election, polls showing one side definitely ahead may have had the contrary effect of suppressing that side's vote; ironically, even creating the exact opposite result to that forecast. Evidence for this is thin on the ground, but it is an accusation regularly laid by professional politicians, and therefore one we should take seriously.
So no. Mostly, pollsters did not get it very wrong. At least not statistically speaking. But when it comes to the bigger picture they failed abysmally, failed to understand the wider context within which they work. They are guilty, in the end, of the most awful academic naivety and in that respect, if no other, they need to do much better. ®