For those who now think Nate Silver is god, here’s a question: Can Nate Silver make a prediction so accurate that Nate Silver himself doesn’t believe it?
Yes, he can–and he did. Silver famously predicted the results of Election 2012 correctly in every state. Yet while his per-state predictions added up to the 332 electoral votes that Obama won, Silver himself predicted that Obama’s expected electoral vote total was only 313. Why? Because Silver predicted that Silver would get some states wrong. Unpacking this (pseudo-)paradox can help us understand what we can and can’t learn from the performance of poll aggregators like Nate Silver and Princeton’s Sam Wang in this election.
(I mention Silver more often than Wang because Silver is more famous–though I would bet on Wang if the two disagreed.)
Silver’s biggest innovation was to introduce modern quantitative thinking to the world of political punditry. Silver’s predictions come with confidence estimates. Unlike traditional pundits, who either claim to be absolutely certain or say they have no idea what will happen, Silver might say that he believes something with 70% confidence. If he makes ten predictions, each with 70% confidence, then on average three of those predictions will turn out to be wrong. He knows that some of them will be wrong; he just doesn’t know which ones. The same basic argument applied to his state-by-state predictions–they were made with less than 100% confidence, so he knew that some of them were likely to be wrong. That explains how he could predict 313 total electoral votes for Obama while making individual state predictions that added up to 332–he expected some of his state predictions to turn out to be wrong. Silver’s own numbers imply an 88% confidence that he would get at least one state wrong.
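The arithmetic behind that gap is easy to sketch. Here is a toy calculation in Python with made-up state probabilities (these are not Silver’s actual numbers): the expected electoral-vote total is a probability-weighted sum, while each per-state call is correct only with probability max(p, 1−p), so a perfect scorecard is unlikely even when every individual call is the right one to make.

```python
# A toy version of the 332-vs-313 gap, with made-up probabilities
# (these are NOT Silver's actual numbers, just an illustration).
# Each entry: state -> (electoral votes, P(candidate wins the state)).
states = {
    "Florida":  (29, 0.55),
    "Virginia": (13, 0.79),
    "Colorado": (9, 0.80),
    "Ohio":     (18, 0.91),
    "SafeBloc": (237, 0.999),  # hypothetical lump of safe states
}

# Per-state calls: every state where p > 0.5 is called for the candidate.
called_ev = sum(ev for ev, p in states.values() if p > 0.5)

# Expected total: each state's votes weighted by its win probability.
expected_ev = sum(ev * p for ev, p in states.values())

# Each call is correct with probability max(p, 1 - p), so the chance
# of a perfect scorecard is the product over all states.
p_all_right = 1.0
for ev, p in states.values():
    p_all_right *= max(p, 1 - p)
p_some_wrong = 1 - p_all_right

print(called_ev)     # 306: the state calls add up to more...
print(expected_ev)   # ...than the expected total, about 286.6
print(p_some_wrong)  # about 0.68: at least one wrong call is likely
```

The same structure explains Silver’s numbers: the per-state calls summed to 332, but weighting each state by its win probability pulled the expected total down to 313.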
In short, Nate Silver got lucky, and his good luck will lead some of his less numerate readers to misunderstand why he was right. To see why, imagine a counterfactual world in which Silver’s three lowest-confidence state predictions had gone the other way, and Romney had won Florida, Virginia, and Colorado. Obama would still have won the election–as Silver predicted with 91% confidence–but Silver would have gotten fewer kudos. Yet this scenario would have better illustrated the value of statistical thinking, by showing how statistical reasoning can get the big picture right even if its detailed predictions are only right most of the time.
In fact, the improbable match between Silver’s state-by-state prediction and the actual results is an argument against the correctness of Silver’s methodology, because it implies that his state-by-state confidence estimates might have been too low. We can’t say for sure, based on only one election, but what evidence there is points toward the conclusion that Silver’s confidence estimates were off. It’s also interesting that Sam Wang’s methodology–which I prefer slightly over Silver’s–led to higher confidence levels. (Sam predicted an Obama victory with essentially 100% confidence.)
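The calibration argument can be made concrete. If each of the 51 calls (50 states plus D.C.) were right with the stated per-call confidence, the chance of a perfect scorecard is the product of those confidences, and for modest confidence levels that product is small. A one-line sketch, with illustrative confidence values rather than Silver’s actual ones:

```python
# Chance of a perfect 51-for-51 scorecard if every call is right
# with the same per-call confidence c (illustrative values only).
def p_perfect(c, n=51):
    return c ** n

print(p_perfect(0.95))  # about 0.07
print(p_perfect(0.99))  # about 0.60
```

A perfect scorecard is thus far more consistent with per-call accuracy near 0.99 than near 0.95–which is the sense in which going 51-for-51 suggests the stated confidences were too low.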
In the next election, statistical analysis will be much more central to the discussion. We can already see the start of the kind of “Moneyball war” that we saw in baseball, where cigar-chomping old-timers scoffed that mere number crunching could never substitute for gut feeling–and meanwhile the smarter old-timers were figuring out how to integrate statistical thinking into their organizations, and thriving as a result. In 2016, we can expect a cadre of upstart analysts, each with their own “secret sauce”, who claim to have access to deeper truth than the mainstream analysts have. But unlike in baseball, where a hitter comes to the plate hundreds of times in a season, allowing statistical prediction methods to be tested on large data sets, presidential elections are rare, and only a handful of historical elections have good polling data. So we won’t be able to tell the good analysts from the bad by looking at their track records–we’ll have to rely on quantitative reasoning to see whose methodology is better.
When Nate Silver is less lucky next time–and we can predict with fairly high confidence that he will be–please don’t abandon him. The good analysts, like Sam Wang and Nate Silver, are better than traditional pundits not because they are always right but because, unlike traditional pundits, they tell you how much confidence you should have in what they say.