Tuesday, November 20, 2012

Nate Silver, Superforecaster

Pick your own sports hero analogy. Nate Silver, who runs the blog FiveThirtyEight, is the Michael Jordan of election forecasting, or Muhammad Ali, or Peggy Fleming, or Michael Phelps, or Cy Young. Or Serena and Venus Williams rolled into one.

If election forecasting were baseball, Nate Silver nearly pitched a perfect game in November of 2008; the results in 49 out of 50 states, as I remember it, followed his most likely outcome. If that was nearly a perfect game, then four years later he pitched a perfect game without an opposing batter ever making contact with the ball. The results in every state, including Florida, followed his most likely outcome. He did so much better than everyone else that he really isn't playing the same game.

But people misunderstand something about forecasting when they say Silver predicted on November 5th, 2012 that Obama would win the next day.  What Nate said is that Obama had a 90.9% chance of winning.  Obama did in fact win, which is to say Silver was 90.9% right and 9.1% wrong.

Here's another way to think about it. If we reran November 6th 1,000 times, in the style of the movie Groundhog Day, Silver said that, most likely, Romney would win 91 times and Obama would win 909 times.
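That rerun framing can be sketched as a toy simulation. The 90.9% figure and the 1,000 reruns come from the post itself; everything else below (the seed, the variable names) is purely illustrative:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

P_OBAMA = 0.909  # Silver's final win probability for Obama
RERUNS = 1000    # "Groundhog Day" reruns of November 6th

# Each rerun is a coin flip weighted by the forecast probability.
obama_wins = sum(random.random() < P_OBAMA for _ in range(RERUNS))
romney_wins = RERUNS - obama_wins

print(f"Obama wins {obama_wins} of {RERUNS} reruns; Romney wins {romney_wins}")
```

Any single run of this simulation won't land on exactly 909, of course; the point is that the forecast is a distribution over outcomes, not a single call.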

Silver clearly knows a lot, but what really sets him apart from most prognosticators is that he knows what he doesn't know. That is, he knows he can't predict the future with perfect accuracy, so he designed his model around probabilities, not predictions.

There's a hint, however, that his model overstates the odds of the underdog, suggesting the model is overly cautious. This time around, in six states Silver gave the underdog (Romney in five cases; Obama in one) a chance of winning between 15% and 49.7%. Yet in none of those cases did the underdog prevail. With odds like those, one might expect one upset, or maybe even two. In reality, since results in different states are highly correlated - had Romney upset Obama in Ohio, for example, he almost certainly would have also won Florida - you would expect either very few upsets or a whole lot of them. Nevertheless, it's curious that there was only one upset across 100 state-level presidential contests (50 in 2008; 50 in 2012).
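A quick back-of-the-envelope check supports the "one upset, or maybe even two" intuition. The post only gives the 15%–49.7% range, so the six specific underdog probabilities below are hypothetical values spread across that range; the independence assumption is also a deliberate simplification that the paragraph itself rejects:

```python
# Hypothetical underdog win probabilities for the six states.
# Only the 15%-49.7% range comes from the post; these specific
# values are made up for illustration.
underdog_probs = [0.15, 0.20, 0.25, 0.30, 0.40, 0.497]

# Expected number of upsets. Linearity of expectation means this
# part holds regardless of correlation between states.
expected_upsets = sum(underdog_probs)

# Probability of zero upsets *if* the states were independent.
p_no_upsets = 1.0
for p in underdog_probs:
    p_no_upsets *= 1 - p

print(f"expected upsets: {expected_upsets:.2f}")
print(f"P(no upsets), independent case: {p_no_upsets:.1%}")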

There are plenty of angles Silver could pursue in enhancing his model; to cite just a few, he could consider incorporating data about voter registration, early voting, the extent of voter suppression and get-out-the-vote efforts, different economic and polling data, etc. While he does that, he might also consider re-calibrating the odds produced by his current model.
