How Did Nate Silver (And Everyone Else) Get Trump So Wrong: The Flip-Flopping Pollster
How poor have the election forecasters been this year? It is a topic many are discussing, given the large number of upsets we’ve had during the primaries. For example, statistician Nate Silver (who started the campaign season proclaiming Trump had a <2% chance of being nominated) predicted on March 1, with 94% probability, that Trump would win Alaska (he lost).
Silver then predicted on March 8, with >99% probability, that Clinton would win Michigan (she lost). Silver again predicted on May 3, with 90% probability, that Clinton would win Indiana (she lost). But there is another issue besides being wrong: the amount of model flip-flopping occurring right up to these elections.
The most proximate example is Silver stating this past Sunday that Cruz had a 65% chance to win Indiana; the next day (Monday, the eve of the election), and with little new data, he “adjusted” that to Trump having a 69% chance to win! That’s horrible! And this mistrust of forecasters goes beyond Indiana, since we’ll show below that in 14 of 63 state elections so far, likely voters have had this sort of shoddy flip-flopping to contend with. So an important question we all must ask this year is: given the narrower polling margins for the general election, the degree to which these forecasters have been unable to make sense of their own models, and the large amount of flip-flopping anyway, how will we ever believe anyone’s single forecast of who will win in November? Whenever a forecaster repeatedly (here, here) and guardedly blames “special circumstances” for his or her deteriorating performance, that is a further sign that trouble is brewing and is being masqueraded as “everything is awesome.”
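One standard way to put a number on how badly these calls missed is the Brier score: the squared difference between the stated probability and the actual 0/1 outcome, where 0 is perfect and a coin flip averages 0.25. A minimal sketch, using only the forecasts cited above (treating the “>99%” Michigan call as 0.99 is an assumption):

```python
# Brier score for the three forecasts cited in the article.
# An outcome of 0 means the predicted winner lost.
forecasts = [
    ("Trump wins Alaska (Mar 1)",     0.94, 0),  # he lost
    ("Clinton wins Michigan (Mar 8)", 0.99, 0),  # she lost; ">99%" taken as 0.99
    ("Clinton wins Indiana (May 3)",  0.90, 0),  # she lost
]

for label, prob, outcome in forecasts:
    brier = (prob - outcome) ** 2  # squared error; 0 = perfect, 0.25 = coin flip
    print(f"{label}: p={prob:.2f}, Brier score = {brier:.4f}")
```

Every one of these calls scores far worse than 0.25, i.e., worse than assigning a 50/50 coin flip to each race, which is the sense in which the misses are extreme rather than ordinary forecasting noise.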

This post was published at Zero Hedge on 05/06/2016.