As we wait for the courts to sort out the 2020 Presidential election, I’m taking the opportunity to use the data we have to rebuke a man who sorely needs it – Nate Silver of FiveThirtyEight.com. Mr. Silver has gone all-in with selling the Mainstream Polling Kool-aid, saying “I’m amazed that polls are as good as they are” when it’s painfully obvious that many polls had a terrible year.
But Mr. Silver has enjoyed a great deal of fame and fortune based on his claim to being a top-level poll analyst. Silver even boasted of 100% accuracy in the 2012 election, a claim that depends on a frankly dishonest set of conditions. More on that in a little bit.
For now, it’s important to recognize that Nate Silver was a decently competent baseball stats guy who started a blog to write about his opinions, and in 2008 he posted his predictions for that year’s Presidential election. Time magazine was impressed and named Silver one of the World’s 100 Most Influential People in 2009, despite his having only just begun projecting elections. To put it another way, major media never paid that kind of attention to any of the successful polling organizations such as Gallup, Pew, or Rasmussen. It seems Silver’s stew of polls was the new fashion.
In 2010, the New York Times took over operation of Silver’s politics blog, FiveThirtyEight.com, a move that boosted Silver’s finances and tied his operation to the policies and political orientation of the NYT, well known as a far-left media platform.
Silver’s fame reached its apogee when he claimed perfect accuracy in his 2012 forecast. It is that bovine fecality which moves me to challenge Silver’s claims, especially given Silver’s habit of using misleading definitions to make his work look better than it is. Silver claimed to be right in 49 out of 50 states in 2008, then 50 for 50 in 2012. The facts, when examined under more standard conditions, prove Silver a liar.
Silver’s method is to take a selected number of polls he likes, weight them according to his personal preferences, and then average those selected and weighted polls to create a projection. That in itself is fine, except that because he completely ignores polls he does not like and tweaks numbers to suit his mood, Silver’s method is not a true aggregation of polls, something Silver has never admitted.
But that would be a minor issue if Silver agreed to be judged by his specific percentage predictions. That is not what Silver does: he expects to be judged purely on whether he called the winner of a state, not on the support levels he projects. This is completely dishonest.
To see what I mean, consider that people with absolutely no statistical training or attention to polls would have no trouble predicting that the Democrat will win states like California, New York, New Jersey, and Oregon, or that the Republican will win states like Wyoming, Oklahoma, the Dakotas, and so on. In every election the decision comes down to a handful of states, and almost everyone knows which states those are. Since most elections turn on five or six states, anyone can predict at least 44 states outright and make a 50-50 guess on the rest, so an ordinary person with no special knowledge can expect roughly 88% to 94% success in predicting state winners. Silver was certainly aware of that, but used semantics to sell his results as better than they were.
Polling groups and analysts don’t like people paying close attention when they get things wrong, and I understand that, because polling is difficult to get right a lot of the time, and success in polling is relative. But it’s also true that pollsters give themselves a pass too often, and sometimes their reason for not wanting to be graded comes off as an excuse. Polls are run throughout the year, but we are not supposed to grade any but the final poll, which is kind of silly when at least some polls make big changes at the end of the campaign to make their final projection look more accurate.
So OK, I don’t grade mid-campaign polls, because doing so would require an arbitrary standard, and I don’t want to go there. But every poll can be judged on three points for accuracy – the support reported for the Democrat, the support reported for the Republican, and the reported spread between the two candidates. The test is simple: was the number reported within the margin of error? If yes, pass and you get credit. If not, fail.
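The three-point test above can be sketched in a few lines of Python. This is my own illustration of the method, not anyone’s actual code; the function names and the sample numbers at the bottom are hypothetical.

```python
def within_moe(projected: float, actual: float, moe: float = 3.0) -> bool:
    """Pass if the projection landed within the margin of error."""
    return abs(projected - actual) <= moe

def grade_race(proj_dem: float, proj_rep: float,
               actual_dem: float, actual_rep: float,
               moe: float = 3.0) -> dict:
    """Score one race on the three points: Dem support, Rep support, spread."""
    return {
        "dem": within_moe(proj_dem, actual_dem, moe),
        "rep": within_moe(proj_rep, actual_rep, moe),
        "spread": within_moe(proj_dem - proj_rep, actual_dem - actual_rep, moe),
    }

# Hypothetical race: projected D 51 / R 46, actual result D 49 / R 49.
# Both support numbers land within 3 points, but the spread misses by 5.
print(grade_race(51, 46, 49, 49))  # dem passes, rep passes, spread fails
```

Note that a poll can get both candidates’ support within the margin of error and still blow the spread, which is why all three points get graded separately.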
I would also mention that in 2008 and 2012, Silver did not make predictions about the electoral votes from congressional districts in Maine and Nebraska. This matters because the Presidential election is not just one big national vote, not even fifty different votes to represent each state, but fifty-six different elections in the fifty states, D.C., and five congressional districts with a single electoral vote in each. That means that instead of going 49 for 50 in 2008 and 50 for 50 in 2012, Silver actually went 49 for 56 in 2008 and 51 for 56 (he did call D.C.) in 2012. So even Silver’s basic easy-level call is not true.
Anyway, I mean to grade Silver’s calls for the four Presidential elections on the simple basis of his specific projections: the projected support for each candidate and the projected spread in the races he called. In 2012, it appears Silver published only a projected spread for each state, and as noted he only began calling every race in 2016.
I’m going to use the old grading standard – under 60% is an F, 60.0% to 61.9% is a D-, 62.0% to 67.9% is a D, 68.0% to 69.9% is a D+, 70.0% to 71.9% is a C-, 72.0% to 77.9% is a C, 78.0% to 79.9% is a C+, 80.0% to 81.9% is a B-, 82.0% to 87.9% is a B, 88.0% to 89.9% is a B+, 90.0% to 91.9% is an A-, 92.0% to 97.9% is an A, and 98% and up is an A+.
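That scale is easier to apply as a lookup than as a paragraph, so here it is as a small Python function. The cutoffs are exactly the ones listed above; the function name is mine.

```python
# Letter-grade cutoffs from the scale above, highest first.
GRADE_CUTOFFS = [
    (98.0, "A+"), (92.0, "A"), (90.0, "A-"),
    (88.0, "B+"), (82.0, "B"), (80.0, "B-"),
    (78.0, "C+"), (72.0, "C"), (70.0, "C-"),
    (68.0, "D+"), (62.0, "D"), (60.0, "D-"),
]

def letter_grade(pct: float) -> str:
    """Map a percentage score to a letter grade; anything under 60 is an F."""
    for cutoff, letter in GRADE_CUTOFFS:
        if pct >= cutoff:
            return letter
    return "F"

print(letter_grade(55.3))  # F
print(letter_grade(73.2))  # C
```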
So let’s start with 2008. Using a standard 3.0% margin of error, Silver called Obama’s support right in 30 of 50 states (60.0%, a D-), McCain’s support right in 33 of 50 states (66.0%, a D), and the spread right in 20 of 50 states (40.0%, an F). For the whole 2008 election, Silver gets 83 right out of 150, for 55.3%. Grade: F
So instead of a glorious 49 out of 50 for 98% accuracy, when we look at the details, Silver gets 55.3% right for a failing grade.
Move on now to 2012, when Silver bragged about getting them all right. First, keep in mind that in 2012 Silver only bothered to project the spread between Obama and Romney. So OK, that’s where we look.
When we dig into those, we see that Silver called the spread within a 3.0% margin of error in 26 of 50 states, for 52.0%. Another F.
If you are scoring at home, Silver through two elections is 109 right out of 200, for a collective 54.5% grade.
So on we go to 2016.
In terms of measuring Clinton’s support, Silver did a better job, getting 41 out of 56 correct for 73.2% and a solid C. In terms of measuring Trump’s support, Silver managed to get 25 out of 56 correct for 44.6%, and another F. In terms of spread, Silver called 18 out of 56 right, a shabby 32.1%. For the 2016 election, Silver gets 84 out of 168 right for 50.0% and another F.
Silver’s running score through three elections is now 193 right out of 368, for 52.4%.
Now on to 2020. If the numbers stay up, that’s supposed to be a win for Biden, so surely that helps Silver, right?
In 2020, Silver called Biden’s support right 31 times out of 56 for 55.4%, and Trump’s support right 38 times out of 56 for 67.9%. This assumes, as a reminder, that the numbers do not change. Finally, Silver calls the spread right 17 times out of 56 races for 30.4%. Overall, Silver gets 86 right out of 168 for 51.2%, another F.
Silver’s final grade is 279 right out of 536, for 52.1% overall.
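As a check on the arithmetic, the per-election counts above can be tallied in a few lines of Python. The counts are the ones reported in the text; nothing here is Silver’s own data format.

```python
# (right, total) per election, from the counts graded above.
elections = {
    2008: (83, 150),  # Dem support 30 + Rep support 33 + spread 20, of 3 x 50
    2012: (26, 50),   # spread only that year
    2016: (84, 168),  # Clinton 41 + Trump 25 + spread 18, of 3 x 56
    2020: (86, 168),  # Biden 31 + Trump 38 + spread 17, of 3 x 56
}

right = sum(r for r, _ in elections.values())
total = sum(t for _, t in elections.values())
print(f"{right} of {total} = {100 * right / total:.1f}%")  # 279 of 536 = 52.1%
```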
Not something anyone would brag about.
But at least now we know why Silver doesn’t want to be judged by his detailed projections.
But wait, there’s more.
If you look at this graphic,
you will see that Silver has presented a snakelike ribbon of states. The idea is to show the line of states from strongest Trump to strongest Biden, which matters because it reflects how Silver expected the states to play out. But if you look at the actual margins of victory, things are quite a bit different. In some ways, the misleading graphics Silver posts actually helped fool him, leading him to miss the data underneath all the assumptions.
Also, if you click on the links I posted showing the forecasts, you can see that in addition to posting projected support levels, Silver presents a weather-forecaster style prediction: an alleged probability of winning each state or district. This matters because when Silver gets a call wrong, he will still fall back on that probability of winning to say he was really right.
In the end, it’s no big deal. Silver took a hit the last two elections and is no longer the golden leader for the Left in terms of polling, but the next time you see Silver pretend he is an expert in calling elections, the facts you learned here should give you reason to chuckle.
Nate Silver 2008 final prediction
Nate Silver 2012 projection by state
Nate Silver 2016 projection by state
Nate Silver 2020 projection by state