About State Polls

If you have been reading my series of poll articles, by now you understand that there are three basic types of polls, what sorts of businesses and groups do polling and how it is performed, and the typical accuracy issues each poll type and polling group has to address. You also know that polls are often worded to advance narratives, and that the media will report polls in whatever way best advances their preferred politics.

But when we discuss a Presidential election, we must always understand that the election depends on winning the Electoral College, and the Electoral College is won, fundamentally, by winning states. That means that when we discuss polls, we must focus first and foremost on state polls.

One of the funny things about 2016 is how the polling groups, and the various people who make their fame off polls, continue to insist, against all evidence, that the polls were right in 2016. This claim matters to them, because admitting they were wrong in 2016 opens a nightmare problem: if they were wrong, they have to change their procedures to address errors they obviously do not understand. But since the pollsters deny they made mistakes, it is necessary to examine their 2016 performance to confirm whether the polls did or did not get the results correct.

So here is the question: how do we determine whether the polls were correct? To answer that reasonably, we need to follow the scientific process and compare the experimental results against a control. When pressed, it turns out there are really only two controls that polling groups and celebrity analysts are willing to accept: whether the poll called the winner of the election, and how the spread between the candidates projected by the poll compares to the spread in the actual result. I would be fine with this, but note that under this definition no poll but the last one can be graded, which I find suspiciously convenient given the way some polls make drastic changes in their final release. But fine; for this article I will grade only the last poll of the campaign presented by each poll group or poll entity (more in a couple of paragraphs on what those phrases mean). I will note, however, that whenever someone presents a mid-campaign poll with a phrase like ‘if the election were held today’, they are presenting a false contention, since the pollsters refuse to have any poll report but the last one compared against actual election results.

There are, however, four easy-to-determine metrics which can be used to grade an election poll:

1. Did the poll predict the winner?
2. Did the poll predict the percentage support of the winner within its margin of error?
3. Did the poll predict the percentage support of the next highest candidate within its margin of error?
4. Did the poll predict the percentage spread between the top two candidates within its margin of error?

These can be answered from the published results of any poll, and from them a simple grade can be awarded based on how many of the metrics proved accurate: all four right is an A, three of four is a B, two of four is a C, one of four is a D, and none is an F. In a future article I will discuss the grades for polls predicting the aggregate popular vote; here I will address the results from state polling.
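
For readers who want to run the same rubric themselves, here is a minimal sketch of the scoring arithmetic in Python. The data layout and field names (`support`, `moe`, and so on) are my own illustrative assumptions, not anything published by the pollsters, and the spread comparison reflects one reasonable reading of the fourth metric.

```python
def grade_poll(poll, result):
    """Score a final poll against the certified result on the four metrics.

    poll:   {"support": {candidate: polled %}, "moe": margin of error, in points}
    result: {candidate: actual %}
    Returns (points, letter), where points runs 0-4.
    """
    support = poll["support"]
    moe = poll["moe"]
    # Rank candidates by polled support and by actual support.
    polled_order = sorted(support, key=support.get, reverse=True)
    actual_order = sorted(result, key=result.get, reverse=True)
    first, second = actual_order[0], actual_order[1]

    points = 0
    # 1. Did the poll call the winner?
    points += polled_order[0] == first
    # 2. Winner's support within the margin of error?
    points += abs(support[first] - result[first]) <= moe
    # 3. Runner-up's support within the margin of error?
    points += abs(support[second] - result[second]) <= moe
    # 4. Spread between the top two within the margin of error?
    points += abs((support[first] - support[second])
                  - (result[first] - result[second])) <= moe

    return points, "FDCBA"[points]


# A hypothetical example: poll shows Clinton 46 / Trump 44 with a 3.0-point
# margin of error; the actual result is Clinton 47.5 / Trump 48.2.
poll = {"support": {"Clinton": 46.0, "Trump": 44.0}, "moe": 3.0}
result = {"Clinton": 47.5, "Trump": 48.2}
print(grade_poll(poll, result))  # (2, 'C'): missed the winner and his support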

Before going further, however, I need to explain what is meant by ‘poll groups’ and ‘poll entities’. A poll group is the named combination under which a poll is released to the public; a common example would be NBC/WSJ/Marist, which conducted polls in twelve states in 2016. A poll entity is each of the separate participants in such a group: in this case, NBC News, the Wall Street Journal, and Marist College would each be considered an entity. The distinction matters because some entities, like SurveyUSA, partnered with a variety of other entities to release polls, so we should examine the results both ways.

With that said, according to the links for the 2016 Presidential Election at RealClearPolitics, there were 300 final polls taken at the state level, a ‘final poll’ being the last poll a group released before the election, usually within ten days of Election Day. These polls were performed by 130 poll groups involving 149 poll entities, and they produced the following basic results:

Clinton was undercounted in 230 of the state polls, or 76.7%. Trump was undercounted in 277 of the state polls, or 92.3%. When compared to the margins of error, Clinton was undercounted by more than the published margin of error in 117 state polls, or 39.0%, while Trump was undercounted by more than the published margin of error in 212 polls, or 70.7%. In both cases the polls consistently understated the candidates’ actual support, but Trump was undercounted far more often than Clinton.
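
For concreteness, here is a sketch of the bookkeeping behind those percentages, using the same hypothetical data layout as above. ‘Undercounted’ here means the candidate’s actual vote share exceeded the polled share.

```python
def undercount_tally(polls, results, candidate):
    """Tally how often `candidate` was undercounted across final state polls.

    polls:   list of {"state": ..., "support": {candidate: %}, "moe": points}
    results: {state: {candidate: actual %}}
    Returns (undercount count, undercount %, beyond-MoE count, beyond-MoE %).
    """
    under = beyond_moe = 0
    for p in polls:
        actual = results[p["state"]][candidate]
        shortfall = actual - p["support"][candidate]
        if shortfall > 0:          # actual support exceeded the polled number
            under += 1
        if shortfall > p["moe"]:   # and by more than the published MoE
            beyond_moe += 1
    n = len(polls)
    return under, 100 * under / n, beyond_moe, 100 * beyond_moe / n
```

Run over the 300 final state polls, this is the arithmetic that yields the 76.7% and 39.0% figures for Clinton and the 92.3% and 70.7% figures for Trump.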

101 of the 130 poll groups released polls for only one state, and 107 of the 149 poll entities were involved in only one state. Just 20 poll groups released polls for three or more states, while only 29 polling entities were involved with three or more states.

When all final polls are graded, the average grade for the polling groups was 2.010, or a C-. The average grade for the polling entities was 1.997, or a D+. When only groups or entities that polled three or more states are considered, the poll group average drops from 2.010 (C-) to 1.977 (D+), while the poll entity average drops from 1.997 (D+) to 1.985 (D+).
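
The averages themselves are plain means of the 0-4 point scores, computed once per poll group and once per poll entity. A minimal sketch, again with assumed field names, including the three-or-more-states cut:

```python
from collections import defaultdict

def average_scores(graded, min_states=1):
    """Average the 0-4 point scores per poll group (or per poll entity).

    graded: list of (name, state, points) tuples, one per final poll.
    Names covering fewer than `min_states` distinct states are dropped.
    """
    points = defaultdict(list)
    states = defaultdict(set)
    for name, state, pts in graded:
        points[name].append(pts)
        states[name].add(state)
    return {name: sum(p) / len(p)
            for name, p in points.items()
            if len(states[name]) >= min_states}
```

Calling `average_scores(graded, min_states=3)` reproduces the 3+ state filter used above.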

I would note here that I am not trying to trash all polls. My research did reveal five polling groups and eight polling entities which scored higher than 2.0 (a C-). They are as follows:

Poll Groups
Trafalgar Group (3.625 B+ average, 8 states polled in 2016);
Colby College/SurveyUSA (2.667 C+ average, 3 states polled in 2016);
JMC Analytics (2.667 C+ average, 3 states polled in 2016);
Monmouth (2.357 C average, 14 states polled in 2016); and
Emerson (2.323 C average, 31 states polled in 2016)

Poll Entities
Landmark Communications (4.00 A+ average, 3 states polled in 2016);
Trafalgar Group (3.625 B+ average, 8 states polled in 2016);
Opinion Savvy (3.200 B average, 5 states polled in 2016);
Colby College (2.667 C+ average, 3 states polled in 2016);
JMC Analytics (2.667 C+ average, 3 states polled in 2016);
SurveyUSA (2.615 C+ average, 13 states polled in 2016);
Monmouth (2.357 C average, 14 states polled in 2016); and
Emerson (2.323 C average, 31 states polled in 2016)

I would observe that their 2016 success does not guarantee the same results in 2020; I would like to carve out some time to see how these groups did in 2012 and 2008, for example. But it does show that while many polls are weak in rigor and standards, some do manage to produce solid results.
