Poll Malarkey 2016 – Tricks of the Shade

‘But the polls were right.’ Soon after the election of President Trump, the media outlets that had spent more than a year laughing at him realized they had to explain themselves. That was going to be especially hard for the major networks, which had started celebrating Hillary Clinton’s expected win a day or two before the election actually happened. It did not take long for pundits and self-praising ‘experts’ to decide that a plate of crow was not to their taste, so they quickly fell back on the claim that the polls themselves had been just fine, and the issue was merely a matter of interpretation.

Malarkey.

The polls were not ‘right’ except by a very strained standard, specifically that the final prediction – on average – was not far enough from the actual results to be called wrong. To understand that logic, imagine telling your boss that your wrong call on a deal was actually right because your numbers were almost correct. The plain fact is that poll aggregation, popular as it is these days, often hides important information and replaces good judgment with groupthink, and in 2016 it fumbled the call.

To demonstrate, I looked up the 2016 polls on RealClearPolitics.com and found 173 of them, taken from May to November 2016 by 25 different polling groups. There are many more that RCP chose not to use, but these will do for the demonstration. Here is the average result from those polls:

Across those polls, the averages come out to 3,215 respondents, a 2.975% Margin of Error (MOE), 43.3% for Clinton, 39.2% for Trump, 9.8% for other candidates, and 7.6% undecided.

For comparison, the actual election results were 48.2% Clinton, 46.1% Trump, 4.4% Other.

The people trying to defend the polls will tell us that’s not bad, since the polls gave Clinton a 4.1-point lead and she ‘won’ the popular vote by 2.1 points. With a 2.975% MOE, that 2.0-point difference falls within the range.

Not. So. Fast. In fact, before I explain why that is wrong, I need to explain how some of the details work out. Namely, the Margin of Error.

The math behind polling comes from statistics – probability and sampling theory – which works in ranges and confidence limits. Scientists use the same tools to put error bars on their measurements, while polling analysts use them to project support for or opposition to an issue, plan, or candidate. There is always a degree of inaccuracy in an estimate built from a sample, which is why a Margin of Error exists. I might quibble about the MOE cited by the polls, since they never show the math behind their numbers, but for now we can accept their stated MOE as a valid standard for testing the reliability of their outcomes.
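Since the stated MOE is the benchmark used below, it is worth seeing where such a number conventionally comes from. Here is a minimal sketch in Python, assuming the textbook formula for a simple random sample at a 95% confidence level with the worst-case p = 0.5; the sample sizes are purely illustrative, and real pollsters apply weighting and design adjustments that can push the figure higher.

import math

def margin_of_error(sample_size, z=1.96):
    # Conventional MOE for a sampled proportion: z * sqrt(p * (1 - p) / n),
    # using the worst case p = 0.5 and z = 1.96 for a 95% confidence level.
    return z * math.sqrt(0.25 / sample_size)

# Illustrative sample sizes; a typical national poll runs near 1,000 respondents.
for n in (500, 1000, 1500):
    print(f"n = {n}: MOE = {margin_of_error(n):.1%}")
# n = 500: MOE = 4.4%
# n = 1000: MOE = 3.1%
# n = 1500: MOE = 2.5%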

But because election polling focuses on support for each candidate, the MOE applies to each candidate, and not to the spread. With that in mind, we see that the polls on average undercalled Clinton’s support by 4.9 points (fail), and undercalled Trump’s support by 6.9 points (fail).
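That check is plain arithmetic. Here is a minimal sketch of it, using the season-long poll averages and the actual results quoted above:

# Season-long poll averages vs. actual 2016 popular-vote shares (figures from the text above).
POLL_AVG = {"Clinton": 43.3, "Trump": 39.2, "Other": 9.8}
ACTUAL = {"Clinton": 48.2, "Trump": 46.1, "Other": 4.4}
MOE = 2.975  # average stated margin of error, in points

for candidate in ("Clinton", "Trump"):
    error = ACTUAL[candidate] - POLL_AVG[candidate]
    verdict = "within MOE" if abs(error) <= MOE else "outside MOE (fail)"
    print(f"{candidate}: polls off by {error:+.1f} points -> {verdict}")
# Clinton: polls off by +4.9 points -> outside MOE (fail)
# Trump: polls off by +6.9 points -> outside MOE (fail)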

Poll supporters would likely answer that poll numbers change over time, and that an average for the whole election season does not give a valid picture. So OK, let’s take a look. Here are the average numbers from the polls taken in the last week of each month, starting in June 2016:

June: 5 polls, 40.4% Clinton to 36.0% Trump with 10.6% Other, 13.0% Undecided, 2.98% MOE
July: 4 polls, 42.5% Clinton to 38.3% Trump with 10.3% Other, 8.9% Undecided, 2.58% MOE
Aug: 8 polls, 41.6% Clinton to 37.8% Trump with 11.1% Other, 9.5% Undecided, 3.00% MOE
Sept: 9 polls, 43.6% Clinton to 41.0% Trump with 9.8% Other, 5.6% Undecided, 2.97% MOE
Oct: 11 polls, 45.7% Clinton to 42.7% Trump with 6.9% Other, 4.7% Undecided, 2.54% MOE
Nov: 15 polls, 45.4% Clinton to 42.2% Trump with 7.1% Other, 5.3% Undecided, 2.57% MOE

OK, so measuring just the final set of polls against the election results, Clinton’s support was understated by 2.8 points and Trump’s support by 3.9 points, both outside the stated MOE.
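The same per-candidate check can be run against every month-end average in that list; a minimal sketch, using the figures above:

# Month-end poll averages from the list above: (Clinton %, Trump %, stated MOE %).
MONTH_END = [
    ("June", 40.4, 36.0, 2.98),
    ("July", 42.5, 38.3, 2.58),
    ("Aug", 41.6, 37.8, 3.00),
    ("Sept", 43.6, 41.0, 2.97),
    ("Oct", 45.7, 42.7, 2.54),
    ("Nov", 45.4, 42.2, 2.57),
]
ACTUAL_CLINTON, ACTUAL_TRUMP = 48.2, 46.1  # final popular-vote shares

for month, clinton, trump, moe in MONTH_END:
    c_miss = ACTUAL_CLINTON - clinton
    t_miss = ACTUAL_TRUMP - trump
    print(f"{month}: Clinton low by {c_miss:.1f}, Trump low by {t_miss:.1f} (stated MOE {moe}%)")
# The November row shows the final misses: 2.8 and 3.9 points, both beyond the 2.57% MOE.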

That’s a fail and no two ways about it.

But it gets worse. Here, in summary, are the popular-vote results (winner to runner-up) from past Presidential elections:

2012: 51.1% to 47.2%
2008: 52.9% to 45.7%
2004: 50.7% to 48.3%
2000: 48.4% to 47.9%
1996: 49.2% to 40.7% (affected by strong 3rd party candidate)
1992: 43.0% to 37.4% (affected by strong 3rd party candidate)
1988: 53.4% to 45.6%
1984: 58.8% to 40.6%
1980: 50.7% to 41.0% (affected by strong 3rd party candidate)
1976: 50.1% to 48.0%
1972: 60.7% to 37.5%

What those numbers say is that since 1972, when Richard Nixon curb-stomped George McGovern, the only elections where the loser got less than 40.6% of the vote were those with a strong 3rd-party candidate in the race. That was a clear warning sign that the polls were giving far too much credit to minor-party candidates who were never a factor in the outcome. The polls, which – as a group – never gave Trump even 40 percent of the vote until September, were ignoring warnings that their basic assumptions – driven by their methodology – were just plain wrong. Also, the fact that Hillary Clinton could not even approach an average of 50% support, let alone hold it, should have been a serious warning that the race was very much in doubt.
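That reading of the table can be checked mechanically; a minimal sketch, with the results and 3rd-party flags taken from the list above:

# Popular-vote results from the table above: (year, winner %, loser %, strong 3rd-party race?).
RESULTS = [
    (2012, 51.1, 47.2, False),
    (2008, 52.9, 45.7, False),
    (2004, 50.7, 48.3, False),
    (2000, 48.4, 47.9, False),
    (1996, 49.2, 40.7, True),
    (1992, 43.0, 37.4, True),
    (1988, 53.4, 45.6, False),
    (1984, 58.8, 40.6, False),
    (1980, 50.7, 41.0, True),
    (1976, 50.1, 48.0, False),
    (1972, 60.7, 37.5, False),  # the Nixon-McGovern blowout used as the baseline
]

# Elections after 1972 in which the loser fell below the 40.6% floor set in 1984:
for year, winner, loser, third_party in RESULTS:
    if year > 1972 and loser < 40.6:
        print(year, loser, "strong 3rd-party race" if third_party else "two-way race")
# 1992 37.4 strong 3rd-party race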

So let’s go back and take a close look at those assumptions, especially the claim that Hillary Clinton “won” the popular vote.

When two teams play a game of baseball, one team usually gets more hits, and the team that gets more hits often wins the game. But the game is decided by runs, not hits. In business, it’s great to make a lot of sales, but you only stay in business if you make enough profit to have money left after you pay the bills. And in Presidential elections, the popular vote matters state by state: you need to win the popular vote in each state to claim that state’s electoral votes, and THAT is what wins a Presidential election. It makes no sense to say a candidate “won” the national popular vote, because the national popular vote wins nothing.

To use a poll properly, an analyst needs to understand the data – not only the result he originally hoped to see, but also the surprises that may reveal unexpected opportunities or threats. The whole reason a candidate pays for internal polls, for example, is to understand the opinion terrain, so positions can be planned for maximum effect and words chosen to persuade voters. Assumptions can hurt you badly, as many would-be leaders have learned at great cost.

So when a poll comes out, a candidate’s team should look not only at their support and support for their opponent, but also the field of additional candidates and the number of undecided voters. Which brings me to the too-often ignored factor of shadow.

Shadow is the shade hanging over a campaign, the warning that the matter is very much unresolved. It is the undecided portion of the response added to the MOE of the poll. The spread between the top two candidates is then measured against the shadow to rate how settled the race is at that moment; the ‘resolved’ figure used below is the spread taken as a share of the spread plus the shadow (the arithmetic is sketched just after the list).

To see how this looks in practice, let’s go back and reapply the shadow effect to those month-end poll results we saw earlier:

June: 13.0% undecided plus 3.0% MOE. 4.4% spread versus 16.0% shadow = 21.6% resolved
July: 8.9% undecided plus 2.6% MOE. 4.2% spread versus 11.5% shadow = 26.8% resolved
Aug: 9.5% undecided plus 3.0% MOE. 3.8% spread versus 12.5% shadow = 23.3% resolved
Sept: 5.6% undecided plus 3.0% MOE. 2.6% spread versus 8.6% shadow = 23.2% resolved
Oct: 4.7% undecided plus 2.5% MOE. 3.0% spread versus 7.2% shadow = 29.4% resolved
Nov: 5.3% undecided plus 2.6% MOE. 3.2% spread versus 7.9% shadow = 28.8% resolved
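For the record, here is that shadow arithmetic as a minimal sketch; it reproduces the month-end figures above, with the shadow rounded to one decimal place before the ‘resolved’ share is taken.

# Month-end figures from the list above: (Clinton %, Trump %, undecided %, stated MOE %).
MONTHS = [
    ("June", 40.4, 36.0, 13.0, 2.98),
    ("July", 42.5, 38.3, 8.9, 2.58),
    ("Aug", 41.6, 37.8, 9.5, 3.00),
    ("Sept", 43.6, 41.0, 5.6, 2.97),
    ("Oct", 45.7, 42.7, 4.7, 2.54),
    ("Nov", 45.4, 42.2, 5.3, 2.57),
]

for month, clinton, trump, undecided, moe in MONTHS:
    spread = clinton - trump              # front-runner's lead
    shadow = round(undecided + moe, 1)    # unresolved share: undecideds plus MOE
    resolved = spread / (spread + shadow) * 100
    print(f"{month}: spread {spread:.1f} vs shadow {shadow:.1f} -> {resolved:.1f}% resolved")
# June: spread 4.4 vs shadow 16.0 -> 21.6% resolved
# ...
# Nov: spread 3.2 vs shadow 7.9 -> 28.8% resolved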

A simple glance at those results shows that the election remained very much in doubt throughout the whole campaign, and that the undecided portion of the vote stayed strong all the way to November. The low turnout numbers were predictable all along, but the analysts ignored them, just as they ignored the overweighting of minor-party candidate support. Ignoring the shadow effect is just one example of how poor diligence and a biased set of assumptions led poll analysts to blunder in the 2016 election.
