People are always hungry to know the future, and this is true in politics as much as anywhere else. The media is certainly aware that many predictions become self-fulfilling if enough people believe them, so some models get a lot of attention. With regard to the 2020 election, I found several models worth discussing here. No psychics, but some interesting analysis.
There are four models for Presidential elections to discuss, from Allan Lichtman, Ray Fair, Helmut Norpoth, and Moody's Analytics. I start with Dr. Lichtman.
Dr. Allan Lichtman is a historian who teaches at American University. Notably, Dr. Lichtman is very active politically and despises President Trump. Dr. Lichtman is said to have “correctly predicted the last nine presidential elections”, and a year ago he said the Democrats’ only chance to win the White House in 2020 depended on impeaching the President.
Though I tried hard, I could not find where Lichtman actually made all those predictions. I found references to predictions for 2016, 2012, and 2008, and one source which says Lichtman has "prospectively" forecast the election winner since 1984 and claims the model works as far back as 1860 (a claim I consider unsupported, given the subjective nature of his 'keys' and the benefit of hindsight in applying a model retroactively). But I cannot confirm that he predicted the 1984 through 2004 elections with a public release.
Lichtman's model uses what he calls "keys": a set of thirteen true/false statements which, he says, determine whether the incumbent party keeps or loses the White House. The keys are as follows:
1. Party Mandate (the incumbent party gained seats in the House in the midterm elections)
2. Contest (no serious contest for incumbent party nomination)
3. Incumbency (the incumbent party candidate is the sitting President)
4. Third party (there is no significant 3rd party/independent campaign)
5. Short-term economy (no recession during election campaign)
6. Long-term economy (real per-capita economic growth equals or exceeds the mean growth of the previous two terms)
7. Policy change (major changes in national policy)
8. Social unrest (no sustained social unrest during term)
9. Scandal (no major scandal)
10. Foreign/military failure (no major military/foreign failure)
11. Foreign/military success (achieves major foreign/military success)
12. Incumbent charisma (incumbent is charismatic or hero)
13. Challenger charisma (challenger is not charismatic or hero)
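The key-counting logic above is simple enough to sketch in code. This is a minimal illustrative sketch, not Lichtman's own implementation; the cutoff is parameterized because it is disputed in the discussion below (Lichtman says six false keys predict defeat, while I argue his rule requires seven).

```python
def forecast(keys, threshold=6):
    """Return the model's call for a list of 13 true/false keys.

    Each key is True when the statement favors the incumbent party.
    `threshold` is the number of false keys that predicts defeat;
    it is parameterized because the cutoff is disputed (six per
    Lichtman's quote, seven per the argument below).
    """
    false_keys = sum(1 for k in keys if not k)
    return "incumbent loses" if false_keys >= threshold else "incumbent wins"

# Example: keys 1, 9, 11, and 12 false, as in Lichtman's
# pre-pandemic assessment quoted below.
keys = [True] * 13
for i in (1, 9, 11, 12):      # key numbers, 1-indexed
    keys[i - 1] = False
print(forecast(keys))         # four false keys -> "incumbent wins"
```

Under either cutoff, four false keys is not enough to predict defeat, which is the point Lichtman himself makes about the pre-pandemic picture.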
Lichtman is now saying he expects Trump to lose, but I found some of his statements interesting, especially this one: “There were four keys solidly locked in against the president” prior to the outbreak, Lichtman explained. “It takes six to predict defeat. But this was before [the pandemic]. Key 1: Party mandate. Key 9: Scandal. Key 11: Foreign/military success. Key 12: incumbent charisma. That’s four false keys locked in.”
I can point to Trump's success in the trade negotiations with China as a "major foreign success", and the replacement of NAFTA with the USMCA will benefit millions of Americans. I can also point to the vindication from the Mueller Report to counter claims of "scandal". But what I found most interesting is that Lichtman seemed to forget his own rules: it takes seven false statements to lose, not six. If seven statements are true, then Lichtman's model still supports Trump even as he says otherwise.
The obvious problem I see is that Lichtman does not consider hard data but applies subjective values. His well-known bias against President Trump also works against trusting his statements about the 2020 election. Finally, Lichtman's claim of success deserves scrutiny. While Lichtman admits he got 1960 wrong, when I applied his model and my own knowledge of history to past elections, the model was also wrong in 1992, 1964, 1924 and 1920. Five wrong calls is the worst record of any of the modelers discussed here.
On now to Dr. Fair.
Dr. Ray Fair teaches Economics at Yale University. In 2002 he published a book titled "Predicting Presidential Elections and Other Things", in which he discussed a formula by which he believed the two-party popular vote split in Presidential elections could be predicted, based on his 1978 academic paper, which claimed to be effective as far back as 1892. His actual predictions since 1978 have been less successful than Dr. Fair seems willing to admit. A Washington Post article from 1994, for example, states that "Mr. Fair has predicted five elections, with a score of 2-2-1 – no better than a coin flip. He missed 1992, predicting a near-landslide for Mr. Bush with 58 percent of the two-party vote. He also missed 1976, when he predicted a 56 percent victory for President Ford." That same article observed that in 1980 "Mr. Fair's equations produced contradictory predictions that were not resolved until after the election". Ouch.
Dr. Fair waffles a bit on his own site, claiming that his predictions started in 1980, which evades his blown call in 1976.
Fair's model depends on GDP growth in the first three quarters of the election year and GDP growth during the first 15 quarters of the administration, awards a bonus for "good news" quarters (quarters among those first 15 with GDP growth above an annual rate of 3.2%), and adds an incumbency factor. The result is the predicted Democratic share of the two-party presidential vote.
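The structure of a Fair-style equation can be sketched as follows. The inputs (election-year growth, term-long growth, "good news" quarters, incumbency) follow the description above, but the coefficients here are made up purely for illustration; they are not Fair's estimated values, and his published versions differ.

```python
def fair_style_vote_share(g, p, z, incumbent_running,
                          b0=48.0, b1=0.7, b2=0.3, b3=0.8, b4=2.0):
    """Predicted incumbent-party share of the two-party vote.

    g: annualized GDP growth over the first 3 quarters of the election year
    p: GDP growth over the first 15 quarters of the administration
    z: count of 'good news' quarters (growth above a 3.2% annual rate)
       among those first 15 quarters
    incumbent_running: True if the sitting President is on the ballot

    The coefficients b0..b4 are illustrative placeholders, NOT Fair's
    estimates; only the linear structure is taken from the text above.
    """
    return b0 + b1 * g + b2 * p + b3 * z + (b4 if incumbent_running else 0.0)
```

The point of the sketch is the shape of the model: a linear combination of a few economic aggregates plus an incumbency bonus, which is why a shock like the pandemic (discussed below) leaves the model with "nothing to say".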
According to the media, the Fair model currently says Biden will beat Trump, but Fair's own site shows only January projections, with Trump still ahead 52.9 to 45.6 in the two-party vote.
Fair’s comment in April is as follows:
“I did not make an economic forecast with my US model this time because the model has nothing to say about the effects of pandemics. I could try to subjectively constant adjust the estimated equations, but this would only be guessing.”
The projection I get using the new economic numbers is 50.4 to 49.6 for Biden, but keep in mind that this is very close, and the electoral outcome would depend on how specific states worked out.
On now to Dr. Norpoth.
Dr. Norpoth teaches Political Science at Stony Brook University. His website has archived links to his predictions since 2008, as well as his 2020 predictions. Dr. Norpoth bases his predictions on primary election results, arguing that early-season voting tells us who has the lead and should be able to maintain it. He claims his model works all the way back to 1912, in the same manner that Lichtman and Fair’s models may be applied to past elections. Dr. Norpoth claims to have gone “5 for 6 since 1996” in predicting elections.
Dr. Norpoth's predictions generally worked from 2008 to 2016, but missed a bit in predicting actual percent support. Here are Norpoth's predicted levels and the actual results:
2008: Obama 50.1, McCain 49.5 (actual Obama 53.7, McCain 46.3)
2012: Obama 53.2, Romney 46.8 (actual Obama 52.0, Romney 48.0)
2016: Trump 52.5, Clinton 47.5 (actual Clinton 51.1, Trump 48.9)
Not terribly bad, but he was off a bit in 2016.
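"Off a bit" can be quantified from the numbers listed above. A quick calculation of the miss sizes, using the predicted and actual winner shares (two-party percentages; for 2016 the predicted winner was Trump, who won the election but not the popular vote):

```python
# Predicted winner's share vs. that candidate's actual two-party share,
# taken directly from the table above.
predicted = {"2008": 50.1, "2012": 53.2, "2016": 52.5}
actual    = {"2008": 53.7, "2012": 52.0, "2016": 48.9}

errors = {y: round(abs(predicted[y] - actual[y]), 1) for y in predicted}
mean_error = round(sum(errors.values()) / len(errors), 1)
print(errors)       # {'2008': 3.6, '2012': 1.2, '2016': 3.6}
print(mean_error)   # 2.8 points of two-party vote share
```

An average miss near three points of two-party share is tolerable for calling a winner, but as 2016 shows, it is more than enough to flip a close race.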
In 2020, Dr. Norpoth predicts Trump will win re-election, but does not publish a projected share of the vote.
Finally, there is the model from Moody’s Analytics. This is where things get interesting.
The Moody's Analytics study is authored by Mark Zandi, Dan White and Bernard Yaros. The study uses two-year changes in income growth, home prices, gasoline prices, and other price behavior.
What makes Moody's interesting to me is that where the other models focus on the national aggregate popular vote, Moody's makes state-by-state projections of the Electoral College, which is how the actual election is decided. Moody's focuses on economic growth in each region to forecast the outcome. The study uses the same economic data that Moody's uses for financial advice to their clients, so they work hard to get accurate information.
Moody's says they have been making predictions since 1980, which I found difficult to confirm, although I did find confirmation that Moody's has modeled Presidential elections since at least 2000, got the 2008 and 2012 elections right, and – this was amusing – got 2016 badly wrong.
A Reuters article also supported Moody's claim to have correctly called election outcomes since 1980.
Moody's was candid about their 2016 miss, and provided some interesting insight. In specific terms, Moody's addressed the effect of gasoline prices on their prediction:
“The most glaring example of this in 2016 was our gasoline price variable, which contributed to our prediction of a Clinton victory. Beginning in 2014, gasoline prices experienced their largest two-year decline leading up to a presidential election. Historically, two-year declines in gasoline prices have a strong statistical relationship with incumbent parties maintaining control of the White House. Therefore, we used the two-year decline in gasoline prices as an independent variable in the 2016 election model, and it was enough to offset many other explanatory variables that were working against Clinton at the time. However, if we had shortened the time frame for the decline in gasoline prices from two years to one year, the 2016 model would have instead predicted a Trump win. This owed, at least in part, to the timing of the decline in gasoline prices.”
Moody’s, in short, is one of the few analyst groups willing to re-examine their procedures when they get a prediction wrong. Moody’s has also expanded their model from one general model to three sub-models which form a more detailed whole, using personal economic variables, the stock market, and unemployment. Moody’s also considers different levels of turnout in its analysis. Their study therefore reports nine possible scenarios using three models and three levels of voter turnout.
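The nine-scenario structure described above is just the cross product of the three sub-models and the three turnout levels. A trivial sketch, where the model and turnout labels are my own shorthand for the variables named in the text, not Moody's official terminology:

```python
from itertools import product

# Three sub-models crossed with three turnout levels, as described above.
# Labels are my shorthand, not Moody's terminology.
models = ["personal finances", "stock market", "unemployment"]
turnout_levels = ["low", "average", "high"]

scenarios = list(product(models, turnout_levels))
print(len(scenarios))   # 9 scenarios
```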
I also found the Moody’s study interesting, because their report showed the different levels of support by state, again using economic data to show those trends. I will be using that data in a later post discussing state contests.
It would be easy to take the models I like and trash the ones I don't, but each has some value and each has some trouble that should be examined. Lichtman correctly observes that incumbency is important and that party loyalty is an indicator of turnout, but his subjective values call his pronouncements into question, especially since Lichtman often updates his projections: a projection made early in the campaign may be changed later if Lichtman worries he was wrong at first.
Fair deserves a lot of respect for demonstrating the impact of economics on Presidential elections, but he has not properly addressed his early failures, which by his own rules include 2016, since Fair did not predict that Trump would receive the smaller share of the popular vote, as he actually did.
Norpoth is the only modeler to pay close attention to primary elections and their effect on party turnout in the general election. But he makes the same mistake as Lichtman and Fair in projecting the party share of the national aggregate popular vote rather than the electoral count.
Moody's is my favorite, not only for focusing on state results and the electoral impact, but also for showing their work and using clearly defined metrics. Moody's did qualify their support for Trump in 2020 by observing that the current economic conditions are unprecedented, since they were caused by the pandemic lockdowns rather than being a result of policy.