As it stands right now, it appears that Hillary Clinton will end up winning the popular vote for President over Donald Trump by at least 1 percentage point (we’re at around 1.1 percentage points now). It’s possible it heads up to 1.5 percentage points as more votes are counted in “blue” states like California, but for now let’s just assume it ends up rounding to 1 point. Assuming that’s the case, how did the polls do? Let’s go to FiveThirtyEight.com and RealClearPolitics (RCP) and take a look.
- First off, FiveThirtyEight.com had Clinton leading Trump in the national polling aggregate by 3.9 points. RCP, which has somewhat of a right-leaning bias, had Clinton leading Trump by 3.3 points. Both ended up missing the actual results (a 1.1-point lead for Clinton as of now) badly — by around 2.8 points and 2.2 points, respectively.
- As for individual polls conducted through November 6 or 7, right-wing Rasmussen of all pollsters actually came very close, with its final poll showing Clinton +2. Reuters/Ipsos and Bloomberg had Clinton +3, while most others (NBC/WSJ, Gravis, Fox, ABC/WaPo, Economist/YouGov, CBS) had Clinton +4. The Times-Picayune had Clinton +5, while Monmouth actually had Clinton +6. Huge #FAIL on their parts, obviously. One poll that actually had Trump winning was IBD/TIPP, at Trump +2. Nope. And then there was that absurd LA Times pseudo-“poll,” which had Trump +3. Uhhh….no.
- The prediction markets didn’t do very well either, with PredictWise giving Trump only around an 11% chance the day before the election. PredictIt.org also blew it big time, pricing Clinton as roughly an 80/20 favorite as of the day before the election. Nope.
- How about state polls of the presidential race? Here in Virginia, PPP had Clinton +5 in its final poll, and Clinton won by…5 points. Nailed it! A few other state polls were also pretty close, with Gravis at Clinton +5, CNU at Clinton +6 and Hampton University at Clinton +4. For its part, Roanoke College was a bit high, with Clinton +7, as was SurveyMonkey (Clinton +8). But overall, not too bad for Virginia.
- North Carolina, in contrast, was a mess, with the RCP final polling average at Trump +1.0 but the final result at Trump +3.8 — a miss of nearly 3 points. As for individual polls, North Carolina-based PPP had Clinton +3 in NC, and seemed quite confident that she’d win the Tar Heel State. Not. Other #FAILs included Quinnipiac, which had Clinton +2 in NC; Gravis, which had Clinton +1; Siena College, which had it tied; and SurveyMonkey, which had it at an absurd Clinton +7. WRAL-TV/SurveyUSA had Trump winning NC, but way overshot his margin, pegging it at Trump +7. The closest was probably the Republican “Trafalgar Group” (whoever they are), which had Trump +5 in its final NC poll. All in all, as I said, a mess.
- Florida, which Trump won by 1.3 points, was also pretty messy in terms of polling. For instance, the “Trafalgar Group” had Trump +4; Quinnipiac had Clinton +1; Fox 13/Opinion Savvy had Clinton +4; Gravis had Clinton +1; and CNN/ORC had Clinton +2. As for the RCP polling average, it had Trump +0.2 in FL, which was off by 1.1 points. Overall…not great.
- Let’s look at the aggregators/forecasters on the presidential race. Basically, they were all wrong, with FiveThirtyEight giving Clinton a 71.4% chance of becoming president, the NY Times “Upshot” at 85% for Clinton, the Princeton Election Consortium at 99%+ for Clinton, etc. Now, Clinton WILL end up winning the popular vote, but still…she will not be president, and that’s what really counts. At least FiveThirtyEight gave Trump a significant chance of victory, which the Princeton Election Consortium and the NY Times really didn’t. And Nate Silver took a LOT of s*** for that, which in retrospect was wildly unfair, undeserved, you name it. The fundamental problem with all these aggregators and forecasters, though, is that if the inputs to their models (mainly the polls) are off, then their outputs will be off too. Garbage in, garbage out, basically. Maybe they all should just ditch their approaches and go with AU Professor Allan Lichtman’s 13 “Keys”, which accurately predicted a Trump victory on November 8? Only half kidding here…
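To make the garbage-in, garbage-out point concrete, here’s a toy sketch — emphatically NOT FiveThirtyEight’s or anyone else’s actual model; the 3-point error spread and the bias figure are assumptions for illustration only — showing how a modest systematic polling error can swing a “win probability” dramatically:

```python
import math

def win_probability(poll_lead, bias=0.0, sigma=3.0):
    """Toy forecast: probability a candidate's true lead is positive, given a
    polling-average lead (in points), an assumed systematic polling bias
    (points overstated in the candidate's favor), and a normally distributed
    error with standard deviation sigma. Purely illustrative."""
    true_lead = poll_lead - bias
    # Normal CDF evaluated at true_lead / sigma
    return 0.5 * (1 + math.erf(true_lead / (sigma * math.sqrt(2))))

# RCP-style Clinton +3.3 national lead, assuming the polls are unbiased:
print(round(win_probability(3.3), 2))

# Same lead, but with a 2.2-point systematic error baked in (roughly what
# actually happened to the national averages):
print(round(win_probability(3.3, bias=2.2), 2))
```

With zero assumed bias the toy model is very confident; feed it the same polls with a 2-point systematic miss and the confidence collapses, which is the whole problem with building forecasts on top of flawed polling.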
- As for the Senate races, well…nothing to write home about there either. For instance, FiveThirtyEight.com had Wisconsin at an 81.7% chance for Democrat Russ Feingold, who…lost. FiveThirtyEight.com also had Democrat Katie McGinty with a 61.7% chance of winning Pennsylvania, and she lost as well. Other than that, FiveThirtyEight.com actually did pretty well, forecasting that Catherine Cortez Masto would win NV; that Evan Bayh would lose Indiana; that Deborah Ross would lose NC; that Patrick Murphy would lose FL; and that Jason Kander would lose MO.
The bottom line: polls overwhelmingly missed the results of the 2016 election, as did prediction markets, aggregators, forecasters, you name it. Oh, and political “analysts” too; David Plouffe, for instance, predicted on November 5, “HRC well north of 300 in two days.” And on October 30, Plouffe wrote the now-infamous tweet, for which he later apologized: “Clinton path to 300+ rock solid. Structure of race not affected by Comey’s reckless irresponsibility. Vote and volunteer, don’t fret or wet.” As for Sabato’s “Crystal Ball,” its authors ended up writing a “mea culpa,” admitting that “The Crystal Ball is shattered” and vowing that “we must make sure the Crystal Ball never has another year like this.”
We could go on all day like this, as just about everyone was wrong on this one. The question is, now what? If polls are flawed (as has been shown repeatedly in this country, and in other countries from Israel to Canada to the UK to Colombia), then I’m not sure they’re of much use as input into forecasting models or analyses. So maybe we should stop trying to figure out who’s going to win, stop spending so much time and energy on “horse race” coverage, and instead focus on actual substance, the issues, the candidates’ ethics (or, in the case of Trump, the utter lack thereof), etc.? Nah, BORING; and even worse, low ratings, few clicks/eyeballs, etc. Sigh…