2016 Presidential Election Polling: How wrong were they?
I used the data compiled from Real Clear Politics and came up with an average.
Hillary was predicted to win, on average, by 5.04 points.
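The average described above is just the mean of the final Clinton-minus-Trump margins across the polls in the RCP table. A minimal sketch, with made-up poll names and margins chosen purely for illustration (the real inputs are the RCP numbers):

```python
# Hypothetical final margins (Clinton minus Trump, in points).
# These are NOT the actual RCP figures, just placeholder values.
polls = {
    "Poll A": 4.0,
    "Poll B": 6.0,
    "Poll C": 5.12,
}

# Simple unweighted mean, same as Excel's AVERAGE() over the margin column.
average_margin = sum(polls.values()) / len(polls)
print(f"Clinton +{average_margin:.2f}")
```

Anyone can rerun this with the margins copied out of the RCP table to check the 5.04-point figure.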
Some major pollsters were off by huge amounts, such as CNN, which had Clinton winning by 24 points with a margin of error of 3.5 (looooooool). CNN's was the biggest screwup of the bunch; Bloomberg's was pretty bad, too.
From the major "core" pollsters, the "big 11", the final polls were off by an average of 3.3 points. How did they get it so badly wrong? And why did the LA Times get it right with their new method?
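For reference, "off by X points" here means the gap between each pollster's final predicted margin and the actual result. A sketch of that calculation, using made-up pollster names and margins (the actual national popular-vote margin was roughly Clinton +2.1):

```python
# Roughly the actual 2016 national popular-vote margin (Clinton minus Trump).
ACTUAL_MARGIN = 2.1

# Hypothetical final predicted margins from a few pollsters (placeholder values).
final_polls = {"Pollster X": 4.0, "Pollster Y": 6.0, "Pollster Z": 3.5}

# Signed error for each poll: how far its margin overshot the real result.
errors = [margin - ACTUAL_MARGIN for margin in final_polls.values()]
average_error = sum(errors) / len(errors)
print(f"average error: {average_error:.2f} points")
```

Plugging in the eleven real final margins gives the 3.3-point average error cited above.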
Pew had some great ideas. The "silent Trumper" seems to be one of them: people ashamed and browbeaten so much by their peers that they answered pollsters dishonestly. The effect was big enough to swing the election away from Hillary, so it's clearly a significant element.
I think the embarrassed Trumper is part of it. There were some articles showing that automated polls registered greater Trump support than polls conducted by a live interviewer, which supports the idea that a lot of people who voted for Trump weren't comfortable telling another person that.
They are wrong. I clearly did the math in the OP. And before you call it an arithmetic error: I used the built-in math functions in Excel, so it's impossible for it to be wrong due to a simple arithmetic error.
Clearly, Real Clear Politics has a math problem.
Can you replicate their math?
Last edited by dadudemon on Nov 30th, 2017 at 11:35 PM
What is there to trust? My work is open and visible. You're acting like what I'm doing is magic. Sorry, this math is very easy to do. Again, I challenge you to duplicate my efforts and come up with a number other than 5.04%.
Right. That was the entire point of using RCP: they had the best aggregation of polls. Obviously, I do not care for their math because it is not visible. Their number does not match the data, which means they are doing something to fudge their numbers.
HOWEVER!!!! They cited their sources. So perhaps their math is wrong, sure. But that doesn't mean the data they've meticulously compiled is wrong, and that data is where the real value of RCP lies, imo.
Yeah, these articles were from over a year ago and I don't remember where they came from, otherwise I'd link them for you so you could read them. There were quite a few, though. It was an interesting phenomenon that was mostly written off as statistical noise before the election, but it gained more traction afterward because of the outcome.