Divergent Polls


Monday’s edition of National Journal’s Hotline listed seven headlines from the last few days that “say it all,” as they put it, about recent political polling:

· “Divergent Opinion Polls Reflect New Challenges to Tracking Vote” (Wall Street Journal, 9/20).
· “Seesaw of Polls Leaves Lots for Debate” (Newark Star Ledger, 9/20).
· “Varying Polls Reflect Volatility, Experts Say” (New York Times, 9/18).
· “Wide Gap Among Poll Results Mystifies Campaigns, Pundits” (Washington Times, 9/20).
· “Conflicting Polls Confuse Voters, Pros” (Chicago Tribune, 9/18).
· “Why Voter Surveys Don’t Agree” (Christian Science Monitor, 9/20).
· “Despite Disparity, Pollsters Back Surveys” (Detroit News, 9/19).

These are all good reviews – Harwood’s WSJ piece is arguably a must-read – yet all seem to miss the most basic source of much of the “divergence”: statistical sampling error, the random variation associated with looking at a sample rather than the entire population.

Yes, the conflict between polls seems a bit greater this year, and issues like response rates, likely-voter screens, party ID weighting and the like are worthy topics for discussion, but my sense is that much of the recent confusion comes from a basic mistake. Most observers wrongly assume that the “margin of error” applies to the spread between the candidates. One notable exception is the WSJ’s Al Hunt, who in a column on polling last Friday wrote, “Poll watchers must remember that the best survey has a three or four-point margin of error; that means if it shows the race even, one or the other candidate actually could be up by a half-dozen.”

That’s right: since the margin of error applies separately to each candidate’s support, it effectively doubles when applied to the margin between candidates.
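To make the point concrete, here is a minimal sketch of where the familiar “plus or minus 3 points” comes from, and why the gap between two candidates can swing by twice that. The sample size of 1,067 is an assumption for illustration (it is roughly what yields a ±3-point margin at the 95% confidence level); it is not from the post.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence half-width for a single proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical simple random sample of ~1,067 respondents gives the
# familiar +/-3 points at p = 0.5 (the worst case for a single proportion).
n = 1067
moe = margin_of_error(0.5, n)
print(round(100 * moe, 1))  # roughly 3.0 points

# Each candidate's share carries its own +/-moe. If one candidate's true
# support is moe below the estimate while the other's is moe above it,
# the gap between them moves by 2 * moe -- the "doubling" described above.
```

This is the conservative back-of-the-envelope version; it treats the two candidates’ errors as if they could move fully against each other, which is exactly the simplification the post relies on.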

Consider another example: If a single survey with a sampling error of 3% (based on a 95% confidence level) shows Bush at 49%, we know with 95% confidence that if every voter in the country were interviewed for that survey, Bush’s support would lie somewhere between 46% and 52%. If the same survey has Kerry at 43%, his support could range from 40% to 46%. Thus, the 49% to 43% survey tells us with 95% confidence that the race could be anywhere from a dead heat to a 12-point Bush lead.
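The arithmetic in that example can be checked in a few lines. This sketch uses the corrected figures from the post (±3 points, Bush 49%, Kerry 43%) and, like the post, treats the two intervals’ endpoints independently:

```python
# Worked check of the example: +/-3-point margin, Bush 49%, Kerry 43%.
moe = 3
bush, kerry = 49, 43

bush_lo, bush_hi = bush - moe, bush + moe      # 46 to 52
kerry_lo, kerry_hi = kerry - moe, kerry + moe  # 40 to 46

gap_min = bush_lo - kerry_hi  # 46 - 46 = 0: a dead heat
gap_max = bush_hi - kerry_lo  # 52 - 40 = 12: a 12-point Bush lead
print(gap_min, gap_max)  # prints: 0 12
```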

Of course, I’m oversimplifying a bit. Sampling error is really a grey concept, not black or white. When it comes to measuring the spread between candidates, the odds are still good that most polls will fall within a narrower range, but the bigger point is inescapable: Most poll readers overestimate the precision of random sample surveys, and those of us who conduct and report on polls are not doing enough to enlighten them.

As Ana Marie Cox wrote earlier this year:

“Pollsters typically present their poll results in press releases that do little to educate journalists as to the meaning of the numbers they contain….If Zogby and Bennett don’t talk about margins of error or methodology, why should anyone else?”

Amen.

Bonus finding: Yes, Ana Marie Cox, aka Wonkette, has (had?) a little-known double life as a statistics geek. Who knew?

CORRECTION (9/24): Oy. The comments below are correct. I completely misread Al Hunt’s quote. He was just using a different range. Double apologies to Mr. Hunt since he not only got it right, but was one of the few to actually raise the issue of sampling error in the coverage the last week. My bad.

Second, I obviously goofed up my own example. I had initially written it using a +/- 3.5% margin of error (since that is more typical with the smaller samples of likely voters reported recently) but then decided the rounding made it too hard to follow. For some inexplicable reason I didn’t change all the numbers. My bad again. In the spirit of the blogosphere, I’ve corrected the numbers.

Thanks to Sasha and Phil for the fact checking. Apologies to all. One more thing to atone for.


Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates and the public at-large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic party candidates.