Exit Polls: What We Know


The apparent problems with Tuesday's exit polls were obviously topic number one with MP's readers yesterday. Before jumping into what went "wrong" and why, it is important for all of us to be clear about what we know and what we do not. We might start by considering why the networks do exit polls in the first place. I see three purposes:

1) To provide journalists and the general public with an analytical tool for a greater understanding of the election results: who voted for whom, why, and so on.

2) To provide data to help the networks project winners, especially in states where one candidate holds a huge lead.

3) To provide networks and other news organizations some advance notice of the likely election outcome well before the polls close so they can plan their coverage accordingly.

The National Election Pool (NEP) designed procedures for conducting the exit polls to fulfill these three missions. Interviewers phone in raw data from completed questionnaires at several intervals during the day. To provide networks guidance on likely outcomes, NEP releases partial data several times during the afternoon. These are the numbers that get leaked and appear all over the Internet. As I wrote Tuesday, my understanding is that, unlike the final numbers, these early releases are not weighted to reflect the actual turnout that day.

Shortly before the polls close, the interviewers call in tabulated results, as well as some measure of actual turnout at their precinct. NEP uses the actual turnout data to weight each state's poll, so that the regional distribution of respondents matches the actual turnout for that day. The last report and the turnout data allow for a tabulation, provided to the networks just before the polls close, that they use for projections. If the margin is well outside the margin of error (typically ±4 percentage points for a state), the networks will use the exit poll alone to call the state.
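The turnout-weighting step can be sketched in a few lines. This is a minimal illustration with invented region names and numbers, not NEP's actual procedure: each region's responses are scaled to count in proportion to that region's share of actual turnout rather than its share of the sample.

```python
# Hypothetical illustration of turnout weighting. Region names, sample sizes,
# candidate shares, and turnout figures below are all invented.
regions = {
    "urban":    {"sampled": 400, "kerry_share": 0.58, "turnout": 900_000},
    "suburban": {"sampled": 300, "kerry_share": 0.49, "turnout": 1_200_000},
    "rural":    {"sampled": 300, "kerry_share": 0.41, "turnout": 700_000},
}

total_sampled = sum(r["sampled"] for r in regions.values())
total_turnout = sum(r["turnout"] for r in regions.values())

# Unweighted estimate: every respondent counts equally, so a region that was
# oversampled relative to its turnout pulls the estimate toward itself.
unweighted = sum(r["sampled"] * r["kerry_share"] for r in regions.values()) / total_sampled

# Turnout-weighted estimate: each region contributes in proportion to the
# actual vote cast there, not in proportion to how many interviews it produced.
weighted = sum((r["turnout"] / total_turnout) * r["kerry_share"] for r in regions.values())

print(f"unweighted Kerry share:       {unweighted:.3f}")
print(f"turnout-weighted Kerry share: {weighted:.3f}")
```

In this invented example the urban region is oversampled relative to its turnout, so the unweighted number overstates Kerry slightly and the weighting pulls it back down.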

As Martin Plissner, a former executive political director of CBS News, wrote yesterday in Slate,

Exit-poll surveys in some 29 states showed margins for George Bush or John Kerry great enough to conclude that the chances of the leading candidate losing were essentially zero. On that basis, when the polls closed in those states and before any votes were counted, 16 of them were placed in the president's corner and 13 in the senator's. They tended to be places like Kansas and Rhode Island.

Needless to say, if the lead is within (or even close to) statistical sampling error, the networks will not make a projection on the basis of the exit poll alone. To enable projections in these cases, NEP also does tabulations of actual returns obtained for a larger random sample of precincts (often referred to as "key precincts"). They also continue to use and update the exit poll. As returns start to come in, the exit pollsters weight each individual precinct sample by the actual vote cast by all voters at that precinct. Thus, as the night wears on, the accuracy of the exit poll gradually improves.

Why bother with the exit poll when real votes are available? The poll helps analysts determine the size and preferences of key subgroups with increasingly greater precision. What is the vote among Independents? African Americans? Young voters? New registrants? How do those patterns compare with pre-election expectations? Knowing the answers to those questions helps guide those at the network "decision desks" in making projections.

Also, weighting the poll by the actual vote improves its accuracy for its first and most important mission: providing an analytical tool for journalists and the rest of us who want to interpret and explain the election outcome. When a final result for a state is available, the exit pollsters weight the entire sample to match the vote results (there is often a mismatch due to drawing a sample of precincts rather than the entire state). That is the reason bloggers and others noticed that exit poll results posted on CNN and other news sites changed overnight. It was not a conspiracy, just standard practice.

So, given this procedure, what can we say about what seemed to go so wrong?

With respect to the second mission, correctly calling outcomes, the exit polls did well. "No wrong projections [of winners] were made; the projections were spot on," Joe Lenski of Edison Research (the company that conducted the NEP exit poll along with Mitofsky International) told the Washington Post's Richard Morin.

True enough, but what about all those mid-day numbers everyone saw on the Internet? The official answer from people like Lenski mirrored what you heard from me on Tuesday: that mid-day numbers are less reliable and only reflect the views of those who have voted so far. Actually, they went a bit farther. "The leaking of this information without any sophisticated understanding or analysis," said Lenski, "[made] it look inaccurate." They were "about as accurate as they usually are," wrote Plissner, adding, "the problem was that…the exit polls were being seen by thousands of people who didn't know how to read them….like any sophisticated weapon, they are dangerous in the hands of the untrained."

Is that fair?

I went back and looked at the numbers that Jack Shafer posted on Slate at 12:15 p.m. Pacific Time (3:15 Eastern). He posted results for 10 states, but most attention focused on Ohio and Florida, which both showed Kerry one percentage point ahead of George Bush. Of course, both Kerry "leads" were well within sampling error and, given the smaller mid-day samples, also within an acceptable range of the actual result.

But there is something else interesting about these results: Kerry's standing against Bush in all ten states surpassed what he received on election night. At 4:38 Pacific Time (7:38 Eastern), Shafer posted more recent numbers for an even larger list of states, 16 in all, plus the national result (51% Kerry, 48% Bush). Again, the same pattern: Kerry's performance in the partial exit polls surpassed his ultimate performance nationally and in 15 of 16 states. So whatever was happening, it was not just random variation due to sampling error. If you don't believe me, try flipping a coin and see how often you can get heads to come up 16 of 17 times.
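That coin-flip check is just binomial arithmetic. If the early numbers suffered only symmetric sampling error, each of the 17 results (16 states plus the national figure) should overshoot for Kerry with probability of roughly one half, independently. A quick sketch:

```python
from math import comb

# Probability of at least k_min "heads" in n fair coin flips:
# P(X >= k_min) for X ~ Binomial(n, 0.5) = sum of C(n, k) / 2^n.
def binom_tail(n: int, k_min: int) -> float:
    return sum(comb(n, k) for k in range(k_min, n + 1)) / 2**n

# If the partial exit polls only suffered symmetric sampling error, each
# state (plus the national number) would overshoot for Kerry with
# probability ~1/2, independently of the others.
p = binom_tail(17, 16)
print(f"P(16 or more of 17 overshoot by chance) = {p:.5f}")
```

Under those assumptions the chance of 16 or more Kerry overshoots is 18/131072, roughly 1.4 in 10,000: strong evidence of a systematic skew rather than random noise.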

The NEP officials seem to concede there was some Democratic bias in the early numbers. An Associated Press story from yesterday said:

The NEP had enough concerns that its early exit polls were skewing too heavily toward Kerry that it held a conference call with news organizations mid-afternoon urging caution in how that information was used. Early polls in New Hampshire, Pennsylvania, Minnesota and Connecticut were then showing a heavier Kerry vote than anticipated.

Pollsters anticipate a post-mortem to find out why that happened. Some possibilities: Democrats were more eager to speak to pollsters than Republicans, or Kerry supporters tended to go to the polls earlier in the day than Bush voters.

The same AP story reported that after the initial release showing Kerry ahead by three points, "as the day wore on, later waves of exit polling showed the race tightening." You can see that pattern in Jack Shafer's numbers for Ohio, Florida and Pennsylvania. However, did the final surveys just before the polls closed continue to show a consistent Kerry bias? I cannot answer that question, although it would be easy enough to check if we had the final results shared with the networks just before the polls closed in each state. I am sure there are reporters among the readers of this blog who saw those data releases. Perhaps someone can email me and set me straight.

The issue is not whether the decision desks at the networks paid any attention to the small Kerry leads in the early Ohio and Florida numbers, but whether news organizations relied on them in planning coverage and discussing the race in the late afternoon. It was not just bloggers. Very serious reporters from very serious media outlets jumped to the conclusion that Kerry was running the table, just like all those "unsophisticated" bloggers.

And then there is the issue of why the networks and NEP gave no consideration to the virtual certainty that these numbers would make their way into the public domain. It ought to be obvious by now that giving exit polls to 500 or so reporters, editors and producers — all of whom have phones and computers — is essentially the same as putting them in the public domain. It was not exactly a surprise that the leaked exit polls would be all over the Internet, yet they had no strategy to help the consumers of leaked numbers understand what they were looking at.

In their post-mortems, the networks need to consider that far more Americans consumed raw exit polls in their partial, dirty, unreliable state than will ever examine the final cross-tabulations now available. Too many came away from the experience convinced that exit polls are biased and unreliable. Confidence in the final numbers, the ones we all rely on to understand the election, has been seriously shaken. We simply cannot blame that on the bloggers.

If partial exit poll data is "dangerous in the hands of the untrained," and we choose to leave it lying around where the "unsophisticated" will play with it, doesn’t it make sense to at least publish a warning label?

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling's ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.