Impressions on the Exit Poll Report


I wanted to read all 77 pages of the Edison/Mitofsky report in full before weighing in. It took much of the day, but here are some first impressions.

This report will not answer every question nor assuage every doubt, but credit is due for its uncharacteristic degree of disclosure. Having spent much time recently reviewing the scraps in the public domain on past exit polls, I am impressed by the sheer volume of data and information it contains. Yes, the report often accentuates the positive — not surprising for an internal assessment — but it is also reasonably unflinching in describing the exit poll errors and shortcomings. Yes, this data has been long in coming, but better late than never. We should keep in mind that we are dealing with organizations that are instinctively resistant (to put it mildly) to "hanging out [their] dirty underwear."

Say what you will about its conclusions, this report is loaded with never-before-disclosed data: the final exit poll estimates of the vote for each state along with the sampling error statistics used internally, exit poll estimates of Senate and gubernatorial contests and their associated errors in every state, other "estimator" data used on election night, tabulations of "within precinct error" (WPE) for each state going back to 1988, and a very thorough review of the precinct-level characteristics where that error was highest in 2004.

For tonight, let me review the three things that stand out most for me in this report:

First, the report confirms a great deal we suspected about the exit poll interviewers – who they were, how they were recruited and trained (pp. 49-51). The interviewer is where the rubber meets the road for most surveys. While exit polls do involve "secret ballots," the interviewer is responsible for randomly selecting and recruiting voters according to the prescribed procedure and keeping an accurate tally of those they missed or who refused (something to keep in mind when analyzing the completion rate statistics). The interviewer is also the human face of the exit poll, the person who makes a critical first impression on potential respondents.
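
To make that procedure concrete, here is a minimal sketch – my own illustration, not code from the report – of how systematic selection of every nth exiting voter and the interviewer's tally of misses and refusals combine into a completion rate. The sampling interval and outcome labels are invented:

    # A hypothetical illustration (not from the report): sample every
    # nth exiting voter and tally the outcomes the interviewer records.
    def tally_precinct(voters, interval=5):
        """Approach every `interval`-th voter; return the completion rate.

        Outcomes (invented labels): 'complete' (took the questionnaire),
        'refusal' (approached but declined), 'miss' (never approached).
        """
        completes = refusals = misses = 0
        for i, outcome in enumerate(voters):
            if i % interval != 0:   # not a sampled voter; let them pass
                continue
            if outcome == "complete":
                completes += 1
            elif outcome == "refusal":
                refusals += 1
            else:
                misses += 1
        approached = completes + refusals + misses
        # The rate is only as good as the tally of misses and refusals --
        # the caveat about completion rate statistics noted above.
        return completes / approached if approached else 0.0

    # Ten voters exit; with interval=5 the interviewer samples voters 0 and 5.
    stream = ["complete", "miss", "miss", "refusal", "complete",
              "refusal", "complete", "miss", "complete", "complete"]
    print(tally_precinct(stream))  # 0.5: one complete of two sampled voters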

The report confirms that interviewers were often young and mostly inexperienced. Interviewers were evaluated and hired with a phone call and trained with a 20-minute "training/rehearsal call" and an interviewer manual sent via FedEx. They were often college students — 35% were age 18-24, and half were under 35. Perhaps most important, more than three quarters (77%) had never before worked as exit poll interviewers. Most worked alone on Election Day.

One obvious omission: I may have missed it, but I see no comparable data in the report on interviewer characteristics from prior years. Was it not available? Also, the report mentions a post-election telephone survey of the interviewers (p. 49).  It would seem logical to ask the interviewers about their partisan leanings, especially in a post-hoc probe, but the report makes no mention of any such measure.

Second: There was no systematic bias in the random samples of precincts chosen in each state. The proof is relatively straightforward: replace the interviews in each precinct with the actual votes and compare the sample to the complete count. There were errors, as with any sample, but they were random across the 50 states (see pp. 28-30). If anything, those errors favored Bush slightly. Blogger Gerry Dales explains it well:

In other words, when the actual vote totals from the sampled precincts were used, they did successfully represent the overall voting population quite well. Had they sampled too many Democratic leaning precincts, then when the actual vote results were used rather than the exit poll results, the estimate would not have provided a very good estimate of the final vote count (it would have overstated Kerry’s support). The problem was not in the selection of the sample precincts – it was that the data in the chosen precincts was not representative of the actual voting at those precincts [Emphasis added].
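
To illustrate the logic of that check, here is a minimal sketch with invented numbers – it is not the report's actual estimator, which also weights precincts in ways this omits:

    # Hypothetical illustration of the precinct-sample check: substitute
    # official returns for the exit poll interviews in each sampled
    # precinct and compare the result to the statewide count.
    sampled_precincts = [
        # (official Kerry votes, official Bush votes) per sampled precinct
        (412, 388), (251, 349), (530, 470), (198, 302), (605, 395),
    ]

    kerry = sum(k for k, b in sampled_precincts)
    bush = sum(b for k, b in sampled_precincts)
    sample_margin = (kerry - bush) / (kerry + bush)

    statewide_margin = -0.021  # invented official Kerry-minus-Bush margin

    # If the precinct sample were skewed toward one party, this gap would
    # be large and systematic across states; the report found only small,
    # random errors, favoring Bush slightly if anything.
    print(f"sample margin (Kerry - Bush): {sample_margin:+.3f}")
    print(f"gap vs. statewide count: {sample_margin - statewide_margin:+.3f}")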

Third and finally: The centerpiece of the report is the investigation of that "within precinct error" (WPE) found on pages 31 to 48. If you have time to read nothing else, read that. The authors review every characteristic that correlates with a greater error (a sketch of how WPE is computed follows the list below). They found higher rates of "within precinct error" favoring Kerry in precincts with the following characteristics:

  • An interviewer age 35 or younger
  • An interviewer with a graduate degree
  • A larger number of voters, where a smaller proportion was selected
  • An interviewer with less experience
  • An interviewer hired a week or less before the election
  • An interviewer who said they had been trained "somewhat or not very well"
  • A location in a city or suburb
  • A location in a swing state
  • A stronger showing for Bush
  • Interviewers who had to stand far from the exits
  • Interviewers who could not approach every voter
  • Polling place officials who were not cooperative
  • Voters who were not cooperative
  • Poll watchers or lawyers who interfered with interviewing
  • Weather that affected interviewing
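
As promised above, here is a minimal sketch of the WPE calculation itself, using my reading of the report's sign convention and invented precinct numbers:

    # Within precinct error (WPE) for one precinct, in percentage points.
    # Under the sign convention as I read the report, the margin is Bush
    # minus Kerry, so a negative WPE means the exit poll overstated Kerry
    # relative to the official count in that precinct.
    def wpe(poll_bush, poll_kerry, vote_bush, vote_kerry):
        poll_margin = poll_bush - poll_kerry   # margin among respondents
        vote_margin = vote_bush - vote_kerry   # margin in the official count
        return poll_margin - vote_margin

    # Hypothetical precinct: the poll finds 47% Bush / 53% Kerry, but the
    # official count is 51% Bush / 49% Kerry.
    print(wpe(47, 53, 51, 49))  # -8.0: an eight-point overstatement of Kerry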

The report pointedly avoids a speculative connecting of dots. The authors apparently preferred to present "just the facts" and leave the conjecture to others. Unfortunately, none of the characteristics above, by itself, "proves" that Kerry supporters were more likely than Bush supporters to participate in the poll. However, it is not hard to see how the underlying attitudes and behaviors at work might create and exacerbate the within-precinct bias.

Consider age, for example. What assumptions might a voter make about a college student approaching with a clipboard? Would it be crazy to assume that student was a Kerry supporter? If you were a Bush voter already suspicious of the media, might the appearance of such an interviewer make you just a bit more likely to say no, or to walk briskly in the other direction? Would it be easier to avoid that interviewer if they were standing farther away? What if the interviewer were forced to stand 100 feet away, among a group of electioneering Democrats – would the Bush voter be more likely to avoid the whole group?

Writing in the comments section of the previous post, "Nathan" offered a reasonable hypothesis about the higher error for interviewers with advanced degrees:

Voters (almost certainly accurately) concluded that interviewers were liberal and thus Kerry voters were more likely to talk to them…throw in any sort of additional colloquy engaged in between the interviewers and interviewees and there you have it.

Now consider the Kerry voter approaching the same college student interviewer. Might that voter feel the opposite of the Bush voter – a bit more trusting or sympathetic toward the interviewer? And suppose the randomly selected voter did not want to participate but his wife – a Kerry supporter – eagerly volunteered to take his place. Would the less experienced interviewer be more likely to bend the selection rules so she could take the poll?

The problem with all of this speculation – plausible as it may be – is that it is nearly impossible to prove to anyone’s satisfaction. That is the nature of non-response: we know little about those who refuse because…we did not interview them.

I want to try to answer my friend Gerry Dales’s observation (on his blog and in the comments section here) about the pattern of response rates by the partisanship of the precinct (I think he is too quick to dismiss "differential non-response"), but it’s really late and I need to get some sleep. I’ll post tomorrow morning…promise.

Also, if anyone has questions about the report, please post them or email me. It is a bit dense and technical and, at the very least, I can help translate.

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.