AAPOR: Exit Poll Presentation


Unfortunately, the sleep deprivation experiment that was my AAPOR conference experience finally caught up with me Saturday night.  So this may be a bit belated, but after a day of travel and rest, I want to provide those not at the AAPOR conference with an update on some of the new information about the exit polls presented on Saturday.  Our lunch session included presentations by Warren Mitofsky, who conducted the exit polls for the National Election Pool (NEP), Kathy Frankovic of CBS News, and Fritz Scheuren of the National Opinion Research Center (NORC) at the University of Chicago.

Mitofsky spoke first and explicitly recognized the contribution of Elizabeth Liddle (which I described at length a few weeks ago).  He described “within precinct error” (WPE), the basic measure he had used to gauge the discrepancy between the exit polls and the count within the sampled precincts.  “There is a problem with it,” he said, explaining that Liddle, “a woman a lot smarter than we are,” had shown that the measure breaks down when used to look at how error varied by the “partisanship” of the precinct.  The tabulation of error across types of precincts – heavily Republican to heavily Democratic – has been at the heart of an ongoing debate over the reasons for the discrepancy between the exit poll results and the vote count.

Mitofsky then presented the results of Liddle’s computational model (including two charts) and her proposed “within precinct Error_Index” (all explained in detail here).  He then presented two “scatter plot” charts.  The first showed the values of the original within precinct error (WPE) measure by the partisanship of the precinct.  Mitofsky gave MP permission to share that plot with you, and I have reproduced it below.

The scatter plot provides a far better “picture” of the error data than the table presented in the original Edison-Mitofsky report (p. 36), because it shows the wide, mostly random dispersion of values.  Mitofsky noted that the plot shows WPE registering an overstatement mostly in the middle precincts, just as Liddle’s model predicted.  A regression line drawn through the data shows a modest upward slope.

Mitofsky then presented a similar plot of Liddle’s Error Index by precinct partisanship.  The pattern is more uniform, and the regression line is essentially flat.  It is important to remember that this chart, unlike all of Liddle’s prior work, is based not on randomly generated “Monte Carlo” simulations, but on the actual exit poll data. 

Thus, Mitofsky presented evidence showing, as Liddle predicted, that the apparent pattern in the error by partisanship – a pattern showing less error in heavily Democratic precincts and more error in heavily Republican precincts – was mostly an artifact of the tabulation.
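Liddle’s artifact argument is easy to reproduce with a small deterministic sketch.  Assume a constant differential response across all precincts (the 56%/50% completion rates below are purely illustrative, not the actual exit poll values, and the log-ratio index here is a simplified stand-in for Liddle’s published index).  Under that assumption WPE is largest in evenly divided precincts and shrinks toward zero in one-sided ones, even though the underlying bias never changes:

```python
import math

def poll_kerry_share(p, rate_k, rate_b):
    """Exit-poll Kerry share in a precinct where Kerry's true share is p
    and Kerry/Bush voters complete interviews at rate_k / rate_b."""
    return p * rate_k / (p * rate_k + (1 - p) * rate_b)

def wpe(p, rate_k=0.56, rate_b=0.50):
    """Within precinct error: poll margin minus counted margin (Bush minus
    Kerry), signed so a Kerry overstatement comes out negative."""
    y = poll_kerry_share(p, rate_k, rate_b)
    return ((1 - y) - y) - ((1 - p) - p)

def log_ratio_index(p, rate_k=0.56, rate_b=0.50):
    """Implied log ratio of Bush-to-Kerry response rates, recovered from
    the poll share and the count -- flat when the bias is constant."""
    y = poll_kerry_share(p, rate_k, rate_b)
    return math.log(((1 - y) / y) * (p / (1 - p)))

for p in (0.10, 0.50, 0.90):
    print(f"true Kerry share {p:.2f}:  WPE = {wpe(p):+.3f}, "
          f"index = {log_ratio_index(p):+.3f}")
```

With these rates the index works out to ln(0.50/0.56) ≈ −0.113 in every precinct, while WPE is roughly three times larger in a 50/50 precinct than in a 90/10 one – which is why a tabulation of raw WPE by partisanship can show a “pattern” even when the bias is uniform.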

Kathy Frankovic, the polling director at CBS, followed Mitofsky with another presentation that focused more directly on explaining the likely root causes of the exit poll discrepancy.   She talked in part about the history of past internal research on the interactions between interviewers and respondents in exit polls.  Some of this has been published, much has not.  She cited two specific studies that were new to me:

  • A fascinating pilot test in 1991 looked for ways to boost response rates.  The exit pollsters offered potential respondents a free pen as an incentive to complete the interview.  The pen bore the logos of the major television networks.  The pen-incentive boosted response rates, but it also increased within-precinct-error (creating a bias that favored the Democratic candidate), because as Frankovic put it, “Democrats took the free pens, Republicans didn’t.”  [Correction (5/17):  The study was done in 1997 on VNS exit polls conducted for the New York and New Jersey general elections.  The experiment involved both pens and a color folder displayed to respondents that bore the network logos and the words “short” and “confidential.” It was the folder condition, not the pens, that appeared to increase response rates and introduce error toward the Democrat.  More on this below]
  • Studies between 1992 and 1996 showed that “partisanship of interviewers was related to absolute and signed WPE in presidential” elections, but not in off-year statewide elections.  In other words, in those years interviews conducted by Democratic interviewers showed a higher rate of error favoring the Democratic candidate for president than interviews conducted by Republican interviewers.

These two findings tend to support two distinct yet complementary explanations for the root causes of the exit poll problems.  The pen experiment suggests that an emphasis on CBS, NBC, ABC, FOX, CNN and AP (whose logos appear on the questionnaire, the exit poll “ballot box” and the ID badge the interviewer wears, and which the interviewers mention in their “ask”) helps induce cooperation from Democrats and “reluctance” from Republicans.

Second, the “reluctance” may also be an indirect result of the physical characteristics of the interviewers that, as Frankovic put it, “can be interpreted by voters as partisan.”  She presented much material on interviewer age (the following text comes from her slides which she graciously shared):   

In 2004 Younger Interviewers…

    * Had a lower response rate overall
        – 53% for interviewers under 25
        – 61% for interviewers 60 and older
    * Admitted to having a harder time with voters
        – 27% of interviewers under 25 described respondents as very cooperative
        – 69% of interviewers over 55 did
    * Had a greater within precinct error

Frankovic also showed two charts showing that since 1996, younger exit poll interviewers have consistently had a tougher time winning cooperation from older voters.  The response rates for voters age 60+ were 14 to 15 points lower for younger interviewers than older interviewers in 1996, 2000 and 2004.  She concluded: 

IT’S NOT THAT YOUNGER INTERVIEWERS AREN’T GOOD – IT’S THAT DIFFERENT KINDS OF VOTERS MAY PERCEIVE THEM DIFFERENTLY

  • Partisanship isn’t visible – interviewers don’t wear buttons — but they do have physical characteristics that can be interpreted by voters as partisan. 
  • And when the interviewer has a hard time, they may be tempted to gravitate to people like them.

Frankovic did not note the age composition of the interviewers in her presentation, but the Edison-Mitofsky report from January makes clear that the interviewer pool was considerably younger than the voters they polled.  Interviewers between the ages of 18 and 24 covered more than a third of the precincts (36% – page 44), while only 9% of the voters in the national exit poll were 18-24 (tabulated from data available here).   These numbers suggest that more interviewers “looked” like Democrats than Republicans, an imbalance that could introduce a Democratic bias into the response patterns.

Finally, Dr. Fritz Scheuren presented findings from an independent assessment of the exit polls and precinct vote data in Ohio commissioned by the Election Science Institute.  His presentation addressed the theories of vote fraud directly.

Scheuren is the current President of the American Statistical Association and Vice President for Statistics at NORC.  He was given access to the exit poll data and matched it independently to vote return data. [Correction 5-17:   Scheuren had access to a precinct level data file from NEP that included a close approximation of the actual Kerry vote in each of the sample precincts, but did not identify those precincts. Scheuren did not independently confirm the vote totals.]

His conclusion (quoting from the Election Science press release):

The more detailed information allowed us to see that voting patterns were consistent with past results and consistent with exit poll results across precincts. It looks more like Bush voters were refusing to participate and less like systematic fraud.

Scheuren’s complete presentation is now available online and MP highly recommends reading it in full.

[ESI also presented a paper at AAPOR on their pilot exit poll study in New Mexico designed to monitor problems with voting.  It is worth downloading just for the picture of the exit poll interviewer forced to stand next to a MoveOn.org volunteer, which speaks volumes about another potential source of problems].

The most interesting chart in Scheuren’s presentation compared support for George Bush in 2000 and 2004 in the 49 precincts sampled in the exit poll.   If the exit poll had measured fraud in 2004, and had fraud occurred in these precincts in 2004 and not 2000, one would expect to see a consistent pattern in which the precincts overstating Kerry fell on a separate parallel line, indicating higher values in 2004 than 2000.  That was not the case. A subsequent chart showed virtually no correlation between the exit poll discrepancy and the difference between Bush’s 2000 and 2004 votes.
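The logic of that comparison can be sketched with simulated data (every number below is hypothetical, and I use 2,000 simulated precincts rather than the 49 actually sampled, purely for statistical stability).  If vote shifting caused the discrepancy, precincts with larger Kerry overstatements should also show larger Bush gains over 2000; under differential nonresponse, the two should be unrelated:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)
N = 2000
swing = [random.gauss(0.0, 0.03) for _ in range(N)]  # ordinary 2000->2004 change

# Scenario A: fraud shifts votes from Kerry to Bush in the 2004 count only.
# The exit poll sees true support, so the discrepancy equals the shift.
shift = [random.uniform(0.0, 0.05) for _ in range(N)]
bush_gain_a = [s + f for s, f in zip(swing, shift)]
discrepancy_a = shift

# Scenario B: differential nonresponse. The discrepancy is polling error,
# unrelated to how the Bush vote actually changed between elections.
discrepancy_b = [random.uniform(0.0, 0.05) for _ in range(N)]
bush_gain_b = swing

r_fraud = pearson(discrepancy_a, bush_gain_a)
r_nonresponse = pearson(discrepancy_b, bush_gain_b)
print(f"fraud scenario r = {r_fraud:+.2f}, "
      f"nonresponse scenario r = {r_nonresponse:+.2f}")
```

Scheuren’s actual chart looked like the second scenario: essentially no correlation between the exit poll discrepancy and the change in Bush’s vote from 2000 to 2004.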

Typos corrected – other corrections on 5/17

UPDATE & CLARIFICATION (5/17) More about the 1997 study: 

The experimental research cited above was part of the VNS exit poll of the New Jersey and New York City General Elections in November, 1997.  While my original description reflects the substance of the experiment, the reality was a bit more complicated.

The experiment was described in a paper presented at the 1998 AAPOR Conference authored by Daniel Merkle, Murray Edelman, Kathy Dykeman and Chris Brogan.  It involved two experimental conditions:  In one test, interviewers used a colorful folder over their pad of questionnaires that featured “color logos of the national media organizations” and “the words ‘survey of voters,’ ‘short’ and ‘confidential.'”  The back of the folder carried additional instructions on how to handle respondents who hesitated or refused.  The idea was to “better standardize the interviewer’s approach and to stress a few key factors” to both the interviewer and the respondent, with the aim of improving compliance. 

In a second test, interviewers used the folder and offered a pen featuring logos of the sponsoring news organizations.   A third “control” condition used the traditional VNS interviewing technique without any use of  a special folder or pen. 

There was no difference between the folder and folder/pen conditions, so the two groups were combined in the analysis.   Compared to the control group, both conditions slightly increased response rates but also introduced more error toward the Democratic candidate.   Since adding the pen made no difference, it was evidently the folder, not the pen, that influenced response rates and error.

The authors concluded in their paper: 

The reason for the overstatement of the Democratic voters in the Folder Conditions is not entirely clear and needs to be investigated further.  Clearly some message was communicated in the Folder Conditions that led to proportionately fewer Republicans filling it out.  One hypothesis is that the highlighted color logos of the national news organizations on the folder were perceived negatively by Republicans and positively by Democrats, leading to differential nonresponse between the groups.

Murray Edelman, one of the authors, emailed me with the following comment: 

The reference to this study at the 2004 AAPOR conference by both Bob Groves and Kathy Frankovic in their respective plenaries has inspired us to revise our write up of this study for possible publication in POQ and to consider other factors that could explain some of the  differences between the two conditions, such as the effort to standardize the interviewing technique and persuade reluctant respondents  and the emphasis on the questionnaire being “short” and “confidential.”  However, we agree that the main conclusion, that efforts to increase response rates may also increase survey error, is not in question.


Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates and the public at-large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic party candidates.