Exit Polls: Winston’s Theory


The New Republic’s Noam Scheiber passed on a theory floated by Republican pollster David Winston about the discrepancy in exit polling data that favors Democrats, not just this year, but in years past (although not as consistently):

Winston suggested that one reason the early exit poll data was so far off a couple weeks ago was that the people asking the polling questions may have unconsciously over-represented African Americans in their samples, particularly with the memory of 2000 and the fear about disenfranchising blacks so fresh in their minds. By way of explanation, Winston suggested that if you were supposed to be interviewing 35 people, of which six were supposed to be African American, and you ended up interviewing seven African Americans in order to ensure that the group was adequately represented, and if a lot of other people asking questions over-compensated in the same way, then what would seem like a marginal difference could have huge implications for the poll’s overall results. (African Americans vote overwhelmingly Democratic, and you’ve just increased their weight in your sample by nearly 17 percent.)
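Winston’s arithmetic is easy to check with a quick sketch. The Kerry vote shares by race below (88 percent among African Americans, 41 percent among everyone else) are my own illustrative assumptions, not NEP figures; the point is only how far one over-compensated interview moves a precinct’s topline.

```python
# A quick check of Winston's arithmetic. The vote shares by race are
# illustrative assumptions, not NEP figures.

def kerry_share(n_black, n_other, kerry_black=0.88, kerry_other=0.41):
    """Topline Kerry estimate for a precinct sample of the given mix."""
    n = n_black + n_other
    return (n_black * kerry_black + n_other * kerry_other) / n

intended = kerry_share(6, 29)  # the intended mix: 6 of 35 African American
inflated = kerry_share(7, 28)  # one interview "over-compensated"

print(f"intended: {intended:.1%}")              # 49.1%
print(f"inflated: {inflated:.1%}")              # 50.4%
print(f"shift:    {inflated - intended:+.1%}")  # +1.3 points
```

On these assumed vote shares, a single over-compensated interview shifts the precinct estimate by more than a full point; if many interviewers erred the same way, that is the "huge implication" Winston describes.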

Mickey Kaus read Scheiber’s post and asked:

But this is a testable theory, no? Did the early exit polls oversample blacks in comparison with the final vote? And did it vary from polltaker to polltaker? (Presumably some of them are more guilty of guilt than others.)

The quick answer to Kaus’s question is yes, the proposition is certainly testable. I have heard, through the grapevine, that the mid-day exit polls did have unusually high percentages of women and African American voters. However, those early numbers were not weighted by turnout and, obviously, reflected only half the day’s vote. I heard through the same grapevine that the female and Black percentages were closer to expectations by the end of the day. Unfortunately, we do not know the racial composition of the end-of-day, just-before-poll-closing exit polls (unless Stephen Freeman’s sources saved those as well).

However, Winston’s theory breaks down on some other important details of how the exit polls are done.

1) He assumes that NEP instructs interviewers to fill racial and demographic quotas. They do not. Interviewers are only told to solicit every 4th or 5th or 10th voter exiting the polls. I can confirm this because a few helpful "birdies" (apologies to Wonkette) sent me a copy of the NEP Interviewer Training Manual for 2004.
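For what it’s worth, here is a minimal sketch of that interviewing rule, assuming a simulated voter stream that is 17 percent Black (my illustrative figure, not an NEP parameter): the interviewer approaches every Nth exiting voter and takes whatever demographic mix the stream delivers, with no quota to fill.

```python
import random

# A sketch of the NEP interviewing rule: approach every Nth exiting
# voter, with no racial or demographic quotas. The voter stream and
# its 17% Black share are simulated assumptions for illustration.

random.seed(2004)
stream = ["black" if random.random() < 0.17 else "other" for _ in range(5000)]

interval = 5  # "every 4th or 5th or 10th voter"
sample = [v for i, v in enumerate(stream, start=1) if i % interval == 0]

print(len(sample))                          # 1000 voters approached
print(sample.count("black") / len(sample))  # roughly 0.17, mirroring the stream
```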

2) Winston assumes that interviewers are liberal. I know that NEP recruits at least some of its interviewers from college campuses, but only because my "birdies" were students. That is a sample of 3 or 4 out of roughly 1,500; far too small to generalize from. We really do not know much about the composition of the interviewing staff.

Appropriately, NEP places great emphasis on the "appearance and conduct" of its interviewers. Its training manual instructs: "Your appearance can influence a voter’s decision to participate in the survey. Therefore, please dress appropriately…Please wear clean, conservative, neat clothing and comfortable shoes (clean sneakers are acceptable). But, please, NO JEANS and NO T-SHIRTS."

3) The exit polls can correct for skews in race and gender, and reduce skews in age, caused by refusals. This is a unique aspect of exit polling: interviewers note the gender, race and approximate age of those who refuse their request for an interview. At the end of the day, the exit pollsters can use these tallies to correct for non-response bias in gender, race and approximate age, though obviously not in vote preference.
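Here is a hedged sketch of how such a correction can work in principle. The refusal tallies and vote shares are invented, and NEP’s actual weighting procedure is not public; the idea is simply to reweight completed interviews so each group’s share matches its share of everyone approached.

```python
# A sketch of the non-response correction described above. The refusal
# tallies and Kerry shares are invented for illustration; NEP's actual
# weighting procedure has not been made public.

completes = {"black": 120, "other": 380}  # finished interviews
refusals  = {"black": 30,  "other": 170}  # refusers, tallied by sight

approached = {g: completes[g] + refusals[g] for g in completes}
total_completes  = sum(completes.values())   # 500
total_approached = sum(approached.values())  # 700

# Weight each group so the completes match the approached population.
weights = {
    g: (approached[g] / total_approached) / (completes[g] / total_completes)
    for g in completes
}
# Black voters refused less often here, so they are over-represented
# among completes and get weighted down (~0.89 vs. ~1.03 for others).

kerry = {"black": 0.88, "other": 0.41}  # assumed vote shares by group
unweighted = sum(completes[g] * kerry[g] for g in completes) / total_completes
corrected = sum(weights[g] * completes[g] * kerry[g] for g in completes) / total_completes

print(f"unweighted: {unweighted:.1%}")  # 52.3%
print(f"corrected:  {corrected:.1%}")   # 51.1%
```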

However, one hedge on all this: In writing about these issues, I realize that NEP and its forerunner VNS have disclosed very little about the timing of the various weighting and statistical corrections that they do. I know they collect hard turnout counts in their late-afternoon and just-before-poll-closing reports, and I know they use this actual turnout data to weight the results released late on Election Day. I know that when all the dust clears, they can weight to correct the data for demographic non-response at the precinct level, to match precinct-level vote preference to the actual count, and to similarly correct the vote regionally and statewide. What I cannot confirm is when all of this happens: gradually on election night, or in one big procedure after midnight? And does some additional adjustment occur in the days and weeks that follow?
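For illustration only, here is a minimal sketch of what weighting to hard turnout counts might look like, with invented precinct figures (again, NEP has not disclosed its actual procedure): each precinct’s vote-preference estimate contributes in proportion to its actual turnout rather than its share of completed interviews.

```python
# A sketch of weighting to hard turnout counts. The precinct figures
# are invented; the point is that each precinct's estimate counts in
# proportion to its actual turnout, not its interview count.

precincts = [
    # (completed interviews, actual turnout, Kerry share in interviews)
    (90,  1200, 0.62),
    (110,  900, 0.44),
    (100, 1500, 0.51),
]

interviews = sum(n for n, _, _ in precincts)  # 300
turnout    = sum(t for _, t, _ in precincts)  # 3600

unweighted = sum(n * share for n, _, share in precincts) / interviews
weighted   = sum(t * share for _, t, share in precincts) / turnout

print(f"interview-weighted: {unweighted:.1%}")  # 51.7%
print(f"turnout-weighted:   {weighted:.1%}")    # 52.9%
```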

Here is one more wrinkle I overlooked in my initial reading. In 1996 (according to Merkle and Edelman, 2000), VNS had its interviewers call in only about half of the data collected on Election Day:

During each call, the interviewer reported the vote and non-response tallies and read in the question-by-question responses from a subsample of the questionnaires. This subsampling is carried out so that we can use the responses from the vote questions from the full sample in each precinct (i.e. the 100 interviews) for the projection models without having to spend interviewer and operator time reading in each questionnaire. In 1996, 70,119 of the 147,081 questionnaires were subsampled, and the data for all questions were read into our system.

I noticed that Warren Mitofsky told the NewsHour that NEP interviewed "almost 150,000 people nationwide on Election Day," yet the Caltech/MIT report counted only 76,000 respondents when it "summed up the number of observations reported for each state poll" posted on CNN a few days after the election (see footnote #3). It looks to me like the half-sampling procedure continued.
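The arithmetic behind that hunch is simple enough:

```python
# Comparing the subsampling rates implied by the figures cited above.
rate_1996 = 70119 / 147081   # Merkle & Edelman's reported subsample
rate_2004 = 76000 / 150000   # Caltech/MIT count vs. Mitofsky's total

print(f"1996: {rate_1996:.1%}")  # 47.7%
print(f"2004: {rate_2004:.1%}")  # 50.7%
```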

So I wonder: If this procedure was repeated this year, did the half sample include gender, age and race? And were the numbers that have appeared on CNN.com since Election Day weighted to correct for non-response?

I’m assuming the answer to both questions is yes, but we really do not know for certain. As a student of survey research, I would certainly like to learn more. Given that these data are paid for by news media outlets and released into the public domain, we really should know more.

Source:

Merkle, Daniel M. and Murray Edelman (2000). "A Review of the 1996 Voter News Service Exit Polls from a Total Survey Error Perspective." In Election Polls, the News Media and Democracy, eds. P.J. Lavrakas and M.W. Traugott, pp. 68-92. New York: Chatham House.

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.