
November 19, 2004

Exit Polls: Winston's Theory

The New Republic's Noam Scheiber passed on a theory floated by Republican pollster David Winston about the discrepancy in exit polling data that favors Democrats, not just this year, but in years past (although not as consistently):

Winston suggested that one reason the early exit poll data was so far off a couple weeks ago was that the people asking the polling questions may have unconsciously over-represented African Americans in their samples, particularly with the memory of 2000 and the fear about disenfranchising blacks so fresh in their minds. By way of explanation, Winston suggested that if you were supposed to be interviewing 35 people, of which six were supposed to be African American, and you ended up interviewing seven African Americans in order to ensure that the group was adequately represented, and if a lot of other people asking questions over-compensated in the same way, then what would seem like a marginal difference could have huge implications for the poll's overall results. (African Americans vote overwhelmingly Democratic, and you've just increased their weight in your sample by nearly 17 percent.)
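Winston's arithmetic is easy to verify. A quick illustrative sketch (the numbers are his hypothetical, not actual NEP data):

```python
# Winston's hypothetical precinct: 6 of 35 interviews were "supposed"
# to be African American, but the interviewer collects 7 instead.
planned, actual = 6, 7

# Relative increase in the group's weight within the sample.
increase = (actual - planned) / planned
print(f"{increase:.1%}")  # -> 16.7%, Winston's "nearly 17 percent"
```

One extra interview per precinct looks trivial, but if many interviewers lean the same way, a 17 percent over-weighting of a 90-percent-Democratic group shifts the topline noticeably.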

Mickey Kaus read Scheiber's post and asked:

But this is a testable theory, no? Did the early exit polls oversample blacks in comparison with the final vote? And did it vary from polltaker to polltaker? (Presumably some of them are more guilty of guilt than others.)

The quick answer to Kaus's question is yes, the proposition is certainly testable. I have heard, through the grapevine, that the mid-day exit polls did have unusually high percentages of women and African American voters. However, these early numbers were not weighted by turnout and, obviously, reflected only half the day's vote. I heard through the same grapevine that the female and African American percentages were closer to expectations by the end of the day. Unfortunately, we do not know the racial composition of the end-of-day, just-before-poll-closing exit polls (unless Stephen Freeman's sources saved those as well).

However, Winston's theory breaks down on some other important details of how the exit polls are done.

1) He assumes that NEP instructs interviewers to fill racial and demographic quotas. They do not. Interviewers are only told to solicit every 4th or 5th or 10th voter exiting the polls. I can confirm this because a few helpful "birdies" (apologies to Wonkette) sent me a copy of the NEP Interviewer Training Manual for 2004.

2) Winston assumes that interviewers are liberal. I know that NEP recruits interviewers from college campuses, but only because my "birdies" were students. That is a sample of 3-4 out of roughly 1,500 -- far too small to generalize from. We really do not know much about the composition of their interviewing staff.

Appropriately, NEP places great emphasis on the "appearance and conduct" of its interviewers. Its training manual instructs: "Your appearance can influence a voter's decision to participate in the survey. Therefore, please dress appropriately...Please wear clean, conservative, neat clothing and comfortable shoes (clean sneakers are acceptable). But, please, NO JEANS and NO T-SHIRTS."

3) The exit polls can correct for skews in gender and race, and reduce skews in age, caused by refusals. This is a unique aspect of exit polling: the interviewers note the gender, race, and approximate age of those who refuse their request for an interview. At the end of the day, the exit pollsters can use these tallies to correct for non-response bias on those characteristics -- though obviously not for vote preference.
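To make the mechanics concrete, here is a minimal sketch of that kind of non-response adjustment. The numbers are invented for illustration; NEP's actual weighting procedure is not public:

```python
# Hypothetical precinct. Because interviewers tally the gender of
# refusers, we know the demographic makeup of everyone approached,
# not just of those who completed an interview.
respondents = {"women": 40, "men": 60}   # completed interviews
refusals    = {"women": 30, "men": 10}   # refusal tallies

approached = {g: respondents[g] + refusals[g] for g in respondents}
total_resp = sum(respondents.values())
total_appr = sum(approached.values())

# Weight each respondent so the weighted sample matches the
# demographic shares of the full approached pool.
weights = {
    g: (approached[g] / total_appr) / (respondents[g] / total_resp)
    for g in respondents
}
# Women refused more often here, so each female respondent is
# weighted up (1.25) and each male respondent down (~0.83).
```

Note what this can and cannot fix: it corrects the sample's gender mix, but if refusing *Republican* women differ from responding *Democratic* women, no demographic weight can recover that.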

However, one hedge on all this: In writing about these issues, I realize that NEP and its forerunner VNS have disclosed very little about the timing of the various weighting and statistical corrections that they do. I know they collect hard turnout counts in their late afternoon and just-before-poll closing reports, and I know they use this actual turnout data to weight the results released late on Election Day. I know that when all the dust clears, they can weight to correct the data for demographic non-response at the precinct level, to match precinct level vote preference to the actual count, and to similarly weight to correct the vote regionally and statewide. What I cannot confirm is when all of this happens: gradually on election night or in one big procedure after midnight? And does some additional adjustment occur in the days and weeks that follow?

Here is one more wrinkle I overlooked in my initial reading. In 1996 (according to Merkle and Edelman, 2000), VNS had its interviewers call in only half of the data collected on Election Day:

During each call, the interviewer reported the vote and non-response tallies and read in the question-by-question responses from a subsample of the questionnaires. This subsampling is carried out so that we can use the responses from the vote questions from the full sample in each precinct (i.e. the 100 interviews) for the projection models without having to spend interviewer and operator time reading in each questionnaire. In 1996, 70,119 of the 147,081 questionnaires were subsampled, and the data for all questions were read into our system.

I noticed that Warren Mitofsky told the NewsHour that NEP interviewed "almost 150,000 people nationwide on Election Day," yet the MIT/Caltech report counted only 76,000 respondents when it "summed up the number of observations reported for each state poll" on CNN a few days after the election (see footnote #3). It looks to me like the half-sampling procedure continued.
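A quick check of the ratios supports that hunch. Both fractions come straight from the figures above:

```python
# 1996: subsampled questionnaires vs. total, per Merkle & Edelman (2000).
ratio_1996 = 70_119 / 147_081
# 2004: MIT/Caltech sum of CNN state polls vs. Mitofsky's "almost 150,000".
ratio_2004 = 76_000 / 150_000

print(f"1996: {ratio_1996:.1%}, 2004: {ratio_2004:.1%}")
```

Both work out to roughly half (about 48 percent in 1996 and 51 percent in 2004), which is what we would expect if the same subsampling design was still in use.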

So I wonder: If this procedure was repeated this year, did the half sample include gender, age, race? Were the numbers that appeared on CNN.com since Election Day weighted to correct non-response?

I'm assuming the answer to both questions is yes, but we really do not know for certain. As a student of survey research, I would certainly like to learn more. Given that these data are paid for by news media outlets and released into the public domain, we really should know more.

Source:

Merkle, Daniel M. and Murray Edelman (2000). "A Review of the 1996 Voter News Service Exit Polls from a Total Survey Error Perspective." In Election Polls, the News Media and Democracy, ed. P.J. Lavrakas, M.W. Traugott, pp. 68-92. New York: Chatham House.

Related Entries - Exit Polls

Posted by Mark Blumenthal on November 19, 2004 at 12:59 PM in Exit Polls | Permalink

Comments

I think it is a little unclear what was half sampled and what was not.

So, half the cards were fully called in and the other half only had the voting data called in as part of the vote tally. Is that correct?

Thanks

Posted by: Alex in Los Angeles | Nov 19, 2004 3:55:39 PM

Memo

To: The Carter Center

From: Mark Blumenthal, Kevin Drum, and the royalty of the democratic blog community (not to mention the "official" Democratic Party).

Re: Election Fraud

This is to let you know to close up shop, you are no longer needed to monitor elections. We have discovered through our own election process that the idea of some massive conspiracy to steal an election is not only incomprehensible but idiotic. You can replace your observer armbands with tinfoil hats if you think the explanation is election fraud. It is almost certainly something completely different. If all else fails you can blame the Jews in Florida, or the darker skinned voters who didn't know what they were doing.

Oh and tell that lady in Burma to give back her Nobel Peace Prize. Those Burmese elections didn't look right because of the Dixiecrat vote. Those damned Dixiecrats are everywhere.

Even if you have exit polling data, and then you have separate corroboration from another statistical study, well you can go and start a war based on that where thousands die, or send a man to the electric chair, but when it comes to elections you don't have a smoking gun.

Sorry you guys went through all the bother trying to establish democracy over the last few decades. We just figured out it runs on auto-pilot.

Posted by: Wilbur | Nov 19, 2004 4:13:32 PM

One of the states that showed a high discrepancy between exit polling and the final count was New Hampshire.

It would be difficult to oversample African-Americans in New Hampshire, since it's one of the states with the least ethnic diversity.

Plus--if Republicans don't like to respond to the media for exit polls, why were Republicans supposedly so eager to respond to polling during the election season? That was the excuse given for why so many Republicans showed up in the polling data.

Posted by: Vicki Meagher | Nov 19, 2004 4:34:10 PM

Alex in Los Angeles wrote:

"I think it is a little unclear what was half sampled and what was not."

"So, half the cards were fully called in and the other half only had the voting data called in as part of the vote tally. Is that correct?"

You're right, it's a bit unclear. As I read it, in 1996, interviewers called in data for the vote questions and the tallies of non-respondents for ALL questionnaires. They called in data for ALL OTHER questions for only a random half of the sample.

It seems logical that they would have also called in basic demographics (gender, age, race) for ALL questionnaires (since the non-respondent data would not be very useful without it), but the text leaves that unclear.

Obviously, we have no idea what they did this year, although the discrepancy between the total interviews as described by Warren Mitofsky and as "summed up" from CNN.com provides a hint.

Posted by: Mark Blumenthal | Nov 19, 2004 6:25:31 PM

It's amazing how secretive the NEP is being about their numbers. I'm beginning to think the NEP is actually run by the Bush administration.

Posted by: Alan | Nov 19, 2004 6:39:46 PM

Wilbur:

The Carter Center has never found an election fraudulent simply on the basis of exit polls. In the recent Venezuela referendum on Chavez, for example, there was a *much* greater divergence with the exit polls than in the US, yet Carter has accepted the referendum as legitimate.

As for "blaming the Jews"--I am Jewish myself. But when the Berkeley report specifically says that the tendency of eVoting counties to show an unusually strong Bush increase is not uniform but is greatest in Miami/Dade, Broward and Palm Beach counties, are we supposed to ignore that (a) these counties have by far the largest concentrations of Jewish voters in Florida and (b) even Democrats concede that Bush made *some* inroads with Jewish voters in Florida this year? If you don't believe that last point, try http://www.sun-sentinel.com/news/local/broward/sfl-pprecincts10nov10,0,3453139.story?coll=sfla-news-broward where some Jewish Democrats are quoted to that effect. True, Kerry easily won the Jewish vote, in southern Florida as elsewhere, but it is the *divergence from 2000* that matters.

To put it another way, the Berkeley report did not say that *as a matter of raw votes*, the eVoting counties showed greater gains for Bush than the other counties. It said, "Once you adjust for factors A, B, and C," the eVoting counties showed suspiciously large gains for Bush. When someone says something like that, surely it is legitimate to note "but perhaps when you take factors D, E, and F into account the 'suspicious' pattern disappears."

Posted by: David T | Nov 20, 2004 6:04:37 PM

All of the late afternoon cnn.com exit polls were already weighted for gender, race, and party identification.

I distinctly remember studying the different ratios of racial groups on the Ohio poll and if anything, I got the impression that they were UNDERSAMPLING minorities, based on the racial makeup of Ohio as a whole.

My point, though, is that the exit polls, at least for the cnn.com Ohio polls, WERE ALREADY WEIGHTED PROPERLY. And they were separated by gender - with Kerry winning both the male vote and female vote. The ratio of female to male was 53 to 47.

Posted by: Clint Cooper | Nov 22, 2004 3:12:39 AM

I was once exit polled. It was the 1984 Democratic Presidential Primary in New York. And it wasn't really random. I saw the worker at the polls and I wanted to answer the survey, so I walked past the guy 2 or 3 times until he asked me if I was willing to be polled.

Posted by: jerry skurnik | Nov 22, 2004 4:21:56 PM

The comments to this entry are closed.