The “Reluctant Bush Responder” Theory Refuted?


First, my apologies for not posting yesterday. The Inauguration was also a federal holiday, which meant no child care in my household and a day of being Mystery Daddy, not Mystery Pollster.

So without further ado: Though the Edison/Mitofsky Report gives us much to chew over, the table drawing the most attention is the one on page 37 that shows completion rates for the survey by the level of partisanship of the precinct.

A bit of help for those who may be confused: each number is a percentage, and you read across. Each row shows the completion, refusal and miss rates for precincts grouped by their level of partisanship. The first row shows that in precincts that gave 80% or more of their vote to John Kerry, 53% of the voters approached by interviewers agreed to be interviewed, 35% refused, and another 12% should have been approached but were missed by the interviewers.
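For anyone who wants to check the arithmetic: the three percentages in each row appear to share the same denominator (they sum to 100), so as I read the table each rate is simply the relevant tally divided by the total number of voters the interviewer was supposed to approach. A minimal sketch, treating the first row's figures as if they were raw counts:

```python
# Hypothetical tallies matching the first ("High Dem") row; the real table reports
# percentages, but the arithmetic is the same whatever the precinct size.
completed = 53   # selected voters who agreed and filled out a questionnaire
refused   = 35   # selected voters who declined
missed    = 12   # selected voters the interviewer never managed to approach

selected = completed + refused + missed

print(f"Completion rate: {completed / selected:.0%}")  # 53%
print(f"Refusal rate:    {refused / selected:.0%}")    # 35%
print(f"Miss rate:       {missed / selected:.0%}")     # 12%
```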

Observers on both sides of the political spectrum have concluded that this table refutes the central argument of the report, namely that the discrepancy in Kerry's favor was "most likely due to Kerry voters participating in the exit polls at a higher rate than Bush voters" (p. 3). Jonathan Simon, who has argued that the exit poll discrepancies are evidence of an inaccurate vote count, concluded in an email this morning that the above table "effectively refutes the Reluctant Bush Responder (aka differential response) hypothesis, and leaves no plausible exit poll-based explanation for the exit poll-vote count discrepancies." From the opposite end of the spectrum, Gerry Dales, editor of the blog DalyThoughts, uses the table [in a comment here] to "dismiss the notion that the report has proved…differential non-response as a primary source of the statistical bias" (emphasis added). He suspects malfeasance by the interviewers rather than fraud in the count.

The table certainly challenges the idea that Kerry voters participated at a higher rate than Bush voters, but I am not sure it refutes it. Here’s why:

The table shows response rates for precincts, not voters. Unfortunately, we have no way to tabulate response rates for individuals because we have no data for those who refused or were missed. Simon and Dales are certainly both right about one thing: if completion rates were uniformly higher for Kerry voters than for Bush voters across all precincts, the completion rates should be higher in precincts voting heavily for Kerry than in those voting heavily for Bush. If anything, the table above shows completion rates that look slightly higher in the Republican precincts.

However, the difference in completion rates need not be uniform across all types of precincts. Mathematically, an overall difference in completion rates will be consistent with the pattern in the table above if you assume that Bush voter completion rates tended to be higher where the percentage of Kerry voters in the precincts was lower, or that Kerry voter completion rates tended to be higher where the percentage of Bush voters in the precincts was higher, or both. In other words, if voters of both kinds were simply harder to interview in the heavily Kerry precincts, a tendency of Kerry voters to cooperate more within each precinct would leave little or no trace in the precinct-level rates (the toy calculation below shows one such configuration). I am not arguing that this is likely, only that it is possible.
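To make that concrete, here is a toy calculation with completion rates I invented for illustration (they are not Edison/Mitofsky's numbers), using three equal-sized precincts purely for simplicity. In it, Kerry voters complete the survey at a rate 12 points higher than Bush voters in every single precinct, yet because both groups respond less often in the heavily Kerry precincts, the precinct-level rates come out essentially flat, and even a touch higher on the Republican side:

```python
# Toy numbers only -- these completion rates are invented for illustration,
# not taken from the Edison/Mitofsky report.

# (Kerry share of voters approached, Kerry-voter completion rate,
#  Bush-voter completion rate, voters approached)
precincts = {
    "High Dem (80%+ Kerry)": (0.90, 0.54, 0.42, 100),
    "Even":                  (0.50, 0.60, 0.48, 100),
    "High Rep (80%+ Bush)":  (0.10, 0.67, 0.55, 100),
}

kerry_done = kerry_n = bush_done = bush_n = 0.0

for name, (k_share, k_rate, b_rate, n) in precincts.items():
    n_kerry, n_bush = k_share * n, (1 - k_share) * n
    # The precinct-level rate blends the two groups by their local shares.
    precinct_rate = (n_kerry * k_rate + n_bush * b_rate) / n
    print(f"{name:24s} completion rate: {precinct_rate:.0%}")
    kerry_done += n_kerry * k_rate
    kerry_n += n_kerry
    bush_done += n_bush * b_rate
    bush_n += n_bush

print(f"Overall Kerry-voter completion rate: {kerry_done / kerry_n:.0%}")  # ~57%
print(f"Overall Bush-voter completion rate:  {bush_done / bush_n:.0%}")    # ~52%
```

Again, the particular numbers prove nothing; the point is only that a table of precinct-level completion rates cannot, by itself, rule out a sizable difference in the willingness of Kerry and Bush voters to participate.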

Note also that the two extreme precinct categories are by far the smallest (see the table at the bottom of p. 36): Only 40 precincts of 1250 (3%) were “High Rep” and only 90 were “High Dem” (7%). More than three quarters were in the “Even” (43%) or “Mod Rep” (33%) categories. Not that this explains the lack of a pattern – it just suggests that the extreme precincts may not be representative of most voters.

Second, as Gerry seems to anticipate in his comments yesterday, the completion rate statistics are only as good as the interviewers who compiled them. Interviewers were responsible for counting each voter they missed or who refused to be interviewed, and for keeping tallies of each one's race, gender and approximate age. The report presents overwhelming evidence that errors were higher for interviewers with less experience. One hypothesis might be that some interviewers made improper substitutions without recording refusals appropriately.

Consider my hypothetical from the last post: A husband is selected as the nth voter but refuses. His spouse offers to complete the survey instead. The interviewer breaks with the prescribed procedure and allows the spouse to do the interview (rather than waiting for the next nth voter). [Note: this is actually not a hypothetical – I exchanged email with a professor who reported this example after debriefing students he had helped recruit as NEP interviewers]. The question is: would the interviewer record the husband as a refusal? The point is that the same sloppiness that allows an eager respondent to volunteer (something that is impossible, by the way, on a telephone survey) might also skew the completion rate tallies. Presumably, that is one reason why Edison/Mitofsky still plans to conduct "in-depth" interviews with interviewers in Ohio and Pennsylvania (p. 13) – they want to understand more about what interviewers did and did not do.
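To see how that sloppiness could distort the tallies, here is a small simulation with numbers I made up (my assumptions, not anything in the report): Bush voters are assumed to refuse somewhat more often, and half of all refusals are assumed to end with a willing companion quietly substituting, so no refusal is ever written down.

```python
import random

random.seed(2004)

N_SELECTED = 200                                 # voters picked by the "nth voter" rule
KERRY_SHARE = 0.50                               # assume the precinct splits 50/50
REFUSAL_RATE = {"Kerry": 0.30, "Bush": 0.40}     # assumption: Bush voters refuse more often
SUBSTITUTION_PROB = 0.50                         # assumption: half of refusals get substituted

true_refusals = recorded_refusals = completions = 0

for _ in range(N_SELECTED):
    voter = "Kerry" if random.random() < KERRY_SHARE else "Bush"
    if random.random() < REFUSAL_RATE[voter]:
        true_refusals += 1
        if random.random() < SUBSTITUTION_PROB:
            # Improper substitution: the companion's questionnaire is counted as a
            # completion and the original voter's refusal never reaches the tally sheet.
            completions += 1
        else:
            recorded_refusals += 1
    else:
        completions += 1

print(f"True refusal rate:        {true_refusals / N_SELECTED:.0%}")                        # ~35%
print(f"Recorded refusal rate:    {recorded_refusals / (completions + recorded_refusals):.0%}")  # ~18%
print(f"Recorded completion rate: {completions / (completions + recorded_refusals):.0%}")        # ~82%
```

Under those assumptions the precinct's tally sheet looks far more cooperative than it really was, and the refusals that go missing are disproportionately the very Bush refusals the page 37 table is supposed to measure.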

Third, there is the possibility that some Bush voters chose to lie to the exit pollsters. Any such behavior would have no impact on the completion rate statistics. So why would a loyal Bush voter want to do that? Here’s what one MP reader told me via email. As Dave Barry used to say, I am not making this up:

Most who are pissed off about the exit polls are Democrats or Kerry supporters. Such people are unlikely to appreciate how profoundly some Republicans have come to despise the mainstream media, just since 2000. You have the war, which is a big one. To those who support Bush much of the press have been seditious. So, if you carry around a high degree of patriotism you are likely to have blood in your eye about coverage of the Iraq war, alone. Sean Hannity and Rush Limbaugh had media in their sights every day leading up to the 2004 election, and scored a tremendous climax with the Rather fraud and NY Times late-hit attempt. I was prepared to lie to the exit pollster if any presented himself. In fact, however, I can’t be sure I would have, and might have just said “none of your f—ing business.” We can’t know because it didn’t happen, but I do know the idea to lie to them was definitely in my mind.

Having said all that, I think Gerry Dales has a point about the potential for interviewer bias (Noam Scheiber raised a similar issue back in November, and in retrospect, I was too dismissive). The interviewers included many college students and holders of advanced degrees who were able to work for a day as an exit pollster. Many were recruited by college professors or (less often) on “Craigslist.com.” It’s not a stretch to assume that the interviewers were, on average, more likely to be Kerry voters than Bush voters.

My only difference with Gerry on this point is that such bias need not be conscious or intentional. Fifty years or more of academic research on interviewer effects shows that when the majority of interviewers share a point of view, the survey results often show a bias toward that point of view. Often the reasons are elusive and presumed to be subconscious.

Correction: While academic survey methodologists have studied interviewer effects for at least 50 years, their findings have been inconsistent regarding correlations between interviewer and respondent attitudes. I would have done better to cite the conventional wisdom among political pollsters that the use of volunteer partisans as interviewers, even when properly trained and supervised, will often bias the results in favor of the sponsoring party or client. We presume the reasons for that bias are subconscious and unintentional.

In this case, the problem may have been as innocent as an inexperienced interviewer feeling too intimidated to follow the procedure and approach every "nth" voter, occasionally skipping over those who "looked" hostile and choosing instead someone who seemed more friendly and approachable.

One last thought: So many who are considering the exit poll problem yearn for simple, tidy answers that can be easily proved or dismissed: It was fraud! It was incompetence! Someone is lying! Unfortunately, this is one of those problems for which simple answers are elusive. The Edison/Mitofsky report provides a lot of data showing precinct-level characteristics that seem to correlate with Kerry bias. These data make a compelling case that whatever the underlying problem (or problems), it was made worse by young, inexperienced or poorly trained interviewers, especially when they were up against logistical obstacles to completing interviews. The report also makes clear (for those who missed it) that these problems have occurred before (pp. 32-35), especially in 1992, when voter turnout and voter interest were at similarly high levels (p. 35).

Many more good questions, never enough time. Hopefully, another posting later tonight… if not, have a great weekend!

Mark Blumenthal
