
November 29, 2004

Freeman's Data

Steven Freeman, the author of the widely circulated paper entitled, "The Unexplained Exit Poll Discrepancy," has posted exit poll tabulations for 49 of the 50 states, plus DC, on his website. While I disagree with many of Freeman's conclusions about the discrepancy, his data provide a valuable resource: The only collection of "just-before-poll-closing" tabulations I am aware of in the public domain.

While these data are worthy of attention, we should remember some important limitations. The results posted on CNN.com on election night, like those posted now, did not show the overall vote preference measured by the exit polls in each state. Rather, they showed the preference tabulated separately for a wide variety of demographic subgroups and answers to other questions. The tabulations included a table of the results by gender, as well as the percentage of men and women in the total sample. Consider the following example, based on the results posted on CNN.com as of today (November 29, 2004):

[Image: CNN.com exit poll tabulation of the vote by gender]

To calculate overall support for President Bush in this sample, multiply Bush's support among women (48%) by the percentage of women in the sample (54%), then multiply Bush's support among men (55%) by the percentage of men in the sample (46%) and add the two: (0.48*0.54) + (0.55 * 0.46) = 0.51 (or 51%).
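For readers who want to check the arithmetic, here is the same calculation as a minimal Python sketch (the inputs are the CNN figures quoted above):

```python
# Reconstruct the overall result from a CNN-style crosstab: weight each
# subgroup's Bush share by that subgroup's share of the sample.
sample_share = {"women": 0.54, "men": 0.46}  # share of the exit poll sample
bush_share   = {"women": 0.48, "men": 0.55}  # Bush support within each group

overall = sum(sample_share[g] * bush_share[g] for g in sample_share)
print(round(overall, 4))  # 0.5122 -- rounds to 51%
```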

This extrapolation process adds some random rounding error to the tabulations. Freeman also reports his tabulations out to one decimal place (e.g. 51.2%), but the mathematical principle of "significant digits" tells us that the results of the underlying calculations are only accurate to the nearest whole percentage point (for an explanation of significant digits, see this site under "multiplying and dividing").
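To see why, remember that each published input was itself rounded to the nearest whole point, so its true value could lie anywhere within half a point. A quick sketch that pushes the worst-case rounding through the calculation (treating the four inputs as independent, which slightly overstates the range):

```python
from itertools import product

# Each CNN figure is rounded to the nearest whole point, so the true
# value lies within +/- 0.5 points. Push every combination of rounding
# extremes through the weighted average to bound the reconstructed share.
def extremes(x):
    return (x - 0.005, x + 0.005)

bounds = [ws * wb + ms * mb
          for ws, wb, ms, mb in product(extremes(0.54), extremes(0.48),
                                        extremes(0.46), extremes(0.55))]
print(min(bounds), max(bounds))  # roughly 0.502 to 0.522
```

In other words, the extrapolated 51.2% could correspond to a true tabulation anywhere from roughly 50.2% to 52.2%.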

While we do not know for certain that the results posted by CNN on election night were the final "before poll closing" results, the timing of their appearance online strongly suggests it.  Those who monitored CNN.com on election night reported that exit poll results did not appear in any state until the polls closed in that state. The data Freeman presents were taken from screen shots made just before or after midnight. Given the differences between Freeman's data and the actual count, we can safely conclude the results had not yet been fully corrected to match final tallies. However, the sample sizes he lists for each state are, in every instance I checked, slightly smaller than those appearing on the site today. Do the sample sizes differ because of missing interviews or precincts, or because of the weighting procedure? Without confirmation from NEP, we can only speculate.

Despite the limitations, Freeman's data have obvious advantages over other exit poll results reported on Election Day. His tabulations are not based on leaked or "stolen" data or on numbers passed from person to person. They were put into the public domain on the official CNN web site on election night and copied (using "screen shots") onto a computer hard drive.

I believe Freeman's data are worthy of our attention for two reasons: The most important is the suggestion by Warren Mitofsky (here and here) and others associated with the exit polls that the discrepancy may result from what survey methodologists call "differential non-response." That is, Republicans were theoretically more likely to refuse to be surveyed than Democrats. That hypothesis, if proven true, could have important consequences for all political surveys.

Another reason is the continuing speculation about problems in the actual count. Whatever we think about the plausibility of the various conspiracy theories, a fuller presentation of the uncorrected exit poll should shed more light on the issue.  It might even help restore some confidence in the actual count. I would think that the news organizations that own the data would see the public good that might result from putting the relevant tabulations and analyses into the public domain.

Finally, if Freeman's tabulations are wrong or misleading, the NEP can easily clear up any confusion by releasing the correct "before-closing-time" tabulations. Similarly, if Freeman is in error in his estimates of the statistical significance of the discrepancies, NEP can tell us more about the appropriate sampling error for the results in each state. Remember, Freeman's data are derived from results that were publicly released by CNN. Providing more information about the data CNN released and the sampling error associated with it would conform to the spirit (if not the letter) of the principles of disclosure of the National Council of Public Polls (NCPP): "to insure that pertinent information is disclosed concerning methods that were used so that consumers of surveys may assess studies for themselves."

I have more to say on these data...stay tuned...

Posted by Mark Blumenthal on November 29, 2004 at 04:46 PM in Exit Polls | Permalink | Comments (11)

The Difference Between "Partial" and "Final" Exit Polls

A few days ago John Kesich, a commenter on this site, complained about the way the term "exit polls" has been used to describe both projections and raw data. He had a point. Confusion over the term "exit poll" runs far deeper, as I have seen the modifiers "early" and "final" applied inconsistently to a wide variety of exit poll tabulations. Much of the confusion stems from the reluctance of the National Election Pool (NEP) to discuss the various tabulations they generated on Election Day.  Since they will not comment, confusion about the terminology is inevitable.

I have a backlog of questions to cover about exit polls, but I want to start by reviewing NEP's various tabulations and projections and suggesting some terms to identify them clearly.

First a review of the process: On November 2, 2004, the NEP conducted separate exit polls in all 50 states and the District of Columbia plus a separate, stand-alone "national" sampling. NEP instructed its interviewers to call in to report their results three times on Election Day (all times are local): At about noon, at about 3:00 p.m. and roughly an hour before the polls closed. NEP started releasing tabulations of the vote preference question for the national survey and for most of the battleground states on an hourly basis beginning at 1:00 p.m. Eastern Time. Here is what I know about the various tabulations and projections:

1) Early-afternoon unweighted tabulations - NEP released tabulations for many states between 1:00 and 3:00 p.m. Eastern Time on Election Day based on partial data that were completely raw and unweighted.

2) Late afternoon tabulations (weighted by turnout) - At about 3:00 p.m. (local time) interviewers obtain a hard count of actual turnout from election officials for their covered precincts. NEP officials use these data to weight their late afternoon exit poll tabulations to match the actual turnout in the sampled precincts. Presumably, NEP started to deliver weighted data for eastern states at about 4:00 p.m. EST, but weighted data for western states may not have been available until 6 or 7 p.m. EST.

3) Just-before-poll-closing tabulations (weighted by turnout) - Interviewers call in roughly an hour before polls close with their final tabulations and another hard count of actual turnout. NEP uses these final reports to prepare exit poll tabulations for each state a few minutes before the polls close.  The data are weighted by actual turnout.  The weighting procedure assures that the mix of precincts within each state -- urban vs. rural, Democratic vs. Republican, etc. -- matches the actual turnout patterns recorded that day (a numerical sketch of this weighting appears after this list).  The network "decision desks" use these tabulations to "call" winners, but only in states where the leader's margin far exceeds statistical significance (the tests of statistical significance assume "confidence levels" of at least 99%, not 95%).

NEP also generates cross-tabular tables (weighted by turnout) for each state just before the polls close. These are tables similar in format to those now available on CNN.com (though the results are now different) that show how respondents answered each question and the vote preference calculated across answers to each question. The cross-tab tables play no role in projecting winners. Rather, networks and newspapers use them to prepare "analytical" stories about the election.

4) Projections after the polls close - Once the polls close, NEP gathers actual results for the precincts sampled in the exit polls and also for another, larger sample of precincts (typically referred to as a sample of "key precincts"). Since not all precinct data are available at once, NEP combines the exit poll results and the actual vote counts into an evolving hybrid of projections and estimates that improves over the course of the evening. Although the projection models and tabulations are reportedly quite elaborate, NEP and its forerunners have disclosed very little about them.

5) "Corrected" Exit Poll Tabulations - Once the actual results have been counted in the wee hours of election night, NEP re-weights the results of each exit poll so that the vote preference on the poll matches the actual count. They then release new cross-tabular tables for each state to the general public. In theory, weighting to match the vote preference to actual results makes the complete exit poll more accurate.

6) "Final" Tabulations? - The tabulations put out the day after the election may not be truly "final." I have heard rumors of additional revisions that either have occurred or are about to be released. For example, I heard about a week ago that NEP was about to "revise" its estimates for Hispanic voters. The national survey posted on CNN.com estimates that President Bush received 44% of the Hispanic vote, yet a story in Sunday's Washington Post puts his Hispanic support at 42%. Does this difference reflect the rumored revision? I have no idea, but will report if I learn more.

7) Raw data deposited with the Roper Center - Consistent with past practices, NEP has promised to make a copy of its raw data available to scholars at the Roper Center archives, though on a more accelerated timetable than usual (about three months).
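Two of the steps above lend themselves to simple numerical sketches. First, the turnout weighting described in item #3. My understanding, expressed with invented precinct figures (NEP has not published its actual procedure), is that every respondent in a precinct shares one weight, chosen so that the precinct's share of the weighted sample matches its share of the reported turnout:

```python
# Hypothetical illustration of weighting by turnout. Two invented
# precincts each yield 100 interviews, but officials report very
# different turnout, so their respondents get different weights.
precincts = [
    {"name": "urban", "interviews": 100, "turnout": 1200},
    {"name": "rural", "interviews": 100, "turnout": 400},
]

total_turnout = sum(p["turnout"] for p in precincts)
for p in precincts:
    # the precinct's weighted share of the sample equals its turnout share
    p["weight"] = (p["turnout"] / total_turnout) / p["interviews"]

print({p["name"]: p["weight"] for p in precincts})
# {'urban': 0.0075, 'rural': 0.0025} -- weights sum to 1.0 across respondents
```

Second, the "correction" described in item #5. The simplest possible version is a two-candidate ratio adjustment, sketched here with invented vote shares; NEP's real procedure is undisclosed and surely more elaborate:

```python
# Hypothetical "correction": scale each respondent's weight by the ratio
# of the candidate's actual share to his polled share, so the weighted
# poll reproduces the certified count.
polled = {"bush": 0.49, "kerry": 0.51}  # exit poll vote shares (invented)
actual = {"bush": 0.51, "kerry": 0.48}  # certified vote shares (invented)

factor = {cand: actual[cand] / polled[cand] for cand in polled}
print({c: round(f, 3) for c, f in factor.items()})
# {'bush': 1.041, 'kerry': 0.941} -- Bush voters weighted up, Kerry voters down
```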

Those who are concerned about the discrepancy between the exit polls and the actual vote count should focus only on #3, the tabulations prepared just before poll closing for each state. Unfortunately, these were not officially released. The early releases (#1 & #2) were widely leaked and posted on the Internet, but had bigger discrepancies resulting from incomplete samples or the use of completely unweighted data. The later releases now available through official channels (#5) are not helpful for analysis of the discrepancy since they were "corrected" to conform to actual vote results. The only source of "just-before-poll-closing" results appears to be the data in the paper by Steven Freeman (more on this in the next post).

Let me anticipate a few pertinent questions:

Why so many different tabulations? As discussed here before, the exit polls serve at least three functions: (a) They help give producers and reporters a head start in preparing their election night broadcasts and stories, (b) they assist the networks in "calling" winners and (c) they provide a resource to help reporters and the general public interpret the results of the election. The mid-day tabulations and before-poll-closing cross tabs help provide a "head start" that gradually improves during the day. The before-poll-closing tabulations (#3) and the later estimates that incorporate actual votes (#4) facilitate official "projections." The various "corrected" releases (#5 to #7 above) - the only ones meant for wide release - serve the third function, providing an analytical tool for reporters, scholars and the general public.

Are "uncorrected" mid-day and before-poll-closing tabulations available from prior years? Generally, no. Again, these tabulations were never intended for public release. Leaked mid-day numbers are available only to the extent that the "leakees" saved them. Smatterings of before-poll-closing tabulations have appeared in journal articles but are otherwise unavailable through official sources. Moreover, I assume that researchers cannot easily replicate the "before-poll-closing" tabulations using the raw data available through the Roper Center - if they could, analysts like Ruy Teixeira would have run them rather than relying on raw, unweighted tabulations from past years.

Why will it take at least three months to release the raw data? The process of opening the raw data to scholars is slow because (presumably) the archivists need to format the data and prepare documentation so researchers can use it appropriately. Remember, we are talking about raw data from 150,000 interviews, 50 separate state exit polls, plus D.C. and the separate national survey, each with a slightly different questionnaire. The task of preparing it all is huge.

However, as John Kesich's complaint implied, nothing prevents the immediate release of the various Election Day exit poll tabulations except NEP's reluctance to do so. These results were disseminated on November 2 in electronic documents to NEP members and subscribers, documents that were presumably saved somewhere. I am not sure what purpose continuing secrecy serves except to provide greater fuel to those spinning conspiracy theories.

Posted by Mark Blumenthal on November 29, 2004 at 10:03 AM in Exit Polls | Permalink | Comments (10)

FAQ: Questions about Exit Polls

***************

Looking for Information on Exit Polls for Election Day 2006?

See the Exit Polls: What You Should Know 2006 or the newly revised Exit Poll FAQ, both now on Pollster.com!

***************

Since the election, I have written quite a bit on exit polls.  Although I have learned a lot in that process, my Election Day summary of “what you need to know” about exit polls still holds up  well and is a good place to start if you are new to this site. 


You can also see a summary of all posts about exit polls (in reverse chronological order) by clicking this link.

Posted by Mark Blumenthal on November 29, 2004 at 09:14 AM in Exit Polls | Permalink | Comments (0)

November 24, 2004

Stones Cry Out on Exit Polls

A blogger and San Diego State graduate student named Rick Brady posts his own exploration of the exit poll discrepancy.  Brady plows a lot of ground that will be familiar to regular readers and (at risk of ruining the surprise ending) ultimately arrives at MP's take on the issue.  Nonetheless, he reports some interesting information about who has access to what and when we might expect a full release of raw data.

First, he picked up a bit of color and information from Liz Doyle of Edison Media Research (the company that ran the exit poll along with Mitofsky International):

Ms. Doyle shared that she has been inundated with calls and e-mails from professors and bloggers demanding data because they are convinced there is some kind of conspiracy...I could tell Ms. Doyle had had it with these conspiracy theories. The bottom line was that everyone wants access to their unweighted data and methods for independent review....

Ms. Doyle politely informed me that the [raw] data I was requesting would be available via the Roper Center and due to the unprecedented demand for the data, the NEP was working as quickly as possible to prepare the data for public use. However, these unweighted data couldn’t be expected for at least three more months

Brady then contacted Richard Morin, the polling director at The Washington Post, and got the following response via email:

The Post was one of the subscribers to the exit polls, like the New York Times, WS Journal, USA Today. We got the results of the final national poll and a few states when it was completed but before it was weighted, so I know that the poll had Kerry up by 3. We do not have all of the states, however, and don't know who, if anybody, saved them. They came to us as PDFs, not as data sets, so we cannot analyze them using SPSS or SAS. I saved the national but I do not believe I saved the four states we bought though my assistant did print them out and we have those copies. My recollection is that all of the states were off by a bit, all had a Democratic bias.

Interesting.

I am working on more about the ongoing exit poll controversy and -- as time allows (he says, knowing that his family reads the blog too) -- will post a bit over the holiday weekend.

Posted by Mark Blumenthal on November 24, 2004 at 05:17 PM in Exit Polls | Permalink | Comments (11)

November 23, 2004

More on the Berkeley Report

Having raised the issue of the Cal Berkeley Report on alleged voting irregularities in Florida, I have been struggling with how much to comment on the ongoing debate among statisticians on their findings (some of it in the comments section of this blog). While this topic fascinates statisticians, it tends to leave the rest of us a bit puzzled. I think Keith Olbermann spoke for many:

I have made four passes at "The Effect of Electronic Voting Machines on Change in Support for Bush in the 2004 Florida Elections," and the thing has still got me pinned to the floor.

Most of the paper is so academically dense that it seems to have been written not just in another language, but in some form of code. There is one table captioned "OLS Regression with Robust Standard Errors." Another is titled "OLS regressions with frequency weights for county size." Only the summary produced by Professor Michael Hout and the Berkeley Quantitative Methods Research Team is intelligible.

I have been following the debate, and thought about doing a reader's guide to some of the statistical issues. I am holding off, at least for now, as I am not sure most readers of this site are as obsessed by this subject as those who have left comments.  I could be wrong -- let me know if you'd really like to learn more.

For now, let me share a few web sites that have done a good job summarizing the key issues. The best, and easiest to understand, was the post by Kevin Drum on Saturday. It is short and worth reading in full, but here's the gist:

It turns out that [Berkeley Prof. Michael] Hout's entire result is due to only two outliers: Broward County and Palm Beach County. This suggests several things:

    • There was almost certainly not any systemic fraud. If there were, it would have showed up in more than just two counties.
    • The results in Broward and Palm Beach are unusual, but it's hard to draw any conclusion from just two anomalies. As Kieran says, "it seems more likely that these results show the Republican Party Machine was really, really well-organized in Palm Beach and Broward, and they were able to mobilize their vote better than the Democrats."
    • Anyone who wants to continue investigating possible fraud in Florida anyway should focus on Broward and Palm Beach.

Drum based his summary largely on more detailed critiques by Kieran Healy at Crooked Timber (see also the comments) and Columbia University Political Science and Statistics Professor Andrew Gelman. In the comments section of this humble blog, you will find a post in which George Mason University Political Science Professor Michael McDonald eviscerates the Berkeley Study as "completely worthless." Other critiques come from bloggers Newmark's Door, Rich Hasen's Election Law Blog, and Alex Strashny.

The bottom line is that the Berkeley Study's conclusions are something less than a slam-dunk. As Kaus' "Feiler Faster" theory might predict, peer-review-by-Internet has moved at lightning speed. Yes, Michael Hout and his colleagues have impressive academic credentials, but then so do Michael McDonald and Andrew Gelman [and B.D. McCullough and Florenz Plassman -- see update below]. The results in Broward and Palm Beach counties are unusual, but the fact that these two counties are among the biggest and most Democratic in Florida, with the greatest populations of Jewish voters (see NewmarksDoor), cripples the county-level analysis.  Continue investigating? Certainly, but the conclusion in the Berkeley report that "electronic voting raised President Bush's advantage from the tiny edge he held in 2000 to a clearer margin of victory in 2004" looks premature at best.
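For the statistically inclined, here is a toy demonstration of Drum's point using fabricated county data (not the Berkeley dataset): two big outliers can manufacture a regression "effect" that vanishes when they are excluded.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 fake counties: the change in Bush support has no relationship
# to e-voting at all...
evote = rng.integers(0, 2, 50).astype(float)   # 1 = e-voting county
change = rng.normal(0.0, 1.0, 50)              # change in Bush support

# ...except in two large outlier counties that happen to use e-voting.
evote[:2] = 1.0
change[:2] += 15.0

def ols_slope(x, y):
    # slope of an OLS regression of y on x (with an intercept)
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

print(ols_slope(evote, change))          # a clear positive "e-voting effect"
print(ols_slope(evote[2:], change[2:]))  # essentially zero without the outliers
```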

Again, if readers would like me to attempt to "demystify" the underlying statistical issues, leave a comment or email me. Otherwise, I'll return to issues of polling methodology.

UPDATE:  Prof. McDonald's complete write-up of his critique of the Berkeley study is now posted on his website.

BONUS UPDATE (For Kausfiles Readers): A paper by two economics professors -- B.D. McCullough of Drexel University and Florenz Plassman of SUNY Binghamton -- that rebuts the Berkeley/Hout Study point by point. Money quote: "We conclude that the study is entirely without merit and its 'results' are meaningless."

Posted by Mark Blumenthal on November 23, 2004 at 04:18 PM in Exit Polls | Permalink | Comments (15)

November 22, 2004

WP's Morin on Exit Polls

Yesterday's Washington Post had a long story on the exit polls by Richard Morin, its director of polling. If you have followed the exit polling controversy with interest, it is absolutely positively a must-read. Of course, as a "self-important blogger" (see below), I am also duty bound to note that it plows a lot of familiar ground, confirming much of what you have already read here.

Since The Washington Post subscribes to NEP's data, Morin had full access to it on Election Day and has presumably seen the confidential reports by the exit pollsters to their clients. So he based the following on access to the hard data:

The sampling errors gave a boost to Kerry, who led in all six releases of national exit poll results issued on Election Day by the National Election Pool (NEP), the consortium of the major TV networks and the Associated Press that sponsored the massive survey project...

In the first release, at 12:59 p.m. on Election Day, Kerry led Bush 50 percent to 49 percent, which startled partisans on both sides. That statistically insignificant advantage grew to three percentage points in a late-afternoon release, where it remained for hours, even as the actual count began to suggest the opposite outcome. It was only at 1:33 a.m. Wednesday that updated exit poll results showed Bush ahead by a point.

Even more curious numbers were emerging from individual states. The final Virginia figures showed Bush with a narrow lead. Exit poll data from Pennsylvania, which was held back for more than an hour, showed Kerry ahead by nine percentage points. The actual results: Bush crushed Kerry in Virginia by nine points, while Kerry took Pennsylvania by just a two-point margin.

In a review of 1,400 sample precincts, researchers found Kerry's share of the vote overstated by 1.9 percentage points -- which, unhappily for exit pollsters, was just enough to create an entirely wrong impression about the direction of the race in a number of key states and nationally.

Morin also confirms that previous years showed a similar, though typically smaller, Democratic skew.  He adds one important new wrinkle -- 1992 had a similar skew:

It's hardly unexpected news that the exit polls were modestly off; exit polls are never exactly right. The networks' 1992 national exit poll overstated Democrat Bill Clinton's advantage by 2.5 percentage points, about the same as the Kerry skew. But Clinton won, so it didn't create a stir. In 1996 and 2000, the errors were considerably smaller, perhaps just a whiff more Democratic than the actual results. That suggests to some that exit polls are more likely to misbehave when their insights are valued most -- in high-turnout, high-interest elections such as 1992 and this year [emphasis added].

Morin also answered a question that comes up repeatedly:

Perhaps the Democratic skew this year was the result of picking the wrong precincts to sample? An easy explanation, but not true. A post-election review of these precincts showed that they matched the overall returns. Whatever produced the pro-Kerry tilt was a consequence of something happening within these precincts. This year, it seems that Bush voters were underrepresented in the samples. The question is, why were they missed? [emphasis added]

This piece would have been stronger without the usual gratuitous slap at bloggers (though this one is a bit back-handed):

It's also time to make our peace with those self-important bloggers who took it upon themselves to release the first rounds of leaked exit poll results...but rather than flog the bloggers for rushing to publish the raw exit poll data on their Web sites, we may owe them a debt of gratitude. A few more presidential elections like this one and the public will learn to do the right thing and simply ignore news of early exit poll data. Then perhaps people will start ignoring the bloggers, who proved once more that their spectacular lack of judgment is matched only by their abundant arrogance.

Oh well....I've said my piece on this issue already. At least Morin says bloggers are "his new best friends." Name calling aside, the article remains worth reading in full.

UPDATE:  Morin had much more to say about the exit polls in this online chat -- even some nice words for a certain blogger.  Who knew? 

Posted by Mark Blumenthal on November 22, 2004 at 01:27 PM in Exit Polls | Permalink | Comments (22)

November 19, 2004

The Freeman Paper Revisited

I want to revisit my post from Wednesday night on the paper by Dr. Steven Freeman on the "Unexplained Exit Poll Discrepancy." First, Dr. Freeman himself posted an answer last night that many of you may have missed (even I did until tonight - I have copied his complete response at the end of this post). Second, several other comments convinced me that my last three paragraphs were less clear, and perhaps a bit more incendiary, than they should have been.

First, my main argument is that we have not yet seen any empirical evidence in the exit polls to prove the existence of vote fraud, nor any evidence that the exit poll discrepancy can be explained by any such fraud. Warren Mitofsky strengthened that argument when he told Chris Johnson, of the blog MayflowerHill, that his initial analysis showed no deviation from the discrepancy "in precincts with touch screen computers that don't leave paper trails, or any other type of machine for that matter."

However, as I wrote yesterday - and it bears repeating - the fact that the exit polls show no evidence of vote fraud does not disprove vote fraud. It may have occurred on a scale too small to be detectable by the exit polls.

I realize that I confused this issue with my own language in the third-to-last paragraph, which began, "So to summarize: Absent further data from NEP, you can choose to believe..." It sounds as if I am assuming there are only two possibilities with respect to vote fraud. Obviously, that is not the case.

Had I written that better, I might have said: "So to summarize, if you want to explain the exit poll discrepancy, absent further data from NEP, you can choose to believe..." My point is that there are two competing theories for the discrepancy: The first is that the exit polls were slightly biased to Kerry due to a consistent pattern of what methodologists call "differential non-response" that has been evident in exit polls to a lesser degree for a dozen years (Republicans were more likely to refuse to fill out the exit poll than Democrats). The second theory is that systematic and consistent vote fraud occurred in almost every state and using every type of voting equipment. The first hypothesis seems plausible to me; the second wildly improbable.

Steven Freeman and others are right that no one has conclusively proven either hypothesis, but I never suggested that I offered conclusive proof, only a far more plausible explanation for the discrepancy.

Finally, I realize I probably would have been better off omitting the word "delusional" from the last paragraph. It obviously conveyed a broader judgment than I intended, a conclusion brought home by this comment by reader Anthony England:

Apparently reasonable people cannot ask questions about this issue without being accused of promulgating wild allegations that professional pollsters have "deliberately suppressed evidence of ... fraud" or without being derided as delusional conspiracy nuts.

I did not intend to condemn all who see a "possibility of count errors" as "delusional," as Steven Freeman heard it. Nor did I intend to imply that there is anything "delusional" about simply asking questions. Hopefully, asking questions is what this site is about.   In retrospect, the word "delusional" was a mistake.  Good blogging, I am told, is about correcting mistakes as soon as possible.  My apologies.

One last thing:  Loyal reader CW made this point via email: "Machines that generate no confirming bit of paper for a voter to review are a disaster for public trust in elections."  Fraud or no fraud, on that point I completely agree.

Professor Freeman's complete response:

Hello, Mark:

I’d like to thank you for taking the time for offering this detailed critique of my paper, and more generally for your knowledgeable commentary on polling processes. Since writing the draft you read, I’ve learned a great deal about polling – in large part from reading through your site. I’ll have a revision of my paper out in a few days, which will be much stronger for having read your commentary.

Regarding your post, I’m going to respond to several points and then give a general response to what I see as the big question:

1. Data. I’m happy to make my CNN data available. I have 49 states & DC (only Virginia missing if anyone has that), although for a few I don’t have sample size. Just tell me where you’d like me to send it to or post. (My own personal and University websites have been going down from too much traffic.)

2. High degree of certainty (Your point 1). I agree that I overstated the case, should never have cited Hartmann, and did not understand the logistical challenges you explain. NEVERTHELESS, logic and evidence still indicate that exit polls should be a good basis for prediction, and although I can understand why the logistical challenges would increase the margin of error, it’s not at all clear why they should skew the results.

3. 250-million-to-1 (Your point 2). I see that I did put too much faith in stratification counterbalancing the effects of clustering, and will redo the calculations with the 30% increase. That’s a very good citation. NEVERTHELESS, as you point out, it doesn’t change the finding that **random error can be ruled out as an explanation.** This is really the main point of the first draft, because once chance is ruled out, some other explanation needs to be found.

4. Official “explanations.” (Your point 3). My key point about explanations is that all we have -- at best -- are hypotheses. Perhaps Bush-voter refusal is a better hypothesis than I gave credit for, but it still is only a hypothesis. (Too many women would be irrelevant to the CNN data. Male and female preferences are reported separately and thus automatically weighted appropriately.) On the other hand, there are also credible hypotheses, some with substantial evidence, which could have affected the tally.

I object most to the belittling dismissals of this second set of hypotheses and allegations (e.g., Manuel Roig-Franzia and Dan Keating, “Latest Conspiracy Theory -- Kerry Won -- Hits the Ether,” Washington Post, November 11, 2004; Tom Zeller, Jr., “Vote Fraud Theories, Spread by Blogs, Are Quickly Buried,” New York Times, November 12, 2004, Page 1), along with the unquestioning acceptance as “explanations” of the hypotheses and allegations about poll error.

In summary, I think that perhaps I biased my paper somewhat unfairly towards suggesting count errors as explanations, but that was probably in response to what I still see as an extreme bias at the press in dismissing them.

When you say that suggesting the possibility of count errors is delusional, perhaps you have done the same? (It seems as though you spend a lot of time on the tin foil hat circuit.)

Thinking coolly and scientifically: Is it delusional to question the Bush-voter-refusal hypothesis as conclusive without independent evidence? On the other hand, considering the scores of allegations, the history (especially in Florida), the lack of safeguards with electronic voting, the conflict-of-interest in election oversight, etc…, etc… (and now the Berkeley study) is it delusional to consider that, just possibly, even part of the discrepancy might be due to the possibility of miscount?

Yours truly, Steve Freeman

Posted by Mark Blumenthal on November 19, 2004 at 11:59 PM in Exit Polls | Permalink | Comments (19)

Exit Polls: Winston's Theory

The New Republic's Noam Scheiber passed on a theory floated by Republican pollster David Winston about the discrepancy in exit polling data that favors Democrats, not just this year, but in years past (although not as consistently):

Winston suggested that one reason the early exit poll data was so far off a couple weeks ago was that the people asking the polling questions may have unconsciously over-represented African Americans in their samples, particularly with the memory of 2000 and the fear about disenfranchising blacks so fresh in their minds. By way of explanation, Winston suggested that if you were supposed to be interviewing 35 people, of which six were supposed to be African American, and you ended up interviewing seven African Americans in order to ensure that the group was adequately represented, and if a lot of other people asking questions over-compensated in the same way, then what would seem like a marginal difference could have huge implications for the poll's overall results. (African Americans vote overwhelmingly Democratic, and you've just increased their weight in your sample by nearly 17 percent.)
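Winston's arithmetic checks out, and a couple of made-up vote splits show how a one-voter overshoot per interviewing location could move the topline (the 90/10 and 45/55 splits below are illustrative, not NEP figures):

```python
# Interviewing 7 African Americans where the sampling rate implies 6
# (out of 35) over-represents the group by 7/6 - 1, Winston's ~17%.
print((7 / 6 - 1) * 100)  # 16.7 percent

# Illustrative effect on the topline, assuming a 90/10 Democratic split
# among African Americans and 45/55 among everyone else (invented numbers).
def dem_topline(black_n, n=35):
    return (black_n * 0.90 + (n - black_n) * 0.45) / n

print(dem_topline(6), dem_topline(7))  # 0.527 vs. 0.540 -- a 1.3-point shift
```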

Mickey Kaus read Scheiber's post and asked:

But this is a testable theory, no? Did the early exit polls oversample blacks in comparison with the final vote? And did it vary from polltaker to polltaker? (Presumably some of them are more guilty of guilt than others.)

The quick answer to Kaus's question is yes, the proposition is certainly testable. I have heard, through the grapevine, that the mid-day exit polls did have unusually high percentages of women and African American voters. However, these early numbers were not weighted by turnout and, obviously, reflected only half the day's vote. I heard through the same grapevine that the female and Black percentages were better at the end of the day. Unfortunately, we do not know the racial composition of the end-of-day, just-before-poll-closing exit polls (unless Steven Freeman's sources saved those as well).

However, Winston's theory breaks down on some other important details of how the exit polls are done.

1) He assumes that NEP instructs interviewers to fill racial and demographic quotas. They do not. Interviewers are only told to solicit every 4th or 5th or 10th voter exiting the polls. I can confirm this because a few helpful "birdies" (apologies to Wonkette) sent me a copy of the NEP Interviewer Training Manual for 2004.

2) Winston assumes that interviewers are liberal. I know that NEP recruits interviewers from college campuses, but only because my "birdies" were students. That is a sample of 3 or 4 out of roughly 1,500 -- a very small sample. We really do not know much about the composition of their interviewing staff.

Appropriately, NEP places great emphasis on the "appearance and conduct" of its interviewers. Its training manual instructs: "Your appearance can influence a voter's decision to participate in the survey. Therefore, please dress appropriately...Please wear clean, conservative, neat clothing and comfortable shoes (clean sneakers are acceptable). But, please, NO JEANS and NO T-SHIRTS."

3) The exit polls can correct for refusal-driven skews in gender, race and approximate age. This is a unique aspect of exit polling: The interviewers note the gender, race and approximate age of those who refuse their request for an interview. At the end of the day, the exit pollsters can use these data to correct for non-response bias on those characteristics -- but obviously not for vote preference.
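Here is a minimal sketch of how such a refusal-based correction could work, using invented tallies (NEP has not published the details of its weighting procedure):

```python
# Hypothetical non-response adjustment: because interviewers tally the
# gender (or race, or age) of refusals, completed interviews can be
# re-weighted to match the mix of everyone approached. Invented tallies.
completes = {"men": 40, "women": 60}
refusals  = {"men": 20, "women": 10}

approached   = {g: completes[g] + refusals[g] for g in completes}
n_approached = sum(approached.values())
n_completes  = sum(completes.values())

weights = {g: (approached[g] / n_approached) / (completes[g] / n_completes)
           for g in completes}
print({g: round(w, 3) for g, w in weights.items()})
# {'men': 1.154, 'women': 0.897} -- men refused more often, so they get
# weighted up; no such correction is possible for vote preference, which
# refusers never reveal
```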

However, one hedge on all this: In writing about these issues, I realize that NEP and its forerunner VNS have disclosed very little about the timing of the various weighting and statistical corrections that they do. I know they collect hard turnout counts in their late afternoon and just-before-poll-closing reports, and I know they use this actual turnout data to weight the results released late on Election Day. I know that when all the dust clears, they can weight to correct the data for demographic non-response at the precinct level, to match precinct level vote preference to the actual count, and to similarly weight to correct the vote regionally and statewide. What I cannot confirm is when all of this happens: gradually on election night or in one big procedure after midnight? And does some additional adjustment occur in the days and weeks that follow?

Here is one more wrinkle I overlooked in my initial reading. In 1996 (according to Merkle and Edelman, 2000), VNS had its interviewers call in only half of the data collected on Election Day:

During each call, the interviewer reported the vote and non-response tallies and read in the question-by-question responses from a subsample of the questionnaires. This subsampling is carried out so that we can use the responses from the vote questions from the full sample in each precinct (i.e. the 100 interviews) for the projection models without having to spend interviewer and operator time reading in each questionnaire. In 1996, 70,119 of the 147,081 questionnaires were subsampled, and the data for all questions were read into our system.

I noticed that Warren Mitofsky told the NewsHour that NEP interviewed "almost 150,000 people nationwide on Election Day," but that the MIT/Caltech report that tallied the sample sizes posted on CNN just after the election counted 76,000 respondents when it "summed up the number of observations reported for each state poll" a few days after the election (see footnote #3). Looks to me like the half-sampling procedure continued.

So I wonder: If this procedure was repeated this year, did the half sample include gender, age, race? Were the numbers that appeared on CNN.com since Election Day weighted to correct non-response?

I'm assuming the answer to both questions is yes, but we really do not know for certain. As a student of survey research, I would certainly like to learn more. Given that these data are paid for by news media outlets and released into the public domain, we really should know more.

Source:

Merkle, Daniel M. and Murray Edelman (2000). "A Review of the 1996 Voter News Service Exit Polls from a Total Survey Error Perspective." In Election Polls, the News Media and Democracy, ed. P.J. Lavrakas and M.W. Traugott, pp. 68-92. New York: Chatham House.

Posted by Mark Blumenthal on November 19, 2004 at 12:59 PM in Exit Polls | Permalink | Comments (8)

November 18, 2004

The UCal Berkeley Report

First, one point I should have made more clearly in previous posts: The absence of significant evidence of fraud in exit polls does not prove the absence of fraud. When Warren Mitofsky says he sees no greater deviations for any particular type of voting equipment, he means he sees no differences big enough or widespread enough to be statistically meaningful. If vote fraud occurred in just a few counties in one state, the exit polls may have lacked the statistical power to detect it. That lack of power is what statisticians call “Type II Error.”
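A back-of-the-envelope illustration of that power problem, with invented numbers and the simplifying assumption that sampled precincts count equally:

```python
# Suppose fraud shifted the count 5 points in 3 of a state's 45 sampled
# precincts. Averaged across the whole sample, the signal is tiny.
shift = 5.0 * 3 / 45
print(shift)  # ~0.33 points

# With ~100 interviews per precinct and a clustered design, a statewide
# exit poll's margin of error runs on the order of 2+ points, so a
# 0.33-point signal would disappear into the sampling noise.
```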

Which brings me to the U.Cal Berkeley report (now available here -- you need to scroll down to the link, "The Effect of Electronic Voting…"). The first thing to understand about the report, in the context of my recent posts, is that it has nothing to say about exit polls. It relies instead on a statistical analysis of county-level voting patterns.

“Observer,” a commenter on my last post, made that point and also did a nice summary of the report's findings:

They are using multivariate linear regressions to explain voting patterns in Florida, and are finding a very statistically significant correlation between the presence of electronic voting and a higher percentage for Bush.

The paper has undergone some peer review prior to publication. It mentions two concerns that were raised about the methodology, and shows that when those concerns were addressed the findings did not change substantially.

Of course, now that the paper has been released on the Internet it will be subject to a much wider review by others with far more expertise in statistical modeling than I can offer.
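For readers unfamiliar with the technique, here is the bare shape of such a regression, with fabricated data standing in for the Berkeley dataset (their actual model includes many more controls):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 67  # Florida has 67 counties

# Fabricated stand-ins for the Berkeley variables
evote  = rng.integers(0, 2, n).astype(float)          # 1 = electronic voting
bush00 = rng.uniform(30.0, 70.0, n)                   # Bush % of the 2000 vote
change = 0.1 * (bush00 - 50.0) + rng.normal(0, 2, n)  # change in Bush %

# OLS fit: regress the change in Bush support on an e-voting dummy
# plus a control for the 2000 Bush vote
X = np.column_stack([np.ones(n), evote, bush00])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
print(beta[1])  # estimated "e-voting effect" -- about zero here, since
                # the fake data contain none
```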

For now, just keep in mind that it is possible that the Berkeley report detected a discrepancy that the Florida exit poll missed, given the size of the discrepancy and the number of precincts sampled.  It is also possible that the full report on the Florida exit poll will contradict the Berkeley finding.  Once again, without more specific data from NEP, we really cannot say for sure.

Corrected misspellings of Berkeley 11/19.

Posted by Mark Blumenthal on November 18, 2004 at 06:20 PM in Exit Polls | Permalink | Comments (27)

Fraud in Florida?

A quick note before anyone emails to ask whether I think these guys are "delusional" too. The release posted on RottenDenmark indicates that the team from the Research Center at U.C. Berkeley will release evidence from some sort of new study. They "will report irregularities associated with electronic voting machines may have awarded 130,000-260,000 or more excess votes to President George W. Bush in Florida in the 2004 presidential election." Details will follow at a press conference later today, though I'm guessing their study is more than a rehash of the exit poll discrepancy. This is obviously a serious effort by a serious institution; it should be interesting.

To those who saw his post this morning, I believe Mr. Kaus slightly misread MayflowerHill in reporting that Warren Mitofsky "disprove[d] the idea that the no-paper-trail electronic voting machines in Florida were rigged" [emphasis added]. MayflowerHill's account makes no specific reference to Florida.

I am guessing that Mitofsky's analysis was done at the national level. When I described this method of analysis in earlier posts (here and here), I should have pointed out a few important limitations. Dr. Fritz Scheuren, VP for Statistics at the National Opinion Research Center (NORC), summarized the most important in an email on the AAPOR listserv a few days ago (quoted with permission):

Three cautions here. First the number of precincts under each method can get very small. Take Ohio for example where 75% of the voting was still done with punch cards and only 25% was electronic.

Second, the comparisons of voting outcomes are obviously not free of preexisting precinct differences. Such differences surely confound the results in a way that would be hard to adjust for, adding still more uncertainty.

Third, for some analyses it is the precinct, and not the voter, that is the unit of analysis, and here the small number of precincts just about sinks us in any individual within-state work that relies on exit polls.

The exit polls typically sampled 40-50 precincts per state (although Florida may have had more - it certainly had more interviews than other states). To analyze the source of the discrepancy with actual votes, Mitofsky's best unit of analysis would have been precincts, not voters, so his ability to detect differences within a single state is limited.
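A rough sketch of that limitation, with invented numbers:

```python
import math

# Suppose 45 sampled precincts in a state split by equipment type, and
# precinct-level discrepancies vary with a 3-point standard deviation.
n_touchscreen, n_other = 12, 33
sd = 3.0

# standard error of the difference in mean discrepancy between the groups
se_diff = sd * math.sqrt(1 / n_touchscreen + 1 / n_other)
print(round(se_diff, 2))  # ~1.01 -- at conventional significance levels,
                          # only a gap of roughly 2 points would stand out
```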

Corrected misspelling of Berkeley 11/19.

Posted by Mark Blumenthal on November 18, 2004 at 10:14 AM in Exit Polls | Permalink | Comments (7)