
December 27, 2004

Happy New Year

Just a reminder, in case you missed my holiday post, that I am taking a break this week.  Blogging will resume in the New Year.

In the meantime, here’s an off-topic suggestion: Daniel Drezner has links to agencies that are directing relief toward those affected by the earthquake and tidal waves, as well as other related information.

See you next year…

Posted by Mark Blumenthal on December 27, 2004 at 11:05 PM in MP Housekeeping | Permalink | Comments (24)

So if Exit Polls Have So Much Error, Why Do Them?

After my post last week, which argued that the deceptive Kerry "leads" in four battleground states fell well short of the statistical significance required for a projection, especially given the 99.5% confidence level required by NEP, I got this comment from an alert reader via email:

If we really adhere to confidence intervals of the sort you (or NEP) propose of somewhere between 5 and 8%, exit polls are COMPLETELY USELESS. Eighteen states would not have been able to have been called (their margins were within the confidence intervals), and these 18 states were the only interesting states. An idiot who looked at the column of safe states for each candidate [and then] flipped a coin for the battleground states would do as well as exit pollsters.

Actually, he has a point. The dirty little secret is that the exit polls alone are almost never used to project winners in seriously contested states, or (as the reader put it) in states where the winner was not already perfectly obvious 24 hours before the election.

Don't take my word for it. Consider the states the networks called at poll closing time. My friends at National Journal's Hotline provided a listing they compiled of when the networks called each of the "top purple states." Of these, only Alaska and West Virginia were called at poll closing time on the basis of exit polls alone (WV was called at that time by only three networks), and few still considered those states truly competitive by Election Day.  All of the true "battleground" states were called later in the evening on the basis of actual results.

[Table: when the networks called each of the top purple states]

Or you could have asked Warren Mitofsky a few years ago. In January 2001, he made the following recommendation to CNN for future Election Night projections:

In order to project the winner in an election based solely upon exit poll data, the standard for an estimated vote margin will be increased from 2.6 standard errors to four standard errors - i.e. a Critical Value of 4.0 instead of 2.6 will be necessary for a "Call Status" based entirely on exit poll data [page 51 of the pdf].

A "critical value" is a number that specifies the level of confidence in the formulas that calculate the margin of error. In this case, a critical value of 4.0 is the equivalent of 99.997% confidence level and much wider margins of error than those used this year, which were based on a 99.5% confidence level. In plain English, Mitofsky's recommendation (which the networks apparently did not adopt) was to use exit polls to project winners only when they show a candidate cruising to a huge landslide victory.

So why do the networks shell out millions of dollars for exit polls? First, they do care about making projections as quickly as possible in the non-competitive states. While it is true that you or I could probably have "called" states like Indiana, Oklahoma or Massachusetts the day before the election with complete accuracy, news organizations are supposed to report only what they "know," not what they think they know. The exit polls in all the obvious states, with their huge, easily statistically significant margins, provide hard confirmation of what everyone expects and thus provide a factual basis for the early projections.

Second, they use the exit polls to fill airtime on election night with analysis about why the candidates won and lost. Or as the CNN post-election report put it less cynically in 2001: "Exit polling provides valuable information about the electorate by permitting analysis of such things as how segments of the electorate voted and what issues helped determine their vote." Four years ago, that same report argued against using exit polls to project winners, but endorsed the use of exit polls for analytical purposes. They concluded (p. 8 of the pdf):

Total elimination of exit polling would be a loss, but its reliability is in question. A non-partisan study commission, perhaps drawn from the academic and think-tank communities, is needed to provide a comprehensive overview and a set of recommendations about exit polling and the linked broader problems of polling generally.

Perhaps it is time to reconsider that recommendation.

[FAQ on Exit Polls]

Posted by Mark Blumenthal on December 27, 2004 at 06:01 PM in Exit Polls | Permalink | Comments (4)

So WHY Were the Exit Polls "Wrong?"

This post is another summary that mostly ties together various items I've covered separately, but also adds some new material.

We know that the exit polls had an average error favoring John Kerry of 1.9% per precinct. What explains the error? There are many theories that are, in my view, far more plausible than the notion of widespread fraud or problems with the count. Unfortunately, we lack definitive proof of a cause, both because NEP has not released any of its internal analysis and because such proof is often elusive. Let's review what we do know.

Warren Mitofsky and Joe Lenski, the researchers who conducted the exit polls on behalf of the National Election Pool (NEP), have so far been circumspect in their public comments. They have offered theories, but stopped short of claiming definitive proof for any explanation of the error favoring Kerry. Here is a sampling of their on-the-record speculation:

Kerry was ahead in a number of the -- in a number of the states by margins that looked unreasonable to us. And we suspect that the reason, the main reason, was that the Kerry voters were more anxious to participate in our exit polls than the Bush voters...in an exit poll, everybody doesn't agree to be interviewed. It's voluntary, and the people refuse usually at about the same rate, regardless of who they support. When you have a very energized electorate, which contributed to the big turnout, sometimes the supporters of one candidate refuse at a greater rate than the supporters of the other candidate. (Warren Mitofsky on The News Hour, November 5, 2004)

In addition, some inquiry into what went wrong with the exit polls is also necessary. Thankfully, Lenski told me that such a probe is currently underway; there are many theories for why the polls might have skewed toward Kerry, Lenski said, but he's not ready to conclude anything just yet. At some point, though, he said we'll be able to find out what happened, and what the polls actually said. (Farhad Manjoo for Salon.com, November 12, 2004)

One thing [Warren Mitofsky] confirmed to me is that the average deviation to Kerry in the completed version of the exit poll is estimated at +1.9%. When asked if the full 1.9% deviation could be explained by non-response bias (Kerry voters being more likely to complete the exit poll than Bush voters), he said, "It's my opinion, but I can't prove it." He went on to say that it would be an impossible thing to "prove" categorically because there exist an infinite number of variables that could have a micro-impact on the exit poll which could combine for a statistically significant impact. These factors ranged from the weather to the distance from the polling place some of his poll takers were forced to stand. He is also trying to determine whether there is a statistically significant correlation between certain types of precincts and the non-response deviation. Again, right now he feels the most reasonable and logical explanation of the average 1.9% deviation for Kerry was non-response bias. (Blogger Chris Johnson of Mayflower Hill, November 17, 2004).

Whether the internal NEP analysis ever sees the light of day remains an open question. The networks have obviously resisted public disclosure to date. Four years ago, a Congressional investigation into the election night snafus helped motivate several networks to release internal reports. That pressure is lacking this year, so we will have to wait and see.

Until then, we can make some educated guesses about the analysis they have done or are doing. First, the NEP analysts can examine several potential sources of error to see if they contributed to any systematic bias for Kerry. By examining actual vote returns they can identify errors in:

  • The random samples of precincts
  • The hard counts of turnout obtained by interviewers
  • Data entry or tabulation
  • Telephone surveys of absentee voters (in 13 states listed here)
  • Absentee voting not covered by the exit polls (in 37 states and DC)

While all of these factors could have introduced error, with the possible exception of absentee ballots, it is hard to imagine how any could have contributed to a systematic bias favoring Kerry. Moreover, these problems are relatively easy to identify once the full count is available. As such, I'm assuming that if any of these factors could explain the errors favoring Kerry, we would have heard about it already.

The second step is to look at the error that remains, something the analysts refer to as "within precinct error." The most likely culprit is some combination of response and coverage error. Response error refers to randomly selected voters who did not participate in the survey; coverage error refers to voters who were not included in the sample because they exited the polls while the interviewer was away or did not pass the interviewer when they exited.

The good news is that the exit pollsters have more tools at their disposal to help study response and coverage error than other survey researchers. Since interviewers are face to face with potential respondents, they keep a tally of the gender, race and approximate age of refusals. Most important, the NEP analysts can calculate the difference between the poll and the actual count within each precinct. They know quite a bit about each precinct: how it voted, the type of voting equipment used at the polling place, the number of exit doors at the polling place, whether the polling place officials were cooperative and how far the interviewer had to stand from the exit. They also know the age, gender and level of experience of the interviewer at each precinct. They can use all of these characteristics and more to see if any tend to explain the error in Kerry's favor.

The bad news is that it is difficult to say much about the voters who refused to be interviewed, because...well...they were not interviewed. If the NEP analysts are lucky, they will be able to draw some inferences from the characteristics of the precincts where the error was greatest, but otherwise, explanations may be elusive, even with all the data available.

Those of us not privy to the internal investigation are left to speculate about the most plausible theories. Here are some educated guesses, but keep in mind that these are just hypotheses:

Bush voters were more reluctant to be interviewed - As summarized in an earlier post, Republicans and conservatives have long reported less trust of the national media and, as such, may be slightly less likely to want to participate in the exit polls. The NEP interviewers and materials prominently display their big network sponsorship.

Kerry voters were more likely to volunteer to be interviewed - Exit polls have a potential "back door" that other surveys lack. On telephone surveys, a respondent cannot possibly volunteer to be interviewed. In an exit poll, the interviewer is supposed to pick every third or fifth exiting voter, but others may still approach and express an interest in being interviewed. Interviewers are instructed to deny such requests, but only training and diligence of the interviewers will prevent deviation from the sampling procedure.

This year's NEP exit poll interviewers were trained via telephone and most worked for just one day without supervision. With roughly 50 interviews per precinct and a 50% response rate, it would only take an average of one non-random "volunteer" respondent favoring Kerry per precinct to create a 2% error.
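
A quick back-of-the-envelope calculation shows how little it takes. This sketch (plain Python; the even 50/50 split is an illustrative assumption) works through the arithmetic for a single precinct:

    # One precinct: 50 properly sampled interviews, a true 50/50 split,
    # plus a single non-random "volunteer" respondent who backs Kerry.
    kerry, bush = 25, 25
    kerry += 1                      # the volunteer

    n = kerry + bush                # 51 interviews
    margin = (kerry - bush) / n     # Kerry's margin in the poll
    print(f"Kerry {kerry / n:.1%}, Bush {bush / n:.1%}, margin {margin:+.1%}")
    # Kerry 51.0%, Bush 49.0%, margin +2.0% -- a 2-point error from one
    # extra respondent per precinct.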

Now consider: The Democratic National Committee waged an apparently successful campaign to get the 5 million members on its email lists to vote for Kerry in unscientific online polls. A CBS News report found that John Kerry ran "about 20 points better" in non-scientific online polls than in traditional, random sample surveys (thanks to alert reader BB for the link). Is it possible that some of this enthusiasm to respond to online surveys carried over to Election Day and made some partisan Democrats more apt to volunteer to take the exit polls?

Bush voters were more likely to avoid exit pollsters who were forced to stand among electioneering partisans - Overzealous election officials who force exit pollsters to stand 100 feet or more from the polling place consistently present exit pollsters with their biggest logistical challenge. At that distance the interviewers cannot cover all exiting voters and, worse, often get trapped standing among electioneering partisans - a gaggle most voters try to avoid.

Now consider that several news accounts suggest that Democratic campaigns and groups like ACT and Moveon.org put far greater emphasis on Election Day visibility than their Republican counterparts. Matt Bai's post-election piece for the New York Times Sunday Magazine noted the puzzlement of Democratic organizers that their "field offices weren't detecting any sign of Bush canvassers on the streets or at the polls." Is it possible that exit poll interviewers found themselves frequently standing among Democratic partisans that exiting Republican voters might want to avoid?

Again, all of this is just speculation, and definitive proof may be elusive even to those with access to the raw data. However, some combination of the above most likely caused the exit poll errors that favored Kerry.

By comparison, the alternative explanation for the exit poll "discrepancy" - widespread nationwide voter fraud - is wildly implausible. Consider the preliminary finding that Warren Mitofsky shared with blogger Chris Johnson:

One possibility [Mitofsky] was able to rule out, though, is touch screen voting machines that don't leave any paper trail being used to defraud the election. To prove this, he broke down precincts based on the type of voting machine that was used and compared the voting returns from those precincts with his own exit polls. None of the precincts with touch screen computers that don't leave paper trails, or any other type of machine for that matter, had vote returns that deviated from his exit poll numbers once the average 1.9% non-response bias was taken into account.

In other words, the size of the "discrepancy" between the exit polls and the vote did not vary by the type of voting equipment used at the precinct. Now if you believe there were problems in the count that were limited to a few counties or precincts, then the exit poll "discrepancy" has little relevance. Even if such problems occurred, the NEP exit polls lacked the statistical power to detect small errors within individual counties or precincts.
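
For what it's worth, the breakdown Mitofsky describes is easy to sketch. Given precinct-level data -- which NEP has not released, so the records and values below are entirely hypothetical -- the analysis amounts to grouping the within-precinct error by equipment type and comparing the group averages:

    from collections import defaultdict

    # Hypothetical precinct records: (equipment type, exit poll Kerry margin
    # minus counted Kerry margin, in percentage points).
    precincts = [
        ("touch screen, no paper trail", 2.1),
        ("touch screen, no paper trail", 1.6),
        ("optical scan", 2.0),
        ("punch card", 1.8),
        ("lever machine", 2.2),
    ]

    errors_by_type = defaultdict(list)
    for equipment, error in precincts:
        errors_by_type[equipment].append(error)

    for equipment, errors in errors_by_type.items():
        mean = sum(errors) / len(errors)
        print(f"{equipment}: mean error {mean:+.1f} points (n={len(errors)})")

    # If fraud were concentrated in one equipment type, its mean error would
    # stand out; Mitofsky reported no such deviation once the average 1.9%
    # bias was taken into account.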

However, if you believe the error in the exit polls presents evidence of widespread fraud, you need to explain how such a fraud could have been committed consistently across all types of voting equipment and in all the battleground states. You would also have to reconcile that theory with New Hampshire, where the exit polls overstated John Kerry's support by 5%, yet a Ralph Nader-sponsored recount found no noteworthy discrepancies. A Nader spokesman concluded, "it looks like a pretty accurate count here in New Hampshire."

None of this makes much sense. The more plausible explanation is that a problem evident to some degree in the exit polls every year since 1990 - a problem most likely caused by some combination of response and coverage error -- simply got worse.

[FAQ on Exit Polls]

Posted by Mark Blumenthal on December 27, 2004 at 06:00 PM in Exit Polls | Permalink | Comments (21)

December 25, 2004

Season's Greetings!

First, I just wanted to let everyone know that I will be taking a much-needed break from blogging for the next week. I’ll be back on January 3.

Second, I have two more posts on exit polls that I could not finish before the holiday. By the magic of “post scheduling,” the elves at Typepad will bring you two more items on exit polling tomorrow. In the New Year, I resolve to move on to other subjects.

Finally, my family wishes yours a Merry Christmas, a joyous holiday season and a happy New Year.

[Photo: Deena and Sam]

Posted by Mark Blumenthal on December 25, 2004 at 01:37 PM in MP Housekeeping | Permalink | Comments (0)

December 24, 2004

Have the Exit Polls Been Wrong Before?

I have a short backlog of posts on the exit polls I've been working on this week, intended mostly to summarize information I've covered previously and make it more accessible via the FAQ. However, there is new information here, as well as in the posts that will follow.

One of the odd bits of received wisdom I keep hearing about the exit poll controversy is that up until this year, the exit polls were "always right." If so, then this year's errors seem "implausible," and wild conspiracy theories of a widespread fraud in the count somehow seem more credible. The problem with this reasoning is that exit polls have been similarly "wrong" before, though perhaps not to the same degree or with the same consistency.

Here is the documentation on previous errors. First, from the Washington Post's Richard Morin:

The networks' 1992 national exit poll overstated Democrat Bill Clinton's advantage by 2.5 percentage points, about the same as the Kerry skew

Warren Mitofsky, who ran the 2004 exit poll operation along with partner Joe Lenski, wrote the following in the Spring 2003 issue of Public Opinion Quarterly (p. 51):

An inspection of within-precinct error in the exit poll for senate and governor races in 1990, 1994 and 1998 shows an understatement of the Democratic candidate for 20 percent of the 180 polls in that time period and an overstatement 38 percent of the time...the most likely source of this error is differential non-response rates for Democrats and Republicans.

From the internal CNN report on the network's performance on Election Night 2000 (p. 48 of pdf):

Warren Mitofsky and Joe Lenski, heads of the CNN/CBS Decision Team, told us in our January 26 interview with them that in VNS's use of exit polls on Election Day 2000, the exit polls overstated the Gore vote in 22 states and overstated the Bush vote in 9 states. In the other 19 states, the polls matched actual results. There was a similar Democratic candidate overstatement in 1996 and a larger one in 1992.

In short, Mitofsky and Lenski have reported Democratic overstatements to some degree in every election since 1990. Moreover, all of Lenski and Mitofsky's statements were on the record long before Election Day 2004.

Of course, those errors were apparently bigger and more consistent this year. According to an internal NEP report leaked to the New York Times, this year's "surveys had the biggest partisan skew since at least 1988, the earliest election the report tracked." However, in some states, the errors in 2000 were still quite large. Consider this comment from Joe Lenski to CNN on December 12, 2000 (p. 48 of pdf), describing the table also copied below: 

The second group contains five states that had stupendously bad exit poll estimates. Here is a comparison of the final best survey estimate at poll closing with the final actual results for these five states... As you can see the exit polls in these five states were off by between 7 and 16(!!!) [Emphasis in original]

[Table: poll-closing exit poll estimates vs. actual results in the five states]

The exit poll errors four years ago led Mitofsky to tell the CNN investigators, "The exit poll is a blunt instrument," and Lenski to add, "the polls are getting less accurate" (p. 26 of pdf). They recommended "raising the bar" on projections made from exit polls: "The proposed changes result from a belief that exit polling is 'less accurate than it was before' and that 'we should take exit poll data with caution in making calls,' said Lenski" (p. 27).

All of this led the authors of the internal CNN report -- Joan Konner, James Risser, and Ben Wattenberg - to conclude (p. 3, 7):

Exit polling is extremely valuable as a source of post-election information about the electorate. But it has lost much of the value it had for projecting election results in close elections...[Their recommendation to CNN:] Cease the use of exit polling to project or call winners of states. The 2000 election demonstrates the faults and dangers in exit polling. Even if exit polling is made more accurate, it will never be as accurate as a properly conducted actual vote count.

[FAQ on Exit Polls]

Posted by Mark Blumenthal on December 24, 2004 at 10:23 PM in Exit Polls | Permalink | Comments (5)

December 23, 2004

Oh Those Pesky Filtered Questions

Josh Marshall flags an error in Wednesday's Washington Post poll story on Social Security (conducted jointly with ABC).  The story correctly reported that "The president also has at least general support from 53 percent of the public for the concept of letting people control some of their contributions to invest in the market."  The problem came in the paragraph that followed: 

It is on the specifics that Bush faces problems. Support dropped to an even split when people were told that the cost of the transition to a new program could reach $2 trillion over time, as some forecasts project.

Marshall initially reported on an earlier version of the story, which cited specific numbers showing that support "drops to 46% to 47%...when a price tag is put to the plan."  Those are the numbers you get when you access the "complete data" for this question on the Post's website.  However, as Josh noted, that page shows the question was asked only of those who support the stock market option.  So the 46% support with $2 trillion in borrowing was among the 53% who initially supported the program.

Again, as Marshall and his alert readers noticed, the correct characterization is that support for the President's plan fell to only 24% (46% of 53%), and opposition increased to 69%, when respondents learned of the plan's $2 trillion price tag.  ABC's release and especially its compilation of full results (which shows the "net" calculation combining the two questions) make this crystal clear: 

[Image: ABC/Post results showing net support for private accounts with the $2 trillion price tag]

Note that another question in the survey requires similar math.  The Post story reported correctly that "62 percent said they would not participate in such a program if it meant their retirement income would go up or down depending on the performance of their stock picks -- which is the essence of Bush's plan."

Thirty-seven percent (37%) said they would participate.  The ABC release mentions a follow-up question (which the Post omits):

Among the minority who say they would participate, eight in 10 say they'd invest "some" or "just a little" of their Social Security funds in the market. Just 19 percent say they'd put in all or most of their available assets [emphasis added].

The full results among the 37% who answered the question:  7% said they would put "all" their Social Security money in the stock market, 11% "most," 57% "some" and 23% "just a little."   Thus, another entirely accurate characterization of the same numbers: Only 7% of Americans say they would invest "all" or "most" of their Social Security funds in the stock market if it meant their retirement benefits would go up or down with the market.
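
Since the "base" arithmetic is exactly where the Post story went astray, here is a minimal sketch of the two net calculations (plain Python; the percentages come straight from the ABC/Post releases quoted above):

    # Support for private accounts, before and after the $2 trillion cost.
    support_concept = 0.53        # back the stock market option
    support_given_cost = 0.46     # still back it -- asked only of supporters
    print(f"net support with price tag: {support_concept * support_given_cost:.0%}")
    # -> 24%

    # Willingness to invest "all" or "most" of Social Security funds.
    would_participate = 0.37      # say they would take part in the program
    all_or_most = 0.07 + 0.11     # "all" + "most" -- asked only of participants
    print(f"net 'all or most': {would_participate * all_or_most:.0%}")
    # -> 7%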

As Josh might say, that's a pretty small number. 

All of this suggests a few important lessons: 

Always check who was asked the question and how the pollster calculated the percentage (e.g., what was the "base"?).  Follow-up questions like the ones asked above are very common in opinion surveys, as is the sort of mischaracterization in the Post story.

My best tip to readers is to do what pollsters do. If you can, read the full questionnaire with results filled in before you read the poll story.  Unfortunately, this is not always possible, as many media organizations do not release full results even on the web, but many do.  Here is a list of organizations that typically release "filled-in" questionnaires or the equivalent (the links take you to archives of past data, otherwise links to complete data are usually included in stories posted online.  RealClearPolitics is another good source for links to complete releases):

It is tempting to take this opportunity to share my pet peeves about the way some organizations provide these releases only to paid subscribers or about the way virtually all withhold basic demographic results and screen questions.  However, it's the holiday season, so I'll put it this way:  I wish every public survey organization would resolve to emulate in 2005 the releases of the New York Times poll (try this link, or if that doesn't work, the "multimedia interactive feature" link in the upper right column on this page).  They include the verbatim text of every question, including demographics and complete time series data when available. 

 

Posted by Mark Blumenthal on December 23, 2004 at 10:32 AM in Polls in the News | Permalink | Comments (1)

December 21, 2004

NAES Reports on Hispanic Voters

The National Annenberg Election Survey (NAES) has just weighed in with its massive 81,422-interview rolling survey on the issue of Hispanic voter preference in the 2004 elections. They combined their rolling tracking surveys for the eight weeks before the election and the two weeks after to obtain a sample of 907 Hispanic registered voters. They compared the vote preference to that measured among Hispanic registrants over the same time period in 2000. The money quote:

There has been recent disagreement over how well Bush did among Hispanics. The television network-Associated Press national exit poll taken on Election Day gave him 44 percent of their votes, compared to 35 percent in 2000. Then a study by Ana Maria Arumi of NBC News, aggregating the 51 individual 2004 exit polls conducted in every state for the same sponsors concluded that the Bush share was 40 percent. But Antonio Gonzalez, president of the William C. Velasquez Institute, a research group that deals with political issues, contended an exit poll he conducted showed Bush got only 33 percent. 

The Annenberg data, which gave Bush 41 percent, cannot resolve the dispute. But it suggests strongly that Bush made significant gains whose precise magnitude is uncertain. The margin of error for the 2004 Annenberg data was plus or minus three percentage points [emphasis and links added].

Note the caveat -- This is a telephone survey of registered voters, some of whom did not vote:

Through both ten week periods, the degree of Hispanic support for each of the major party candidates remained quite level. But there is no way of knowing which pre-election respondents voted as they expected, or voted at all. Nor are post-election recollections as reliable as what people tell exit pollsters on Election Day; there is usually a tendency for more respondents to say they voted for the winner than actually did so.

Nonetheless, the report makes strictly apples-to-apples comparisons of interviews done over the same time period in its 2000 and 2004 surveys, and this sample has none of the clustering issues inherent in exit polling. The unusually large number of interviews helps show where Bush's gains (from 35% in 2000 to 41% this year) occurred. For example, those increases were greatest among Hispanic men and those living in the South and Northeast.

Today's report also includes results from 3,592 Hispanic registered voters interviewed over the course of the year. It provides results for a variety of major issues and political attitudes broken out by national heritage: Mexico, Puerto Rico, Cuba, Spain, Central America, South America. I have not had a chance to do more than skim the tables, but this report provides an unmatched resource.

I am a huge fan of the Annenberg Survey, for reasons I resolve to explain at some point in the New Year. Two things to note: Their sampling and telephone interviewing methodologies are absolutely top-notch, and their massive rolling-average tracking program is tailor-made for exactly this sort of analysis. The report and their methodology page explain it all.

Posted by Mark Blumenthal on December 21, 2004 at 02:45 PM in The 2004 Race | Permalink | Comments (3)

December 19, 2004

What About Those German Exit Polls?

A commenter asked last week, "why are exit polls so much more accurate in Europe?" This is a question worth considering, because all surveys are not created equal. Differences in sample sizes, response and coverage rates and the experience and training of interviewers can tell us a lot about the potential for survey error.

I cannot claim personal expertise in European exit polls, but a Google search quickly turned up a rather contrary opinion published earlier this year by the ACE project (an acronym for Administration and Cost of Elections, a joint project funded by the UN and the US Agency for International Development):

[Exit poll] reliability can be questionable. One might think that there is no reason why voters in stable democracies should conceal or lie about how they have voted, especially because nobody is under any obligation to answer in an exit poll. But in practice they often do. The majority of exit polls carried out in European countries over the past years have been failures [emphasis added].

Presumably, the newfound belief in the accuracy of European exit polls comes from Steven Freeman's evolving paper, "The Unexplained Exit Poll Discrepancy." Freeman placed great emphasis on the "highly reliable" results from Germany. He showed results from exit polls conducted by the Forschungsgruppe (FG) Wahlen on behalf of the ZDF television network that were off by only 0.26% over the last three national elections.

Freeman also cited another "consistently accurate" survey, this one conducted by student volunteer interviewers in Utah. In 2004, according to Freeman, the Utah Colleges Exit Poll came within 0.2% of the actual result for the 2004 presidential race. "Consistently accurate exit poll predictions from student volunteers," Freeman concluded, "including in this presidential election, suggest we should expect accuracy, within statistical limits, from the world's most professional exit polling enterprise."

Four weeks ago, after I posted my original critique of his paper, Freeman called seeking further input. My advice included the strong suggestion that he check on the methodologies used for the German and Utah polls before implying that the NEP surveys were comparable. As of this writing, Freeman's paper still lacks any reference to basic methodological information regarding the German and Utah exit polls.

I made some inquiries about both, which I summarize below. For the sake of comparison, let's begin with the methodological details of the National Election Pool (NEP) exit polls:

NEP Exit Polls

  • State exit polls sampled 15 to 55 precincts per state, which translated into 600 to 2,800 respondents per state. The national survey sampled 11,719 respondents at 250 precincts (see the NEP methodology statements here and here).
  • NEP typically sends one interviewer to each polling place. They hire interviewers for one day and train them on a telephone conference call.
  • The interviewers must leave the polling place uncovered three times on Election Day to tabulate and call in their results. They also suspend interviewing altogether after their last report, with one hour of voting remaining.
  • The response rate for the 2000 exit polls was 51%, after falling gradually from 60% in 1992. NEP has not yet reported a response rate for 2004.
  • Interviewers often face difficulty standing at or near the exit to polling places. Officials at many polling places require that the interviewers stand at least 100 feet from the polling place along with "electioneering" partisans.

German Exit Polls (by FG Wahlen)

Dr. Freeman's paper includes data from exit polls conducted by the FG Wahlen organization for the ZDF television network. I was able to contact Dr. Dieter Roth of FG Wahlen by email, and he provided the following information:

  • They use bigger sample sizes: For states, they sample 80 to 120 polling places and interview 5,000 to 8,000 respondents. Their national survey uses up to 22,000 interviews.
  • They use two "well trained" interviewers per polling place, and cover voting all day (8:00 a.m. to 6:00 p.m.) with no interruptions.
  • Interviewers always stand at the exit door of the polling place. FG Wahlen contacts polling place officials before the election so that the officials know interviewers will be coming. If polling place officials will not allow the interviewers to stand at the exit door, FG Wahlen drops that polling place from the sample and replaces it with another sample point.
  • Their response rate is typically 80%; it was 83.5% for the 2002 federal election.
  • The German equivalent of the US Census Bureau conducts its own survey of voters within randomly selected polling places. This survey, known as "Repräsentativ-Statistik," provides high quality demographic data on the voting population that FG Wahlen uses to weight its exit polls.

Dr. Roth also added the following comment: "I know that Warren Mitofsky's job is much harder than ours, because of the electoral system and the more complicated structure in the states."

Utah Colleges Exit Poll

  • The Utah poll typically consists of 90 precincts and between 6,000 and 9,000 respondents. Compare that to the NEP exit poll for Utah in 2004, which sampled 15 precincts and 828 respondents.
  • Interviewers are student volunteers from local universities who attend an in-person training seminar.
  • They assign eight student interviewers to each precinct working in two shifts of four each. The larger number of interviewers allows coverage of all exits at each polling place all day long.
  • The response rate has typically been 60%, although the 2004 survey attained a response rate of roughly 65%.

In short, there are sound methodological reasons why the German and Utah exit polls typically obtain more accurate results: They do more interviews, attain better coverage and better response rates, and use arguably better-trained interviewers.

Of course, the lower response rates and coverage problems, in and of themselves, do not explain why the NEP exit polls had a slight Kerry bias this year. If the error was in the surveys, then something made Kerry supporters either slightly more cooperative or more likely to be interviewed. The fact that response and coverage rates were lower did not cause the error; it just made the potential for error that much greater.
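
To see how easily a small gap in cooperation rates produces an error of this size, consider a quick sketch (plain Python; the 53% and 50% response rates are illustrative assumptions, not NEP figures):

    # An evenly split electorate, with Kerry voters slightly more willing
    # to be interviewed than Bush voters.
    true_kerry, true_bush = 0.50, 0.50
    resp_kerry, resp_bush = 0.53, 0.50       # hypothetical completion rates

    done_kerry = true_kerry * resp_kerry
    done_bush = true_bush * resp_bush
    observed = done_kerry / (done_kerry + done_bush)

    print(f"observed Kerry share: {observed:.1%}")   # 51.5% in a tied race
    # A 3-point gap in cooperation turns a dead heat into an apparent
    # 3-point Kerry margin (51.5 - 48.5) -- the same order of magnitude
    # as the overstatement seen in 2004.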

12/20-corrected misspelling of Wahlen in original

[FAQ on Exit Polls]

Posted by Mark Blumenthal on December 19, 2004 at 10:49 PM in Exit Polls | Permalink | Comments (31)

December 15, 2004

Measurement Error...in the Count

A quick break from exit polls...

Alert reader JW, a resident of Washington State, asks this interesting question, highly relevant to the ongoing recount in that state's race for Governor:

Can a vote that is only "decided" by 42 votes out of 2,800,000 ever really be accurate? We're going into our 2nd recount, and I bet that the various totals given by each recount approximate the variation that exists in sampling polls if your sampling size was 2.8 million. Does anyone ever talk about this thing?

Actually, some have. When the presidential recount in Florida came down to a margin of a few hundred votes either way, Johns Hopkins University President William R. Brody penned a Washington Post op-ed piece on this very point:

But before we rush to conclude that a recount will resolve any closely contested election, consider this simple fact: A plurality of 300 votes out of nearly 6 million votes cast constitutes a margin of only 1 in 20,000. If we wish to recount the votes to determine whether the number 300 is indeed correct, we must be accurate in the recount process to much better than 0.005 percent.

Put another way, if you or I were asked to recount votes in one of the Florida precincts and were given a stack of 20,000 votes to count, we would have to perform the recount with zero errors! Just one error in the 20,000 ballots would be equivalent to the 300-vote margin that Gov. George W. Bush finished with in the recount.

I don't know about others, but I can assure you that there is no way I could count 5,000 ballots, let alone 20,000, and maintain 100 percent accuracy. Simply distract me for one second while I'm counting and I could easily make a mistake.

We, the American people - and in this case, most especially the media - have tacitly assumed that voting is an intrinsically accurate process. But even in the absence of ballot tampering, no voting process can be expected to be 100 percent error free...

All of which raises an important question. What is the intrinsic accuracy of the voting process, of the voting machines and tallying methods? I suspect that most people would be happy to learn that vote counting was accurate to 0.05 percent. But in 6 million votes, that error rate would translate into a 3,000-vote margin of error - clearly not accurate enough for this election. If we knew the error rate, we could perhaps put into a statute the requirement for a runoff election whenever the margin was less than the voting error rate.

Now consider this issue in terms of surveys. We have been discussing sampling error in recent posts: the random variation that comes from drawing a random sample rather than interviewing the entire population. In tabulating the vote there is no sample, hence no sampling error, yet small tabulation errors still occur. Brody wrote about such errors four years ago; they certainly remained prevalent this year. In surveys, these inevitable tabulation errors are usually random and offsetting. Absent strong evidence to the contrary, I assume most such errors in the vote count were similarly random.
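
A small simulation makes the point concrete. This sketch (plain Python) borrows Brody's 0.05 percent error rate and adds my own simplifying assumption that each miscount is equally likely to favor either candidate:

    import random

    BALLOTS = 6_000_000
    ERROR_RATE = 0.0005                      # Brody's 0.05 percent
    MISCOUNTS = int(BALLOTS * ERROR_RATE)    # 3,000 miscounted ballots

    def net_shift() -> int:
        # Each miscount randomly helps one side or the other, so most
        # errors offset; the question is how far the net total can drift.
        return sum(random.choice((-1, 1)) for _ in range(MISCOUNTS))

    shifts = sorted(abs(net_shift()) for _ in range(1_000))
    print(f"median |net shift|: {shifts[500]} votes")   # roughly 35-40 votes
    print(f"95th percentile:    {shifts[950]} votes")   # roughly 105-110 votes
    # Even random, offsetting errors can move a count by more than the
    # 42-vote margin in Washington's governor's race.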

[Because someone will ask: Yes, I have seen claims that "100% of the reports of improper vote tabulation" benefited George Bush, but so far at least, I have not seen systematic evidence beyond the anecdotal. If you know of any such effort, or any effort to debunk these claims, please post a comment.]

Another source of error suggested in the Florida recount, but not touched on by Brody, was a broader conception of what survey researchers call "measurement error." We know that four years ago, many Florida voters went to the polls intending to cast a vote for one candidate, but did not ultimately have their choice recorded as intended because of confusing "butterfly" ballots and/or improperly punched chads that voided their ballots. Obviously, there was considerable debate -- legal and political -- over whether a recount could have corrected some of those errors. Whatever side of that debate you were on, it is clear that there was some fuzziness in the count then and now.

If measurement error can be a factor in something as seemingly straightforward as balloting for president, imagine how important it can be on more complex issue questions that frequently show up on opinion polls. Ideally, a survey researcher will try to minimize measurement error by "pre-testing" questions - do they measure the things we want them to? The Mystery Pollster assumes the issue of potential "measurement error" will come up again and again as we broaden our focus a bit in 2005.

Posted by Mark Blumenthal on December 15, 2004 at 04:23 PM in Measurement Issues | Permalink | Comments (6)

December 14, 2004

Exits: Were They Really "Wrong?"

Last week's posting of more detailed information on the sampling error of exit polls by the National Election Pool (NEP) allows for a quick review of the now well-established conventional wisdom that the "exit polls were wrong."

Let's first set aside the mid-day numbers that were widely leaked on the Internet but never intended as the basis for projections (numbers that even the exit pollsters recognized as flawed in some states -- see this earlier post for explanations on the differences among the many estimates provided by the exit pollsters on Election Day). Let us also stipulate that at least one state estimate of Hispanic voters (Texas) was obviously wrong, given the correction issued by NEP.

The conclusion that the exit polls were wrong is essentially about two possible failings:

  • That the end-of-day numbers seemed to predict a Kerry victory.
  • That the end-of-day numbers showed Kerry doing consistently better than he actually did.

What is the reality given what we now know about the exit poll's sampling error?

1) Did the just-before-poll-closing exit polls show a consistent and statistically significant "error" in Kerry's favor?

Yes, but that error has been exaggerated. Here is what we know:

  • An internal NEP review of 1,400 sample precincts showed Kerry's share of the vote overstated by an average of 1.9 percentage points. As far as I can tell, no one from NEP questions the statistical significance of that overstatement.
  • The before-poll-closing exit poll results posted by Steven Freeman (and included in the updated report by the Cal Tech / MIT Vote Project) show errors in Kerry's favor in 43 of the 51 surveys (the states plus DC). These overstate Kerry's vote by an average of 1.6 percentage points. If the surveys had been perfectly executed with perfectly random samples (an all but impossible assumption under real world conditions), the pattern of errors should have been the same as if we had flipped a coin 51 times: about half favoring Kerry, about half favoring Bush. The probability of 43 of 51 perfect surveys favoring Kerry by chance alone is less than 0.001%. Of course, this statistic only tells us that the surveys were imperfect. It says nothing about the cause or magnitude of the error.
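
The "less than 0.001%" figure in the last bullet is a simple sign test: the probability that at least 43 of 51 fair coin flips land the same way. A minimal check in Python (using scipy):

    from scipy.stats import binom

    # Chance that 43 or more of 51 unbiased surveys would err in the same
    # direction by luck alone: a one-tailed binomial (sign) test.
    p = binom.sf(42, 51, 0.5)     # P(X >= 43), i.e. P(X > 42)
    print(f"{p:.2e}")             # about 3.4e-07 -- well under 0.001%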

To be clear: Everyone -- including the exit pollsters -- agrees they "overstated" Kerry's vote. There is some argument about the precise degree of certainty of that overstatement, but if all agree that the difference is statistically significant, the degree of certainty has little consequence. The size of the error matters, and the reasons for it matter, but whether our level of confidence in the error's existence is 99.9% or something greater does not.

Having said that, the second draft of the paper, "The Unexplained Exit Poll Discrepancy," continues to needlessly exaggerate the significance of the error, especially within individual states. For example, Freeman claims that there were significant "discrepancies" in Pennsylvania, Ohio and Florida when each state is considered separately. He has a graphic (Figure 1.2) showing a significant error in Florida, assuming a 95% confidence level. However, these assertions are not supported by the margins of error reported by NEP.

  • I applied the appropriate "confidence intervals" reported by NEP (as distributed to its partner networks on or before Election Day) to each state. Contrary to Freeman's assertions, the separate "discrepancies" in Ohio, Florida and Pennsylvania fail to attain statistical significance even at a 95% confidence level. In fact, I see only four states (New Hampshire, New York, South Carolina and Vermont) with statistically significant errors favoring Kerry at a 95% confidence level. Of course, when using a 95% confidence level, 2-3 states should be out of range by chance alone.
  • I also followed the advice of Nick Panagakis and estimated confidence intervals at a 99.5% level of confidence (the standard used by NEP to make projections) using the actual estimates of the design effect obtained from NEP's Warren Mitofsky by blogger Rick Brady. By my calculations, this more demanding test renders the apparent errors in NH, NY, SC and VT non-significant.
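
For readers who want to reproduce this sort of check, the standard approach inflates the simple-random-sample margin of error by the square root of the design effect. Here is a minimal sketch (plain Python; the 1,500-respondent sample and 1.7 design effect are illustrative assumptions, not the NEP values for any particular state):

    from math import sqrt

    def exit_poll_moe(n: int, deff: float, z: float, p: float = 0.5) -> float:
        """Margin of error (in points) for one candidate's share.

        n    -- respondents in the state exit poll
        deff -- design effect from clustering interviews by precinct
        z    -- critical value (1.96 for 95%; 2.6 for NEP's 99.5% standard)
        """
        return 100 * z * sqrt(deff * p * (1 - p) / n)

    # Illustrative state poll: 1,500 respondents, design effect of 1.7.
    for z, label in ((1.96, "95%"), (2.6, "NEP 99.5%")):
        print(f"{label}: +/- {exit_poll_moe(1500, 1.7, z):.1f} points")
    # 95%: +/- 3.3 points; NEP 99.5%: +/- 4.4 points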

Then there is the statistic heard round the world - Freeman's original "250 million to one" estimate of the odds against the discrepancy occurring simultaneously in Pennsylvania, Ohio and Florida. In his second draft, Freeman revised his estimate down to a mere 662,000 to one. Given that Freeman continues to understate the "design effect" used to calculate the sampling error in this year's exit polls, his revised estimate also remains too high.

Some have asked that I calculate my own estimate of the joint probability of an error in Ohio, Florida and Pennsylvania. I am reluctant to do so for two reasons: First, the rounding error in Freeman's data alone renders this sort of hairsplitting moot. Second, and more important, it really doesn't matter. Everyone concedes there was a small (2%) but significant average error in Kerry's direction. For those concerned about problems with the count, what matters most is why that error occurred.

2) Considering sampling error, did the end-of-day numbers predict a Kerry victory?

This one is easy: No. Not even close.

It is true that the early numbers fooled a lot of people, including the talking heads on the cable networks, pollsters and advisors of both campaigns and possibly -- depending on which accounts you believe -- even John Kerry and George Bush. The reason, as the Post's Richard Morin put it, is that the 1.9% average error in Kerry's favor "was just enough to create an entirely wrong impression about the direction of the race in a number of key states and nationally."

True. But that error was not big enough to give Kerry statistically significant leads in enough states to indicate a Kerry victory. If we totally ignore sampling error, the exit polls showed Kerry ahead in only four states that he ultimately lost:  Ohio, Iowa, Nevada and New Mexico. Obviously, Kerry would have won the election had he prevailed in all four.

Before considering whether any of those leads were statistically significant, two quick reminders: First, as Nick Panagakis points out, NEP required at least a 99% confidence level for projections on Election Night. Second, the margin between two candidates on a survey involves two estimates, and the margin of error applies separately to each candidate. Statisticians debate which rules of thumb to apply when determining the significance of the margin between two candidates, but the consensus in this case falls somewhere between 1.7, as recommended by the American Statistical Association, and 2.0, as recommended by those who consider it more appropriate in a race where the vote for third-party candidates is negligible (this is a great topic for another day's post - thanks to alert reader Bill Kaminsky for guiding MP through the competing arguments).

[Table: Kerry's exit poll leads and NEP margins of error in Ohio, Nevada, Iowa and New Mexico]

Fortunately, in this case, the statistical debate is irrelevant. None of Kerry's apparent exit poll leads in the four states was large enough to attain statistical significance, even if we assume a 95% confidence level and use a cautious multiplier (1.7) on the margin of error. As the preceding table shows, the exit polls had Kerry ahead by 4 percentage points in Ohio, by 3 in New Mexico, by 2 in Iowa and by 1 in Nevada. The NEP 95% margin of error for these states, multiplied by 1.7, is between 5 and 7 percentage points. At the more appropriate confidence level of 99.5% -- the one NEP actually uses to recommend projections -- these relatively small margins would fall far short of what is needed to call a winner.
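
The table's logic reduces to one comparison per state: the exit poll lead must exceed the margin of error times the chosen multiplier. A sketch with the figures quoted above (the leads are from the exit polls; the state-by-state margins of error are illustrative values in the 3-4 point range NEP reported, not the exact NEP numbers):

    MULTIPLIER = 1.7    # cautious multiplier on the one-candidate MOE

    # (state, Kerry's exit poll lead, assumed NEP 95% margin of error),
    # all in percentage points.
    states = [("Ohio", 4, 4.0), ("New Mexico", 3, 3.5),
              ("Iowa", 2, 4.0), ("Nevada", 1, 3.0)]

    for name, lead, moe in states:
        threshold = MULTIPLIER * moe
        verdict = "call" if lead > threshold else "too close to call"
        print(f"{name}: lead {lead} vs. threshold {threshold:.1f} -> {verdict}")

    # Every lead falls short of its threshold, so none of the four states
    # could have been projected for Kerry even at 95% confidence.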

In a year-end review in this week's Roll Call (subscription required), Stuart Rothenberg concluded the following:

The problem wasn't the exits - it was the folks who treated the early-afternoon numbers as if they were a predictor of what would happen after everyone had voted. The exit poll was off by a couple of points, but that's well within the margin of error.

If only someone had warned us about those early afternoon numbers on Election Day. Oh, wait...

UPDATE:  I mentioned Bill Kaminsky's blog, but not his commentary on this issue nor his elegant graphic -- it's worth the click just for the chart.  My only remaining quibble with his graphic is that it displays a "safe estimate" of a 95% confidence interval of +/-4%, leaving the impression that the numbers for DE, AL, AK and NE fall outside that interval.  However, the appropriate 95% confidence interval provided by NEP for these states is +/-5%.

[FAQ on Exit Polls]

Posted by Mark Blumenthal on December 14, 2004 at 03:48 PM in Exit Polls | Permalink | Comments (58)