
November 30, 2005

Polling Holiday Sales

The Wall Street Journal's "Numbers Guy" Carl Bialik takes a look this week at the various measures used to estimate retail sales over the Thanksgiving holiday weekend.  While not about political polling, to be sure, most of the measures he describes are survey based, and the lessons learned are useful for anyone who consumes survey data.   

Bialik's piece is well worth reading in full (no subscription required), but I want to add a point or two.  He explained the methods used by a survey sponsored by the National Retail Federation that estimated a 22% increase in Thanksgiving weekend spending compared to a year ago:

Here's how the NRF's polling company, BIGResearch LLC, arrived at that estimate: The company has gathered an online panel of consumers who answer regular surveys about their buying habits, elections and other matters. The company emailed panelists on the Monday or Tuesday before Thanksgiving to advise them that a survey was coming over the weekend. Then a second email went out late Thanksgiving night, saying the survey was open. It stayed open until late Saturday night. The survey asked consumers a series of questions about their weekend shopping activity and season-long plans. The key question for the group's estimate was, "How much did you spend on holiday shopping?"

BIGResearch averaged answers to that question, adjusting for factors like the age, gender and income of its 4,209 respondents. Then it extrapolated to all U.S. adults. The conclusion: spending was up 22% compared with the same weekend last year, as measured in the same way. About 8% of that growth came from the U.S. population increase, and with a greater percentage of respondents saying they plan to shop than did last year. The rest of the increase came from a surge in reported average spending over the weekend.
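The arithmetic behind an extrapolation like that is worth making concrete. The sketch below uses entirely made-up figures, not BIGResearch's actual inputs, to show how a year-over-year estimate of total spending decomposes into population growth, the share of adults who shop, and average reported spending per shopper.

```python
# Hypothetical illustration only; none of these inputs are BIGResearch's figures.

def total_spending(adults, share_shopping, avg_spend):
    """Projected total spending = adults x share who shopped x average spend."""
    return adults * share_shopping * avg_spend

# Made-up inputs for "last year" and "this year."
last_year = total_spending(adults=215_000_000, share_shopping=0.55, avg_spend=300.0)
this_year = total_spending(adults=217_000_000, share_shopping=0.59, avg_spend=330.0)

growth = this_year / last_year - 1
print(f"Estimated year-over-year growth: {growth:.1%}")

# Because the factors multiply, each one's growth contributes to the total:
print(f"Population: {217/215 - 1:.1%}, "
      f"share shopping: {0.59/0.55 - 1:.1%}, "
      f"average spend: {330/300 - 1:.1%}")
```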

Bialik goes on to note some very valid criticism of this methodology and then includes a quotation from yours truly:

Pollster Mark Blumenthal, who writes about survey research on his blog MysteryPollster.com, told me it's reasonable to compare survey results from this year with results from last year, as NRF is doing. But he cautions that "what [survey respondents] say about what they did may or may not reflect what they actually did."

I want to add two more cautions:  First, the survey results may not be truly projective of the U.S. population.  As Bialik tells us, the survey was conducted using an "online panel."  Online panels are becoming more common (MP has discussed them here and here).  They have been used for political polls by Harris Interactive, Economist/YouGov, Knowledge Networks, Polimetrix and the Wall Street Journal/Zogby "Interactive" surveys.  However, except for Knowledge Networks, none of these pollsters use random sampling to select their pool of potential respondents.  In a typical telephone survey, for example, every household with a working telephone has a chance of being selected.  Panel researchers, by contrast, begin by creating a pool of potential respondents who have selected themselves, typically by responding to advertisements on websites (including this one) or accepting an invitation to answer a survey when filling out a website registration form.  The researchers typically offer some sort of monetary incentive to those willing to respond to occasional surveys.

Although their specific methods vary widely (the approaches of Harris, Knowledge Networks and Polimetrix are especially distinctive), panel researchers typically draw samples from the panel of volunteers and then weight the results by demographics like age, gender and income, as BIGResearch did.  While these statistical adjustments will force the demographic composition of their samples to match larger populations, other attitudes or characteristics of the sample may still be way out of whack.
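To make the weighting step concrete, here is a minimal sketch of simple cell weighting (one form of post-stratification), using made-up respondents and targets rather than anything BIGResearch or the panel firms actually do. Each respondent's weight is the population share of his or her demographic cell divided by that cell's share of the sample; the weighted estimate then matches the population on those demographics, but on nothing else.

```python
from collections import Counter

# Hypothetical respondents: (age_group, reported_holiday_spending). Not real data.
respondents = [
    ("18-34", 250.0), ("18-34", 400.0), ("18-34", 150.0), ("18-34", 500.0),
    ("35-54", 300.0), ("35-54", 200.0),
    ("55+",   100.0),
]

# Hypothetical population targets for the same age groups.
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Each cell's share of the sample.
counts = Counter(age for age, _ in respondents)
n = len(respondents)
sample_share = {age: c / n for age, c in counts.items()}

# Cell weight = population share / sample share.
weights = {age: population_share[age] / sample_share[age] for age in counts}

# The weighted mean matches the population's age mix, but it cannot correct
# for whatever led these particular people to join the panel in the first place.
weighted_mean = (sum(weights[age] * spend for age, spend in respondents)
                 / sum(weights[age] for age, _ in respondents))
unweighted_mean = sum(spend for _, spend in respondents) / n
print(f"Unweighted mean: {unweighted_mean:.0f}  Weighted mean: {weighted_mean:.0f}")
```

The hypothetical that follows makes the same point without any code: weighting fixes the demographic mix, not the self-selection.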

Consider a somewhat absurd hypothetical:  Suppose we conducted a survey by randomly intercepting respondents walking through shopping malls.  Suppose we even started by picking a random sample of shopping malls across the United States.  We could certainly weight the selected sample by race, gender, age and other demographics to match the U.S. population, but even then our sample would still overrepresent those who visit shopping malls and, by extension, overestimate their retail purchases in comparison to the full population.  As a projection of the shopping behavior of the U.S. population, our hypothetical sample would be pretty worthless.

Now consider the way BIGResearch gathered their sample.  Oh wait, we can't do that.  We can't because BIGResearch tells us nothing about how they gathered their panel, not in Bialik's article, not in the NRF press release, not in their "full report" and not anywhere I can find on their company website.

So we will have to make an educated guess.  Since BIGResearch conducts an online panel, we know they are missing those without Internet access.  If they are like most Internet panels and recruit through Internet banner ads and other "opt-in" solicitations tied to some sort of monetary incentive, we can also assume their pool is biased toward those inclined to complete surveys for cash.  Is it possible this selection process might create some bias in the estimation of retail sales?  We have no way of knowing for sure.  Though to be fair, Bialik notes some prior BIGResearch retail sales estimates that were "on the mark" compared to Commerce Department statistics.

A second and potentially more fundamental caution involves the way BIGResearch conducted this particular survey.  According to Bialik, they contacted potential respondents a few days before Thanksgiving to "advise them that a survey was coming over the weekend."  Bialik does not say how much the initial invitation said about the survey topic, but if it described the survey as about holiday shopping the solicitation itself may have helped motivate respondents to go do some.  Also, consider the academic research showing that those with an interest in a survey's topic are more likely to respond to it.  As such, heavy shoppers are probably more likely to complete a survey about shopping. 

Despite all of the above, as per my quotation in Bialik's article, if the survey was done exactly the same way, asking exactly the same questions two years in a row, then using it to spot trends in the way respondents answered questions is reasonable.  What seems more questionable - especially given the lack of methodological disclosure - is trusting that those answers provide a bullet-proof projection of what the respondents actually did or whether their answers are truly projective of the U.S. population.

 

Posted by Mark Blumenthal on November 30, 2005 at 01:04 PM in Internet Polls | Permalink | Comments (0)

November 28, 2005

Dispatch Poll: Questionnaire Language

A quick update on the Columbus Dispatch poll: The folks at the Dispatch have kindly provided the full text of the questions asked on their final pre-election survey on Ohio Issues 1 through 5.  The full text appears on the jump. 

The language used for the Dispatch poll was "greatly condensed" as compared to the actual ballot language (as described by Dispatch reporter Darrel Rowland), but not nearly as condensed as I had guessed.  Looking only at the text of the descriptions (and excluding the titles) the poll questions varied in length from 64 to 128 words while the actual language varied from 210 to 932 words.    In my original post, I had guessed incorrectly that the poll had used the even more condensed language that appeared in the graphic sidebar alongside the original poll story.   

The complete language used by the Columbus Dispatch poll follows below.  Use the links to see the actual ballot language for Issues 1, 2, 3, 4 & 5.

NOV. 8, 2005, STATEWIDE BALLOT ISSUES

    Issue 1: Proposed Constitutional Amendment    
    (Proposed by Resolution of the General Assembly of Ohio)    
    The proposed amendment is for the purpose of creating and preserving jobs and stimulating economic growth in all areas of Ohio by improving local government public infrastructure, including roads and bridges, expanding Ohio's research and development capabilities and product innovation, and developing sites and facilities. The amendment would authorize the state to issue general obligation bonds of up to $2 billion for these purposes.
    Shall the proposed amendment be adopted?   Yes...    No...   Don't Know...

    Issue 2: Proposed Constitutional Amendment
    (Proposed by Initiative Petition)
    The proposed amendment would allow all electors the choice to vote by absentee ballot in all elections. The amendment would provide that any person qualified to vote in an election is entitled during the 35 days prior to the election to receive and cast a ballot by mail or in person at the county board of elections or additional election location designated by the board.    
    Shall the proposed amendment be adopted?   Yes...    No...   Don't Know...

Issue 3: Proposed Constitutional Amendment    
    (Proposed by Initiative Petition)
    The proposed amendment would establish revised limits on political contributions, establish prohibitions regarding political contributions and provide for revised public disclosure requirements of campaign contributions and expenditures. The amendment would limit annual contributions by individuals to $25,000 in total to all candidates for state executive offices and members of the General Assembly, political parties, PACs, multi-candidate PACs and small donor PACs. The amendment would establish limits ranging from $50 by an individual to a small donor PAC to $100,000 by a political party to a candidate for statewide executive office.
    The measure also would prohibit out-of-state political parties and candidate committees from making contributions or expenditures in connection with any candidate election or making a contribution to a political party in Ohio.    
    Shall the proposed amendment be adopted?    Yes...   No...   Don't Know...       

    Issue 4: Proposed Constitutional Amendment
    (Proposed by Initiative Petition)
    The proposed amendment would provide for the creation of a five-member state redistricting commission with responsibility for creating legislative districts. Sitting judges would choose two members of the commission; the other three would either be appointed by the first two or chosen by lot. The new state commission would replace the existing separate processes for creating legislative districts for representatives to Congress and representatives and senators to the Ohio General Assembly. The commission would be required to create as many legislative districts as possible that are politically competitive.
    The commission may consider whether to alter a plan to preserve communities of interest based on geography, economics, or race, so long as the reconfiguration does not result in a significant reduction in competitiveness.    
    Shall the proposed amendment be adopted?     Yes...   No...  Don't Know...

Issue 5: Proposed Constitutional Amendment

    (Proposed by Initiative Petition)    
    The proposed amendment would create a newly appointed board of nine members to administer elections. The amendment would eliminate responsibility of the elected Ohio Secretary of State to oversee elections.
     The members of the board would be appointed as follows: four by the governor, four by the members of the General Assembly affiliated with the political party that is not the same as that of the governor, and one by a unanimous vote of the chief justice and justices of the Ohio Supreme Court. The member appointed by the Supreme Court may not be affiliated with a political party. The governor and members of the General Assembly must appoint equal numbers of men and women and take into consideration the geographic regions and racial diversity of the state.    
    Shall the proposed amendment be adopted?     Yes...   No...  Don't Know...

Posted by Mark Blumenthal on November 28, 2005 at 04:45 PM in Initiative and Referenda | Permalink | Comments (1)

November 23, 2005

Pre-Thanksgiving Odds & Ends

Just enough time today to pass on a few odds and ends before taking a few days off for the holiday:   

A New Poll - We have another national public poll to watch.   It is a partnership of The Cook Political Report and a new "bi-partisan" polling and strategic consulting firm called RT Strategies.   The first poll takes a closer look at perceptions of the two "presumptive frontrunners" for each party's presidential nomination:  Hillary Clinton and John McCain.   Charlie Cook reviews the results in his weekly column and the full results are available at the Political Report website.

A New Take on Fraud in Ohio - Hamilton College Political Science Professor Phillip Klinker has posted a "quick and dirty" regression analysis of Ohio's county-level vote on the blog "PolySigh."  Once Klinker controlled for county-level variables like race, Kerry's 2004 vote and 2005 turnout, the statistical impact of different types of voting equipment on the level of support for Ohio Issues 2 through 5 was tiny.   If anything, the electronic counting equipment that the fraud theorists argue was used to rig the outcome correlated with a higher vote for Issues 3, 4 and 5.   These results imply that if "fraud" occurred anywhere in Ohio, it occurred everywhere, including ballots cast on punch cards or optically scanned paper ballots.  See my last post for why that's important.

[Clarification - I'll agree with commenter Nash on one thing.  My second-to-last sentence above was poorly written.  Here's a second try:  The results of the regression imply that if fraud explains the discrepancy between the Dispatch poll and the results, it occurred everywhere at roughly the same level.   That would include counties that used punch cards or paper ballots].
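For readers curious what a "quick and dirty" county-level regression looks like in practice, here is a hedged sketch, not Klinker's actual code or data: ordinary least squares with the county's "yes" percentage as the outcome and a 0/1 indicator for electronic equipment alongside the political controls. A coefficient on the equipment indicator near zero is what "tiny statistical impact" means here.

```python
import numpy as np

# Entirely made-up county-level data: percent yes on an issue, percent Kerry 2004,
# 2005 turnout, and a 0/1 flag for electronic vote-counting equipment.
pct_yes    = np.array([38.0, 42.0, 35.0, 47.0, 40.0, 33.0, 45.0, 39.0])
pct_kerry  = np.array([44.0, 52.0, 40.0, 58.0, 49.0, 38.0, 55.0, 46.0])
turnout_05 = np.array([41.0, 45.0, 39.0, 48.0, 44.0, 37.0, 46.0, 42.0])
electronic = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0])

# Design matrix with an intercept column, then ordinary least squares.
X = np.column_stack([np.ones_like(pct_yes), pct_kerry, turnout_05, electronic])
beta, *_ = np.linalg.lstsq(X, pct_yes, rcond=None)

for name, b in zip(["intercept", "pct_kerry", "turnout_05", "electronic"], beta):
    print(f"{name:>12}: {b:+.3f}")
# If the coefficient on "electronic" stays near zero once the political controls
# are in the model, equipment type explains little of the variation in the vote.
```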

A Thanksgiving break - I'm taking a few days off to rest up and enjoy the holiday with my family.  Hope you have a Happy Thanksgiving and see you next week!

Posted by Mark Blumenthal on November 23, 2005 at 02:48 PM in Initiative and Referenda, Polls in the News, The 2005 Race | Permalink | Comments (4)

November 22, 2005

Ohio Update: The AG's Opinion on Paper Ballots

At the end of my long post on the recent problems with the Columbus Dispatch poll, I passed along a tip from an Ohio election lawyer that a recent "opinion" issued by the Ohio Attorney General gave all paper ballots the legal status of "public documents" after all formal counting has been completed.  Not surprisingly, the opinion by Ohio AG Jim Petro is posted online.

Petro issued the opinion -- which as I understand it, has the force of law -- after various public requests for access to ballots and voting logs during the recount of Ohio's 2004 vote.  The opinion rules that during the official count or any recount "a board of elections has a duty to preserve ballots," and so Ohio's Public Records Law does not give the public the right to inspect the ballots.   

After the official count is completed (sixty days after the election), however, Ohio citizens have the right to examine the ballots under the Public Records Law:

Following the completion of the canvass of election returns under R.C. 3505.32, pollbooks used in an election are public records of a board of elections and are subject to public inspection in accordance with any reasonable regulations the custodian board of elections has established under R.C. 3501.13, except as may be provided by a proper order of a court.

That means that in early January, any Ohio citizen (or reporter) can request access to the paper ballots and conduct their own audit.   An enterprising voter or investigative reporter should be able to check the ballot paper trail in any of the 82 (of 88) counties that have one, looking for inconsistencies between that paper record and the official count.

For those who believe that the Ohio results amounted to "an astonishing display of electronic manipulation in Ohio's election," or that fraud is the most likely "suspect" for the considerable difference between the Dispatch poll and the election results, this provision offers the opportunity to go gather real evidence.  Obsessing over the Dispatch poll does not.

I agree that as electronic voting systems become more and more pervasive, a paper trail is just the first step.  We also need more formal routine audit procedures with far more transparency.  So why not take the first step with a "citizen's audit" of the considerable paper trail in Ohio?  Yes, the various boards of elections may resist those who try to test the public records law, but why not try?

Posted by Mark Blumenthal on November 22, 2005 at 02:18 PM in Initiative and Referenda | Permalink | Comments (5)

November 21, 2005

Ballot Issues: How Do We Know?

I was reading through an embarrassingly long backlog of email today and came across one message I meant to respond to weeks ago.  In my first post on initiative and referenda polling, I wrote:

We know that many voters make up their minds by reading the actual language on the ballot while standing in the voting booth or filling out their absentee ballot.

Reader A.L. emailed with a great question:

How do we know that?  What sort of research has been done on that?  Just what are the numbers like here...do 80% of voters decide in the booth based on the ballot language or is it more like 50%?

To be honest, we don't.  At least, MP does not know with anywhere near that level of precision, and cannot find any formal academic research on that subject.   In response to A.L.'s question, I actually emailed the California pollsters to ask if they knew of any such research, and no one did.

As such, it may have been more accurate to say "we assume" than "we know."  Certainly every voter gets exposed to the ballot language at some point.  For some (such as Californians, who receive a sample ballot in the mail from their Secretary of State in advance of the election), this experience may come long before they vote.  For others, it happens for the first time as they are casting their vote.  Some may try to read the text before deciding; others have made up their minds before confronting the text.  How many try to decide by reading the text on Election Day?  I can't say for sure, but I have certainly talked to voters who did just that.

My observation was based mostly on a bit of conventional wisdom - widely shared by campaign consultants and managers - that derives from years of watching initiative and referenda campaigns.  More often than not, ideas that are very popular when polled as concepts fail at the ballot box.  Even when polled as a formal ballot issue, support almost always starts high and falls as the campaign progresses.   "No" campaigns do what they can to seed doubts, and like the campaign in Ohio, use legalistic or complex ballot language as their ally.  That's why pollsters and consultants largely agree that the "no" side has a built-in advantage.

Just because I could not find any academic research on this question does not mean that none exists.  Perhaps one of MP's many academically minded readers can suggest a citation.  If so, please post a comment or email me.

Posted by Mark Blumenthal on November 21, 2005 at 05:08 PM in Initiative and Referenda | Permalink | Comments (2)

November 18, 2005

Columbus Dispatch Poll: Past Performance No Guarantee of Future Results

"Past performance is no guarantee of future results."  We typically hear that disclaimer applied to financial investments, but in an era of declining response rates, it should apply just as well to polls.   Last week, I neglected to include such a disclaimer in a post touting a survey with a long and remarkable history of success -- the Columbus Dispatch mail-in poll -- less than 24 hours before it turned in one of its most disastrous performances ever.  While we may never know the exact reasons why, it is clear in retrospect that problems stem from the very different challenges the Dispatch poll faced this year and the modifications to its methodology made in response. 

First, let's review what we know about what happened and then speculate a bit about why.

[Update: This post is long even by MP standards.  On reflection, I thought readers in a hurry might appreciate an executive summary of the detail that follows about the Columbus Dispatch poll and what was different in 2005: 

  • It has always been less accurate in statewide issue races than in candidate contests. 
  • It had never before been used to forecast an off-year statewide election featuring only ballot issues.
  • It departed from past practice this year by including an undecided option and not replicating the actual ballot language - two practices that helped explain the poll's past accuracy.
  • Its response rate this year was significantly lower, roughly half that obtained in recent elections, including a similarly low-turnout election in 2002.
  • The timing of the poll would have missed any shifts over the final weekend, and the final poll showed support trending down for all four initiatives.   Meanwhile a post-election survey showed that nearly half the "no" voters made up their minds in the days after the Dispatch poll came out of the field.

All the details follow on the jump, including comments on the fraud theories pushed by Fitrakis, Wasserman and Friedman].

In 1996, an article appeared in the journal Public Opinion Quarterly by Penny Visser, Jon Krosnick, Jesse Marquette and Michael Curtin documenting the "remarkably accurate forecasts" of a statewide mail-in survey conducted since 1980 by the Columbus Dispatch.   In 32 statewide races involving candidates between 1980 and 1994, the final Dispatch pre-election poll "deviated from the actual results by an average of 1.6%" (p. 189) compared to 5.4% for telephone surveys conducted by the University of Akron and 4.9% for the University of Cincinnati.  I discussed some of the reasons offered for that greater accuracy in a post last year.

In 2000, the same authors (Visser, Krosnick, Marquette and Curtin) published a follow-up chapter that also looked at other races tested by the Dispatch.   In contests between 1980 and 1996 they found that the Dispatch had much more average error (5.8%) forecasting statewide ballot issues than local ballot issues (3.9%), local candidate races (2.7%) and statewide candidate races (1.5%).  Nonetheless, as I noted here last week, the Dispatch polls of statewide issue races still had slightly less average error (again, 5.8%) than comparable polls conducted by telephone (7.2%).

[In preparing this post, I discovered that in their 2000 chapter, Visser, et. al. mistakenly categorized four issues from 1995 as statewide rather than local.  The correct error rates for the Dispatch polls appear to be 7.1% for statewide issues and 3.4% for local ballot issues between 1980 and 1996.  This error did not affect the statistics that compared comparable Dispatch and telephone polls].

Here is a summary of the average errors for statewide ballot issues included in the Visser, et. al. chapter, plus the three statewide issue results since then that I could find in the Nexis database.  Note, as those authors did, that "some of the errors for the referenda were quite substantial," including 12.8% for a soft drink tax referendum in 1994 and 11.2% for a convention vote in 1992.  Note also that the size of the errors varied even within individual surveys.  For example, the 1994 survey included an error of only 3.8%.

[Table: Average errors for Dispatch polls on statewide ballot issues, 1990-2004]


Now let's look at what happened this year and how it was different.

One huge difference should jump out immediately from the table above.  Every previous Columbus Dispatch poll on statewide issues was part of an even-numbered year survey that also included questions about races for President, Governor or Senator.  As far as I can tell, the 2005 survey is the first Dispatch poll ever conducted for an election with only statewide issues on the ballot.

Another big difference is less obvious.  This year, the Dispatch poll offered respondents an "undecided" option for each issue.  While the Dispatch typically offers an undecided option on its earlier surveys, it usually drops that option on the final pre-election poll in order to better replicate the actual voting experience.   This time, however, according to an Election Day email exchange I had with Darrel Rowland of the Dispatch, they were concerned that with the state issues involved, "not to include that could have greatly distorted Ohioans' stance on these issues."

[Table: Final 2005 Dispatch poll results, actual results and error for Issues 1-5]


The change in format complicates the error calculation.  To keep the statistics as comparable as possible to the Visser, et. al. articles, I dropped the undecideds and recalculated the percentages just as they did for the telephone surveys.  By that standard, the 2005 Dispatch poll showed an average error of 21% across the five issues, roughly triple the error rate seen in previous years for ballot propositions.   Even with the most favorable handling of the undecided vote (allocating ALL to the "no" vote), the average error would still be 12%, double that from previous years.   So obviously, things were much different in this election.
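Since those error figures depend entirely on how the undecided category is handled, here is a small sketch of the two calculations used in this post, with made-up numbers rather than the actual Dispatch results: drop the undecideds and re-percentage the yes/no split, as Visser et. al. did for telephone polls, or allocate every undecided respondent to "no" before comparing to the outcome.

```python
def error_drop_undecided(poll_yes, poll_no, actual_yes):
    """Re-percentage yes/no after dropping undecideds, then compare to the result."""
    yes_share = 100.0 * poll_yes / (poll_yes + poll_no)
    return abs(yes_share - actual_yes)

def error_undecided_to_no(poll_yes, poll_no, poll_undecided, actual_yes):
    """Allocate every undecided respondent to "no" before comparing."""
    total = poll_yes + poll_no + poll_undecided
    return abs(100.0 * poll_yes / total - actual_yes)

# Hypothetical issue: poll shows 45% yes, 35% no, 20% undecided; actual result 36% yes.
print(error_drop_undecided(45, 35, 36))       # about 20.3 points
print(error_undecided_to_no(45, 35, 20, 36))  # 9.0 points

# Averaging per-issue errors across several (hypothetical) issues gives the
# "average error" statistic used throughout this post.
issues = [(45, 35, 20, 36), (60, 30, 10, 40), (50, 40, 10, 37)]
avg = sum(error_drop_undecided(y, n, a) for y, n, _, a in issues) / len(issues)
print(f"Average error with undecideds dropped: {avg:.1f} points")
```

The same poll can look far better or worse depending on which convention you choose, which matters again in the discussion of the fraud claims below.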

Why?

Unfortunately, we may never know definitively.  To understand the clues we do have, we need to consider how this election was different.  First, as noted above, this year's statewide Dispatch poll appears to be the first conducted with only state issues on the ballot.  Second, the Dispatch broke with past practice and included an undecided option on the vote questions on the final poll. 

Another critical difference, according to almost everyone I talked to in Ohio, was the length and complexity of the ballot language.  It really must be seen to be believed.  Try any of the links to the actual ballot language for Issues 1 (606 words), 2 (210 words), 3 (932 words), 4 (616 words) and 5 (351 words).   Imagine the effort required by the voter to try to digest all of this information (which apparently appeared in very small print on the ballot), or just the impression left by seeing all that verbiage, either in the voting booth or in the fifteen-page "Issues Report" available from the Secretary of State's office.  Compare that to the voter experience in California where the actual ballot language was 70 words or less for each proposition and where, unlike Ohio, all voters receive a sample ballot 30 days before the election.

Consider also the axiom about ballot issues among those who run campaigns.  The "no" side is always easier.  While I cannot cite academic research on this point, the nearly universal experience of those who follow initiative and referenda campaigns is that when confused or in doubt, regular voters will default to the "no" vote.  That is why support for ballot issues almost always declines in tracking polls as Election Day approaches. 

[Update:  Mark Schmitt and the American Prospect's Jim McNeil have more on voter confusion and ballot language.  Thanks to reader ML].

So with this context in mind, let us look at the most likely sources of error for this year's Columbus Dispatch survey. 

Discussions of survey problems always start with the type of error we are all most familiar with, the so called "margin of error" associated with interviewing a sample rather than the entire population.  Since the reported margin of sampling error for the Dispatch survey was only +/- 2.5%, the large discrepancies in this case were obviously about something else.  So let us consider three types of error that are most likely responsible in this case:

1) Campaign Dynamics - We pollsters often cite the cliche that a poll is just a "snapshot" in time of a constantly changing process.  Because of the mail-in process, the Dispatch poll required a "time lapse" shot that stretched over a full week, from October 28 through November 3.  So as it always does, the final Dispatch poll measured voter opinions anywhere from 5 to 11 days before Election Day.  A significant trend toward "no" over the final week would have thrown off the results. 

In this case the Dispatch poll itself provides evidence of the typical downward trend in support for each of the initiatives.  The final poll that is now getting all the attention was their second on the state issues.  As the following table shows, opposition to the initiatives increased by anywhere from 5 to 10 percentage points during the month of October.

[Table: Trend in support and opposition on the 2005 statewide issues, Dispatch and University of Akron polls]


The table also includes results for the telephone survey conducted by the University of Akron over a 23-day span from September 28 to October 20.  Note that their results for "likely voters" are similar to those from the Dispatch, although the Univ. of Akron poll did not report results for an undecided category (their report does not explain how they handled undecided responses). 

The skeptic might note that there has always been a lag between the final Dispatch Poll and the election, which never produced errors this large before.  Is there evidence that the movement toward the "no" side was big enough in 2005 to explain the larger than usual discrepancy?  Unfortunately, there were no tracking polls conducted in Ohio over the final weekend of the campaign, either by public pollsters or, according to MP's sources, by the campaigns themselves. 

However, a post election survey conducted for the "no" side (and provided to MP) by Republican pollster Neil Newhouse shows evidence of a lot of late decision making.  Newhouse interviewed 1,533 respondents November 9-13 who reported casting ballots in the special election.   Among those who said they "generally voted no" on the reform issues, nearly half (44%) made up their minds and "decided to vote no in the closing days of the campaign" rather than having been against them all along [emphasis added].

Consider also that the actual ballot language may have helped the "no" campaign close its case.  One of their central messages was that the reform proposals would open "gaping loopholes for special interests."  Imagine what conclusions a voter might reach on encountering all that fine print.  The impact of the ballot language may explain why the Newhouse survey showed the loophole argument to be "particularly resonant" among late deciding voters.

2) Replicating the Ballot - Two important reasons offered for why the Dispatch poll traditionally outperformed telephone surveys are that it simulates two key aspects of actual voting:  the lack of an undecided option and a nearly exact replication of the ballot language.   The final 2005 survey differed from its predecessors in that it involved neither feature. 

When telephone surveys allow for an undecided option, they create the dilemma of how to interpret the responses of those who are undecided.  As Visser, et. al. (1996) hypothesized, some respondents may provide "top-of-the-head responses" to telephone survey vote questions that are "less predictive of actual voting behavior" (p. 204).   They found evidence that "the absence of undecided responses in the Dispatch surveys eliminated this potential source of error." 

We will never know if the Dispatch could have done better had they omitted the undecided category as usual on their final survey this year.  Even if we allocated every undecided vote to the "no" choice, the errors on Issues 2, 3 and 5 would still be quite large.   However, the fact that between 9% and 25% of the voters indicated they were undecided on the various issues tells us there was considerable confusion and uncertainty late in the campaign. 

The much bigger issue this year involves ballot language.  Again, in assessing the Dispatch poll, Visser et. al. (1996) theorized that "the lack of correspondence between the telephone survey candidate preference items and the format of an actual ballot may have reduced forecast accuracy" (pp. 208-209).  They conducted in-person experiments using different survey modes and concluded that "the Dispatch questionnaire design apparently contributed to its accuracy" (p. 212 - though note that as summarized above, the difference in accuracy of telephone and mail-in polls was much less for statewide ballot issues). 

The one thing we know for certain is that the Dispatch Poll this year did not attempt to replicate the ballot language.  "Out of necessity," said Darrel Rowland of the Dispatch via email, "the Dispatch Poll provided respondents with a greatly condensed version of each issue" (he sent an identical email to The Hotline).   Unfortunately, we do not yet know the exact text they did use.  I emailed Rowland to request it, but he did not respond in time for this post. 

My best guess is that the text was probably pretty close to that used in the graphic that the Dispatch posted alongside their poll story (click the graphic for a full size version):   

[Graphic: Condensed descriptions of Issues 1-5 from the sidebar accompanying the Dispatch poll story]


[UPDATE (11/28):  I guessed wrong.  The actual questions (available here), while greatly condensed compared to the full ballot language, were not as short as those in the above graphic].

If they used these abbreviated descriptions, the Dispatch poll may have introduced two potential sources of error.  First, they did not replicate the experience real voters had when confronting over 2700 words of ballot text.  Second, by simplifying the concepts involved they may have unintentionally yet artificially framed the choice around the substance of the proposals (vote by mail, redistricting reform, etc).  The real campaign framed those choices around more thematic arguments that tended to lump all the proposals together (which would really fight corruption, improve democracy, provide "loopholes" for special interests, etc.). 

California saw a similar dynamic this year.  The "no" campaigns against Propositions 74-77 focused less on the specifics of the proposals and more on overarching themes of "Stop Arnold" and "Stop the Schwarzenegger Power Grab."  The automated pollster SurveyUSA captured the difference this framing could make in a split sample experiment.  When they described California's Proposition 76 as something that simply "limits growth in state spending so that it does not exceed recent growth in state revenue," they found 49% support.  But when they also included the text, "the Governor would be granted new authority to reduce state spending during certain fiscal situations," support fell to 42%.   That difference persisted through election eve, and the longer version was ultimately the more accurate. 
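A split-sample experiment like SurveyUSA's is straightforward to evaluate: each half-sample yields a proportion, and the standard error of the difference indicates whether a 49% versus 42% gap is larger than sampling noise alone would produce. Here is a minimal sketch that assumes half-samples of 500 respondents each; SurveyUSA's actual sample sizes are not given in this post.

```python
import math

def diff_of_proportions(p1, n1, p2, n2, z=1.96):
    """Difference between two independent proportions with an approximate 95% interval."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# 49% support with the short description vs. 42% with the longer one (assumed n=500 each).
diff, lo, hi = diff_of_proportions(0.49, 500, 0.42, 500)
print(f"Difference: {diff:.1%}, 95% interval: {lo:.1%} to {hi:.1%}")
# If the interval excludes zero, the wording difference is unlikely to be
# explained by sampling error alone.
```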

3) Response Bias and Likely Voters.  One of the counter-intuitive aspects of the Columbus Dispatch Survey is that it seems to do better at getting a representative sample of likely voters despite having had a lower response rate than comparable telephone studies conducted since 1980.  Visser, et. al. (1996) theorize that telephone surveys do worse at identifying likely voters because "the social desirability of being an active participant in the democratic process often leads to an overreporting" of likelihood to vote, past voting and interest in politics.

In contrast, although mail survey response rates are typically very low, the people who respond tend to be highly interested in their topics (Ferness 1951; Jobber 1984). And people highly interested in an election are most likely to vote in it.  As a result, the self-selected samples of the Dispatch mail surveys may have been especially likely to turn out.  The very nature of the mail survey response procedure may have effectively eliminated non-voters from the obtained samples [p. 198].

The authors found evidence to support this hypothesis.  Dispatch survey respondents were more representative of the voting electorate than the "likely voters" identified by telephone surveys.

We have some incomplete clues that this advantage did not exist in the final 2005 survey.  Self-identified Democrats outnumbered Republicans by 10 percentage points, even though according to Darrel Rowland's email, "the returns [typically] lean a little Republican, which reflects Ohio's recent history of tilting a bit toward the GOP."   In the post-election survey by Republican pollster Newhouse, Democrats outnumbered Republicans among those who reported casting a ballot, but the advantage was only two percentage points (36.9% to 34.7%). 

The Dispatch's Darrel Rowland also suggests that the geographic distribution of voters may have been off (presumably a bit heavier in Democratic areas): 

However, even when you do weight using our most common method (geographical distribution of the mail poll ballots) the outcome is essentially the same.

The Democratic-leaning sample probably contributed to the error, but I doubt that weighting by party or region would have reduced the discrepancy significantly.  These differences are clues to what may have been a "response bias" related more to vote preference than to political party. 

The overall response rate provides another big clue.  Visser et. al. (1996) tell us that "between 1980 and 1994, the Dispatch response rates ranged from 21% to 28%, with an average of 25%" (p. 185).  That rate had not fallen significantly in recent years: 19% in 2000, 22% in 2002 and 25% in 2004.  Note that the response rate was only three points lower in 2002, when the vote turnout was 47.8% of registered voters, than last year, when turnout was 71.8% of registered voters.

This year, however, the Dispatch Poll response rate fell off significantly.  It was only 11% for the poll conducted in late September and 12% on the final survey.  Turnout alone does not explain the difference.  Turnout this year was 43.8% of registered voters, only a few points lower than in 2002 (47.8%) when the Dispatch achieved nearly double the response rate (22%).  [Note: the Dispatch poll is the only media poll I know that routinely publishes its response rate alongside the survey results. They deserve huge credit for that].

So what caused the decline?

As with any look at non-response, "proof" is elusive.  We know little about those who do not return their surveys precisely because they did not return their surveys.   However, consider two theories. 

a) A Mail-in Vote Survey about Voting by Mail.   Note the reference above by Visser, et. al. to the idea that people who respond to a survey tend to be interested in their topics.  A study published just last year (Groves, Presser and Dipko) found stronger evidence for this idea:  Response rates were higher among teachers for a survey about education and schools, higher among new parents for a survey about children and parents, higher among seniors for a survey about Medicare and higher among political contributors for a survey about voting and elections.

Now remember that Issue 2 was a proposal to make it easier to vote early or by mail in Ohio.  So we have a survey that was, at least in part, about voting by mail.  Wouldn't we expect a higher response rate among those who want to vote by mail on an election survey that attempts to replicate voting...by mail?   

b) Uncertainty and Confusion = Non-response. Remember my comments about how voting on initiatives and referenda can be different: voters who are confused or uncertain appear to default to a safer "no" vote.  That is what happens in the voting booth.  But what happens when voters are similarly confused or uncertain, but are confronted with a mail-in survey whose completion evokes a considerably lower sense of civic duty?  What if, as was the case this year but never before in the history of the Dispatch Mail-in Poll, there was no candidate race at the top of the ticket, but only five issues that presented respondents with a far more significant "cognitive burden" to complete? 

My hypothesis is that many perennial voters who were confused or uncertain decided to simply pass on completing the survey.  Meanwhile, the voters who were familiar with the reform issues and ready to support them were more apt to send them in.    This would explain why the response rate was lower than usual, and why the final sample was more Democratic than usual and more Democratic than indicated by the post-election survey.

Note that this theory may explain a complementary phenomenon involving a self-administered online survey in California.  In California, the roles were reversed.  The propositions were proposed by a Republican governor and opposed by Democrats.  A self-administered online survey conducted by the Hoover Institution, Stanford and Knowledge Networks at about the same time as the Dispatch Poll showed much higher support for the propositions and a much more Republican sample than other public polls.  Their final Election Eve survey came into line with other polls. 

So we have many reasons why the Dispatch poll could have been wrong.  There is a lesson here for those who follow political polls. In an era where even the best public polls struggle to achieve a 30 percent response rate, no poll is immune from problems.  Past performance is assuredly no guarantee of future performance. 

But wait, there is one more theory now bouncing around the Internet.  Yes, it's....

4) Fraud - A week ago, Bob Fitrakis and Harvey Wasserman posted a diatribe on their website FreePress.org reminiscent of similar rants on the 2004 exit poll controversy.  Their unshakable conclusion:  Either the "uncannily accurate" Dispatch poll was wrong, or "the election machines on which Ohio and much of the nation conduct their elections were hacked by someone wanting to change the vote count."  A blogger named Brad [Friedman] on "BradBlog"  chimed in that the results were "staggeringly impossible" and then followed up yesterday with a preemptive shot [also posted on HuffingtonPost] at yours truly:

In the meantime, Mystery Pollster Blumenthal -- who had pooh-poohed the concerns many of us have about the historically accurate Exit Poll descrepancy [sic] with the Final Results in last year's Presidential Election, where they were accurate virtually everywhere...except in the key swing states -- has again decided that it must be the polls that are wrong, never the Election Result.

OK.  Let's all take a deep breath.  Never mind that I have never claimed a poll could disprove the existence of fraud.  Never mind that the biggest exit poll discrepancies occurred in the "key swing states" of Vermont, Delaware, New Hampshire and Mississippi.  Let's focus on the Dispatch poll. 

Were the results surprising given the survey's history?  Yes.  Were they "staggeringly impossible?"  Of course not.  Fitrakis, Wasserman and Friedman seem to think polls (or at least, those polls that produce results they like) are imbued with magical powers that make them impervious to error.  They are not.  The Dispatch poll performed very well historically despite a response rate in the 20s for sound reasons.  There are also sound reasons why 2005 was an exception.  To review what this post says about the Dispatch poll:

  • It has always been less accurate in statewide issue races than in candidate contests. 
  • It had never before been used to forecast an off-year statewide election featuring only ballot issues.
  • It departed from usual practice this year by including an undecided option and not replicating the actual ballot language - two practices that helped explain the poll's past accuracy.
  • The timing of the poll would have missed any shifts over the final weekend, and the final poll showed support trending down for all four initiatives.   Meanwhile a post-election survey showed that nearly half the "no" voters made up their minds in the days after the Dispatch poll came out of the field. 

Consider also the way Fitrakis and Wasserman cherry-pick the data to make their dubious thesis appear more compelling.  They contrast the "precision" of the Dispatch poll on Issue 1 with the "wildly wrong" results on Issues 2-5:

The Issue One outcome would appear to confirm the Dispatch polling operation as the state's gold standard....

The Dispatch was somehow dead accurate on Issue One, and then staggeringly wrong on Issues Two through Five....

[D]ead accurate for Issue One . . . wildly wrong beyond all possible statistical margin of error for Issues 2-5.

A compelling story, as Patrick Fitzgerald might say, if only it were true.  Fitrakis and Wasserman achieved this result by allocating all of the undecideds to the "no" vote for Issue 1, thus showing the result to be off by only a single percentage point.  Had they dropped the undecideds as Visser et. al. did for surveys that reported an undecided category, they would have shown a 12-point error for Issue 1 - a level matching the Dispatch poll's previous all-time high error. 

And had they applied the undecided allocation to Issue 4 - one of the poll results they deemed so "wildly wrong" - they would have reduced the error there to just one point, the same level that made the Issue 1 results appear so "dead accurate."  Not only do they neglect to apply their "methodology" consistently, they actually seem to mock the need to shift the undecideds on Issue 4 to make the poll consistent with the final result:

Issue Four's final margin of defeat was 30% in favor to 70% against, placing virtually all undecideds in the "no" column.

And if that is not enough, consider the "staggeringly" implausible magnitude of the fraud they allege:  They expect us to believe that someone was brazen enough not only to steal a statewide election but to then run up the score, creating losing margins of 27 to 40 percentage points. 

That would be an especially reckless bit of criminality given something that they neglect to mention:  Last week, 82 of Ohio's 88 counties cast their ballots on election equipment that left a paper trail.  They do mention that 44 counties used new election equipment, but leave out the reason.  In the aftermath of the 2004 election, the Ohio Legislature mandated the purchase of election equipment with a paper trail.  Last week, the only counties still using the paper-free touch-screen machines were Franklin, Knox, Lake, Mahoning, Pickaway & Ross. 

The one thing that is truly impossible is the notion that the election could have been stolen with votes from those six counties.  The closest margin of defeat was for Issue 2.  It lost by 771,316 votes statewide, nearly double the number of all the votes cast in the six no-paper trail counties (403,113) - and that includes the yes votes.

So let me conclude with one suggestion for Bob, Harvey and "Brad:" Rather than demanding an "investigation" of the Dispatch poll, rather than once again praising the pollsters' "gold standard" methodology while simultaneously questioning their integrity and threatening to smear their reputations, why not go look for real evidence of fraud?

And here is a suggestion on how to do just that:  I spoke yesterday with an Ohio election lawyer.  He told me that as a result of the 2004 vote count controversies Ohio's Attorney General issued an opinion that gives all paper ballots (and paper "tab" printout receipts from the new electronic machines) the legal status of "public documents" subject to Ohio's Public Document Law.  As such, once the official count is complete and certified, any Ohio citizen can request access to the paper trail in any precinct and compare the paper record to the precinct results.  I have not been able to confirm this independently, and if I am wrong I will gladly retract the suggestion.  But if not, and if someone has committed vote fraud on the scale that you allege, it should not be very hard to find.

Update (11/22):  The AG's "opinion" is available here; more discussion here

[11/20 - Response rate for final survey corrected; 11/22 - added link for Newhouse survey]

Posted by Mark Blumenthal on November 18, 2005 at 11:33 PM in Divergent Polls, Initiative and Referenda, Measurement Issues, The 2005 Race | Permalink | Comments (4)

Ohio Dispatch Poll: Stay Tuned

I have been focusing for the last several days on a post on the Columbus Dispatch poll controversy.  I had hoped to have it up two days ago, but as I have been gathering the facts, interest in it has grown.  So it seemed more important to cover the topic completely in one post than to split it up.   I'm still working on it.  However, I will have a post up very soon, hopefully later this afternoon.
So sorry for the delay.  Stay tuned...

Posted by Mark Blumenthal on November 18, 2005 at 02:34 PM in The 2005 Race | Permalink | Comments (2)

November 16, 2005

Cillizza's Polling Fix

MP readers may want to check out the new feature from Washington Post blogger/columnist Chris Cillizza (aka "The Fix") called "Parsing the Polls."  Each week, says Cillizza,

We'll look at national polling as well as surveys conducted in individual House, Senate and gubernatorial races each week in an attempt to give Fix readers a sense of what the mood is not only in your state but also across the country.

The first installment reviews recent political polls out in New Jersey, New York and Pennsylvania.  Check it out.

Posted by Mark Blumenthal on November 16, 2005 at 01:10 PM in Polls in the News | Permalink | Comments (3)

November 15, 2005

Compared to What?

And then there were nine.  With the latest survey out last night from Gallup/CNN/USAToday, we now have nine national surveys released this month and all nine show lowest ever job ratings for President Bush.  Again (for those who may have missed it yesterday), check the latest job rating graphic from Prof. Charles Franklin (below) which does not yet include the latest Gallup results.  Compared to other surveys conducted just a few weeks ago, the most recent round represents another significant drop.

[Chart: Professor Charles Franklin's graphic of President Bush's job approval trend]


Those who analyze polling numbers should be in the habit of asking, "compared to what?"  Eric Umansky, writing in Slate's "Today's Papers" feature, does just that in the process of critiquing one minor wrinkle in the Gallup poll analysis in USA Today:

Here's the part of the story that's already swinging around the blogosphere: "Fewer than one in 10 adults say they would prefer a congressional candidate who is a Republican and who agrees with Bush on most major issues." Which may or may not be significant. The question referenced actually asks whether you would be "most likely to support a Republican who agrees with George W. Bush on almost every major issue." That's obviously a bit different than how the story phrased it, and a darn high bar. What were the responses, if any, in previous years?

Umansky has a good point, and it depends on more than respondents hearing the words, "almost every."  Gallup offered respondents a clear choice between a Republican who agrees with Bush on "almost every" issue and one who has had "both agreements and disagreements" with Bush (as well as between two similarly described Democrats):

Thinking ahead to next year's Congressional elections, which type of candidate would you be most likely to support: a Republican who agrees with George W. Bush on almost every major issue, a Republican who has had both agreements and disagreements with Bush, a Democrat who has had both agreements and disagreements with Bush, (or) a Democrat who disagrees with George W. Bush on almost every major issue?

So how significant is this finding?  Again, Umansky asks the right question.  "What were the responses, if any, in previous years?"  One of the great benefits of looking at Gallup data is that they have been asking many of the same questions over and over again for decades.  Unfortunately, this question is not one of them.  MP searched the Gallup archives and found nothing comparable. 

However, we can still try to understand the significance of the result by asking some different "compared to what" questions.  As the table below shows,** there are more than twice as many voters ready to choose a Democrat who almost always disagrees with Bush (23%) as a Republican who almost always agrees (9%).   If we look at the results to this question tabulated by party identification, we also see that Democrats are twice as likely to prefer a consistently anti-Bush Democrat (44%) as Republicans are to prefer a consistently pro-Bush Republican (22%).  Both results indicate an intensity advantage for the Democrats, should it persist through the 2006 elections.

[Table: Preferred type of congressional candidate, overall and by party identification]


However, Gallup has asked another question that provides some additional context, albeit with six-month-old data.  "Please tell me whether you agree or disagree with George W. Bush on the issues that matter most to you."  Back in May, when Bush's job rating as measured by Gallup was roughly 8 points higher, 40% said they agreed with Bush on issues that "matter most."  That represented a nine point decline from just before the 2004 elections, when 49% said they agreed with Bush on issues.  At the same time, the same percentage (49%) also agreed with Kerry on the issues on an identically worded question. 

[Table: Agreement with Bush (and Kerry) on the issues that matter most, by party, November 2004 and May 2005]


As the table above shows, roughly nine of ten Republicans expressed agreement with Bush on key issues both in November 2004 and May 2005.  Agreement with Kerry was similar among Democrats in November.  The drop in agreement with Bush between November and May occurred mostly among independents (the drop appears to be statistically significant despite the relatively small sample sizes for the May survey).   

Unfortunately, we lack comparable data since May.  Perceptions of Bush have certainly worsened since May even among Republicans, but the overall declines in his approval ratings on specific issues (such as the economy, Iraq, foreign affairs, etc.) have been in the single digits.  Contrary to the impression some may have taken away from the USAToday article, it is hard to imagine that a majority of Republicans suddenly started disagreeing with Bush on major issues since May. 

All of this is about the context in which to interpret one minor statistic from a larger survey.  The more important findings from this survey involve Bush's continuing slide in public opinion and the fact that - consistent with the other recent polls - registered voters now say they prefer a Democratic candidate by an 11 percentage point margin (53% to 42%). 

**A thank you to the folks at Gallup, who responded to my query and provided the data used to create these tables.

UPDATE (11/16):  In kindly linking to this post last night, Andrew Sullivan referred to the data I cited from May 2005 above as representing what political independents believe "now."  While those numbers are the most recent available, they are arguably not current since Gallup has not asked the question since May. 

However, other data support the idea that the largest declines in Bush's job rating have occurred among independents.   For example, last week Gallup's Jeff Jones pooled data from "post election 2004 and late September and October 2005," to look at change in the Bush job rating "spanning the political spectrum from liberal Democrats to conservative Republicans."  The result?

In the past year, conservative Republicans have become less likely to approve of Bush, but the decline has been more pronounced among moderate and liberal Republicans and independents . . .

There have been double-digit decreases in Bush approval ratings among pure independents (those who are independent and do not "lean" toward either party) and moderate and liberal Republicans. Pure independents' support has fallen from 42% to 28%, while moderate and liberal Republicans' support has dropped from 83% to 69%.   

Similarly, looking at the trend from January to October, the Pew Research Center found the biggest shift among independents:

The president continues to draw strong support from Republicans, 81% of whom approve of the job he is doing. But that number reflects an eight-point decline since January, with most of that drop occurring in late summer. Among independents, a plurality of 47% approved of Bush's performance in January; now just 34% do so. Approval among Democrats is now in the single digits (9%), down from 17% in January.

Posted by Mark Blumenthal on November 15, 2005 at 03:13 PM in Interpreting Polls, Polls in the News, President Bush | Permalink | Comments (1)

November 14, 2005

News Roundup: The Hits Keep on Coming

Last week, this site quietly passed the milestone of one million page views, as tracked by Sitemeter.  Now, as MP will be the first to note, the "page views" statistic may tell us more about the mechanical function of a site than about the number of individuals who used it.  Also, the most popular blogs hit that milestone every week.  Some can do it in a single day.  Nonetheless, a million page views still seems noteworthy for a special-interest blog with an admittedly narrow focus.

All of this reminds me of the uncertainty I felt about a year ago in the aftermath of the 2004 election, wondering whether I could find enough worthy topics to sustain a blog focused on political polling.  Granted, this last week has not been typical given the off-year elections in a handful of states, but the sheer volume of Mystery-Pollster-worthy topics I stumbled on is remarkable. 

First, there were the stories updated at the end of last week: the surveys on the California ballot propositions, the stunning miss by the Columbus Dispatch poll and the release of two Election Day surveys in New Jersey and New York City.  (Incidentally, in describing their methodologies as "very similar," I should have noted one key difference:  The AP-IPSOS survey of New Jersey voters was based on a random digit dial [RDD] sample, while the Pace University survey of NYC voters sampled from a list of registered voters.) 

But then there were the stories I missed:

Detroit - Just before the mayoral runoff election in Detroit last week, four public polls had challenger Freman Hendrix running ahead of incumbent Kwame Kilpatrick by margins ranging from 7 to 21 percentage points.  On Election Day, two television stations released election day telephone surveys billed as "exit polls"** that initially put Hendrix ahead by margins of 6 and 12 percentage points.   One station (WDIV) declared Hendrix the winner. 

However, when all the votes were counted, Kilpatrick came out ahead by a six-point margin (53% to 47%).  This led to speculation about either the demise of telephone polling or the possibility of vote fraud (not an entirely unthinkable notion, given an ongoing FBI investigation into allegations of absentee voter fraud in Detroit).  The one prognosticator who got it right used an old-fashioned "key precinct" analysis that looked at actual results from a sampling of precincts rather than a survey of voters. 
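For those unfamiliar with the technique, here is a minimal sketch of what a "key precinct" projection involves: extrapolating the citywide outcome from actual returns in a handful of sampled precincts rather than from voter interviews. The precinct counts below are invented for illustration; a real analysis would choose and weight precincts based on their past voting history.

```python
# A minimal, hypothetical sketch of a "key precinct" projection: take
# actual returns from a sample of precincts and extrapolate to the full
# electorate. All numbers are invented for illustration.

# (candidate_a_votes, candidate_b_votes) reported in each sampled precinct
sampled_precincts = [
    (412, 530),   # precinct 1
    (388, 602),   # precinct 2
    (501, 455),   # precinct 3
    (350, 610),   # precinct 4
]

total_a = sum(a for a, _ in sampled_precincts)
total_b = sum(b for _, b in sampled_precincts)
total_votes = total_a + total_b

print(f"Projected share, candidate A: {100 * total_a / total_votes:.1f}%")
print(f"Projected share, candidate B: {100 * total_b / total_votes:.1f}%")
```

In practice, analysts typically also compare each key precinct's returns against its results in earlier elections to gauge the swing, a step omitted here for brevity.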

Measuring Poll Accuracy - Poli-Sci Prof. Charles Franklin posted thoughts and data showing a different way of measuring "accuracy" in the context of the California proposition polling.   His point - one that MP does not quarrel with - is to focus not on the average error but on the "spread" of the errors.  Money quote: 

The bottom line of the California proposition polling is that the variability amounted to saying the polls "knew" the outcome, in a range of some 9.6% for "yes" and  8.55% for "no". While the former easily covers the outcome, the latter only just barely covers the no vote outcome. And it raises the question of how much is it worth to have confidence in an outcome that can range over  9 to 10%?

Franklin also points out an important characteristic of the "Mosteller" accuracy measures I hastily cobbled together last week:

MysteryPollster calculates the errors using the "Mosteller methods" that allocate undecideds either proportionately or equally. That is standard in the polling profession, but ignores the fact that pollsters rarely adopt either of these approaches in the published results. I may post a rant against this approach some other day, but for now will only say that if pollsters won't publish these estimates, we should just stick to what they do publish-- the percentages for yes and no, without allocating undecideds

For the record: I have no great attachment to any particular method of quantifying poll accuracy (there are many), but Franklin's point is valid.   The notion of "accuracy" in polling involves more than a quick computation and is worthy of more considered discussion. 
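To make the two ideas concrete, here is a small sketch that scores a set of hypothetical yes/no ballot-measure polls three ways: with undecideds allocated proportionately, allocated equally (the two "Mosteller"-style treatments discussed above), and left as published. It also reports both the average error and the spread of the errors, the quantity Franklin emphasizes. The poll numbers are invented, not the actual California figures.

```python
# Hypothetical illustration of two ways to score poll accuracy on a
# yes/no ballot measure. Poll numbers are invented for illustration.

polls = [  # (yes %, no %) as published, undecideds not shown
    (42, 48),
    (40, 51),
    (45, 47),
    (38, 50),
]
actual_yes = 47.0  # final "yes" share of the two-way vote

def allocate(yes, no, method):
    """Allocate undecided respondents before computing error."""
    undecided = 100 - yes - no
    if method == "proportional":
        yes += undecided * yes / (yes + no)
    elif method == "equal":
        yes += undecided / 2
    return yes

for method in ("proportional", "equal", "as_published"):
    errors = []
    for yes, no in polls:
        est_yes = yes if method == "as_published" else allocate(yes, no, method)
        errors.append(est_yes - actual_yes)
    avg = sum(errors) / len(errors)
    spread = max(errors) - min(errors)
    print(f"{method:>13}: average error {avg:+.1f} pts, spread {spread:.1f} pts")
```

Seen this way, Franklin's point is that the spread tells you how much confidence the polls collectively deserved, regardless of which allocation rule flatters the average error.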

More on California - Democratic pollsters Mark Mellman and Doug Usher, who conducted internal surveys for the "No" campaigns against Propositions 74 and 76, summarized the lessons they learned for National Journal's Hotline (subscription required):

It is critical to test actual ballot language, rather than general concepts or initiative titles. It is tempting to test 'simplified' initiative descriptions, under the assumption that voters do not read the ballot, and instead vote on pre-conceived notions of each initiative. That is a mistake. Proper initiative wording -- in combination with a properly constructed sample that realistically reflects the potential electorate -- are necessary conditions for understanding public opinion on ballot initiatives. Pollsters who deviated from those parameters got it wrong.

Arkansas and the Zogby "Interactive" Surveys - A survey of Arkansas voters conducted using an internet-based "panel" yielded very different results from a telephone survey conducted using conventional random sampling.  Zogby had Republican Asa Hutchinson leading Democratic Attorney General Mike Beebe by ten points in the race for Governor (49% to 39%), while a University of Arkansas Poll had Beebe ahead by a nearly opposite margin (46% to 35%).  The difference led to an in-depth (subscription only) look at Internet polling by the Arkansas Democrat-Gazette (also reproduced here) and a release by Zogby that provides more explanation of their methodology (hat tip: Hotline).   

The "Generic" Congressional Vote - Roll Call's Stuart Rothenberg found a "clear lesson" after looking at how the "generic" congressional vote questions did at forecasting the outcome in 1994 (here by subscription or here on Rothenberg's site).

When it comes to the question of whether voters believe their own House Member deserves re-election, Republicans are in no better shape now than Democrats were at the same time during the 1993-1994 election cycle.

Rothenberg also looked at how different question wording in the so-called generic vote can make for different results.  (Yes, Rothenberg's column appeared three weeks ago, but MP learned of it over the weekend in this item by the New Republic's Michael Crowley, posted on TNR's new blog, The Plank.)

Bush's Job Rating - Finally, all of these stories came during a ten-day period that also saw new poll releases from Newsweek, Fox News, AP-IPSOS, NBC News/Wall Street Journal, the Pew Research Center, ABC News/Washington Post and Zogby.  [Forgot CBS News.]  Virtually all showed new lows in the job rating of President George W. Bush.  As always, Franklin's Political Arithmetik provides the killer graph:

[Franklin_1109: graph of President Bush's job approval trend, from Political Arithmetik]


In another post, Franklin (who was busy last week) uses Gallup data to create job approval charts for twelve different presidents, each drawn from an identical (and therefore comparable) graphic perspective.  His conclusion:

President George W. Bush's decline more closely resembles the long-term decline of Jimmy Carter's approval than it does the free fall of either the elder President Bush or President Nixon.

Elsewhere, some who follow the Rasmussen automated poll thought they saw some relief for the President in a small uptick last week.  Others were dubious.  A look at today's Rasmussen numbers confirms the instincts of the skeptics: last week's uptick has vanished. 

All of these are topics worthy of more discussion.  My cup runneth over.

**MP believes the term "exit poll" should apply only to surveys conducted by intercepting random voters as they leave the polling place, although that distinction may blur as exit pollsters are forced to rely on telephone surveys to reach the rapidly growing number who vote by mail. 

Posted by Mark Blumenthal on November 14, 2005 at 02:52 PM in Initiative and Referenda, Polls in the News, President Bush, The 2005 Race | Permalink | Comments (3)