Election Results and Lessons


One lesson for MP is to allow for more blogging time on the day after an election.  But on the theory that a little late is better than never, here is a quick roundup of developments from Tuesday’s results:

California.  For all the variation in poll results earlier in the campaign, the public polls converged and compared reasonably well to the final results.  MP was right about one thing:  To the extent that the final results from conventional telephone surveys (Field, LA Times and PPIC) differed from those using unconventional methodologies (SurveyUSA, Stanford/Knowledge Networks and Polimetrix), reality fell somewhere in between.  With six pollsters and at least five different propositions tested on each survey, the complete listing of results is a bit cumbersome, so MP will simply present a table showing the average result for the five propositions that all six pollsters asked about compared to the average actual result (those who want to see all the detail can find results here and a full summary of the polling results here).   

The table includes two common measures of poll accuracy, although MP strongly recommends against making too much of the differences in rankings.  By and large, the final poll results from each organization were within sampling error of the actual outcome, so random chance may have had much to do with the final ranking.  Also, the measurement of accuracy depends on how we deal with the “undecided” percentage, and, not surprisingly, different approaches can shuffle the rankings.  We will save that discussion for another day, but for now know that the first measure (“Mosteller 2”) allocates the undecided proportionally, while the second (“Mosteller 5”) divides the undecided evenly between “yes” and “no.”  The smaller the number, the more accurate the result.  Either way, Field ranked first and SurveyUSA second, with the others close behind.
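For readers who want to tinker with these measures themselves, here is a minimal sketch of the two undecided-allocation approaches in Python.  The discussion above does not spell out the exact error formula used in the table, so the sketch assumes a simple one: the absolute difference between the reallocated “yes” share and the actual “yes” share, averaged across propositions.  The numbers in it are hypothetical placeholders, not Tuesday’s results.

# Minimal sketch of the two undecided-allocation approaches described above.
# Assumption: the error is the absolute difference between the poll's
# reallocated "yes" share and the actual "yes" share, averaged across
# propositions. All numbers below are hypothetical, not actual results.

def allocate_proportionally(yes, no, undecided):
    """Split the undecided in proportion to the decided yes/no split ("Mosteller 2")."""
    decided = yes + no
    return yes + undecided * (yes / decided), no + undecided * (no / decided)

def allocate_evenly(yes, no, undecided):
    """Split the undecided 50/50 between yes and no ("Mosteller 5")."""
    return yes + undecided / 2, no + undecided / 2

def average_error(polls, actual_yes, allocate):
    """Average absolute error on the "yes" share after allocating the undecided."""
    errors = []
    for (yes, no, undecided), actual in zip(polls, actual_yes):
        alloc_yes, _ = allocate(yes, no, undecided)
        errors.append(abs(alloc_yes - actual))
    return sum(errors) / len(errors)

# Hypothetical final-poll numbers (yes, no, undecided) and actual "yes" shares
polls = [(45, 48, 7), (40, 51, 9)]
actual_yes = [47.0, 41.5]

print("Proportional allocation:", round(average_error(polls, actual_yes, allocate_proportionally), 1))
print("Even allocation:", round(average_error(polls, actual_yes, allocate_evenly), 1))

As with the table, the smaller the resulting number, the more accurate the poll under that allocation rule.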

The bigger story in California was probably not the final poll results but the large differences among the various polls earlier in the campaign.  While the final results from each organization converged, they were quite different just a week or two before, especially for SurveyUSA and Stanford/Knowledge Networks (as indicated in the table above and discussed at length here and here).  On Tuesday, Political Science Prof. Charles Franklin took a closer, graphical look at the variability in results of the California propositions over the course of the campaign (here, here, here, here and here).  He compared propositions, not pollsters, but his approach suggests another avenue of inquiry: Were some pollsters more variable than others in this campaign?  What does that variability tell us about their methodologies?  One simple way to start looking is sketched below.
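As a rough starting point for that inquiry, here is a minimal sketch of one simple volatility measure: the standard deviation of a pollster’s repeated readings on the same proposition over the course of the campaign.  The pollster names and numbers below are hypothetical placeholders, not the actual California data, and this is only one of several reasonable ways to quantify variability.

# One simple way to ask whether some pollsters were more variable than others:
# the standard deviation of each pollster's repeated "yes" readings on the
# same proposition, in date order. Names and numbers are hypothetical.
from statistics import pstdev

readings = {
    "Pollster A": [42, 45, 44, 43],
    "Pollster B": [38, 49, 41, 46],
}

for pollster, series in readings.items():
    print(f"{pollster}: standard deviation = {pstdev(series):.1f} points")

A larger standard deviation would flag a pollster whose readings bounced around more over the campaign, though distinguishing methodological noise from real movement in opinion would require looking at all the pollsters side by side.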

Ohio.  MP’s instincts failed him with respect to the venerable Columbus Dispatch mail-in poll, which after decades of outperforming conventional telephone surveys turned in one of the more spectacularly inaccurate performances in recent memory.  For example, the final Dispatch survey (subscription required) conducted October 24 through November had Issue 2 (vote by mail) running 26 points ahead (59% to 33%).  It lost on Tuesday by 28 points (36% to 64%).  Similarly, the poll had Issue 3 (limits on campaign contributions) running 36 points ahead (61% to 25%).  It lost by an opposite 36-point margin (32% to 68%).  These results had MP seriously wondering whether the pollsters or election officials had mistakenly transposed “yes” and “no” in their tables.  The discrepancy was nearly as great for Issues 4 and 5 (on congressional redistricting and the role of the secretary of state).

MP will take a much closer look at what happened to the Dispatch poll in his next post, but if nothing else, these results underscore how “shark infested” the waters can be with respect to polling on ballot propositions (as another pollster put it in an email). 

Exit Polls. Alas, we have no true exit polls to ponder in Virginia, New Jersey or California, as the television networks and the LA Times opted out this year.  There was one report to the contrary on election night (based on what, we cannot say), although for all we know, with five pollsters (!) and a campaign budget of at least $29 million, the Corzine campaign may have conducted its own exit poll.

We can report on two Election Day telephone polls conducted elsewhere.  Pace University did a telephone poll among voters in the New York mayor’s race.  Also, while not an “exit poll” per se, AP-IPSOS apparently conducted an immediate post-election telephone survey Tuesday night among those who reported voting in New Jersey (their release also includes two sets of cross-tabulations).  Political junkies in need of an exit poll fix are advised to follow the links.

[Thanks to an alert reader for correcting MP about the above.  The Pace and AP polls both appear to have very similar methodologies.  Further clarification (posted 11/14): The AP-IPSOS survey used a random digit dial (RDD) sample, while the Pace University survey sampled from a list of registered voters.]

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.