
May 27, 2005

Judicial Filibuster: Post Mortem

MP is admittedly still playing catch up, but the explosion of polling on judicial nominees deserves a bit of a follow-up.  Viewed side-by-side, the various public polls offered approaches and questions that were as widely divergent as the results.  However, they did paint a consistent picture:  The majority of Americans were not engaged in the debate and did not have clear, previously formed opinions on the specific conflict.  Looking forward, the results also tell us about the underlying attitudes that may ultimately influence debates over the use of the filibuster for future judicial nominations. 

The various surveys were very consistent in one sense:  They showed most Americans paying little attention to the "nuclear option" debate.  Four national polls released in the last two weeks all asked about how closely Americans were following the debate using nearly identical answer categories.  Although the questions showed minor variation, all four found few voters paying "very close" attention and majorities saying they followed the debate "not too closely" or "not at all:"

  • CBS - May 22-24: How closely would you say you have been following the debate about the filibuster in the Senate -- very closely, somewhat closely, not very closely, or not at all closely? 10% very, 24% somewhat - 66% not very/not at all closely
  • Gallup($) - May 20-22:  How closely have you been following the news about the use of the filibuster on judicial nominations in the U.S. Senate -- very closely, somewhat closely, not too closely, or not at all? 17% very, 26% somewhat - 57% not too, not at all closely
  • Washington Post-ABC - May 18-22 How closely have you been following news about the debate in the U.S. Senate over filibuster rules involving the confirmation of federal judges - very closely, somewhat closely, not too closely or not closely at all? 16% very, 31% somewhat - 53% not too/not at all closely
  • Pew - May 11-16: How closely did you follow news about the debate over changing Senate rules to stop the Democrats from using the filibuster against some of President Bush's judicial nominees--very closely, fairly closely, not too closely, or not at all closely? 14% very, 20% fairly, 65% not too/not at all closely

The CBS poll took this a step further, showing that roughly half of Americans are unsure of what "filibuster" means.  When they asked respondents to say in their own words, "what the term filibuster means to you," only 37% "accurately defined it as involving an extended debate or as a procedural move to delay a vote."  Another 13% expressed some awareness that a filibuster involves political speech. Nearly half (46%) could not answer the question at all. 

[Chart: CBS open-ended responses defining "filibuster"]


These questions demonstrate the point I made a month ago:

The underlying issue is both complex and remote.  Few Americans are well informed about the procedures and rules of the Senate, and few have been following the issue closely...So true "public opinion" with respect to judicial filibusters is largely unformed.   When we present questions about judicial nominees in the context of a survey interview, many respondents will form an opinion on the spot.  Results will thus be very sensitive to question wording.  No single question will capture the whole story, yet every question inevitably "frames" the issue to some degree [emphasis added].

The various pollsters clearly had this challenge in mind when they designed their surveys.  Although their questions and results varied widely, their findings reveal some conflicting attitudes at work below the surface.  On the one hand, large majorities said the Senate should take an assertive role in scrutinizing judicial appointments, that bi-partisan approval is preferable to narrow party-line votes and that the minority "should be able to block some judges they feel strongly about." 

  • 74% prefer that "both parties in the Senate should have to agree that a person should be a judge, even if that takes a long time" rather than "whichever party has the most Senators should get to decide whether a person should be a judge, even if the other party disagrees" (17% - CBS).
  • 63% believe that confirmation of Federal judges should require "a larger majority of 60 votes" rather than "a majority of 51 votes" (CBS).
  • 62% agree that "the minority party ought to be able to block some of the judges they feel strongly about because judges are appointed to the federal courts for life terms" (and 30% disagree - Pew)
  • 78% would rather the Senate "take an assertive role in examining each nominee" rather than "give the president's judicial nominees the benefit of the doubt and approve them without a lot of scrutiny" (18% - AP-IPSOS)
  • 56% prefer that "the Senate make its own decision about the fitness of each nominee to serve," rather than "generally confirm[ing] the president's judicial nominees as long as they are honest and competent" (34% - NBC/WSJ).

On the other hand, in at least one instance, a majority of Americans also agreed with the Republican argument that Bush should be able to win confirmation of a nominee supported by a majority of the Senators. Specifically, the Pew Research Center found that 53% agree that "the Republicans won the last election so President Bush should be able to appoint anyone he wants to the federal courts if a majority of Senators agree" (43% disagreed). The Pew report noted that 31% of their sample agreed with both the Democratic argument (that Senate minorities "ought to be able to block" some nominees) and the Republican argument (that Bush should be able to appoint those supported by a majority).   

Thus, not surprisingly, polls showed widely divergent findings when they tried to put it all together and ask for opinions about the "filibuster" controversy.  While the results that follow are all over the map, several patterns seem evident.  First, when pollsters offer more information about the filibuster debate in their question, the percentage that cannot answer tends to be lower.  Second, most of the "informed" questions show more support for the idea of preserving the filibuster than ending it.  However, in reading these results, we need to keep in mind that they tell us much less about the pre-existing positions of the respondents than about the way respondents react to the information carried by the questions themselves: 

CBS (May 22-24): Do you think filibusters are mostly good because they allow the minority party in the Senate to express its views and even block legislation or nominees, or do you think filibusters are mostly bad, because they can obstruct the proposals of the majority party in the Senate?  34% mostly good, 34% mostly bad, 3% varies, 29% not sure/don't know

Quinnipiac (May 18-23)- As you may know, a filibuster can be used to prevent a vote on judicial nominations in the Senate. Which comes closer to your point of view? A) The filibuster should be used to keep unfit judges off the bench. or B) The filibuster should not be used, because nominees deserve a vote by the full Senate. 55% used, 36% not used

Washington Post-ABC (May 18-22) As you may know, the president nominates federal judges and the Senate votes whether to confirm them. A Senate rule called a filibuster allows a minority of senators to block a final vote on a judicial appointment even if a majority of senators supports the nominee. (Republicans want to eliminate the filibuster rule for judges, saying it's unfair that a minority can block a vote by the full Senate.) (Democrats want to keep the filibuster rule for judges, saying the minority needs a way to block nominees that they strongly oppose.) What about you: Do you prefer to (eliminate) the filibuster rule, or to (keep) the filibuster rule for judicial nominees? 43% eliminate filibuster, 40% keep filibuster, 17% not sure

Gallup($)  (May 20-22): As you may know, President Bush has nominated some people as federal judges who have not yet been confirmed by the Senate. The Democrats in the Senate have used a filibuster to prevent those nominees from being confirmed. In response, the Republicans in the Senate are trying to change the rules to prevent the use of filibusters on judicial nominations so that all are subject to an up-or-down vote. Which comes closest to your view -- [ROTATED:] you want to see the filibuster rule preserved and you do not want those judicial nominees confirmed, you want to see the filibuster rule preserved, but you would like to see the Senate have an up-or-down vote on those nominees, or you want to see the filibuster rules changed so that those judicial nominees are subject to an up-or-down vote?  19% preserve filibuster do not vote, 34% preserve filibuster but vote, 35% change rules, 12% unsure.

Pew (May 11-16) Do you favor or oppose changing the rules of the Senate to stop the use of filibusters against judicial nominees? 28% favor, 37% oppose, 35% don't know.

Time/SRBI (May 10-12):  Some Republicans in the Senate want to eliminate the ability of Democrats to use the filibuster, or extended debate, to block the Senate from voting on some of President Bush's judicial nominees. Do you think the Republicans should or should not be able to eliminate the filibuster in this case? 28% should, 59% should not, 14% unsure

Of course, the compromise agreed to earlier this week makes much of this moot, at least for now.  But various commentators -- Rutgers professor Ross K. Baker, Mickey Kaus and others -- argue that the deal effectively "kicks the can down the road," delaying the fight over the filibuster until some future Supreme Court nomination.  What do these results tell us about the role public opinion might play in some future fight over ending the filibuster?

This may be the "Democratic pollster" in me talking, but a delay in the narrow debate over whether to end the judicial filibuster tends to serve the Democratic position.  Two reasons: First, a higher profile debate over a Supreme Court nomination might better engage public opinion, especially on the notion of giving minority parties the ability to scrutinize and block appointments they feel strongly about.  Second, by agreeing to confirm more controversial nominees this time, the compromise weakens the Republican argument that Democrats have unfairly blocked their nominees.

Of course, as Kaus points out, if Bush nominates Owen, Brown or Pryor to the Supreme Court, all bets are off.

[6/2 - corrected references to Washington Post-ABC polls]

Posted by Mark Blumenthal on May 27, 2005 at 11:41 AM in Divergent Polls, Polls in the News | Permalink | Comments (5)

May 24, 2005

The Pew Typology

While I was focused on the AAPOR conference, I missed the release two weeks ago by the Pew Research Center of their massive new "Typology" study (full pdf version here).  Their study, which classified voters into nine "homogeneous groups based on values, political beliefs, and party affiliation," shows that political public opinion is far richer than the differences between Democrats and Republicans.  As implied by the report's title -- "Beyond Red vs. Blue" -- the divisions within the political parties may be more interesting than the divisions between them.   For those that want a thorough understanding of American public opinion, the Pew typology study is certainly a "must read." Its reception in the blogosphere is also instructive to those of us that conduct polls and survey research.

One of the most intriguing things about the Pew report is the web page ("Where Do You Fit?") that allows anyone to answer a series of questions and find out which of the nine categories they fit into.  This page has generated much interest in the blogosphere.  If you are not familiar with the study, it's a great place to start.  The first principal findings page also has a summary of the nine groups.

Several readers have asked for my take.  Here goes: 

The study's methodology page includes a two-paragraph description of how they created the typology.   Let me try to unpack their description a bit (with the help of a Pew researcher who clarified a few issues):

The value dimensions used to create the typology are each based on the combined responses to two or more survey questions. The questions used to create each scale were those shown statistically to be most strongly related to the underlying dimension.

They started with the 25 questions listed on the Where Do You Fit page and performed a statistical factor analysis which identified a smaller number of "underlying dimensions" among the questions.   They found, for example, a common pattern in the way respondents answered questions about military strength vs. diplomacy, the use of force to defeat terrorism and the willingness to fight for your country.  Respondents that supported military force on one question also tended to support it on the other two.  So informed by the factor analysis, they created a single "scale" that combined the three military force questions into a single variable.  They repeated this process for eight specific value dimensions that they list early in the report (under the heading "Making the Typology," or on page 9 of the full PDF version).
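For readers who want to see the mechanics, here is a minimal sketch of that scale-building step, assuming the general recipe described above: factor-analyze the items, see which ones load together, and average them into a scale. The item names, response coding and number of factors below are illustrative stand-ins, not Pew's actual specification or software.

```python
# Hypothetical sketch of building value scales from forced-choice items.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Rows = respondents, columns = the 25 typology items, coded -1/+1 for the two
# alternative statements (simulated data here).
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.choice([-1, 1], size=(1000, 25)),
                         columns=[f"q{i+1}" for i in range(25)])

# Factor analysis identifies a smaller number of underlying dimensions.
fa = FactorAnalysis(n_components=8, random_state=0).fit(responses)
loadings = pd.DataFrame(fa.components_.T, index=responses.columns)

# Suppose q3, q7 and q12 are the three "use of military force" items that load
# on the same factor; combine them into a single scale by averaging.
military_items = ["q3", "q7", "q12"]
responses["use_of_force_scale"] = responses[military_items].mean(axis=1)
```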

The description of how they constructed the typology continues:

Each of the individual survey questions use a "balanced alternative" format that presents respondents with two statements and asks them to choose the one that most closely reflects their own views. To measure intensity, each question is followed by a probe to determine whether or not respondents feel strongly about the choice they selected.

In past typologies, the Pew researchers asked respondents a series of "agree-disagree" questions (see this page for examples from 1999).  The problem with that format is something survey methodologists call "acquiescence bias" - a tendency of some respondents to agree with all questions.  So this year, to get around that problem, they used a format that asked respondents to choose between "balanced alternative" statements.  To measure the intensity of feeling, they also asked a follow-up question in each case.  "Do you feel STRONGLY about that, or not?"  [Clarification:  Pew switched to the balanced alternatives format on their 1994 typology study, although the cited 1999 survey has examples of the older agree-disagree format].

Consider this example.  On past surveys, they asked respondents whether they agreed or disagreed with this statement:  "The best way to ensure peace is through military strength."  This time, they asked respondents to choose between two statements, the one used previously and an alternative:  "Good diplomacy is the best way to ensure peace."   

The use of these forced choices bothered some in the blogosphere.  New Donkey's Ed Kilgore wrote:

Question after question, the survey lays out a long series of false choices that you are required to make: military force versus diplomacy; environmental protection versus economic growth; gay people and immigrants and corporations and regulations G-O-O-D or B-A-A-D. Other than agreeing with a proposition mildly rather than strongly, there's no way to register dismay over the boneheaded nature of these choices.

[Jeff Alworth has a similar critique at Blueoregon].

He protests a bit too much.  The questions used to create the typology are intended as "broadly oriented values" measures (p. 10), not positions on specific policy proposals.  Moreover, the language of the questionnaire anticipates that respondents may not find it easy to agree with one statement, so it asks respondents to choose the one that "comes closer to your own views even if neither is exactly right."  As Kilgore recognizes, the measure of intensity (feel strongly or not) provides one way to gauge whether respondents had trouble choosing.  However, the actual survey questionnaire (as opposed to the online "Where Do You Fit" version) included another out:   Respondents could volunteer a "neither" or "don't know" response; these ranged from 6% to 14%.

Continuing with the explanation of how Pew constructed the typology: 

As in past typologies, a measure of political attentiveness and voting participation was used to extract the "Bystander" group, people who are largely unengaged and uninvolved in politics.

Simple enough:  They defined "Bystanders" as those who pay little attention or rarely vote, and set them aside before turning to the heart of the procedure:

A statistical cluster analysis was used to sort the remaining respondents into relatively homogeneous groups based on the nine value scales, party identification, and self reported ideology. Several different cluster solutions were evaluated for their effectiveness in producing cohesive groups that are distinct from one another, large enough in size to be analytically practical, and substantively meaningful. The final solution selected to produce the new political typology was judged to be strongest on a statistical basis and to be most persuasive from a substantive point of view.

So what is "cluster analysis?"  It is a statistical technique that attempts to sort respondents into groups where the individuals within each group are as similar as possible but the differences between the groups are as large as possible.   Typically, the researcher must decide what questions to use as inputs and how many groups to create and the cluster analysis software sorts out respondents accordingly.

One minor technical issue is that a true cluster analysis can only work on a complete sample.  It functions by a process of repeatedly sorting, comparing groups and re-sorting until it reaches a statistically optimal result.  This is an important point for those trying to use the online Typology Test -- it will not produce precisely the same result as the true cluster analysis because the online version classifies respondents one at a time.  As I understand it, the Pew researchers designed the online version to provide a very close approximation of how individuals get assigned, but it may not yield exactly the same classification as on the full survey. 
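To make that distinction concrete, a one-at-a-time classifier can only compare a new respondent against the fixed group centers produced by the full-sample analysis, rather than re-running the clustering with that respondent included. A toy sketch (the centers and scoring rule are hypothetical, not Pew's actual online scoring):

```python
# Assigning a single new respondent to the nearest existing cluster center.
import numpy as np

def classify_one(respondent, centers):
    """Return the index of the nearest cluster center."""
    distances = np.linalg.norm(centers - respondent, axis=1)
    return int(np.argmin(distances))

centers = np.array([[0.0, 0.0], [1.0, 1.0]])          # stand-in cluster centers
print(classify_one(np.array([0.9, 0.8]), centers))    # -> 1
```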

A much bigger issue is that cluster analysis leaves a lot of room for subjective judgment by the researcher.  The statistical procedure produces no magically right or wrong result.  Instead, the typology "works" if, in the view of the researcher, it yields interesting, useful or persuasive results.  The Pew authors acknowledge this issue when they say they aimed for a balance of "analytical practicality" and "substantive meaning" and looked at "several different cluster solutions" before settling on the one they liked best. 

The key point:  The Pew Typology is not meant as a way to keep score, to tell us who is ahead or behind in the political wars.  We have measures of vote preference, job approval and party identification that perform that task admirably.  Rather, the Typology is more useful in understanding the undercurrents of opinion at work beneath the current party ID or vote preference numbers.  Armed with this knowledge, we can then speculate endlessly about what the political score might be in the future.  If the Pew Typology provides useful data for that debate, it succeeds. 

What I find most intriguing about the Pew Typology is the way it has been received and dissected in the blogosphere.  At the AAPOR conference, I wrote about plenary speaker, Prof. Robert Groves, who noted that traditional measures of survey quality put great emphasis on minimizing various sources of error but do not take into account the perspective of the user.  Perhaps, he suggested, we need to think more about the perceived credibility and relevance of the data. 

Turning to the blogosphere, I note two things.  First, as of today, when I enter "Pew typology" into Google, seven of the top ten sites (including the top two) are blogs.  The other three are for the Pew site itself.  Thus, a side point:  If blogs have gained influence, Google is a big reason why.

Second, as I sift through the commentary by bloggers on the left and right, most of the debate is through the Pew typology rather than about it.   Yes, some are critical, but the debate mostly centers on the appropriate meaning and interpretation of the results, not whether they are worth considering.  Others may disagree (that's the blogosphere after all), but from my perspective the Pew typology has achieved considerable relevance and credibility on all sides of the blogosphere.

That says a lot.

[Typos corrected]

Posted by Mark Blumenthal on May 24, 2005 at 09:10 AM in Polls in the News | Permalink | Comments (3)

May 17, 2005

AAPOR Remainders

Some additional items of interest from the AAPOR conference

Peter Coy of Businessweek was also on hand and filed some commentary.  One point he emphasized, that I neglected in yesterday's post, was the steps the exit pollsters plan to take to avoid future errors:

On May 14 in Miami Beach, the pollsters laid out their plan of action in a forum at the annual meeting of the American Association for Public Opinion Research. They said they hope to prevent leaks of exit-poll data, shrink the exit-poll questionnaire so more voters will be willing to complete it, get permission for interviewers to stand closer to the exits to catch more voters on their way out, and improve the recruiting and training of the interviewers.

Will Lester of the Associated Press was also on hand and filed a similar story on Saturday's exit poll presentation:

Better training of interviewers to get a proper sample of voters after they cast ballots will be key to improving the performance of exit polls, one pollster who handled the 2004 election surveys said Saturday.

Finally, Lester also reported on newly updated research on mobile-phone-only households from the Centers for Disease Control's National Health Interview Survey.  Their study showed:

Slightly more than 6 percent of households do not have a traditional landline phone, but do have at least one wireless phone. About 5.5 percent of adults have only a mobile phone.

But for copyright law, I'd be tempted to quote these stories in full.  Read them all.

Posted by Mark Blumenthal on May 17, 2005 at 04:03 PM in Exit Polls, General, Sampling Issues | Permalink | Comments (0)

May 16, 2005

AAPOR: Exit Poll Presentation

Unfortunately, the sleep deprivation experiment that was my AAPOR conference experience finally caught up with me Saturday night.  So this may be a bit belated, but after a day of travel and rest, I want to provide those not at the AAPOR conference with an update on some of the new information about the exit polls presented on Saturday.  Our lunch session included presentations by Warren Mitofsky, who conducted the exit polls for the National Election Pool (NEP), Kathy Frankovic of CBS News, and Fritz Scheuren of the National Opinion Research Center (NORC) at the University of Chicago.

Mitofsky spoke first and explicitly recognized the contribution of Elizabeth Liddle (that I described at length a few weeks ago).  He described "within precinct error" (WPE), the basic measure Mitofsky had used to gauge the discrepancy between the exit polls and the count within the sampled precincts:  "There is a problem with it," he said, explaining that Liddle, "a woman a lot smarter than we are," had shown that the measure breaks down when used to look at how error varied by the "partisanship" of the precinct.  The tabulation of error across types of precincts - heavily Republican to heavily Democratic - has been at the heart of an ongoing debate over the reasons for the discrepancy between the exit poll results and the vote count.

Mitofsky then presented the results of Liddle's computational model (including two charts) and her proposed "within precinct Error_Index" (all explained in detail here).  He then presented two "scatter plot" charts.  The first showed the values of the original within precinct error (WPE) measure by the partisanship of the precinct.  Mitofsky gave MP permission to share that plot with you, and I have reproduced it below.

[Chart: WPE by precinct partisanship]


The scatter plot provides a far better "picture" of the error data than the table presented in the original Edison-Mitofsky report (p. 36), because it shows the wide, mostly random dispersion of values.  Mitofsky noted that the plot of WPE tends to show an overstatement mostly in the middle precincts, as Liddle's model predicted.  A regression line drawn through the data shows a modest upward slope.
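For readers who want to see the mechanics of that tabulation, here is a hedged sketch using simulated data rather than the actual exit poll file. WPE is taken here as the Kerry-minus-Bush margin among exit poll respondents minus the same margin in the official count, so positive values mean the poll overstated Kerry; sign conventions vary, so this may differ from the convention used in the Edison-Mitofsky report.

```python
# Simulated WPE tabulation: compute the per-precinct error, then regress it on
# precinct partisanship to get the slope of the line through the scatter.
import numpy as np

rng = np.random.default_rng(0)
n_precincts = 1250

official_kerry = rng.uniform(0.05, 0.95, n_precincts)   # precinct partisanship
poll_kerry = np.clip(official_kerry + rng.normal(0.03, 0.08, n_precincts), 0, 1)

official_margin = (2 * official_kerry - 1) * 100   # Kerry minus Bush, in points
poll_margin = (2 * poll_kerry - 1) * 100
wpe = poll_margin - official_margin

slope, intercept = np.polyfit(official_kerry, wpe, 1)
print(f"mean WPE = {wpe.mean():.1f} points, regression slope = {slope:.1f}")
```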

Mitofsky then presented a similar plot of Liddle's Error Index by precinct partisanship.  The pattern is flatter and more uniform, and the slope of the regression line is near zero.  It is important to remember that this chart, unlike all of Liddle's prior work, is based not on randomly generated "Monte Carlo" simulations, but on the actual exit poll data. 

[Chart: Liddle's Error Index by precinct partisanship]


Thus, Mitofsky presented evidence showing, as Liddle predicted, that the apparent pattern in the error by partisanship -- a pattern that showed less error in heavily Democratic precincts and more error in heavily Republican precincts -- was mostly an artifact of the tabulation.

Kathy Frankovic, the polling director at CBS, followed Mitofsky with another presentation that focused more directly on explaining the likely root causes of the exit poll discrepancy.   She talked in part about the history of past internal research on the interactions between interviewers and respondents in exit polls.  Some of this has been published, much has not.  She cited two specific studies that were new to me:

  • A fascinating pilot test in 1991 looked for ways to boost response rates.  The exit pollsters offered potential respondents a free pen as an incentive to complete the interview.  The pen bore the logos of the major television networks.  The pen-incentive boosted response rates, but it also increased within-precinct-error (creating a bias that favored the Democratic candidate), because as Frankovic put it, "Democrats took the free pens, Republicans didn't."  [Correction (5/17):  The study was done in 1997 on VNS exit polls conducted for the New York and New Jersey general elections.  The experiment involved both pens and a color folder displayed to respondents that bore the network logos and the words "short" and "confidential." It was the folder condition, not the pens, that appeared to increase response rates and introduce error toward the Democrat.  More on this below]
  • Studies between 1992 and 1996 showed that "partisanship of interviewers was related to absolute and signed WPE in presidential" elections, but not in off-year statewide elections.  That means that in those years, interviews conducted by Democratic interviewers showed a higher rate of error favoring the Democratic candidate for president than Republican interviewers.

These two findings tend to support two distinct yet complementary explanations for the root causes of the exit poll problems.  The pen experiment suggests that an emphasis on CBS, NBC, ABC, FOX, CNN and AP (whose logos appear on the questionnaire, the exit poll "ballot box" and the ID badge the interviewer wears and which the interviewers mention in their "ask") helps induce cooperation from Democrats, "reluctance" from Republicans.

Second, the "reluctance" may also be an indirect result of the physical characteristics of the interviewers that, as Frankovic put it, "can be interpreted by voters as partisan."  She presented much material on interviewer age (the following text comes from her slides which she graciously shared):   

    In 2004 Younger Interviewers...

        * Had a lower response rate overall
            - 53% for interviewers under 25
            - 61% for interviewers 60 and older
        * Admitted to having a harder time with voters
            - 27% of interviewers under 25 described respondents as very cooperative
            - 69% of interviewers over 55 did
        * Had a greater within precinct error

Frankovic also showed two charts showing that since 1996, younger exit poll interviewers have consistently had a tougher time winning cooperation from older voters.  The response rates for voters age 60+ were 14 to 15 points lower for younger interviewers than older interviewers in 1996, 2000 and 2004.  She concluded: 

IT'S NOT THAT YOUNGER INTERVIEWERS AREN'T GOOD - IT'S THAT DIFFERENT KINDS OF VOTERS MAY PERCEIVE THEM DIFFERENTLY

  • Partisanship isn't visible - interviewers don't wear buttons -- but they do have physical characteristics that can be interpreted by voters as partisan. 

  • And when the interviewer has a hard time, they may be tempted to gravitate to people like them.

Frankovic did not note the age composition of the interviewers in her presentation, but the Edison-Mitofsky report from January makes clear that the interviewer pool was considerably younger than the voters they polled.  Interviewers between the ages of 18 and 24 covered more than a third of the precincts (36% - page 44), while only 9% of the voters in the national exit poll were 18-24 (tabulated from data available here).   These results imply that more interviewers "looked" like Democrats than Republicans, and this imbalance introduced a Democratic bias into the response patterns.

Finally, Dr. Fritz Scheuren presented findings from an independent assessment of the exit polls and precinct vote data in Ohio commissioned by the Election Science Institute.  His presentation addressed the theories of vote fraud directly.

Scheuren is the current President of the American Statistical Association and Vice President for Statistics at NORC. He was given access to the exit poll data and matched it independently to vote return data. [Correction 5-17:   Scheuren had access to a precinct-level data file from NEP that included a close approximation of the actual Kerry vote in each of the sample precincts, but did not identify those precincts. Scheuren did not independently confirm the vote totals.]

His conclusion (quoting from the Election Science press release):

The more detailed information allowed us to see that voting patterns were consistent with past results and consistent with exit poll results across precincts. It looks more like Bush voters were refusing to participate and less like systematic fraud.

Scheuren's complete presentation is now available online and MP highly recommends reading it in full.

[ESI also presented a paper at AAPOR on their pilot exit poll study in New Mexico designed to monitor problems with voting.  It is worth downloading just for the picture of the exit poll interviewer forced to stand next to a MoveOn.org volunteer, which speaks volumes about another potential source of problems].

The most interesting chart in Scheuren's presentation compared support for George Bush in 2000 and 2004 in the 49 precincts sampled in the exit poll.   If the exit poll had measured fraud in 2004, and had fraud occurred in these precincts in 2004 and not 2000, one would expect to see a consistent pattern in which the precincts overstating Kerry fell on a separate parallel line, indicating higher values in 2004 than 2000.  That was not the case. A subsequent chart showed virtually no correlation between the exit poll discrepancy and the difference between Bush's 2000 and 2004 votes.

[Chart: Bush's share of the vote in 2000 vs. 2004 in the 49 sampled precincts]
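As a rough illustration of the logic of that check, here is a sketch with made-up numbers for the 49 precincts rather than ESI's data: if fraud added in 2004 (but not 2000) explained the exit poll discrepancy, precincts with larger discrepancies should also show larger jumps in Bush's share from 2000 to 2004, producing a positive correlation rather than the near-zero one Scheuren reported.

```python
# Correlating the per-precinct exit poll discrepancy with the 2000-to-2004
# change in Bush's share (simulated data).
import numpy as np

rng = np.random.default_rng(1)
bush_2000 = rng.uniform(0.2, 0.8, 49)
bush_2004 = np.clip(bush_2000 + rng.normal(0.01, 0.03, 49), 0, 1)
discrepancy = rng.normal(-3.0, 5.0, 49)   # exit poll error per precinct, in points

swing = (bush_2004 - bush_2000) * 100     # change in Bush share, 2000 -> 2004
r = np.corrcoef(discrepancy, swing)[0, 1]
print(f"correlation between discrepancy and swing: r = {r:.2f}")
```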


Typos corrected - other corrections on 5/17

UPDATE & CLARIFICATION (5/17) More about the 1997 study: 

The experimental research cited above was part of the VNS exit poll of the New Jersey and New York City General Elections in November, 1997.  While my original description reflects the substance of the experiment, the reality was a bit more complicated.

The experiment was described in a paper presented at the 1998 AAPOR Conference authored by Daniel Merkle, Murray Edelman, Kathy Dykeman and Chris Brogan.  It involved two experimental conditions:  In one test, interviewers used a colorful folder over their pad of questionnaires that featured "color logos of the national media organizations" and "the words 'survey of voters,' 'short' and 'confidential.'" On the back of the folder were more instructions on how to handle those who hesitated or refused.   The idea was to "better standardize the interviewer's approach and to stress a few key factors" to both the interviewer and the respondent, intended to lead to better compliance. 

In a second test, interviewers used the folder and offered a pen featuring logos of the sponsoring news organizations.   A third "control" condition used the traditional VNS interviewing technique without any use of  a special folder or pen. 

There was no difference between the folder and folder/pen conditions so the two groups were combined in the analysis.   The results showed that both the folder and folder/pen conditions slightly increased response rates but also introduced more error toward the Democratic candidate as compared to the control group.   Since there was no difference between the folder/pen and folder conditions, it was the folder condition, not the pen, that appeared to influence response rates and error.

The authors concluded in their paper: 

The reason for the overstatement of the Democratic voters in the Folder Conditions is not entirely clear and needs to be investigated further.  Clearly some message was communicated in the Folder Conditions that led to proportionately fewer Republicans filling it out.  One hypothesis is that the highlighted color logos of the national news organizations on the folder were perceived negatively by Republicans and positively by Democrats, leading to differential nonresponse between the groups.

Murray Edelman, one of the authors, emailed me with the following comment: 

The reference to this study at the 2004 AAPOR conference by both Bob Groves and Kathy Frankovic in their respective plenaries has inspired us to revise our write up of this study for possible publication in POQ and to consider other factors that could explain some of the  differences between the two conditions, such as the effort to standardize the interviewing technique and persuade reluctant respondents  and the emphasis on the questionnaire being "short" and "confidential."  However, we agree that the main conclusion, that efforts to increase response rates may also increase survey error, is not in question.

Back to text

Posted by Mark Blumenthal on May 16, 2005 at 01:02 PM in Exit Polls, General | Permalink | Comments (82)

May 14, 2005

AAPOR: Day Two

A few quick notes from some of the sessions I attended yesterday at the AAPOR conference:

Keep in mind that the "working papers" presented at AAPOR (actually most are currently PowerPoint presentations) are just that - works in progress.  Also, I can only sit in on one of the eight presentations that typically occur at any given time, so what follows is just a tiny fraction of the amazing variety of research findings presented today.  Finally, I am sharing my own highly subjective view of what's "interesting."  I'm sure others here might have a different impression. 

Party ID - I have written previously about the idea that party identification is an attitude that can theoretically show minor change in the short term.  This morning, Trevor Tompson and Mike Mokrzycki of the Associated Press presented results from an experiment showing that the way survey respondents answer the party identification question can change during the course of a single interview.

Throughout 2004, the IPSOS survey organization randomly divided each of their poll samples, asking the party ID question for half of the respondents at the end of the survey (where almost all public pollsters ask it) and for the other half at the very beginning of the survey.  When they asked the party question at the end of the questionnaire, they found that consistently more respondents identified themselves as "independents" or (when using a follow-up question to identify "leaners") as "moderate Republicans." They also found that the effect was far stronger in surveys that asked many questions about the campaign or about President Bush than in surveys on mostly non-political subjects.  They also found that asking party identification first had an effect on other questions in the survey.  For example, when they asked party identification first, Bush's job rating was slightly lower (48% vs. 50%) and Kerry's vote slightly higher (47% vs. 43%).

A few cautions:  First, these results may be unique to the politics of 2004, the content of the IPSOS studies or both.  The effect may have been different for, say, a Democratic president at a time of peace and prosperity.  Second, I am told that another similar paper coming tomorrow will present findings with a different conclusion. Finally -- and perhaps most importantly -- while the small differences were statistically significant, it is not at all clear which placement gets the most accurate read on party identification. 

Response Bias and Urban Counties - Michael Dimock of the Pew Research Center presented some intriguing findings from an examination of whether non-response might result in an underrepresentation of urban areas.  The basic issue is that response rates tend to be lower in densely populated urban areas and higher in sparsely populated ones.  Although the Pew Center uses a methodology that is far more rigorous than most public polls (they will, for example, "call back" numbers at least 10 times to catch those typically away from home) and weights its samples to match demographic characteristics estimated by the US Census, they still found that they under-represented voters from urban areas.  Thus, in 2004, they also adjusted their samples to eliminate this geographic bias. 
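The kind of adjustment described here can be sketched as a simple post-stratification step: weight respondents so the sample's urban/suburban/rural mix matches a population target. The sketch below is generic and the target shares are hypothetical; it is not Pew's actual procedure.

```python
# A minimal geographic post-stratification sketch.
import pandas as pd

sample = pd.DataFrame({"county_type": ["urban"] * 250 + ["suburban"] * 450 + ["rural"] * 300})
targets = {"urban": 0.31, "suburban": 0.46, "rural": 0.23}   # hypothetical population shares

sample_shares = sample["county_type"].value_counts(normalize=True)
sample["geo_weight"] = sample["county_type"].map(lambda ct: targets[ct] / sample_shares[ct])

# After weighting, each county type contributes its target share; in practice
# this is combined with the usual demographic weighting to Census estimates.
print(sample.groupby("county_type")["geo_weight"].mean().round(3))
```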

The result, according to Dimock, was typically a one point increase in Kerry's vote, a one point drop in Bush's vote (for a two-point reduction of what was usually a Bush lead).  Thus, Pew's final survey had Bush ahead by a three-point margin (48% to 45%) that more or less nailed the President's ultimate 2.4% margin in the popular vote.  But for the geographic correction, their final survey would have shown a 5% Bush lead.

After the presentation, a representative of CBS News pointed out that their survey also made a very similar weighting correction to adjust for geographic bias.  While all of this may sound a bit arcane, it reflects an important change for these public pollsters who rarely weight geographically.

The User's Perspective - For MP, one very heartening development at this conference was discussion of the idea that in considering the overall quality of a survey, pollsters need to consider the perspective of the consumers of their data.  Two very prominent figures within the survey research community, Robert Groves of the University of Michigan and Frank Newport of Gallup, both endorsed the view that one important measure of the quality of a survey is its "credibility and relevance" to its audience.  Put another way, these two leaders of the field argued this week that pollsters "need to get more involved" in how users of survey data perceive their work. 

For my money, there are no "users" of political survey data more devoted than political bloggers.  As Chris Bowers wrote on MyDD yesterday,

Without any doubt in my mind, I can say that political bloggers are by far the biggest fans of political polling in America today. We are absolutely obsessed with you and what you do. Many of us subscribe to all of your websites. We read your press releases with relish, and write for audiences that are filled with hard-core consumers and devotees of your work. In Malcolm Gladwell's terminology, political bloggers and the many people who visit and participate in political blogs are public opinion mavens who can almost never consume too much information about the daily fluctuations of the national political landscape.

Chris is absolutely right.  If political pollsters want to understand more about how their most devoted consumers feel, there is no better place to go than the blogosphere.

PS:  Actually, the Chris Bowers quotation above is from the speech he will present by videotape tomorrow at a session at the AAPOR conference on blogging's impact on polling in which I am also a participant.  Chris put the text of his presentation online, and those interested can view his readers' reactions to it here.  Also, Ruy Teixeira has posted a summary of the various polling issues he discussed on his blog during the campaign.

5/15 - Typo corrected

Posted by Mark Blumenthal on May 14, 2005 at 01:12 AM in General | Permalink | Comments (19)

May 13, 2005

AAPOR: Day One

As promised, I am reporting to you from beautiful Miami Beach at the first day of the annual meeting of the American Association for Public Opinion Research (AAPOR). 

Earlier this week, I said I would try to provide highlights on whatever seems most "newsworthy" each day.  I will still try to post every day, and some findings may be suitable for quick summary here.  However, after a first session this afternoon, I realize that attempting to digest the substance of this conference blog-style is not a great idea.  First, I am only one person and can only attend one of roughly 5-6 sessions presented in each time slot.  Second, on the topics of greatest interest -- weighting by party identification, non-response bias, cell phone only households, even exit polls -- researchers will present multiple papers.  Commenting on these in a scattershot way is likely to prove very confusing.  I do hope to listen, learn and use what I absorb here as material for future posts.

So for tonight at least, one general impression which needs a brief explanation:

From the beginning, this blog has had something of a two-fold mission:  To explain and demystify what pollsters do, but also to open something of a dialogue between the pollsters -- the producers of political survey data -- and their most rabid consumers in the blogosphere.  So it was particularly heartening that the first comments I heard today from the first speaker (Doug Usher of the Mellman Group) at the first session I attended (on weighting data) spoke directly to the same themes that first motivated this blog (quoting Usher's prepared text which he kindly shared):

Polling is under more scrutiny than ever.  Methods that were once the province of statisticians and highly trained public opinion pollsters are now in the public domain, debated by many with a level of expertise which is - shall we say - outside the margin of error.  Nonetheless, consumers of polls are becoming more familiar with - and more demanding of - sound methodology that ensures the highest level of accuracy in questions of interest.  And this isn't a bad thing...

Today, weighting has become a bigger topic of conversation, though not everyone in that conversation realizes that they are talking about weighting.  Throughout this past election cycle - and continuing through today - news organizations, partisans and blogging poll consumers are demanding to see more demographic data, to scrutinize any "anomalies" in demographic breakdowns.  And such scrutiny extends past political posturing, into survey research that impacts policy decisions across government, business and academic research.

Of course, this is really a debate about weighting. 

And that's not a bad thing either.   It's going to be an interesting conference. 

More tomorrow...er...later today.

Posted by Mark Blumenthal on May 13, 2005 at 12:22 AM in General, Polling & the Blogosphere | Permalink | Comments (1)

May 11, 2005

On "Smackdowns" and Fraud

The big news at the newly minted Huffington Post is that two of its "celebrity" bloggers have taken up the exit-polls-as-evidence-of-vote-fraud debate.  What's remarkable about yesterday's exchange between ABC sportscaster Jim Lampley and National Review Correspondent Byron York (here, here and here, to say nothing of the pat-self-on-back praise by standup comic Robert George) is how devoid it is of fact.   The Huffington Post has a long blog roll (that kindly includes MP), yet Lampley and York demonstrate little awareness of what others have said about this issue since November. Their "smackdown" includes a lot of impressive name calling but nary a link to the ongoing debate.   

Consider this paragraph from Lampley's initial post:  He sees evidence of fraud in the fact that the "extremely scientific" Las Vegas oddsmakers, having presumably consulted the leaked exit polls, set Kerry as the favorite.  He continues: 

It is damned near impossible to go to graduate school in any but the most artistic disciplines without having to learn about the basics of social research and its uncanny accuracy and validity. We know that professionally conceived samples simply do not yield results which vary six, eight, ten points from eventual data returns, that's why there are identifiable margins for error. We know that margins for error are valid, and that results have fallen within the error range for every Presidential election for the past fifty years prior to last fall. NEVER have exit polls varied by beyond-error margins in a single state, not since 1948 when this kind of polling began. 

Wow.  I'm guessing there are a few grad school instructors who may want to assign that paragraph to their students.  How many problems can you identify?

Jim, some advice:  You might want to put that degree to good use and browse a bit here (read these two closely) or perhaps just jump to the end (here and here) and consider where the debate is now.   Note the use of links throughout to source material like this report from the exit pollsters themselves or the one criticizing it here.  Bloggers do a lot of this.

What is ironic about the Lampley-York "smackdown" is that it comes on the same day an official investigation actually uncovered true evidence of fraud, though perhaps not the variety that Lampley imagines.  Yesterday, a task force in Milwaukee that included a Republican-appointed U.S. attorney, a Democratic county district attorney, the FBI and the Milwaukee police found what everyone seems to agree is "clear evidence" of fraud.  The AP published a widely distributed story, but Greg Borowski of the Milwaukee Journal Sentinel, the newspaper whose own reporting led to the official investigation, provides the must-read coverage. Read it all. 

According to the JS story, the true evidence of fraud comes from "200 cases of felons voting illegally and more than 100 people who voted twice, used fake names or false addresses or voted in the name of a dead person."  All of these were found in the city of Milwaukee, where John Kerry received 71.8% of the vote (up from 67.6% in 2000). 

It is, of course, theoretically possible that such fraud may have benefited either side, but the Democratic stronghold of Milwaukee is not exactly the place one expects to find padding of the Republican vote.  Presumably that is why Republican officials used the news to push a photo ID requirement for voting.

It is worth noting that the officials saw no evidence of a widespread conspiracy.  Quoting the AP story:

Investigators did not uncover any proof of a plot to alter the outcome of the hotly contested presidential race in Wisconsin's largest city and have filed no criminal charges.

"There is not the evidence of an overriding conspiracy in all of this," U.S. Attorney Steven Biskupic said.

Instead, the task force reported "widespread record keeping failures and separate areas of voter fraud." Biskupic said the faulty records will make it tough to prosecute many of the crimes, although he will file charges if he thinks he can prove wrongdoing in any cases.

Thus, the Milwaukee investigation appears to offer even more evidence of the sort of sloppiness associated with the random "reporting error" MP described recently.  Consider these examples from Borowski's article: 

The city's record-keeping problems meant investigators from the FBI and Milwaukee Police Department have logged more than 1,000 hours reviewing the 70,000 same-day registration cards, including 1,300 that could not be processed because of missing names, addresses and other information.

Indeed, about 100 cards described as "of interest to investigators" cannot be located, officials said. And within the past few weeks, police found a previously lost box of the cards at the Election Commission offices...

The newspaper also identified numerous cases in Milwaukee where the same person appears to have voted twice, though that analysis was hampered by major computer problems at the city.

Those problems, which city officials labeled a "glitch," meant hundreds upon hundreds of cases where people are incorrectly listed as voting twice. These are in addition to cases of double voting identified by investigators.

Apparently, mistakes happen.  A lot. 

Until Republicans and Democrats find a way to agree to clean  up the process, cynicism about the fairness of the count will continue.

UPDATE:  I should probably take back "devoid of fact" with respect to the Byron York half of the "smackdown."  His latest reply to Lampley  in the thread catches up to the state of the debate as of about mid-January, and includes a quote from a certain blogger/pollster you may recognize. 

[Hat tip to reader Rick Brady for emailing links to the original Lampley post and the AP story]

Posted by Mark Blumenthal on May 11, 2005 at 08:44 AM in Exit Polls | Permalink | Comments (15)

May 10, 2005

AAPOR Conference: Correspondents Wanted

As I have mentioned a few times in recent weeks, I will be in Miami the latter half of this week to attend the annual meeting of the American Association for Public Opinion Research (AAPOR).  AAPOR meetings -- the closest we get to a "pollster convention" -- are seldom the stuff of breaking news, but there is much of interest to the true wonks among us who closely follow public opinion surveys.   As such, I will be there hoping to learn and absorb much to share with MP's readers in the coming months. 

I am also hoping to post quick daily updates with whatever "headlines" seem appropriate.  As with much of MP's activities to date, this will be a bit of an experiment.  I'm not exactly sure what form my updates will take or how frequently I will post.  True "live blogging" is not likely given the typical technical facilities available and the proclivities of your humble correspondent.  However, I am hoping to share what seems most newsworthy at least once a day, especially for those who might like to attend the conference but cannot.

In that spirit, I have two requests for those still reading: 

  1. For those MP readers that will be attending the conference:  I could use your help. There are certainly several places in the schedule where I would like to be in two places at once, but of course, cannot.  So if you'd like to be an informal "MP correspondent," please drop me an email.  Suggestions, tips or hints on what to expect from those presenting papers would also be greatly appreciated.
  2. For those who will not be attending the AAPOR conference - Are there any particular subject areas that most interest you?  True glutton-wonks can download the full final program here (warning: the file is a hefty 18.4 MB).  Please email me or leave a comment. 

Posted by Mark Blumenthal on May 10, 2005 at 12:18 PM in Miscellaneous | Permalink | Comments (1)

May 09, 2005

UK Polls: Anthony Wells

Unfortunately, I'm at least a week late with this recommendation, but perhaps for true political polling junkies it will be better late than never.  For MP-like commentary on political polling in the United Kingdom, no one does it better than Anthony Wells' UK Polling Report (actually, it might be more appropriate to describe MP as "Wells-like," since he was blogging long before I started).  His site now includes an impressive archive of British political polling.  His review of the performance of the polls in last week's parliamentary election is noteworthy especially since their accuracy came as something of a surprise. 

Here is Wells' post-election take on the UK pre-election polls: 

So, with pretty much everything except Harlow counted, how well did the pollsters do? The bottom line is that everyone got it right - trebles all round! While NOP take the prize, having got the result exactly spot on, not only did all the pollsters get within the 3% margin of error, they all got every party’s share of the vote to within 2%. Basically, it was a triumph for the pollsters.

RESULT - CON 33.2%, LAB 36.2%, LD 22.7%

[Update 5/10: These results were posted on Wells' blog on 5/7.   As of today, BBC is reporting Conservative 32.3%, Labor 35.2%, Liberal Democrat 22.0%.  I am leaving Wells' error estimates in place, although the reader should note they would need to be recalculated]

NOP/Independent - CON 33%(-0.2), LAB 36%(-0.2), LD 23%(+0.3). Av. Error - 0.2%
MORI/Standard - CON 33%(-0.2), LAB 38%(+1.8%), LD 23%(+0.3). Av. Error - 0.8%
Harris -  CON 33%(-0.2), LAB 38%(+1.8), LD 22%(-0.7). Av. Error - 0.9%
BES - CON 32.6%(-0.6), LAB 35%(-1.2), LD 23.5%(+0.8%). Av. Error - 0.9%
YouGov/Telegraph - CON 32%(-1.2), LAB 37%(+0.8), LD 24%(+1.3). Av.Error - 1.1%
ICM/Guardian -  CON 32%(-1.2), LAB 38%(+1.8), LD 22%(-0.7). Av.Error - 1.2%
Populus/Times -  CON 32%(-1.2), LAB 38%(+1.8), LD 21%(-1.7). Av. Error - 1.6%

The other two pollsters, Communicate Research and BPIX, conducted their final polls too early to be counted as proper eve-of-poll predictions, but, for the record, both their final polls were also within the standard 3% margin of error. Their average errors were 0.9% for BPIX and 2.1% for Communicate.

The British Polling Council have a press release out with the same information (although they include the “others” in the average, and use rounded figures for the results, hence the slightly different figures. It doesn’t change the result - everyone was right and NOP did best).

On Election Day, Wells posted the results of the national exit polls, which did even better:

MORI/NOP's exit poll shows a share of the vote of CON 33%, LAB 37%, LD 22%.  It predicts 44 Conservative gains and 2 Liberal Democrat gains for a Labour majority of 66.

Of course, the actual share of the vote was Conservative 33.2%, Labor 36.2% and Liberal Democrat 22.0%, translating into a 67 seat Labor majority.

[Update: Again, these results were posted on Wells' blog on 5/7.  As of today (5/10), BBC is reporting Conservative 32.3%, Labor 35.2%, Liberal Democrat 22.0%.   I have changed the error computations below to reflect the updated results.  Also a reader emails to advise that the final Labor majority will end up being 66 seats not the current 67 seats after a special by-election to be held soon to replace a recently deceased LD MP.  The district should go Conservative in the by-election so the Labor majority will be 66 seats within a few weeks exactly matching the projection from the exit polls].

So does the UK experience show, as some would seem to believe, that exit polls are flawless elsewhere but troubled only in the US?  Hardly.  In fact, much of Wells' enthusiasm comes from the previously problematic performance of the UK pre-election and exit polls.  Let's start with the exit polls.  The error on the margin in last week's MORI/NOP exit poll was only 1.10 percentage points in Labor's favor.  As Wells reports in a summary, the NOP/BBC exit polls showed a much greater skew to Labor in 2001 (2.7) and especially in 1997 (5.2) and 1992 (3.6).    

What about the pre-election polls?  The average error for each party's estimate last week across all the polls reported by Wells (I did the math) was 1.10 percentage points.  The comparable average error was higher in 2001 (1.7), 1997 (2.5) and 1992 (3.3).  The error on the margin has shown a consistent skew to the Labor party, overestimating its lead over the Conservatives in 2001 (3.5), 1997 (4.0) and especially 1992 (8.3).  The slight error in Labor's favor this time (1.76) is obviously small by comparison. 
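For anyone who wants to check the arithmetic, the calculation is just the average absolute difference between each final poll's party estimates and the result. A quick sketch using the figures quoted above; the result figures here are the updated BBC numbers from the 5/10 note, and the exact average shifts slightly if one uses Wells' original result figures instead.

```python
# Average absolute error per party estimate across the seven eve-of-poll surveys.
result = {"CON": 32.3, "LAB": 35.2, "LD": 22.0}
final_polls = {
    "NOP":     {"CON": 33,   "LAB": 36, "LD": 23},
    "MORI":    {"CON": 33,   "LAB": 38, "LD": 23},
    "Harris":  {"CON": 33,   "LAB": 38, "LD": 22},
    "BES":     {"CON": 32.6, "LAB": 35, "LD": 23.5},
    "YouGov":  {"CON": 32,   "LAB": 37, "LD": 24},
    "ICM":     {"CON": 32,   "LAB": 38, "LD": 22},
    "Populus": {"CON": 32,   "LAB": 38, "LD": 21},
}

errors = [abs(poll[p] - result[p]) for poll in final_polls.values() for p in result]
print(f"average error per party estimate: {sum(errors) / len(errors):.1f} points")  # ~1.1
```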

What is interesting about the consistent skew to Labor in recent polling is that pollsters have named it: "Shy Tories" or the "Spiral of Silence" effect (the latter based on the book of the same name by Elisabeth Noelle-Neumann).  As defined by the BBC guide to polling methodology, it means "people who do not like to admit they support a certain party but who vote for them nonetheless."   Some UK pollsters now reallocate undecideds to counter this phenomenon, although Wells' wrap-up notes that "the spiral of silence adjustment made Populus's final poll less accurate."

It is also interesting that there may be remnant traces of the Labor skew in this year's result. Five of seven pre-election polls and the exit poll overestimated the Labor vote by one percentage point or more (though my favorite binomial calculator tells me there is a 23% probability of tossing a coin seven times and having it come up heads at least five times).   Wells -- who identifies himself as a supporter of the Conservative Party -- concludes that the "lingering bias" is so small "it is hardly worth worrying about." 
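As a quick check on the coin-flip figure in the parenthetical above, the 23% corresponds to the chance of at least five heads in seven tosses of a fair coin:

```python
# P(at least 5 heads in 7 fair coin tosses) = (C(7,5) + C(7,6) + C(7,7)) / 2^7
from math import comb

p = sum(comb(7, k) for k in range(5, 8)) / 2 ** 7
print(f"{p:.0%}")   # -> 23%
```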

One characteristic that the UK polls shared with their US counterparts is the way widely varying results tended to converge in the final week.  More from Wells:

What is interesting is the comparison between the final result and the polls during the campaign - the results from YouGov during the campaign were pretty close to the final result throughout, especially after the first few polls that showed the parties neck and neck. In contrast during the campaign the phone pollsters showed some whopping great Labour leads that disappeared in their final polls - of all the phone polls during the campaign, only one (MORI/Observer, published on the 1st May), did not report a Labour lead larger than the 3% they finally acheived. Of YouGov’s last 10 polls of the campaign, 8 showed a Labour lead of 3 or 4 percent. It doesn’t, of course, necessarily mean that YouGov were right - the “real” Labour lead at that time could have been larger, only to be reduced by a late swing to the Lib Dems - hence the reason why we only compare the eve-of-poll predictions to the final result.   

[Note:  YouGov draws samples from a panel of pre-recruited respondents and conducts interviews online] 

Finally, back to exit polls:  It is noteworthy that the UK had no controversy about "leaks" of "early" exit polls and no dark suspicions about the re-weighting of the exit polls to match the actual count because neither phenomenon occurred.  The British exit pollsters release their "final" results as the polls close for all to see.  It helps, of course, that Britain has a uniform poll closing time, but survey research in the UK seems to have survived the release of imperfect results.  In fact, it is possible that this year's improvements resulted from the public disclosure of those previously problematic surveys.  Hopefully Wells will devote some future posts to discussing what, if anything, the UK exit pollsters did differently this time. 

Also consider that the previous problems with the UK exit polls did not produce election fraud conspiracy theories.  Why not?  One reason, as our friend Elizabeth Liddle points out, is that the count in the UK is "utterly transparent."  Paper ballots are sorted and counted in public at a centrally located place, open to all who wish to observe.  People have faith in the result because of this transparency.  Another reason, as Liddle puts it, "why auditable elections are so important."   

Hear, hear!

5/10 - Update:  Anthony Wells emails with a postscript on the real reason why exit poll leaks are so rare in the UK.  They are illegal.  Leakers are subject to a fine of up to £5,000 or 6 months in prison.  Here is the text of the law:

No person shall, in the case of an election to which this section
applies, publish before the poll is closed—

(a) any statement relating to the way in which voters have voted at the
election where that statement is (or might reasonably be taken to be)
based on information given by voters after they have voted, or

(b) any forecast as to the result of the election which is (or might
reasonably be taken to be) based on information so given.

Why no such law in the US?  Well, there's that funny thing called the First Amendment...

Posted by Mark Blumenthal on May 9, 2005 at 01:22 PM in Polls in the News | Permalink | Comments (8)

May 02, 2005

Blog Vacation

A quick note:  I will be taking a brief “blog vacation” for the rest of this week.  I’ll be back next week when our coverage will include “on scene” coverage of the upcoming conference of the American Association for Public Opinion Research (AAPOR).

Some updates for the truly devoted daily readers who don’t want to miss a word:  I added two “footnotes” this morning to Friday’s long exposition on the Liddle Model (starting here).     Also, DailyKos featured a front-page post by diarist "DemFromCt" on Elizabeth Liddle's paper.

Thank you for your continuing interest in this site and have a great week!

Posted by Mark Blumenthal on May 2, 2005 at 03:03 PM in MP Housekeeping | Permalink | Comments (4)