
June 30, 2005

Iraq the Vote

A small world story:   On the front page of today's Washington Post, Peter Baker and Dan Balz write about the influence of "an extensive study of public opinion" in guiding the Bush Administration's strategy for maintaining support for the Iraq war:

The White House recently brought onto its staff one of the nation's top academic experts on public opinion during wartime, whose studies are now helping Bush craft his message two years into a war with no easy end in sight. Behind the president's speech is a conviction among White House officials that the battle for public opinion on Iraq hinges on their success in convincing Americans that, whatever their views of going to war in the first place, the conflict there must and can be won.

MP had two immediate reactions.  First, that this story would make a perfect MP topic, focusing as it does on the nexus between public opinion and political strategy.  Second, though MP fancies himself reasonably familiar with the "top academic experts" on public opinion, he had never heard of Peter Feaver and Christopher Gelpi, the two Duke University political scientists Baker and Balz wrote about.   Or so he thought.

But then MP did a bit of searching and discovered that Feaver and Gelpi have a third author on their paper, a Duke PhD candidate named Jason Reifler who will soon join the faculty of Loyola University in Chicago. 

The small world part:  Jason Reifler used to work for MP.  Jason was a guest at MP's wedding.  In fact, Jason sent an early version of their research to MP back in the blur that was October, something that, sadly, MP never got around to reading.

Doh!

So starting this afternoon, MP will correct that mistake and review the voluminous work of Gelpi, Reifler and Feaver.  For now, a few quick things that will interest MP's readers:

First, here are links to PDF versions of the two papers the authors have put on their websites:

Second, for those who would rather not wade through 100+ pages of academic research, here is the money quote from the "Iraq the Vote" paper:

We argue that the willingness of the public to pay the costs of war and to reelect incumbent Presidents during wartime are dependent on the interaction of two attitudes - one retrospective and one prospective. In particular, we show that retrospective evaluations of whether President Bush "did the right thing" in attacking Iraq and prospective judgments about whether the U.S. will ultimately be successful in Iraq are two critical attitudes for understanding how foreign policy judgments affect vote choice and one's tolerance for casualties. Further, we show that the retrospective judgments serve as a more powerful predictor for vote choice, while the prospective evaluations of mission success better predict continued support for the war in Iraq. These claims are consistent with the broader literature on how foreign policy influences voting behavior, and the literature that examines the public's response to war and casualties. However, we also show that these retrospective and prospective judgments are interactive, and that a person's attitude on one conditions the effect of the other. This interaction operates on "political" support (vote choice) as well as "mission" support (casualty tolerance).

Third, the full text of the two key questions Gelpi, Feaver and Reifler use in their analysis:

We would like to know whether you think President Bush did the right thing by using military force against Iraq. Would you say that you strongly approve, somewhat approve, somewhat disapprove or strongly disapprove of his decision?

Regardless of whether you think that the President did the right thing, would you say that the U.S. is very likely to succeed in Iraq, somewhat likely to succeed, not very likely to succeed, or not at all likely to succeed?

Finally, this thought:   Several public pollsters have asked variations of the question about whether the US "did the right thing" in attacking Iraq.  However, MP has yet to find a public poll that tracks prospective judgments about the likelihood of prevailing in Iraq.   Perhaps he has overlooked something obvious, but MP hopes the Baker & Balz article will prompt a public pollster or two to add such an item to their surveys and provide their own independent assessments of the Gelpi-Feaver-Reifler thesis.
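For readers who want to see what the "interactive" relationship described in the excerpt above looks like in statistical terms, here is a minimal sketch of a logistic regression with an interaction term, run on simulated data.  The variable names, coefficients and data below are illustrative assumptions only, not the authors' actual model or dataset.

```python
# Illustrative only: simulated data, not the Gelpi-Feaver-Reifler dataset.
# Shows how an interaction between a retrospective attitude ("right thing")
# and a prospective attitude ("will succeed") can be modeled for a binary
# outcome such as vote choice.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000

# 1-4 scales: approval of the decision to go to war (retrospective)
# and perceived likelihood of success in Iraq (prospective).
right_thing = rng.integers(1, 5, size=n)
will_succeed = rng.integers(1, 5, size=n)

# Assumed "true" model for the simulation: each attitude matters,
# and the effect of one depends on the other (the interaction).
logit = (-4.0 + 0.6 * right_thing + 0.3 * will_succeed
         + 0.15 * right_thing * will_succeed)
p_bush_vote = 1 / (1 + np.exp(-logit))
bush_vote = rng.binomial(1, p_bush_vote)

df = pd.DataFrame({"bush_vote": bush_vote,
                   "right_thing": right_thing,
                   "will_succeed": will_succeed})

# Logistic regression with the interaction term included.
model = smf.logit("bush_vote ~ right_thing * will_succeed", data=df).fit()
print(model.summary())
```

In a model like this, a meaningful coefficient on the interaction term is the statistical expression of the claim that a respondent's answer to one question "conditions" the effect of the other.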

More later . . .

UPDATE:  CBS takes up the challenge

Posted by Mark Blumenthal on June 30, 2005 at 02:35 PM in Polls in the News | Permalink | Comments (15)

June 28, 2005

Liberals and 9/11 - Update

A quick update on Friday's post on what polls had to say about how liberal Americans reacted to 9/11.  In their weekly column, The Nation's Pulse ($), the pollsters at the Gallup Organization added their own data to the debate: 

Gallup researchers looked back at polling data from the weeks and months just after the Sept. 11, 2001, terrorist attacks. Certainly, it is not appropriate to say that liberals and Democrats did not support the administration's military response in Afghanistan at that time. A Gallup Poll conducted Oct. 19-21, 2001, showed 93% of conservatives, 90% of moderates, and 80% of liberals approving of military action in Afghanistan.

Anyone else? 

Posted by Mark Blumenthal on June 28, 2005 at 03:23 PM in Polls in the News | Permalink | Comments (1)

June 27, 2005

A Growing Politicization

In the context of the exit poll debate last year, MP pointed to a Pew Research Center study showing that Republicans perceived greater bias in news media coverage than Democrats.  The point was that the lower levels of trust in the media among Republicans help to explain why Bush voters may have been a bit more reluctant to cooperate with exit pollsters (who displayed network logos prominently).

Just last week, MP noted results from another Pew study showing a similar "credibility gap" between Republicans and Democrats in their trust of various national media outlets.  In a new study released just yesterday, Pew provides further evidence of a growing "politicization" of views of the news media:

Partisanship has long been a major factor in these attitudes. Even so, there has been a startling rise in the politicization of opinions on several measures -- especially the question of whether the news media stands up for America, or is too critical of America.

The partisan gap on this issue has grown dramatically, as Republicans increasingly express the view that the press is excessively critical of the U.S. (67% now vs. 42% in 2002). Over the same period, Democratic opinions on this have remained fairly stable (24% now vs. 26% in 2002).

Republicans are now closely divided as to whether the press protects or hurts democracy; 40% say it protects democracy, while 43% believe it hurts democracy. Two years ago, by a fairly sizable margin (44%-31%) more Republicans felt that the press helped democracy. Democratic opinion on this measure has been more stable. In the current survey, 56% say the press protects democracy while just 27% say it hurts democracy.

Views on whether the press is politically biased have been more consistent over the years. More than seven-in-ten Republicans (73%) say the press is biased, compared with 53% of Democrats. Perceptions of political bias have increased modestly among members of both parties over the past two years.

[Emphasis added].

This latest study includes much more of interest, including reports of media usage, ratings of media favorability and believability, and a detailed comparison of those who read newspapers online and in print.  Read it all.

Posted by Mark Blumenthal on June 27, 2005 at 01:53 PM in Exit Polls, Polls in the News | Permalink | Comments (3)

June 24, 2005

How Did Liberals React to 9/11?

By now most of MP's readers have presumably heard about the flap over White House Deputy Chief of Staff Karl Rove's speech on Wednesday that attacked the alleged reaction by "liberals" to the September 11 attacks.  For those who have been avoiding all media for the last 48 hours, here is the "money quote:"

Perhaps the most important difference between conservatives and liberals can be found in the area of national security. Conservatives saw the savagery of 9/11 in the attacks and prepared for war. Liberals saw the savagery of the 9/11 attacks and wanted to prepare indictments and offer therapy and understanding for our attackers. In the wake of 9/11, conservatives believed it was time to unleash the might and power of the United States military against the Taliban.

The debate over Rove's remarks has focused mostly on the 9/11 reaction from liberal political leaders and pundits.  Democrats remind us that the Senate authorized military action against Afghanistan by a vote of 98 to 0 and the House approved 420 to 1.   White House Communications Director Dan Bartlett argued this morning that when Rove said "liberals" he "cited" only the liberal group, MoveOn.org. But MP is intrigued by a different issue.  How did ordinary, rank-and-file liberals react to the 9/11 attacks? That -- as a pollster I once worked for liked to say -- is an empirical question.

Virtually all of the public pollsters went into the field immediately after the attacks and asked Americans whether the US should take military action or go to war.  While I can find no tabulations by ideology, two polls did provide results at the time by party identification.   Here is a sampling: 

CBS/New York Times, 9/13-14/2001, n=959 adults (source: National Journal's Hotline).

Should the U.S. take military action against those responsible?  Yes: 93% of Republicans, 86% of Democrats, 76% of independents

Should the U.S. take military action against those responsible for attacks, even if it means innocent people are killed? Yes: 74% of Republicans, 64% of Democrats, 67% of independents

What if that meant going to war with a nation harboring those responsible for the attacks, then should the U.S. take military action against those responsible for the attacks?  Yes: 74% of Republicans, 61% of Democrats, 65% of independents

What if that meant thousands of innocent civilians may be killed, then should the U.S. take military action against whoever is responsible for the attacks? Yes vs. No: Republicans 66% to 16%, Democrats 55% to 28%, independents 60% to 19%.

Los Angeles Times, 9/13-14/2001, n=1,561 adults:

In your opinion, is the United States now in a state of war?  Yes: 74% of Republicans, 70% of Democrats, 66% of independents (Q11)

If it is also determined that the Taliban ruling party in Afghanistan is harboring Osama bin Laden, would you support the United States and its allies retaliating with military action against Afghanistan, even if it could result in civilian casualties, or would you oppose that?   Support: 91% of Republicans, 80% of Democrats, 78% of independents (Q37)

What about Osama bin Laden's organization itself? Do you think the United States should retaliate against Bin Laden's group through military action, or should the United States pursue justice by bringing him to trial in the United States?  Retaliate vs. bring to trial: Republicans 80% to 17%, Democrats 66% to 28%, independents 64% to 27% (Q38)

Thus, in the days after 9/11, overwhelming majorities of both Democrats and Republicans believed America was "at war" and favored some sort of "military action."  Americans of all persuasions were less enthusiastic about military action if it meant all-out war or killing "thousands of innocent civilians," but even with these stipulations rank-and-file Democrats still favored war by a two-to-one margin.  Yes, Democrats were a bit less supportive of waging war than Republicans, but compared to the partisan polarization we see today, the unity on these issues in the aftermath of 9/11 was far more striking than the differences.

Yes, "some" Democrats expressed reluctance to wage all-out war, but so did "some" Republicans (though not as many).  The bigger point:  The  majority of both Democrats and Republicans believed, as Karl Rove might put it, "it was time to unleash the might and power of the United States military against the Taliban."

Of course, Rove spoke of "liberals" and "conservatives," not Democrats and Republicans, and the results above involve partisanship rather than self-reported ideology.  Not all Democrats are liberals, and not all liberals are Democrats.  So it is at least theoretically possible that we might reach different conclusions from a tabulation by ideology.  This leads to a suggestion from...

MP's Assignment Desk: The major news media pollsters all have data in their archives from 2001 that they could easily tabulate by self-reported ideology.  Do the results for ideology look like the results above for party?  MP assumes others might like to know.  Sunday morning news show producers (if you're reading), this means you!


UPDATE:  MP Gets (Survey) Results!

The pollsters at CBS News were kind enough to pass along cross-tabulations of their post-9/11 questions by self-reported ideology.  Because of limited time, they did not ask an ideology question on the survey conducted on 9/13-14/2001, but did field a longer survey a week later that included ideology and repeated the questions above. 

I've summarized the findings below, and posted a PDF with the complete results, but they are consistent with the results for party.  The bottom line:  Two weeks after the attacks, 84% of self-described liberals supported "military action" against the terrorists and 75% supported "going to war with a nation that is harboring those responsible."

CBS/New York Times, 9/20-23/2001, n=1216 adults.   Note that the question text below is verbatim from CBS; the wording above came from a Hotline summary. 

Do you think the U.S. SHOULD take military action against whoever is responsible for the attacks?  Yes: 84% of liberals, 93% of moderates, 95% of conservatives.

Do you think the U.S. SHOULD take military action against whoever is responsible for the attacks, even if it means that innocent people are killed? Yes vs. No: liberals 60% to 19%, moderates 64% to 21%, conservatives 76% to 14%.

What if that meant going to war with a nation that is harboring those responsible for the attacks, then do you think the United States should take military action against whoever is responsible for the attacks?  Yes vs. No: liberals 75% to 6%, moderates 83% to 6%, conservatives 89% to 3%

What if that meant that many thousands of innocent civilians may be killed, then do you think the United States should take military action against whoever is responsible for the attacks? Yes vs. No: liberals 62% to 17%, moderates 69% to 18%, conservatives 73% to 15%.

Posted by Mark Blumenthal on June 24, 2005 at 03:30 PM in Polls in the News | Permalink | Comments (22)

June 21, 2005

Ideology as a Diagnostic, Part III

In Part I of this series, MP suggested that small differences in self-reported ideology as reported by four different pollsters during 2004 could be explained by either the composition of the people sampled or the way the respondents answer the survey's questions.  In Part II we looked at some theoretical possibilities for why the composition of the sample might differ.  Today, let's look at the possibilities for "measurement error" in the way people answer the questions.

It is almost a cliche now that a pollster's choice of words can affect the way people answer questions.  However, MP wonders how well the general public appreciates the degree to which minor alterations in wording can result in major differences in results.

The most well known example among survey methodologists involves an experiment that altered one seemingly innocuous word. In a 1941 article in Public Opinion Quarterly, Donald Rugg described an experiment conducted the previous year by his employer,  pollster Elmo Roper.  They administered two questions to "comparable cross-sections of the population." On one sample they asked whether "the U.S. should allow public speeches against Democracy?"   On the other, they asked whether "the U.S. should forbid public speeches against Democracy?" I added emphasis to "allow" and "forbid" to make it clear that they varied only one word.  Yet the results were very different:  46% would "forbid" speeches against Democracy, while 62% would "not allow" such speeches.

More than 35 years later, Howard Schuman and Stanley Presser replicated that test in a controlled "split sample" experiment, reported in their 1981 book Questions and Answers in Attitude Surveys.  They found 21% would "forbid" speeches against Democracy, but 48% would "not allow" them.

The lesson here is that small, seemingly trivial differences in wording can affect the results.  The only way to know for certain is to conduct split sample experiments that test different versions of a question on identical random samples that hold all other conditions constant. 
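To make that logic concrete, here is a minimal sketch of how the two halves of such a split-sample experiment are typically compared, using a standard two-proportion z-test.  The counts below are invented for illustration (loosely echoing the Schuman and Presser figures above), not real survey data.

```python
# A hypothetical split-sample wording experiment: half the respondents get the
# "forbid" wording, half get the "allow" wording, and we test whether the share
# endorsing suppression of the speeches differs between the two forms.
# All counts are made up for illustration.
from statsmodels.stats.proportion import proportions_ztest

n_forbid, n_allow = 750, 750        # respondents randomly assigned to each wording
would_forbid = 158                  # said "yes, forbid" (about 21%)
would_not_allow = 360               # said "no, do not allow" (about 48%)

counts = [would_forbid, would_not_allow]
nobs = [n_forbid, n_allow]

z, p_value = proportions_ztest(counts, nobs)
print(f"forbid: {counts[0]/nobs[0]:.1%}   not allow: {counts[1]/nobs[1]:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
# With all other conditions held constant, a tiny p-value says the wording
# itself, not sampling error, is driving the gap.
```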

Which brings us back to the self-reported ideology questions.  Here is the verbatim text of the ideology question as asked by the four survey organizations for which we have data:

Gallup -- "How would you describe your political views - Very conservative, Conservative, Moderate, Liberal, or Very Liberal? [Gallup rotates the order in which interviewers read the categories.  Half the sample hears the categories starting with very conservative going to very liberal (as above); half hears the reverse order, from very liberal to very conservative]

Harris -- How would you describe your own political philosophy - conservative, moderate, or liberal? [Harris rotates the order in which interviewers read the answer categories]

New York Times/CBS -- How would you describe your views on most political matters? Generally, do you think of yourself as liberal, moderate, or conservative?

Pew -- In general, would you describe your political views as very conservative, conservative, moderate, liberal or very liberal?

Let's consider the ways these are different.

1) The question "stem" -- Gallup and Pew both ask respondents to describe their "political views."  Harris asks about "political philosophy."  CBS/New York Times asks about "views on political matters." Do respondents hear something different in "views" than they do "views on political matters?"  Does "political philosophy" prompt a different response than "political views?"   These differences seem trivial, but again, without controlled experimentation, we cannot say for certain.

2) The number of answer categories -- All four versions ask respondents to categorize themselves into liberal, moderate or conservative, although Pew and Gallup add categories for "very conservative" and "very liberal."  Does prompting for five categories rather than three alter the percentages that call themselves liberal or conservative (regardless of intensity)?  It seems unlikely, but again, without a controlled experiment we cannot say for certain.

3) The order of the answer categories --   The order of the answer categories is different among the four pollsters.  The CBS/NYT question reads choices from liberal to conservative.  Pew reads from conservative to liberal.  Gallup and Harris rotate the order so that a random half of the respondents hear choices read from conservative to liberal, half hear it the other way around.

Telephone surveys, or any survey where the pollster reads the question aloud, can be prone to what methodologists call "recency effects."  Respondents without firmly held opinions sometimes choose the last answer category they hear.   Pollsters frequently rotate answer choices, as Gallup and Harris do in this case, to help control for order effects.

In this case, Gallup's questionnaire indicates that they have been recording the order in which they present answer categories.   So Gallup has been conducting a potentially very helpful experiment on this issue, and may be seeking to determine whether the ideology question is prone to an order effect.  MP has requested more information from Gallup and will pass along details when and if Gallup makes them available.
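If Gallup (or anyone else) wanted to check for such an order effect, the analysis itself is straightforward: tabulate the ideology answers separately for the two rotation orders and test whether the distributions differ.  Here is a rough sketch with invented counts; the numbers are not Gallup's data.

```python
# Hypothetical check for a recency effect: compare the distribution of ideology
# answers between the two rotation orders with a chi-square test of independence.
# Counts are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: rotation order; columns: very conservative, conservative, moderate,
# liberal, very liberal.
counts = np.array([
    [45, 160, 200, 75, 20],   # heard "very conservative" first
    [40, 145, 205, 85, 25],   # heard "very liberal" first
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A small p-value would suggest the answers depend on the order in which the
# categories were read -- that is, an order effect.
```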

4) Question order and context -- On any survey, the order of the questions can affect the way respondents answer them.  One question can help create a context for those that follow that might not exist had the initial question been omitted.  Survey methodologists have often demonstrated such context effects, although they usually affect a series of questions on the same subject.

But not always.  At the recent AAPOR conference, the AP/IPSOS pollsters presented findings from a year-long split sample experiment comparing placement of the party identification question at either the beginning or the end of the survey.   Here's the way I described it last month:

When they asked the party question at the end of the questionnaire, they found that consistently more respondents identified themselves as "independents" or (when using a follow-up question to identify "leaners") as "moderate Republicans." They also found that the effect was far stronger in surveys that asked many questions about the campaign or about President Bush than in surveys on mostly non-political subjects.  Also, they found that asking party identification first had an effect [on] other questions in between.  For example, when they asked party identification first, Bush's job rating was slightly lower (48% vs. 50%) and Kerry's vote slightly higher (47% vs. 43%).

In the case of the ideology question, the four pollsters ask about ideology at the end of their interview along with other demographic questions. Thus, at least in theory, the ideology question is subject to the context created by everything that comes before it.  We can only guess as to whether that context might be different across the four organizations.

It would be helpful to know what pollsters ask just before the ideology question.  Unfortunately, of the four we have been looking at, only the New York Times regularly releases the demographic section of their questionnaire.  We know that ideology immediately follows the party ID question on the CBS/NYT survey, but we do not know what precedes ideology on the other surveys (although the ABC/Washington Post poll asks about party and ideology in the same order as CBS/NYT).

5) Interviewer Training and "Probing" Procedures -- One "house effect" that MP has seen in practice involves the training and standard practices used by survey interviewers.  The best example involves the procedures that interviewers follow when an answer does not easily fit the provided categories.  For example, suppose the interviewer reads a question and then the respondent says nothing.  Nothing at all, just silence.  How long does the interviewer wait before repeating the question or probing for an answer?  Or suppose the respondent says, "hmmm...I'm not sure."  Does the interviewer repeat the question or just record the answer as a "don't know"? Put another way, how willing are the interviewers to take "don't know" for an answer?

Unfortunately, these internal procedures can vary between pollsters and are essentially invisible to consumers of survey data.  One hint that such a "house effect" may be at work comes from a pattern in the Gallup data.  As summarized previously, Gallup shows slightly more self-described conservatives than the other pollsters, but they also have fewer in the "don't know" category:  3% on the ideology question (compared to 5-6% for the others); 1% on the party ID question (compared to 7-11% for the others).   Perhaps their interviewers just push a bit harder for an answer.

* * *

[Table: average self-reported ideology for 2004, by polling organization]


This series kicked off with the data presented in the table above.  Readers have been tempted to leap to the conclusion that these small differences prove a statistical "bias" in the sample, that CBS polls too few conservatives or Gallup too many.  However, there are simply too many variables to reach such a conclusion from the available data.  There are many possible explanations that involve differences in the way comparable samples answer the ideology questions, rather than a bias in the sample.  Unfortunately, based on the information available, we just don't know.

In the comments section of Part II of this series, "YetAnotherJohn" wrote, "I have the feeling this series of post[s] is going to end up with the bloggers equivalent of shrugged shoulders."  In a sense, he is right.  If the question is whether any particular pollster is better or worse at "reflecting reality," MP's shoulders are shrugged.

Moreover, had MP broadened this analysis to consider differences on other questions, such as party ID or the vote, his shrug would be even more pronounced.  For example, the CBS/NYT poll, which reported slightly fewer conservatives than other polls in 2004, had virtually the same percentage of Republicans as Pew and Harris in 2004 and was within a single percentage point of both surveys on the Bush-Kerry vote in their final poll.

However, the point is less about what we make of the small differences between pollsters than about what we do about them.  Over the last year or so, partly because of the influence of the blogosphere, public pollsters have moved toward greater routine disclosure of party ID and demographic results.  Increasingly, the challenge will be to gain a greater understanding of these data.

If different pollsters show consistently different results on party ID or political ideology, they need to help their consumers understand the true reasons behind those differences.  It is not enough to say we do not know.  Internal experimental research that might better explain the divergence needs to be placed into the public domain.

Moreover, the survey research profession needs to take very seriously the possibility that identity of the survey sponsor might introduce a bias into the sample.   Conditions have changed markedly over the last decade.  Cooperation rates are lower, while distrust of the media is greater and increasingly polarized along partisan lines.  Last year, a study published in Public Opinion Quarterly showed that "persons cooperated at higher rates to surveys on topics of likely interest to them," although the magnitudes of these differences "were not large enough to generate much bias."  Could the survey sponsorship have comparable effects?  To help answer that question, we need to see similar studies in the public domain.

Thus, if media pollsters want to reassure readers and viewers about the quality of their data, the industry cliche applies:  More research is needed.

[typos corrected]

Posted by Mark Blumenthal on June 21, 2005 at 06:06 AM in Divergent Polls, Measurement Issues | Permalink | Comments (0)

June 15, 2005

Ideology as a "Diagnostic?" - Part II

In Part One of this series, we looked at the differences in yearlong averages for self-reported ideology reported by four different pollsters:  New York Times/CBS, Harris, Gallup and the Pew Research Center.  There were small differences in self-reported ideology -- Gallup reported slightly more self-identified conservatives (40%) than Pew (37%) and Harris (36%), while the New York Times/CBS polls showed slightly fewer (33%).   The key question is whether these differences are about the composition of the people sampled or about the way the respondents answer the survey's questions. 

To cut to the chase, given the information available, it is hard to know for sure.  Survey methodologists try to answer questions like these with experiments.  They will divide a sample (or samples), hold all survey conditions constant except one, and see if that experimental condition produces the hypothesized difference.  Unfortunately, we do not have experimental data of this type available to explain the differences in ideology (at least, MP is not aware of any).

However, it may be worth thinking through some hypotheses for why different survey organizations may show small differences in ideology over the long term.  In this post, I'll consider the potential reasons why surveys might differ in their composition.  In Part III, I'll take up differences in question wording and context. 

Ideally, a survey will be based on a perfectly random sample of the population of interest.  In reality, such perfection is impossible.  In the real world, all sorts of deviations from perfect random sampling occur, and any one of these can introduce some statistical bias into the sample.  That is, various "errors" can cause variation in the kinds of people sampled.   

The four surveys for which we have self-reported ideology data for 2004 -- New York Times/CBS, Harris, Pew and Gallup -- have much in common.  All aim to project opinions of the adult population of the US.  All were conducted by telephone and sampled telephone numbers with a "random digit dial" (RDD) methodology that can theoretically reach every U.S. household with a working land-line telephone.  All typically weight their adult samples to match census estimates for the U.S. population.
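A brief aside on what "weighting to match census estimates" involves in practice: the pollster adjusts each respondent's weight so that the weighted sample margins line up with known population figures for characteristics such as sex, age and education.  Here is a toy sketch of the simplest version of that idea, raking (iterative proportional fitting) on two margins; the respondents and population targets are invented, and real pollsters use more variables and more refined procedures.

```python
# A toy example of raking (iterative proportional fitting): adjust survey
# weights so the weighted sample matches assumed population targets for two
# characteristics. Both the respondents and the targets are invented.
import pandas as pd

sample = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "age": ["18-44", "45+", "18-44", "45+", "45+", "18-44", "18-44", "45+"],
})
sample["weight"] = 1.0

# Hypothetical population targets (shares sum to 1 within each variable).
targets = {
    "sex": {"F": 0.52, "M": 0.48},
    "age": {"18-44": 0.47, "45+": 0.53},
}

for _ in range(25):  # cycle through the variables until the margins settle down
    for var, dist in targets.items():
        totals = sample.groupby(var)["weight"].sum()
        grand_total = sample["weight"].sum()
        for category, target_share in dist.items():
            current_share = totals[category] / grand_total
            sample.loc[sample[var] == category, "weight"] *= target_share / current_share

# The weighted sex and age distributions now match the assumed targets.
print(sample.groupby("sex")["weight"].sum() / sample["weight"].sum())
print(sample.groupby("age")["weight"].sum() / sample["weight"].sum())
```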

Despite the similarities, the different polls are likely using slightly different methods, any of which could theoretically introduce subtle differences in the sample composition.  Here are a few of the more obvious ways that these surveys may differ from one another:

1) Response rates -- The topic of response rates is a big one, of course, worthy of its own series of posts.  The computation of response rates is far more complex than most assume and the subject of continuing debate among pollsters.  Nonetheless, there is little disagreement that cooperation and response rates have been declining in recent years.  To cloud this issue further, very few public pollsters regularly release data on their response rates.

Still, we know that response rates do vary considerably among news media polls.  A study conducted in 2003 by academic methodologists Jon Krosnick, Allyson Holbrook and Alison Pfent analyzed response rate data for 20 surveys provided anonymously by news media pollsters (although both the CBS/New York Times and ABC/Washington Post surveys disclosed independently that they contributed studies to the project).  Krosnick's team found a remarkably wide range of response rates, averaging 22% but varying between 5% and 39% (using the AAPOR "RR3" definition).
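For readers unfamiliar with the jargon, AAPOR's "RR3" is one of several standard response rate formulas: completed interviews divided by all eligible cases, with cases of unknown eligibility counted in the denominator only in proportion to an estimated eligibility rate.  Here is a rough sketch of the calculation; the disposition categories follow AAPOR's Standard Definitions, but the counts and the eligibility estimate are purely illustrative.

```python
# Rough sketch of AAPOR Response Rate 3 (RR3) with made-up dispositions.
# RR3 = I / ( (I + P) + (R + NC + O) + e * (UH + UO) )
# where e is the estimated share of unknown-eligibility cases that are
# actually eligible.

def aapor_rr3(interviews, partials, refusals, noncontacts, other,
              unknown_household, unknown_other, e):
    """Return the AAPOR RR3 response rate."""
    known_eligible = interviews + partials + refusals + noncontacts + other
    estimated_eligible_unknown = e * (unknown_household + unknown_other)
    return interviews / (known_eligible + estimated_eligible_unknown)

# Hypothetical case counts for a single RDD survey.
rate = aapor_rr3(interviews=1000, partials=50, refusals=1500, noncontacts=900,
                 other=100, unknown_household=1200, unknown_other=300, e=0.45)
print(f"RR3 = {rate:.1%}")
```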

2) Levels of effort and persistence -- Related to response rates is the issue of how hard the pollster tries to interview a respondent at each selected household.  How many times, and over how many days, does the pollster attempt to call each number before giving up?  If a respondent refuses on the first call, will a more experienced interviewer call back to attempt to "convert" that refusal into a completed interview?  Does the pollster send an advance letter (when it can match selected phone numbers to addresses) to reassure selected respondents of the legitimacy of the survey?

All of these measures can affect the response rate, and may differ among pollsters or among individual surveys conducted by the same organization.

Do different levels of persistence and response rates matter to the partisan and ideological composition of surveys?  It is hard to say for certain; controlled research on such effects is rare.  However, a study last year by the Pew Research Center found that "independents are harder to reach with commonly used survey techniques than are Republicans or Democrats."  They also found that the hardest to reach respondents were less Republican (23%) than those reached with standard survey effort levels (32%).  The differences for ideology were more "modest:" fewer conservatives among the hardest to reach respondents (35%) than among those interviewed with the standard techniques used by most media pollsters (39%).

Of course, as consumers of survey data, we have little information about the response rates and effort levels of the various media pollsters. Some provide quite a bit of information about their call-back and other procedures in standard releases, others not as much (for more information, see the methodology information offered by CBS, the New York Times, Pew, Gallup($) and Harris).

3) Within household selection -- Ideally, random sampling does not end at the household level.  To achieve a perfect random sample, the interviewer would need to get the person that answers the phone to provide a listing of everyone at the household, then select one person at random from that list and interview them.  Of course, that procedure has the obvious drawback of intrusiveness.  Ask for that much personal information at the beginning of a telephone survey and many respondents will simply hang up.   

Thus, pollsters use a variety of techniques that balance the goals of keeping response rates high while introducing as much randomness as possible to the selection of a respondent within the household.  A recent study by Cecilie Gaziano published in Public Opinion Quarterly identified 14 distinct procedures in use by various pollsters, yet concluded that "little systematic, accessible evidence exists" as to which procedure pollsters would be best advised to choose.  Gaziano also raised the possibility that within-household selection might affect "substantive questions" such as party identification or vote choice, but could conclude only that "much more systematic study" is needed.

4) Survey Sponsor -- In the first post in this series, several readers suggested via comments that the image of the media organization might induce Democrats to participate in a survey more readily than Republicans (or vice versa) because of a greater affinity with the media organizations sponsoring the survey.  This comment came via blogger Gerry Daly:

If the NYT polls identify themselves as an NYT/CBS poll before asking the ideological question, then given the editorial propensity of the paper, it wouldn't be to hard to imagine more conservatives declining to be polled, more liberals agreeing to be polled or people giving the answer they think the pollster wants to hear. I think this would likely be enough to explain the differences.

[A clarification:  This comment comes from one of Daly's readers, not Daly himself].

This is an intriguing possibility (or a frightening one, depending on your perspective), especially given the exit polls were apparently beset with just such a "differential response" problem.  News outlets like CBS and the New York Times have long highlighted the sponsorship of their organizations at the outset of the survey because it gave the call added credibility and increased the response rate.

Evidence does exist to show that Democrats and Republicans now have different levels of trust in the national media brands.  In May of last year, the Pew Research Center found that "only about half as many Republicans as Democrats express a great deal of trust" in most broadcast and national print media outlets.  For example, 14% of Republicans versus 31% of Democrats said they trusted the New York Times a great deal.   Only the Fox News Channel got higher marks from Republicans (29%) than Democrats (24%).  Moreover, as the table from the Pew report shows, the gap has grown in recent years, as trust in the various national media brands among Republicans has fallen off sharply:

[Table: Pew Research Center trends in trust of national media outlets, by party]


We have 2004 self-reported ideology data for only four organizations:  CBS/NYT, Pew, Gallup and Harris.  Of these, only CBS/NYT had slightly fewer conservatives than the other three, so it is reasonable to consider the hypothesis that the CBS/NYT sponsorship contributes to the difference.

[Clarification:  I am not ready to conclude that the differences in self-reported ideology in 2004 for these four polling organizations are about the identity of the sponsor. Also,  I would certainly not endorse the notion, as one emailer put it, that "CBS/NYT is wrong and everyone else is right."   In fact, I am trying to make the opposite point:  There are too many variables between the four polls to know for certain what explains the differences or which ideology number is "right."  It is worth remembering that, in addition to the sampling issues, all four surveys asked the ideology question in a slightly  different way.  Finally, in fairness, please note that I singled out the NYT/CBS poll here largely because they released data on self-reported ideology during 2004 while other well known news outlets, like NBC/WSJ, ABC/Washington Post and Fox News, did not].   

However, I hope the point is clear from all the above:  Surveys differ in many ways that could introduce a bias in the sample or (as we'll see in Part III) that might induce comparable samples to give different answers to a question about ideology.  The only way to test whether the survey introduction influences the composition of the sample is to do an experiment that holds all other factors constant (including the measurement issues that I'll take up in Part Three) and randomly varies the survey introduction.   

MP knows of no such experiments in the public domain, but he certainly hopes that the pollsters for the major networks and national newspapers are thinking about this issue and devising experiments of their own. 

I'll take up the measurement issues in Part III.

--------------------
Note:  in the original version of this post, the following parenthetical remark appeared higher in the post.  I moved it for the sake of clarity: 

(One reader suggested comparing CBS/NYT to the Fox News Poll. That would be an interesting test but for the fact that Opinion Dynamics, the company that conducts the Fox News survey, does not mention Fox News as the sponsor of their survey). 

Posted by Mark Blumenthal on June 15, 2005 at 11:18 AM in Divergent Polls, Sampling Issues | Permalink | Comments (9)

June 09, 2005

Westhill/Hotline Poll on "Moral Values"

Today, the National Journal's Hotline released their latest Westhill Partners/Hotline Poll.  Two things of note there for MP's readers:  First, though the Hotline's daily news summaries are available only through a pricey subscription, National Journal is making the complete Westhill/Hotline poll available online free of charge.  Second, the most recent poll (complete results here, Powerpoint summary here) includes a genuinely interesting finding regarding the "moral values" issue.

I'll let the front page commentary from Hotline Editor Chuck Todd explain:

This month's edition of the Westhill/Hotline poll tries to give folks a better understanding of what "moral values" are in the minds of voters and which party represents which values.

-- In an open-ended question, when asked to define "moral values," 62% of the responses were character tests, while just 37% of the responses named a specific issue (be it gay marriage, abortion or corporate corruption).

Just to be clear, here is the question with responses sorted into the two categories Todd described (and also crosstabulated by party identification):

[Table: open-ended definitions of "moral values," by party identification]

Two minor notes:  First, the question allowed for multiple responses, so the percentages in the table add up to more than 100%.  Second, the subtotals (37% and 62%) sum all mentions, so they may double count some respondents.

Some may quibble with the way the Hotline categorized these responses, but the main point is the division between those responses that clearly referenced public policy issues and those that were more vague.  The implication:  For many voters -- Democrat and Republican -- "moral values" may have as much to do with perceptions of the candidate's character as with their stands on specific issues.  For many, as Chuck Todd put it in an email, "moral values" in the context of politics means mostly "don't act like Bill Clinton in office."

Posted by Mark Blumenthal on June 9, 2005 at 04:47 PM in Measurement Issues | Permalink | Comments (3)

June 08, 2005

Post/ABC Poll Bites Back

Earlier this week, I started a discussion of the notion of using demographic and attitudinal data as "diagnostic" measures to assess political surveys.  Two recent trends make this sort of analysis both possible and important:  (a) Pollsters are starting to disclose more about the demographic and attitudinal composition of their samples and (b) political partisans, mostly in the blogosphere, are starting to dissect and criticize the demographic results of polls they find disagreeable.  Unfortunately, that criticism is often quite wrongheaded.

Today brings another example.  A poll released Tuesday night by the Washington Post and ABC News had some unwelcome news for President Bush.    Within hours, the blogger "Bulldogpundit" on AnkleBitingPundits posted a takedown: "Here We Go Again - Debunking Another Slanted Poll From The Washington Post."  Both Instapundit and the National Review Online (NRO) linked to it; John Podhoretz of NRO called the critique "incredibly convincing" and the poll worthy of "shame."  And according to AnkleBitingPundits (henceforth "ABP"), Tony Snow "favorably cited" the post "on his nationally syndicated show."

MP is less than convinced.  Let's take ABP's case point by point:

1. Party Leanings - Go to page 20 of the results. The respondents tend to "think of themselves" as follows: 30% Democrat; 31% Republican, and 34% Independent. Sounds fair you say (even though the 2004 exit poll showed R's and D's split at about 37%). Yeah, it sounds about right - till you read the next question and you find that the respondents "lean" towards the Democrats by a percentage of 48% to 34%, which confirms something I long thought. People in polls who ID themselves as "Independent" are mainly Democrats and liberals.

This observation prompted NRO's Podhoretz to ask, "how on earth could the Post actually think a poll whose respondents lean 48 percent Democratic to 34 percent Republican would have any validity?"

First, a bit of explanation, as the Post's presentation of the classic party identification (in their PDF summary) appears to have created some confusion.  They first asked the classic party identification question (Q901), "generally speaking, do you usually think of yourself as a Democrat, a Republican or an Independent?"  Republicans had a one-point advantage (31% to 30%).  Independents got the classic follow-up (Q904): "Do you lean more towards the Democratic Party or the Republican Party?"  The results for Q904 as presented in the Post summary (48% Democrat, 34% Republican) were clearly computed among independents only.  The summary also included a result for "Leaned Party ID" (Q901/904), a combination of the two questions tabulated among all adults: 48% Democrat or lean Democrat, 45% Republican or lean Republican.
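For readers who want to reproduce that kind of combination from a raw data file, the recipe is simple: take the initial party question as-is for Democrats and Republicans, and substitute the lean question for everyone else.  A minimal sketch with hypothetical column names (not the actual Post/ABC file):

```python
# Illustrative computation of "leaned party ID" from two questions, using
# hypothetical column names and made-up responses (not the Post/ABC data).
import pandas as pd

df = pd.DataFrame({
    "q901": ["Democrat", "Republican", "Independent", "Independent", "Independent"],
    "q904": [None, None, "Lean Democrat", "Lean Republican", "Neither"],
})

def leaned_party(row):
    # Partisans keep their initial answer; independents are assigned their lean.
    if row["q901"] in ("Democrat", "Republican"):
        return row["q901"]
    if row["q904"] == "Lean Democrat":
        return "Democrat"
    if row["q904"] == "Lean Republican":
        return "Republican"
    return "Independent/no lean"

df["leaned"] = df.apply(leaned_party, axis=1)

# Q904 alone is tabulated among independents only; "leaned party ID"
# is tabulated among all adults.
print(df["leaned"].value_counts(normalize=True))
```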

Are these results "slanted" toward the Democrats?  If anything, the opposite is true.  Consider first the results from the first part of the question.  The latest Post/ABC poll includes fewer Democrats and more Republicans than their most surveys:

[Table: party identification in recent Washington Post/ABC News surveys]

[Note, the Post released party ID numbers for its two most recent surveys (here and here).  I obtained results for the September and December surveys from the Roper Center IPoll database, and estimated party ID for the October tracking surveys from a cross-tabulation presented by the ABC pollsters at the recent AAPOR conference (see discussion below).  I have not yet been able to obtain results for party ID for surveys done earlier in 2005 by the Post and ABC]. 

The pattern is similar when we compare the Post/ABC poll (30% Democrat, 31% Republican) to what other pollsters measured during 2004:


Finally, what about "leaned Party ID?" The three point Democratic advantage on the Post/ABC poll (48% to 45%) is exactly the same as Gallup's average for 2004 (48% to 45%, my tabulation using their Gallup Brain archive) and slightly more Republican than the combined result from the Pew Research Center during 2004 (47% to 41%).  Both results were also based on samples of all U.S. adults. 

ABP continues:

2. Sample Group and Timing Of Poll -  First of all, the Post polls only "adults," not  "registered" or even "likely" voters.  As you know 36% of the respondents aren't even eligible to vote, and of those that are eligible, only 60% vote.

This is the one aspect of ABP's criticism that gets some support elsewhere, yet the argument is a bit of a red herring.  It is true that the Post/ABC poll surveys only adults. So do the most recent polls taken by Gallup/CNN/USAToday, CBS/New York Times, NBC/Wall Street Journal, Pew, Time, Newsweek, Harris and the LA Times.  It is true that a large portion of the adults in these samples do not vote.  Pre-election surveys that aim to forecast an election or measure the opinions of likely voters - including virtually all done by private campaign pollsters of both parties - routinely screen for registered voters.  But public polls typically have a broader mission:  They measure the opinion of all Americans. 

Both the Washington Post story and the ABC News analysis consistently refer to their poll as representing "the American public" or "Americans."   So the issue is more one of philosophy than slant.  Why should a poll of Americans exclude the views of non-voters? 

There's more...

Next, 1/2 of the polling nights are considered weekend nights, and weekend polling is notoriously unreliable and favorable to Democrats.

MP has his own doubts about the reliability of weekend interviewing but was surprised to see evidence presented by the ABC pollsters at the recent AAPOR conference showing no systematic bias in partisanship for weekend interviews.  The ABC pollsters looked at their pre-election tracking surveys conducted between October 1 and November 1 of 2004, and compared 14,000 interviews conducted on weeknights (Sunday to Thursday) with 6,597 conducted on Fridays and Saturdays.  Party ID was 33% Dem, 30% GOP on weeknights, 32% Dem 30% GOP on weekends, a non-significant difference even with the massive sample size.
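A quick back-of-the-envelope check of that "non-significant" conclusion, reconstructing approximate counts from the rounded percentages reported above (so treat this as a rough illustration, not a replication of the ABC analysis):

```python
# Rough check: is 33% Democratic identification among ~14,000 weeknight
# interviews distinguishable from 32% among ~6,597 weekend interviews?
# Counts are reconstructed from rounded percentages, so this is approximate.
from statsmodels.stats.proportion import proportions_ztest

n_week, n_weekend = 14000, 6597
dem_week = round(0.33 * n_week)        # ~4,620
dem_weekend = round(0.32 * n_weekend)  # ~2,111

z, p = proportions_ztest([dem_week, dem_weekend], [n_week, n_weekend])
print(f"z = {z:.2f}, p = {p:.3f}")
# p comes out well above .05, consistent with "non-significant" even at
# these sample sizes.
```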

3. Age of Respondents - The poll also over samples the number of 18-29 year old voters, the age group that voted most for Kerry. In 2004, 17% of the electorate was between 18-29, and Kerry's advantage among them was +9%. In the Post's poll that age group was 21% of the sample.

Oversample 18-29 year olds?  Nonsense.  Again, the population of "adults" is not the same as voters.  The Post/ABC poll essentially matches the 2000 US Census, which shows 18-29 year olds as 22.2% of those over the age of 18. 

And finally...

4. Income Level of Respondents -  Next take into consideration the annual income of the Post poll's respondents. In 2004, 45% of the electorate was making under $50K, and voted for Kerry 55-44%. But in the Post's poll, 55% of respondents make under $50K. That's a huge jump in likely Democratic voters among the Post poll's respondents.

5. Religion - Next, let's look at the religion of Post poll respondents. In 2004, 54% of voters described themselves as "Protestants" and voted overwhelmingly for Bush (+19%). In the Post poll, only 47% of respondents were "Protestant". Also, in the Post poll 14% of respondents had "no" religion, while in 2004, only 10% of voters had "no" religion, and they voted overwhelmingly for Kerry (+36%). Catholics are also underrepresented by 4% in the Post poll, another group that went for Bush in 2004.

ABP persists in finding fault with a survey of adults for not matching an exit poll of voters.  Again, these populations are different.  MP has not searched for comparable survey results for religion and income, but offers this warning to those who do:  questions about income and religion vary in wording from pollster to pollster, and those differences often yield inconsistent results.

The point here:  MP sees nothing wrong with scrutinizing a poll's demographics, but one would expect allegations of a "slanted" or "flawed" poll to have some basis in reality.  These do not. 

Posted by Mark Blumenthal on June 8, 2005 at 11:50 PM in Polls in the News | Permalink | Comments (6)

June 06, 2005

Ideology as a "Diagnostic?" - Part I

A few weeks ago, blogger Gerry Daly (Dalythoughts) took a close look at self-reported ideology as reported on several national polls.  Daly was mostly interested in whether a recent Washington Post/ABC survey sampled too few self-identified conservatives.  In the process, he theorized that "ideology is an attribute rather than an attitude."  I questioned that theory in the comments section here, and Gerry followed up with a reaction on his own blog.  This discussion, along with a few reader emails, led me to want to take a closer look at both self-reported ideology and the whole notion of using party identification and self-reported ideology as "diagnostic" measures to assess political surveys.

The more I thought about it, the more I realized that this topic is bigger than a single blog post.   It leads to many of the questions that come up about polls repeatedly and, as such, suggests a longer and more substantial conversation about using attitudes like party identification and ideology as diagnostics.  So rather than try to consider all the issues that Gerry raised in one shot, I'd like to take this topic slowly.  Today I'll raise some questions that I'll try to pursue over the next few days or weeks, or wherever the thread takes us.

Let's start with self-reported ideology.  One thing Gerry did was to look at average results for self-reported ideology for a few polling organizations.  I took the values that he started with and obtained a few more.  Here's what we have - the following table shows either the average or rolled together responses for self-reported ideology.  Each question asked respondents to identify themselves in some form as "conservative, moderate or liberal" (more on the differences in question wording below):

[Table: average self-reported ideology for 2004, by polling organization]

A quick note on the sources:   Harris provided annual averages in an online report.  Daly computed results for Pew using cell counts for ideology in a cross-tab in this report; the Pew Research Center kindly provided the appropriately weighted results for 2004 on request.   We calculated average results for 2004 for the New York Times and Gallup.  The Times reports results for all questions for all surveys they conducted in partnership with CBS for 2004 (via a PDF available via the link in the upper right corner of this page - note, surveys conducted only by CBS are not included).   I obtained results for the 2004 Gallup surveys from their "Gallup Brain" archive.   Please consider this table a rough draft -- I'd like to verify the values with Gallup and the New York Times and request similar results from other national pollsters for 2004.

While the results in the table are broadly consistent (all show far more conservatives than liberals and 38-41% in the moderate category), there are small differences.  The Gallup survey shows slightly more self-identified conservatives (40%) than Pew (37%) and Harris (36%), and the New York Times shows slightly fewer (33%).  For today, let's consider the possible explanations.

Academics organize the study of survey methodology into classes of "errors" -- ways that a survey statistic might vary from the underlying "true" value present in the full population of interest.  The current full-blown typology is known as the "total survey error" framework.  I will not try to explain or define all of it here (though I can suggest a terrific graduate course that covers it all).   Rather, for the purposes of this discussion, let me oversimplify that framework and lump everything into three primary reasons the results for ideology might differ from pollster to pollster: 

1) Random Sampling Error - All survey statistics have some built in random variation because they are based on a random sample rather than on counting the full population.  We typically call this range of variation the "margin of error." 

In this example, sampling error alone does not account for the small differences across surveys.  Since each line in the table represents at least 10,000 interviews, the margin of error is quite small.  Assuming we apply a 95% confidence level, the margin of error for the Harris and New York Times results will be roughly ± 1%, and for Gallup and Pew roughly ± 0.5% (a quick sketch of this calculation appears below).  Thus, random statistical variation alone cannot explain differences of two percentage points or more in the table above.

2) Errors of Representation - If a survey is not truly random, the statistics that result may have some error.  In telephone surveys, a fairly large percentage of those contacted do not agree to participate.  Others are not home or are unavailable when called.  If these "non-respondents" are different from those that respond, the survey might show some statistical bias.  So, to use a hypothetical example, if liberals are more likely to be home or more willing to be interviewed, the survey will over-represent them.  This is "non-response bias."

Similarly, some respondents may be left out of a random digit dial telephone survey because their residence lacks a working landline telephone.  If those without telephones are different in terms of their self-reported ideology than those included, the result is a "coverage bias" that tilts the sample in one direction or another. 

These are the two big potential reasons for a less than representative sample.  Another potential problem is a deviation from purely random selection of which respondent within the household gets interviewed.  The pollster should strive to pick a random person within each household, but this is hard to do in practice.  Differences in the way pollsters choose respondents at the household level can introduce differences between surveys.

Most of the discussion of the differences between polls in terms of party identification or ideology assumes that these errors of representation are the only possible problem (other than random error).  They overlook a third category. 

3) Errors of Measurement - Even if the samples are all representative and all consist of the same kinds of people, the polls may still differ in terms of self-reported ideology because of the way they ask the ideology question.  In this example, we need to try to separate the underlying concept (whether Americans have a political ideology, whether they conceive of a continuum that runs from liberal to conservative and classify themselves accordingly) from the mechanics of how we ask the ideology question (what wording we use, how previous questions might define the context, how interviewers interact with respondents when they are not sure of an answer).  The short answer is that seemingly trivial differences in wording, context and execution can produce small but real differences in the results.

(Another theoretical source of error that I have not discussed is that the four organizations conducted a different number of surveys and were not in the field on precisely the same dates.  However, all four did periodic surveys during 2004, with slightly greater frequency in the fall, and I see no obvious trend during 2004 in the ideology results for NYT/CBS and Gallup.  So my assumption is that despite differences in field dates, the data were collected in essentially comparable time periods).
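To put a number on the sampling-error point in (1) above, here is a quick sketch of the standard margin-of-error calculation for a proportion; the sample sizes below are round illustrative figures rather than the pollsters' exact interview counts.

```python
# Margin of error at 95% confidence for a proportion, using the conservative
# p = 0.5 assumption. Sample sizes are illustrative round numbers.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 10000, 40000):
    print(f"n = {n:>6,}:  MoE = +/- {margin_of_error(n):.1%}")
# Roughly 3.1% for a single 1,000-interview survey, about 1% for a pooled
# 10,000-interview year, and about 0.5% for a pooled sample near 40,000.
```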

Tomorrow, I want to consider how we might go about distinguishing between measurement error and problems of representation.  I also want to suggest some specific theories - call them "hypotheses" if you want to get all formal about it - for why different pollsters showed slightly different results in self-reported ideology during 2004.  Let me say, for now, that it is not obvious to me which source of "error" (representation or measurement ) is to blame for these small differences. 

For today, I do want to provide the verbatim text of the ideology question for the four survey organizations cited above:

New York Times/CBS - How would you describe your views on most political matters? Generally, do you think of yourself as liberal, moderate, or conservative?

Harris -- How would you describe your own political philosophy - conservative, moderate, or liberal?

Pew -- In general, would you describe your political views as very conservative, conservative, moderate, liberal or very liberal? 

Gallup - "How would you describe your political views - Very conservative, Conservative, Moderate, Liberal, or Very Liberal? [Gallup rotates the order in which interviewers read the categories.  Half the sample hears the categories starting with very conservative going to very liberal (as above); half hears the reverse order, from very liberal to very conservative]

Feel free to speculate about the differences in the comments.    More in the next post.   

Posted by Mark Blumenthal on June 6, 2005 at 02:30 PM in Divergent Polls, Measurement Issues, Sampling Issues | Permalink | Comments (5)

June 02, 2005

USCV vs. USCV

Back to exit polls for a moment.  Bruce O'Dell, the founding Vice President of U.S. Count Votes (USCV), the organization that has been arguing that the official explanations for the "exit poll discrepancy" are "implausible," has just released a paper that refutes, well...the most recent "working paper" by U.S. Count Votes. 

Some background: Back in April, they released a report titled "Analysis of Exit Poll Discrepancies" (discussed on MP here and here) that purported to show the implausibility of explanations for the discrepancies provided by the exit pollsters themselves.  Subsequently, Elizabeth Liddle, a self-described "fraudster" who had reviewed early drafts of that report did her own analysis and showed that a statistical artifact undermined the conclusions of the US Count Votes report (discussed by MP here).  At the AAPOR conference last month, exit pollster Warren Mitofsky presented findings based on Liddle's work that confirmed her hypothesis.  At the conference, US Count Votes author Ron Baiman distributed another "working paper" that claimed to refute Liddle's work.  The new paper was signed by only four of the twelve authors of the original USCV report.  He subsequently posted several very long comments on this site in the same vein.

Got that?  It's been quite a "debate."

Yesterday Bruce O'Dell, one of the original USCV authors, stepped forward with his own forceful evisceration of Baiman's arguments, based on the computer simulations O'Dell did for USCV.  MP had considered doing a summary of O'Dell's paper, but could not find a way to improve on O'Dell's own summary:

The key argument of the USCV Working Paper is that Edison/Mitofsky's exit poll data cannot be explained without either (1) highly improbable patterns of exit poll participation between Kerry and Bush supporters that vary significantly depending on the partisanship of the precinct in a way that is impossible to explain, or (2) vote fraud. Since they rule out the first explanation, the authors of the Working Paper believe they have made the case that widespread vote fraud must have actually occurred.

However, a closer look at the data they cite in their report reveals that Kerry and Bush supporter exit poll response rates actually did not vary significantly by precinct partisanship. Systematic exit poll bias cannot be ruled out as an explanation of the 2004 Presidential exit poll discrepancy - nor can widespread vote count corruption. The case for fraud is still unproven, and I believe will never be able to be proven through exit poll analysis alone.

This paper should not be misinterpreted as an argument against the likelihood of vote fraud. Quite the opposite; I believe US voting equipment and vote counting processes are severely vulnerable to systematic insider manipulation and that is a clear and present danger to our democracy. I strongly endorse the Working Paper's call to implement Voter-Verifiable Paper Ballots and a secure audit protocol, and to compile and analyze a database of election results.

Judging only by the word count of the comments in my last post on this subject, there may appear to be some genuine question about whether the explanations provided by Edison-Mitofsky for the discrepancy between the exit poll results and the actual count are "plausible."  There is, to be sure, much sound, fury and name-calling in this debate, but on the substance and the evidence the jury is in.  Even US Count Votes' founding Vice President can see it.  As O'Dell says, "systematic exit poll bias cannot be ruled out as an explanation of the 2004 Presidential exit poll discrepancy."

UPDATE - DemFromCt over at DailyKos chimes in on O'Dell's paper and stresses a point I neglected.  Dem takes issue with:

The multiple attacks on Elizabeth Liddle's credentials, motivation, etc. (and those of anyone who agrees with her) that's become a cottage industry at DU [Democratic Underground]  and at times, here at Daily Kos by a minority of posters. Kudos to Bruce O'Dell to have the intellectual integrity to write this; my hat is doffed. I hope his paper (and post) is read in the spirit in which it was written. And we really need to move on to something else [link and emphasis added].

Agreed on all counts.

* * *

Note: As always, MP welcomes dissenting opinions in the comments section.   However, the subject of exit polls and voter fraud seems to generate an unusual level of invective.  One comment in the last post on this topic inexplicably mocked the religious faith of another commenter.  I have deleted that comment -- the first time I have ever seen the need to do so on Mystery Pollster.

MP is generally libertarian when it comes to the comments section, but found the earlier comment to be repugnant and unacceptable, so please be advised:  There is no room on Mystery Pollster for slurs against anyone's gender, race, ethnicity, religion or sexual orientation.  In the future, I will not hesitate to delete comments I consider morally offensive.  My board, my rules.

Posted by Mark Blumenthal on June 2, 2005 at 04:44 PM in Exit Polls | Permalink | Comments (98)