
July 28, 2005

About that YouGov Poll of British Muslims

Reader BK emailed with a question about a survey of Muslims in the United Kingdom conducted for the London Daily Telegraph by the company YouGov just after the first London bombings.  It asked respondents:  "Do you think the bombing attacks in London on July 7 were justified or not?"  Our reader was horrified that 6% of the British Muslims selected the answer "on balance justified" (11% answered "on balance not justified," 77% answered "not at all justified" and 6% were not sure).  BK asks, "Is this a well designed survey? Do you see these results as strong?"

The short answer is that MP is uncertain.  The longer answer provides a good opportunity to talk about the ongoing debate over Internet-based surveys.

YouGov conducts its surveys over the Internet, using a panel of respondents that agree to be interviewed at regular intervals for some financial incentive.  According to British polling blogger Anthony Wells, the company has "used advertising and in some cases even specialist recruitment consultants to try and build a panel that reflects all areas of society."  When they conduct a survey, YouGov draws a random sample of its volunteers and weights the results to match the known demographics of the population of Great Britain.
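The weighting step YouGov describes can be illustrated with a toy post-stratification sketch. Everything below is invented (two demographic variables, made-up population shares); real panels weight on more dimensions and typically rescale the weights, but the mechanics are the same: each respondent is weighted by the ratio of the population share of his or her demographic cell to that cell's share of the sample.

```python
from collections import Counter

# Toy post-stratification sketch. The respondents, cells, and
# population shares below are all invented for illustration.
respondents = [
    {"sex": "F", "age": "18-34"},
    {"sex": "M", "age": "18-34"},
    {"sex": "M", "age": "35+"},
    {"sex": "M", "age": "35+"},
]

# Hypothetical population share of each sex/age cell.
population_share = {
    ("F", "18-34"): 0.15, ("M", "18-34"): 0.15,
    ("F", "35+"): 0.35, ("M", "35+"): 0.35,
}

def cell(r):
    return (r["sex"], r["age"])

counts = Counter(cell(r) for r in respondents)
n = len(respondents)

# Weight = population share of the cell / sample share of the cell.
for r in respondents:
    r["weight"] = population_share[cell(r)] / (counts[cell(r)] / n)

print([round(r["weight"], 2) for r in respondents])  # [0.6, 0.6, 0.7, 0.7]
```

Note the catch: the women-35+ cell has no respondents at all, and no amount of weighting can conjure them up -- one reason panel companies recruit hard-to-reach groups so aggressively.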

The key issue is that YouGov's panel is not a probability sample. Probability sampling is the basis for "scientific" polling. Draw a true random sample in which every member of the population has an equal (or at least a known) probability of being selected, and the results of the survey can be considered projective of the larger population within a statistical "margin of error."  But panels like YouGov's (and those used in the US by Harris Interactive and Zogby) are not random samples at all.  They involve hundreds of thousands of volunteers who "opt in" to the panel through various sources, most often ads placed on web sites. 
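That "margin of error" is just the half-width of a confidence interval for a proportion -- and it carries this clean interpretation only when the sample really is random. A back-of-envelope sketch:

```python
import math

# Half-width of a 95% confidence interval for a proportion -- the
# "margin of error" -- valid only under (approximately) random sampling.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a typical n = 1,000 poll: about +/- 3 points.
print(round(100 * margin_of_error(0.5, 1000), 1))  # 3.1
```

For an opt-in panel the formula can still be computed, but the number it produces has no agreed-upon statistical meaning -- which is the crux of the debate that follows.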

Conventional survey researchers argue that polls based on non-probability samples cannot be considered "scientific" -- that they have no sound theoretical basis.  YouGov and the other online pollsters argue that the challenges now facing telephone surveys -- especially the response rates that typically plunge below 30% on US surveys sponsored by the news media -- undermine the theoretical basis of conventional polling.  They argue that with statistical adjustments, their non-probability samples will yield results that are just as reliable as those obtained with conventional methods.

This debate rages among survey researchers.  YouGov has produced notable successes in Britain (see Anthony Wells' summary), but was way off in its polls of the US presidential election last year.  Their "final prediction" had Kerry beating Bush by three points (50% to 47%), an error comparable to that experienced by the exit pollsters.  [For a more detailed discussion of this debate, see the paper by the noted academics Morris Fiorina and Jon Krosnick, posted on the YouGov/Economist web site.]

All things being equal, MP trusts polls that start as probability samples over those that do not.  In the case of the survey of British Muslims, however, all things are not equal.  Muslims -- at 2.7% of the British population -- qualify as what pollsters call a "rare population."  That means that trying to survey British Muslims with a standard random digit dial (RDD) probability sample is prohibitively expensive.  To get a sample of 500, the pollster would need to reach over 18,500 adults and then hang up on all but the 500 Muslims.  YouGov simply sent an email to those in its panel it had already identified as Muslims, and then weighted the results obtained from the 526 who responded "to reflect the Muslim population of Great Britain by age, gender and country of birth." 
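The screening arithmetic behind that "over 18,500" figure is simple, using only the numbers in the paragraph above:

```python
import math

# To net 500 completed interviews from a group that is 2.7% of the
# population, an RDD poll must screen roughly target / incidence adults.
incidence = 0.027  # Muslims as a share of the British population
target = 500       # desired completed interviews

screens_needed = math.ceil(target / incidence)
print(screens_needed)  # 18519
```

And that assumes every screened adult answers the phone and the screener truthfully -- real screening costs run higher still.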

Of course, an Internet panel is not the only way to survey a rare population.  Interestingly, the British survey organization MORI also conducted a survey of British Muslims in mid-July.   The MORI online summary says their survey was "conducted on-street and in-home among British Muslims aged 16+" and then, like the YouGov survey, "weighted by age, gender and work status to reflect the profile of Muslims in Britain according to 2001 Census Data."  Presumably, MORI sent interviewers to heavily Muslim neighborhoods where they went door-to-door or stood on street corners, using some sort of random method to select respondents.  Thus, MORI conducted a probability sample, but the sample is only representative of the neighborhoods and street corners they sampled from. 

MP will not speculate as to which approach is superior.  Neither produces a true probability sample of all British Muslims, but then, no such sample is feasible in this situation.   The two surveys asked very different questions, so we cannot compare their results for differences.

One possible advantage of the YouGov Internet poll is that it might have less "measurement error" on a question like the one that troubled our reader.  Ironically, Jonah Goldberg, writing at the NRO's The Corner wondered about that:

Presumably people who declined to answer or people who shaved their responses did so in order to downplay or conceal their sympathies. I suppose it's possible that some folks felt pressure from family members to sound more militant than they are, but I'd have to guess this poll underestimates the problem.

Actually, the fact that it was conducted online probably mitigated the sort of "underpolling" that Goldberg worried about.  Consider:  We know that respondents will often give a less than truthful answer when the truth might create some social discomfort between the respondent and the interviewer.  Of course, the YouGov poll did not involve an interviewer.  Respondents replied by computer.  So in this case, it is not hard to imagine a British Muslim who believed the attacks to be "justified" responding more truthfully to an impersonal web questionnaire than to a person on the other end of the telephone. 

So how reliable is the YouGov survey and what can we make of the results?  MP will leave it up to readers to decide, but urges caution.  The results are interesting and could not have been obtained by other means, but both the YouGov and MORI polls depart from true probability samples.  As such, the results may represent the views of all British Muslims. 

Or they may not. 

PS: Some additional commentary on this poll by Anthony Wells, Gerry Daly and John O'Sullivan.

Posted by Mark Blumenthal on July 28, 2005 at 05:50 PM in Measurement Issues, Polls in the News, Sampling Issues | Permalink | Comments (8)

July 27, 2005

Sneaky Plame Poll? Part II

So, as promised in Part I, let's continue to consider the post on Redstate.org last week by Jay Cost (aka the Horserace Blogger), who sharply criticized a recent ABC News poll on the Plame/Wilson/Rove imbroglio.  Cost had some additional criticisms I did not address there, but will below.  Two are specific to this poll, but one is of much broader general interest.  As it happens, reader FR emailed to raise a similar query:  "Why should we trust public opinion polls on issues where respondents probably know very little about the topic?"   That is a very good question.

First, a review.  Cost's post zeroed in on three questions from the ABC poll.  I've copied those questions below, in the order they were asked, along with a few others that ABC asked about Rove, Plame, et al. (for full results, see the ABC pdf summary):

1. As you may know, a federal prosecutor is investigating whether someone in the White House may have broken the law by identifying an undercover CIA agent to some news reporters. One reporter has gone to jail rather than reveal her source. How closely are you following this issue - very closely, somewhat closely, not too closely or not closely at all?

2. Do you think this is a very serious matter, somewhat serious, not too serious or not serious at all?

3. Do you think the White House is or is not fully cooperating with this investigation?

4. It's been reported that one of George W. Bush's closest advisers, Karl Rove, spoke briefly with a reporter about this CIA agent. If investigators find that Rove leaked classified information, do you think he should or should not lose his job in the Bush administration?

5. Do you think the reporter who has gone to jail is doing the right thing or the wrong thing in not identifying her confidential source in this case?

Cost had criticisms about the first question that we discussed in the last post.  He also raised other objections.  I'd like to comment on three:

A) The ABC release did not include "any kind of cross-tabulation to see if those who are not paying attention are the ones who think it's serious."   He goes on to speculate that the 47% who are not paying attention (on Q1) might be the bulk of the 47% who think the White House is not cooperating (on Q3).

It is certainly true that ABC did not provide complete cross-tabulations of these questions by the attentiveness question, but they did characterize what such cross-tabs would show.  As a commenter on RedState.org points out, the ABC release did include the following text:

Those paying close attention (who include about as many Republicans as Democrats) are more likely than others to call it very serious, to say the White House is not cooperating, to say Rove should be fired if he leaked, and to say Miller is doing the right thing.

So, Cost guessed wrong.  On this count, ABC is not guilty of trying to "make it seem like" the public is less happy with the White House than it is.

Now, MP certainly agrees (and apparently, so do the analysts at ABC) that such a cross-tab analysis is appropriate.  MP would always like to see more data than less, although in this case, ABC certainly provided enough information to allay Cost's suspicions.

On the other hand, MP does not agree with Cost when he asks rhetorically, "Why should we care what the uninformed think on the matter?"   Two reasons:  First, in this instance at least, ABC asked about attentiveness, not information level (although one is a reasonable surrogate for the other).  Second, people will sometimes possess strong opinions about issues they are not following closely. 

Consider, for example...Jay Cost.  He tells us in his opening line that, "I really have no interest in this Plame/Wilson/Rove 'scandal.'"  He may not have any interest, but he certainly seems to have an opinion (the quotation marks around the word scandal seem like a pretty good hint).  How would he feel if a pollster ignored his opinion (and those with similar views) just because his interest level is low?   I'm pretty sure I know the answer. 

B) Cost argues that the information about the Rove/Plame affair provided in the first question of the series creates a "frame" that influences respondents' answers to the questions that follow.

Here MP must concede that Cost may have a point.  Survey methodologists have shown that "order effects" can occur.  Put another way, questions asked early in a survey can affect the answers provided to questions that follow.  We noted a few weeks back that,

Small, seemingly trivial differences in wording can affect the results.  The only way to know for certain is to conduct split sample experiments that test different versions of a question on identical random samples that hold all other conditions constant.

Unfortunately, the academic research on this issue is relatively limited. We know that order effects can occur, but often do not.  The only way to know for sure is with extensive pre-testing and split sample experiments which public and campaign pollsters rarely have the time or budget to conduct.  So we try to follow some general rules:  We try to write questionnaires to go from the general to the specific, from open-ended questions to those that provide answer categories, from those that provide little or no information to those that provide a great deal. 

MP will grant that he is a bit uncomfortable with the amount of information provided in the first question.  We also tend to agree with Cost that it is odd to ask respondents if they consider this a "serious matter," after informing them that it involves breaking the law "by identifying an undercover CIA agent," and that a reporter has already gone to jail.  How is that not "serious?"  Nevertheless, MP doubts that the first two questions provide anywhere near the sort of bias or "framing" effect that Cost hypothesizes. 

As for the other questions, we can speculate about it endlessly.  Different pollsters will take different approaches.  Consider the recent survey by Gallup on this issue, released earlier this week (results also posted here).  They found results consistent with the ABC poll on how closely Americans are following the issue, but on a follow-up, found that 40% think Bush should fire Rove.  On the ABC poll, 75% said Bush should fire Rove "if investigators find that Rove leaked classified information."  Very different results, but also very different questions. 

C) The third and most important question that Cost raises is a more general one:  Can we trust any polls that ask about subjects about which respondents are poorly informed?

Cost argues:

Political scientists have found that when people are not paying much attention to an issue, they are quite susceptible to "framing effects" that can be created through question wording and question ordering (for more detail, see John Zaller's The Nature and Origins of Mass Opinion, 1992).

This is a good point, although MP does not agree that the ABC pollsters "designed" their poll "to give the impression that the public thinks something that it does not."  However, Cost's more general point is worth consideration:  Just what should we make of polls about issues on which the public is poorly informed? 

Arguing that we should never poll on such issues is a non-starter.  The market will not tolerate it.  We follow (and argue about) poll questions on issues like these because we care about them.  Telling political junkies to ignore polls on such topics is like asking us to stop blinking.   Consider the blogger who warned on Election Day, "don't pay attention to those leaked exit polls."  That sure worked.   

More importantly, political partisans are usually interested in how to persuade, how to move public opinion, not just where it stands now.  So we have good reason to want to gauge how the public will react to new information.  We just need to be careful in reporting the results to distinguish between questions that measure how the public feels right now, and those that provide a "projective" sense of how they might feel in the future. 

More specifically, MP has two pieces of advice for what to make of polls about issues on which the public appear to be poorly informed:

  • Be careful!  Pollsters can push respondents easily on such issues, and results can be very sensitive to question wording.  Any one poll may give a misleading impression.
  • As we have suggested before, look at many different polls.  No one pollster will have a monopoly on wisdom.  Yet use a resource like the Polling Report or the Hotline Poll Track Archives ($$), and you will begin to "asymptotically approach the truth" (as our friend Kaus often puts it). 
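A toy sketch of why looking at many polls helps: pooling shrinks the standard error as sample size accumulates. The (estimate, sample size) pairs below are invented, and this naive pooling ignores house effects and differing methods -- it is the intuition, not a recommended estimator.

```python
import math

# Invented (estimate, sample size) pairs for three hypothetical polls
# asking the same question.
polls = [(0.47, 1000), (0.50, 800), (0.48, 1200)]

# Sample-size-weighted average, treated as one big pooled sample.
total_n = sum(n for _, n in polls)
pooled = sum(p * n for p, n in polls) / total_n
se = math.sqrt(pooled * (1 - pooled) / total_n)

print(round(pooled, 3), round(se, 3))  # 0.482 0.009
```

A single n = 1,000 poll carries a standard error near 1.6 points on this estimate; three pooled polls cut it to under a point -- hence the "asymptotic approach" to the truth.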


Posted by Mark Blumenthal on July 27, 2005 at 05:31 PM in Measurement Issues, Polls in the News | Permalink | Comments (2)

July 21, 2005

Sneaky Plame Poll? A Reality Check

Earlier this week, ABC News released poll results concerning the federal investigation of the leak of CIA operative Valerie Plame's identity.  The poll showed a sharp decline in the percentage of Americans who say that the White House is "fully cooperating" with the investigation and a large number who want Bush advisor Karl Rove to "lose his job . . . if investigators find that Rove leaked classified information." 

On Wednesday, Jay Cost (aka the Horserace Blogger) excoriated the poll in comments posted on RedState.org. MP is instinctively skeptical of ad hominem attacks wrapped in overheated rhetoric.  Cost's piece -- which labels the ABC poll variously as "patently absurd," "lousy or should I say tendentious," "done with the ostensible purpose of adding lighter fluid to a story," "screams in an unequivocal voice, 'I am garbage, please debunk me!'" -- certainly seems to fall into that category.  However, Cost is a well-read student of political science.  While his conclusions about the ABC poll are questionable, his arguments are worthy of our consideration. 

Let's begin with the first question ABC asked about the Plame investigation:

As you may know, a federal prosecutor is investigating whether someone in the White House may have broken the law by identifying an undercover CIA agent to some news reporters. One reporter has gone to jail rather than reveal her source. How closely are you following this issue - very closely, somewhat closely, not too closely or not closely at all?

Cost has three complaints about this question:  He doesn't like the opening clause ("as you may know") because, he says, it is a "priming or cueing mechanism" intended to get the issue "to the front of people's mind."  He is troubled by the amount of information provided and the apparent narrative it creates.  He argues that ABC "puts all [the] pieces together" in a way that "frames" them "into one sensible story."  He concludes:

ABC News is playing a subtle psychological trick with the public here -- trying to make them respond that they are paying attention when they are not actually paying attention.

Let's start with some explanation.  Pollsters frequently use the phrase "as you may know," to introduce unfamiliar information.  The phrase typically serves as both a transition from the previous question and a polite softener to avoid insulting knowledgeable respondents. An interview is a conversation, and this clause is a nice way of saying, "yes, I realize you may already know these details; no, I don't think you're stupid, so please bear with me."  MP grants that it is a bit odd to use the phrase in a probe of awareness, but if research exists to prove this phrase inflates awareness,  MP has not seen it. 

Second, Cost has a point when he warns that "informed" measures may exaggerate reported awareness (Cost refers to this as a "self selection" problem).  In this case, the pollsters have two ways to ask about awareness.  They can ask a purely open-ended (or "unaided") question: "What are some of the stories in the news you have been following lately?"  Or they can ask a closed-ended "aided" question (How closely are you following this issue?) that provides just enough detail to trigger the respondent's memory. 

Both approaches have drawbacks.  The open-ended question may tend to underestimate true awareness, as some less verbal respondents may hold back opinions.  Others may have genuine memories that require a "trigger." (Imagine:  "Oh, the story about the reporter who went to jail?  Oh yes, I remember now...").  On the other hand, as Cost suggests, the social pressure of the interview can induce some respondents to want to seem better informed than they are.   Thus, unaided questions may slightly understate awareness while aided questions may slightly overstate it.  The truth usually falls somewhere in between.

Having said all this, MP tends to agree with Cost that the ABC awareness question includes an unusual amount of detail.  The issue is whether the combination of (a) the introductory phrase "as you may know," (b) the level of detail and (c) the supposed narrative "framing" produce a meaningfully higher level of reported awareness than a more bare-bones question.

Fortunately for us, the Pew Research Center included just such a question with nearly identical answer categories on a survey fielded on the very same dates (July 13-17).  The Pew question was:

Thinking again about news stories:  How closely have you followed reports that White House adviser Karl Rove may have leaked classified information about a CIA agent - very closely, fairly closely, not too closely, or not at all closely?

The question has no "as you may know" introductory clause and describes the issue in just 14 words (compared to 37 for ABC).  Yet the results are remarkably similar.  Pew shows 23% say they followed the story "very closely," 25% "fairly closely" (for a total of 48%).  ABC shows 21% "very closely," 32% "somewhat closely" (for a total of 53%).   The ABC survey gets a slightly lower percentage for "very closely," but a larger response in the second category (only the second difference appears to be statistically significant).  However, since ABC labels their second category "somewhat closely" compared to Pew's "fairly closely," we cannot be certain what caused the slight difference -- the answer category or the text before it. 
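A quick two-proportion z-test is the sort of check behind a "statistically significant" judgment like that one. The sample sizes below are assumed (roughly 1,000 each, typical for national polls); the actual releases give the exact ns.

```python
import math

# Two-proportion z-test: is the difference between two polls' percentages
# larger than sampling error alone would explain? |z| > 1.96 is the usual
# 95% threshold. Sample sizes of ~1,000 are assumed for illustration.
def two_prop_z(p1, n1, p2, n2):
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

z_very = two_prop_z(0.23, 1000, 0.21, 1000)    # "very closely": 23% vs 21%
z_second = two_prop_z(0.25, 1000, 0.32, 1000)  # second category: 25% vs 32%

print(round(z_very, 2), round(z_second, 2))  # -1.08 3.47
```

The two-point gap on "very closely" falls well inside sampling error, while the seven-point gap on the second category does not -- consistent with the characterization above.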

Regardless of the explanation, the differences in the results are trivial and, in MP's view, not worth all of Cost's huffing and puffing.    We would reach the same substantive conclusion about awareness of the Plame story from either poll's results. 

But what about his point that informed questions slightly overstate the true level of awareness? What do we make of that?  When analyzing these sorts of results, an astute survey consumer should always ask, "compared to what?"  In this case, how does awareness of the Plame leak compare to awareness of other issues?   The numbers become much more useful and meaningful as we put them into context. 

Once again, the Pew survey provides an answer:  At 23% very closely, awareness of the leak ranked well behind the terrorist bombings in London (48%), the war in Iraq (43%) and the recent Gulf Coast hurricanes (38%).  It ranked at the same level as the O'Connor retirement (24%) and ahead of "the move by a Chinese firm to buy the American oil company Unocal" (11%). 

In preparing this post, MP also stumbled on an amazing resource for these sorts of comparisons buried on the Pew Research Center website. Pew has been asking awareness questions about public issues using the same methodology for nearly twenty years.  On this page, they provide familiarity ratings (the "very closely" percentage) for over 1,100 different stories they have asked about from 1986 to 2004.  That's a lot of context!

Cost has more complaints about the ABC survey that are also worthy of some discussion, but my blogging time is up for today. I'll come back to the rest in a subsequent post.

UPDATE (7/22):  MP wondered why the Pew News Interest Index reports the percentage that say they follow an issue "very closely," so he emailed Scott Keeter, the director of Survey Research at Pew, and asked.  Here is Keeter's answer: 

We tend to find that the “very closely” category is more sensitive to change and to differences across items. It is also probably less subject to social desirability pressures, since respondents have at least two other categories of attention to use if they feel the need to show they are not completely tuned out.

Also, Jay Cost responds in the comments section.

Posted by Mark Blumenthal on July 21, 2005 at 06:12 PM in Measurement Issues, Polls in the News | Permalink | Comments (6)

July 18, 2005

Iraq the Vote: More from CBS

Unfortunately, it's a busy day in MP-land, with time for just another quick update on recent polling on retrospective and prospective views of the war in Iraq.  As noted here two weeks ago, CBS News included two questions on an April 2004 survey at the urging of the Duke academics whose work we have been discussing.   Last week (July 13-14), CBS conducted a new poll (n=632 adults) that updates these results. 

Here's a quick summary (see the CBS PDF release for full results):

Q10. "Looking back, do you think the United States did the right thing in taking military action against Iraq, or should the U.S. have stayed out?"  Forty-eight percent (48%) now say it was the "right thing;" not much different than 45% in June and 47% in April 2004.

Q13. "Regardless of whether you think taking military action in Iraq was the right thing to do, would you say that the U.S. is very likely to succeed in establishing a democratic government in Iraq, somewhat likely to succeed, not very likely to succeed, or not at all likely to succeed in establishing a democratic government there?"  They asked this question of a random half sample on this survey:  61% said the US is very or somewhat likely to succeed, statistically unchanged from 60% in April 2004.

CBS split their sample to try a more general variant of the prospective question:

Q14. "Regardless of whether you think taking military action in Iraq was the right thing to do, would you say that the U.S. is very likely to succeed in Iraq, somewhat likely to succeed, not very likely to succeed, or not at all likely to succeed in Iraq?"  62% said the US was very or somewhat likely to succeed. 

These results are nearly identical to those obtained on similar questions asked on the Hotline/Westhill poll conducted in early July (MP commentary here).

Posted by Mark Blumenthal on July 18, 2005 at 02:36 PM in Polls in the News | Permalink | Comments (0)

July 14, 2005

Iraq the Vote: Hotline/Westhill Takes Up the Challenge

Another update on the work of Duke academics Peter Feaver, Christopher Gelpi and Jason Reifler concerning the underpinnings of popular support for the war in Iraq (covered here, here and here).  The results of the latest poll from The Hotline & Westhill Partners, released today, includes questions comparable to those used by Feaver and his colleagues and some new measures that can help us understand what Americans mean when they say they expect the U.S. to "succeed" in Iraq.

Some background - A column by the David Ignatius in yesterday's Washington Post provides a concise summary of one aspect of the Gelpi-Feaver-Reifler thesis:

They argue that it isn't casualties per se that drive U.S. public opinion about war. Instead, it's the public perception of whether a war is winnable.

"When the public believes the mission will succeed, then the public is willing to continue supporting the mission, even as costs mount. When the public thinks victory is not likely, even small costs will be highly corrosive," the authors write.

They also argue, in a second paper, that voters' willingness to reelect George Bush in 2004 had more to do with retrospective judgments about whether he did the "right thing" in deciding to go to war than with prospective attitudes about success. 

The Hotline/Westhill poll released today includes two questions that closely resemble the measures used in the Gelpi-Feaver-Reifler research:

Looking back, do you think the United States did the right thing in taking military action against Iraq, or should the U.S. have stayed out?  46% did the right thing; 46% should have stayed out; 8% neither/don't know

Regardless of whether you think taking military action in Iraq was the right thing to do, would you say that the U.S. is very likely to succeed in Iraq, somewhat likely to succeed, not very likely to succeed, or not at all likely to succeed?  60% very/somewhat likely; 36% very/somewhat unlikely; 4% don't know

As Gelpi et al. point out in their paper, the central role of "expectations of success" begs the obvious next question: "how does the public define and measure success in Iraq? (p. 37)"   They try to answer this question with their own survey measures, summarized on pp. 37-38 (and tables 6 & 7) of their paper.  They presented a list of possible measures of success and asked respondents to choose the best.   Their bottom line: 

The public does not measure success in terms of body bags. On the contrary, the public claimed to focus on whether the coalition was in fact winning over the hearts and minds of the Iraqi people, as measured by Iraqi willingness to cooperate with coalition forces.

The Hotline/Westhill survey offers some additional data on this issue.  Rather than ask respondents how they might judge success, it asks more detailed questions about expectations.  Specifically, it asks respondents to rate "how confident" they are that each of the following might occur "in the next year" (numbers are the total of those answering "very confident" or "somewhat confident"):

  • 64% are confident "that the Iraqi people will be better off than they are today"
  • 47% are confident "that there will be significant reductions in U.S. troop strength"
  • 41% are confident "that the U.S. will have achieved its goals in Iraq"
  • 37% are confident "that Iraq will have a stable, democratic government"

MP notes that 60% of the same respondents expect the US to "succeed in Iraq."  This result suggests that judgments about whether the Iraqi people are now "better off" than before are more influential in driving an overall expectation of success than judgments about whether a "stable, democratic government" in Iraq is a realistic possibility. 

The folks at The Hotline and Westhill Partners can use their data to test this proposition.  Which of these four measures of expectations can best explain why Americans expect the US to succeed or fail in Iraq?  Specifically, MP wonders:

  1. How do the results to the four specific expectation questions look when cross-tabulated by their expectations for success?
  2. More to the point (but a bit harder to explain):  What would a simple regression analysis show that treated the expectation-for-success question as a dependent variable and the four specific questions as independent variables?

Here is hoping our friends at The Hotline indulge us one more time.
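For readers unfamiliar with the second request, here is a sketch of the kind of regression MP has in mind, run on simulated data. Everything below is made up purely to show the mechanics: four hypothetical 1-4 confidence ratings predict a simulated overall expectation-of-success rating, and the fitted slopes reveal which item carries the most weight.

```python
import numpy as np

# Simulated respondents: four hypothetical 1-4 confidence ratings each.
rng = np.random.default_rng(0)
n = 500
X = rng.integers(1, 5, size=(n, 4)).astype(float)

# Simulated overall expectation-of-success rating, built from known
# weights plus noise (in real data these weights are the unknowns).
beta_true = np.array([0.6, 0.2, 0.3, 0.1])
y = X @ beta_true + rng.normal(0, 0.5, size=n)

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted slopes recover beta_true up to sampling noise; the largest
# slope flags the item that best explains the overall rating.
print(np.round(coefs[1:], 2))
```

With real 4-point survey scales an ordered logit would be more defensible than OLS, but the simple version conveys the idea: whichever expectation item gets the biggest coefficient is doing the most work in driving overall expectations of success.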

(Typos corrected)

Posted by Mark Blumenthal on July 14, 2005 at 10:47 PM in Polls in the News | Permalink | Comments (4)

July 11, 2005

Internet Polling: Unfulfilled Promise

An update on the topic of Internet-based political polling: Last week, I noted that some influential academic research on public opinion and the Iraq war had been conducted on the Internet.  It involved a "panel" of respondents maintained by a company known as Knowledge Networks and recruited using conventional random digit dial (RDD) telephone sampling.  The company offers free Internet access to willing participants who are not already online.

However, Knowledge Networks' use of "probability sampling" is unique among Internet-based surveys.  Many other research companies now offer to poll panels of self-selected respondents who volunteered to participate in their surveys, usually for some sort of monetary incentive.  These non-probability panels are all the rage in commercial market research but, for now, have seen little application in political polling. Why?

In a guest column that appeared in National Journal's Hotline two weeks ago, Doug Usher, a pollster with the Democratic firm the Mellman Group, offered some sensible analysis:

When it comes to quantitative data for political applications -- statistically reliable answers that are projectable onto the entire population -- telephone surveys remain critical. And for good reason: when it comes to cost, population coverage, and predictive accuracy for nearly all political polling, telephone surveys remain the gold standard...

Telephone surveying today -- as always -- has significant problems, and critics are correct for pointing them out. Moreover, the Web may be a critical tool for supplementing telephone polling - indeed, the time may be ripe for the transition. But there are very real impediments to its use in polling in most campaign contexts. Until we overcome those impediments, telephone surveying will continue to dominate the political polling landscape.

Usher goes into depth on three "oft-repeated falsehoods about the tradeoffs between Internet and telephone polling."  These are worth reading in full but -- until today -- were trapped behind The Hotline's subscription wall.  However, both the author and the folks at National Journal have kindly consented to allow MP to reproduce Usher's column in full.  It appears after the jump.  Read it all.

As it happens, The Hotline today also unveiled a web version of its very cool "Blogometer" feature that is now free to non-subscribers.  It's definitely worth a click.   

The following originally appeared in The Hotline:

The Internet's Unfulfilled Promise For Political Polling

By Doug Usher, Hotline guest contributor
Thursday, June 30, 2005

The Internet has dramatically changed the face of most aspects of today's political campaigns. Key elements of campaigns, including fundraising, grassroots network development, and GOTV, now rely on Web innovations for a large and growing portion of the work they do.

At the same time, the Internet has not lived up to its early promise for one key part of campaign consulting: polling. 

And this may not change in the future. Unless major changes are made in e-mail registries, Internet privacy laws, and the way we interact with the Internet, it may never be the tool for quantitative research that the telephone has been for over 50 years.

At the Mellman Group, we've been harnessing the power of the Web in many of our qualitative research applications. Web-based ad testing allows us to very quickly gain input from hundreds of voters in a matter of days, giving us (and campaigns) new flexibility to test ads on the fly.

Online qualitative tracking allows pollsters to advise campaigns about changes on the ground, from all over candidates' home turf. This direct give-and-take helps clients learn about surprise advertising and direct mail, in addition to other changes on the ground that can have an impact on public opinion.

But when it comes to quantitative data for political applications -- statistically reliable answers that are projectable onto the entire population -- telephone surveys remain critical. And for good reason: when it comes to cost, population coverage, and predictive accuracy for nearly all political polling, telephone surveys remain the gold standard.

The argument for moving to the Internet for quantitative survey research usually comes down to a cataloguing of the reasons why it is harder to reach people over the phone: caller ID, growing reluctance to answer the phone, younger people without landlines, etc.

But this begs the question: is the Web an adequate replacement for telephone surveys? Here are three oft-repeated falsehoods about the trade-off between Internet and telephone polling.


MYTH #1: Telephone polling is no longer accurate, because of low response rates, cell phones, caller ID, and other factors.

It is true that it is becoming harder to reach people via the telephone. Response rates are down (although the exact meaning of "response rates" often lies in the eyes of the beholder). Some voters only have cell phones, which pollsters are not allowed to call. And yes, some people will never take a phone call. All of those are problems, for sure. And they're getting worse.

There was a lot of huffing and puffing about the hard-to-reach cell-phone wielding Deaniacs in Iowa. Around election time, newspapers spilled a lot of ink (and blogs a lot of bandwidth) on the problems that this lack of reachability was going to have.

Despite all of these pre-election complaints, did polling mislead us last election cycle? Were there any results in the last election that deviated in any substantial way from pre-election polling? Naturally, we Democrats would have liked to see Kerry win Ohio and win the general election. But the results were all within the margin of error of public polling (almost exclusively telephone surveys). In the Senate races, Republicans probably hoped that Pete Coors could beat Senator Salazar, and Democrats hoped that Dan Mongiardo could carry out his improbable effort to unseat Jim Bunning. But a review of the pre-election polling shows the actual outcomes to be, frankly, unsurprising.

The nice thing about polling ballot items (as opposed to issue-based polling and corporate market surveys) is that we have a check. Not only was our internal surveying accurate, but the public polls (with just a few exceptions) were as accurate as they have ever been this cycle.

MYTH #2: The high rates of Internet usage mean that Americans are now reachable via the Web.

It is certainly true that more people are using the Web than ever -- in every demographic category. According to the Pew Center on the Internet and American Life, 70 million voting-age Americans go online on an average day, up from 51 million in 2000. Overall, 136 million people use the Web -- 67 percent of those over 18.

But many make a fundamental error in interpreting this statistic: just because somebody uses the Web does not mean that they are reachable via the Web. And unless the architecture of the Web is changed dramatically, voters may never be adequately reachable via the Web for useful quantitative research for political campaigns.

All telephone numbers are knowable. Even unlisted numbers can be found, via random digit dial technology. More important, because of area code and exchange standards, phone numbers are knowable within relevant political geography -- including states, counties, and most Congressional Districts. Skilled phone match vendors also do an excellent job getting phone numbers for voter files, which allows pollsters to reach voters at the more local levels of political geography.

E-mail addresses are not universally knowable -- because of the structure of the Internet and e-mail, there are infinite permutations of addresses and domain names. Spammers are continuously trying to circumvent this by generating random e-mail addresses and sending bulk e-mails. While that may be a good way to sell cheap prescription drugs to a small segment of the population, it is much less effective in reaching a representative cross-section of an electorate.

Even with ever-increasing online activity, this problem may never be solvable. Assuming you are able to get through to people, how would you know that they live in the district of interest? There is no connection between e-mail addresses and home addresses, rendering them unreliable in most applications for political polling.

At this point, the only truly "reachable" online population is those who have opted in to survey research. Some have argued that these samples are adequately representative of the country, and even for some large states, when weighted to demographic characteristics. Assuming that this is true, that still leaves nearly all relevant political geography uncovered -- most states, and nearly every Congressional District, not to mention every state legislative seat, county executive seat, and mayoralty. And how about finding likely primary voters who are actually properly registered?

Opt-in surveys may be adequate at some point in the future; indeed, they may be adequate now for looking at a very limited political geography. However, they are not now usable for most political polling. And as we move forward, it does not seem particularly likely that their usability will expand to the bulk of political geography in the United States.

MYTH #3: The Internet now allows us to reach voters whom we can't reach by phone.

This is one of the most promising aspects of Internet survey research. If we could supplement telephone surveys with Internet research that covers those who are unreachable, we might be able to develop a better research instrument. But the jury is still out on whether it can serve this purpose.

It seems reasonable to assume that "cell-phone only" twenty-somethings spend time online. Additionally, many of those who use technology (like caller ID) to avoid phone calls from people they don't know also have some connection to the Internet.

But -- as with the voting population more generally -- the fact that they are online does not make them reachable online. Indeed, it seems unlikely that a person who uses technology to guard their time from telephone surveys will then turn around and spend their time online participating in the very same activity. Do some take surveys online? Probably. But are they representative of the larger population? We just don't know.

For the only population that is truly unreachable by traditional phone methods -- those without a landline -- Internet polling may provide some inroads. But fundamental questions will remain, as discussed above: are you sure that the people you are speaking with are a) without a landline, b) from the appropriate political geography, and c) available to you in an adequately representative way? If you can be reasonably sure that those three questions are answered, then Internet polling may provide a nice supplement.

What's the future for political public opinion research over the Web?

The Web is a growing force in qualitative public opinion research in politics -- an invaluable tool for testing ads and exploring the nature of individual public opinion. Our Web-based research is now part of many of the campaigns we work with, allowing them to do research in time frames that were never before possible. In time, more applications will be available online, allowing us to explore different populations quicker than we've ever been able to in the past.

And there are grave concerns about the future of telephone polling, as response rates continue to decline, and reachability narrows.

Yet ten years after Netscape's IPO, the Internet has not lived up to its billing as the panacea for quantitative research. Indeed, I have yet to hear of a single candidate facing a competitive race that used Internet polling as the basis for strategic decisionmaking.

As of now, there are a number of ways for certain types of quantitatively precise surveys to be conducted over the Web. Many organizations -- including universities and corporations -- use the Internet to conduct surveys of their employees, students or members. This makes sense, as those organizations tend to have comprehensive and accurate e-mail lists, and are recognized by the membership as credible invitees.

Additionally, at least one national panel has been formed which recruits survey respondents and provides them with access to the Internet in exchange for survey participation. This helps overcome some of the obstacles discussed above, but is also expensive and time consuming. Moreover, its accuracy for quantitative research is largely limited to national efforts.

Telephone surveying today -- as always -- has significant problems, and critics are correct for pointing them out. Moreover, the Web may be a critical tool for supplementing telephone polling - indeed, the time may be ripe for the transition. But there are very real impediments to its use in polling in most campaign contexts. Until we overcome those impediments, telephone surveying will continue to dominate the political polling landscape.

© 2005 by National Journal Group Inc., 600 New Hampshire Avenue, NW, Washington DC 20037.  Any reproduction or retransmission, in whole or in part, is a violation of federal law and is strictly prohibited without the consent of National Journal. All rights reserved.

Posted by Mark Blumenthal on July 11, 2005 at 06:14 PM in Sampling Issues | Permalink | Comments (1)

July 08, 2005

When is a Poll Really a Poll?

Reader PW emailed yesterday with a question about a poll that appeared on Tuesday in the blogosphere.  "Is there anything you can quickly glean from this that would suggest that it is obviously phony  . . . Or perhaps real?"  The general question -- how can one tell when a leaked poll is real? -- is a good one.  Expect it to come up again and again during the 2006 campaign.  Let's take a closer look.

Unfortunately, the nature of the dissemination of polling data allows for the possibility of shenanigans by the ethically challenged.  It is not hard to find examples of "overzealous" partisans who have leaked highly misleading or even fictional results.  How can an ordinary consumer of polling data tell the difference?  MP suggests three rules of thumb:

1) Has the pollster gone "on the record" about the results?   We have all seen stories like the following:  It's the final weekend before Election Day and a campaign spokesperson is spinning their chances for success.  Invariably, they cite "internal polling" that shows their campaign surging ahead, holding their lead, etc.  How many times have we heard such a statement only to see a totally different result a few days later when all the votes are counted? 

Spin happens.  A good rule of thumb in these situations is to ask whether the pollster who conducted the survey is willing to put their reputation on the line to release and discuss the actual results, or whether a campaign spokesperson or unnamed source is simply characterizing unspecified "internal polling."  MP puts little faith in the latter. 

Of course, private campaign pollsters (like MP) often release results when they show good news for our clients.  Such releases typically follow an unwritten Washington convention:  We prepare a one or two-page memorandum on our company letterhead that summarizes the most favorable highlights of the survey and let our clients distribute the memo as they see fit.  Most statewide and congressional campaigns now typically send such memos to National Journal's Hotline ($$), which routinely includes the results in their daily news summary. 

Bottom line:  look for an official release, an attribution to the polling firm from a mainstream news source or an on-the-record quotation from the pollster. 

2) Does the pollster disclose their basic methodology? According to the principles of disclosure put out by the National Council of Public Polls (NCPP),** public survey reports should include the following:

  • Sponsorship of the survey;
  • Dates of interviewing;
  • Method of obtaining the interviews (in-person, telephone or mail);
  • Population that was sampled;
  • Size of the sample;
  • Size and description of the sub-sample, if the survey report relies primarily on less than the total sample;
  • Complete wording of questions upon which the release is based; and
  • The percentages upon which conclusions are based.

Most legitimate reports -- including the memoranda released by internal campaign pollsters -- will meet the NCPP standards for disclosure.  If the survey report does not include this information, take it with a huge grain of salt. 

3) Does the pollster go beyond the NCPP disclosure standards?  This rule may be of little practical help for ordinary readers, since very few pollsters report more than the basics.  Nonetheless, MP hopes that pollsters will begin to go beyond the sensible NCPP standards and that reporters will begin to ask tougher questions about how polls are done.  Specifically:

  • The sampling frame -- did the pollster sample all telephone households (random digit dial, RDD) or use some sort of a list (such as a list of registered voters)?
  • What weighting procedures, if any, were used? 
  • What was the full text and order of all questions released, including all questions that preceded those on which results were based (questions that may have created a response bias)? 
  • What filter questions, if any, were used to screen to the population of interest?
  • If the pollster reported results of "likely voters," how were such voters defined and selected? 
  • What is the demographic composition for the weighted and unweighted samples?  If results are based upon a subsample, what is the demographic composition of that subsample?
  • What was the response rate for the survey? 

Now, MP assumes that other pollsters will question the wisdom of routinely releasing such information, but the point here is simple:  If reporters or readers are in doubt about whether a poll is genuine, they can tell a lot from the pollster's willingness and ability to disclose this level of detail.  Conversely, if a pollster is not willing to disclose information on such items as the sample frame, the composition of the sample, the way they defined likely voters, the text of screening questions or those preceding the questions of interest, reporters and readers should be highly skeptical. 
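Weighting, one of the disclosure items MP asks about above, is mechanically simple even when its consequences are not. Here is a minimal post-stratification sketch in Python; the group labels and population shares are hypothetical, chosen only to show the arithmetic:

```python
from collections import Counter

def cell_weights(sample_groups, population_shares):
    """Post-stratification: each demographic cell gets the weight
    (population share) / (sample share), so respondents from
    under-represented cells count for more and vice versa."""
    n = len(sample_groups)
    counts = Counter(sample_groups)
    return {g: population_shares[g] / (counts[g] / n) for g in counts}

# Hypothetical sample in which young adults are over-represented two-to-one.
sample = ["18-34"] * 40 + ["35+"] * 60        # sample shares: 40% / 60%
population = {"18-34": 0.20, "35+": 0.80}     # population shares: 20% / 80%
print(cell_weights(sample, population))       # 18-34 -> 0.5, 35+ -> ~1.33
```

Disclosing which cells were weighted, and how heavily, is exactly the kind of detail these rules of thumb look for.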

An important caveat:  These rules of thumb are only useful in distinguishing real surveys from spin.  Judging the quality of a survey is more difficult.  For example, a pollster may disclose every last detail of their methodology, but if they do not begin with a probability sample (in which every member of the population has an equal or known chance of being included) the results are questionable.  (Judging survey quality is a very big subject, but readers may want to consult the suggestions of various pollsters as gathered by The Hotline and posted by MP back in April). 
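The "margin of error" that a probability sample licenses follows from a simple formula. A sketch, using the conventional 95% confidence multiplier:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random
    sample: z * sqrt(p * (1 - p) / n). p = 0.5 is the widest case."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical n=1,000 national poll: about +/- 3.1 points.
print(round(100 * margin_of_error(1000), 1))
# A subgroup of n=150: about +/- 8 points, which is why subgroup
# results deserve extra caution.
print(round(100 * margin_of_error(150), 1))
```

Without a probability sample the formula has no theoretical footing, which is the crux of the debate over opt-in panels.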

Let's consider the two examples that readers brought to MP's attention in the last few days.

The first, a poll of Ohio showing results for various match-ups in the 2006 governor's race, came to MP's attention via reader PW.  Posted on the blog Polipundit, the poll purportedly showed Democrat Ted Strickland running ahead of all four Republicans tested (Betty Montgomery, Ken Blackwell, Jim Petro and John Kasich), while Democrat Michael Coleman ran ahead of Montgomery and Blackwell but essentially even with Petro and Kasich.  On Thursday, DailyKos posted the same results in virtually identical format, attributed only to "a trusted source" though cautioning readers to "take with appropriate grain of salt."

While the numbers are interesting, this particular "release" fails every one of MP's rules of thumb.  The two blog items tell us nothing about who sponsored or conducted the poll and virtually nothing about how the survey was conducted.  Survey dates?  Question text?  Sample frame (adults, registered voters, likely voters)?  Who knows?  The sample size specified is also a bit odd -- 501 Republicans, 501 independents and some unknown number of Democrats.  We know nothing about how these separate "samples" were combined.  Moreover, I can find no reference to this survey in any mainstream media source, including The Hotline. 

Thus, readers should be very, very skeptical about this "poll." I cannot say that the results look "obviously phony," and it seems odd that a conservative blogger like Polipundit would blindly pass along such negative results about the Ohio GOP.  However, we have virtually nothing to reassure us that the poll is real. Without some attribution, MP would not place much faith in it. 

DailyKos posted results from another Ohio poll that at first glance appears more legitimate.  This one, "leaked" by a "DC source," showed a surprisingly close race in a theoretical U.S. Senate match-up between Republican incumbent Mike DeWine and Democratic Congressman Sherrod Brown.  The Kos item tells us that the poll was conducted for the Democratic Senatorial Campaign Committee (DSCC) by well-known Democratic pollster Diane Feldman.  It specifies the total number of interviews (1,209) and verbatim text and results from three questions.  Oddly, it specifies a single date (6/27), which may be a release date or the final night of interviewing rather than the complete field period (MP knows Feldman well enough to doubt she would attempt to complete 1,200 interviews in a single evening).  On the whole this report meets many (though not all) of the NCPP disclosure standards.  So far, so good.

But remember rule of thumb #1.  Is Feldman quoted on the record anywhere?  Do we have a release on Feldman or DSCC letterhead?  No.  Also, consider that if the DSCC had put out an official release, it would have appeared in The Hotline this week.  They have not yet published any such poll.  For whatever reason, neither the sponsor nor the pollster has chosen to confirm these numbers for the record.  Until that happens, readers should treat these results with caution.  We may not know the full story.

**UPDATE (10/17/2007): The code of professional ethics of the American Association for Public Opinion Research (AAPOR) offers similar disclosure standards that now appear on their web site along with a helpful set of frequently asked questions about those standards.

Posted by Mark Blumenthal on July 8, 2005 at 05:07 PM in Interpreting Polls, Polls in the News | Permalink | Comments (4)

July 07, 2005

One America

Not quite two weeks ago, I discussed what public opinion polls had to say about how Americans reacted to the 9/11 terrorist attacks.   In a widely reported speech, Bush advisor Karl Rove had said:

Conservatives saw the savagery of 9/11 in the attacks and prepared for war. Liberals saw the savagery of the 9/11 attacks and wanted to prepare indictments and offer therapy and understanding for our attackers.

My original post looked at polls done at the time that tabulated their results by party identification.  I asked several public pollsters if they might provide cross-tabulations by self-reported ideology, and the pollsters at CBS News quickly obliged.  Gallup released similar results a few days later. 

A sad irony:  Last night, I received an email from Susan Pinkus, the director of the LA Times Poll.  She had been on vacation when my first post ran and was just catching up, and she emailed their results tabulated by ideology.  In a poll conducted just days after the attacks (9/13 to 9/14), they showed roughly the same numbers of liberals (69%) and conservatives (72%) agreeing that "the United States is now in a state of war" (the sampling error for those subgroups was at least 4%).  In a question that forced a choice eerily similar to the rhetorical contrast offered by Rove, 68% of liberals wanted to "retaliate against bin Laden's group through military action," while 29% preferred that the "United States pursue justice by bringing him to trial in the United States."  Conservatives preferred war over a trial by a 72% to 22% margin.

In her email, Pinkus remarked that in the aftermath of 9/11, "We were one America."  Yes, at the time, some liberals fit Rove's stereotype, but the overwhelming majority did not.  Following 9/11, liberals and conservatives, Democrats and Republicans were in far more agreement about the use of military force in response to terrorism and threats abroad than they are today. 

Last night, 9/11 seemed remote.  Not so this morning.  As we were one America four years ago, MP hopes we feel a similar solidarity with the citizens of Great Britain today. I trust few will object if I go "off topic" for a moment, but it seems appropriate to quote the words of Indian blogger Amit Varma (hat tip: Instapundit): 

This isn't just an attack on the UK, but, like the attacks of 9/11, they're an attack on a way of life and a value-system, one that is dear not just to Western countries, but to millions in the developing world, like me. Concepts like personal freedom, equality of women and, in fact, human rights are alien to those behind the attack, and they must be defeated.

[typos corrected]

Posted by Mark Blumenthal on July 7, 2005 at 02:44 PM in Polls in the News | Permalink | Comments (8)

July 06, 2005

Iraq the Vote: Epilogue

A few quick follow-up thoughts on the work of Duke academics Peter Feaver, Christopher Gelpi and Jason Reifler, that I commented on last week. 

The findings in the two papers by Feaver, Gelpi and Reifler are relatively straightforward.  They conclude that the public's prospective views of the potential for success in Iraq and retrospective judgments about the wisdom of going to war are both important in driving electoral support for George Bush and a willingness to endure casualties in Iraq.  However, their main finding is that prospective judgments about the chances for success are more important in determining the public's willingness to tolerate casualties, while retrospective judgments about the wisdom of going to war were more important in driving electoral support for George Bush in 2004. 

The papers are persuasive but -- as always with this sort of academic study -- will be the subject of continuing debate and discussion.  Those interested in the nitty-gritty details should review the two papers, as well as the reactions from various academics with expertise in this area in the comments section of MP's two posts last week. 

Regular MP readers may be more interested in a methodological footnote.  The data reported by Feaver, Gelpi and Reifler were collected on the Internet by the company Knowledge Networks (see the footnote on page 13 of the "Iraq the Vote" paper).  They maintain a nationally representative panel of respondents who agree in advance to participate in surveys fielded on the Internet.  As MP summarized back in October:

What makes Knowledge Networks unique is that they recruit members to their panel with traditional random digit dial (RDD) sampling methods, and when a household without Internet access agrees to participate, they provide those households with free access to the Internet via Web TV. So in theory, at least, this approach allows a random sample of all US households.

Presumably, cost was the primary rationale for using Internet based research in this case.  Fielding six separate RDD telephone surveys of the adult population would have been considerably more expensive than using the Knowledge Networks (KN) panel, largely because the Internet surveys do not require paid interviewers.  While the use of a pre-recruited panel rather than a stand-alone survey involves some trade-offs, Feaver, Gelpi and Reifler obviously considered those compromises worthwhile.   
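The RDD sampling described in the excerpt above is easy to sketch: fix the geographically meaningful digits and randomize the rest, so listed and unlisted numbers are equally likely to be drawn. The area code and exchange below are placeholders, not any vendor's actual frame:

```python
import random

def rdd_sample(area_code, exchange, n, seed=None):
    """Draw n telephone numbers within a known area code and exchange
    by randomizing the last four digits -- the core of random digit
    dialing (RDD). Unlisted numbers are reached as easily as listed ones."""
    rng = random.Random(seed)
    return ["(%s) %s-%04d" % (area_code, exchange, rng.randrange(10000))
            for _ in range(n)]

print(rdd_sample("202", "555", 3))
```

Real RDD designs stratify across many exchanges and screen out business and non-working blocks, but the geographic "knowability" of phone numbers comes from exactly this structure.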

The use of the Knowledge Networks (KN) panel in this instance is notable for two reasons.  First, the authors were interested in a sample of all U.S. adults, not just a small and hard-to-identify subgroup.  The best-known applications of the KN panel for political polling have involved surveys of debate watchers, such as those conducted by CBS News and the Democratic pollster Stan Greenberg last fall.  Second, as the article in last week's Washington Post makes clear, the results from the surveys have been taken seriously at the very highest levels of the U.S. government.  If nothing else, this confluence of events strikes MP as something of a milestone for Internet based opinion research. 

And before we turn to other subjects (including at some point, I promise, a more thorough discussion of Internet based surveys), I'd like to correct one misimpression I may have left about the work of the Duke academics.  While Peter Feaver has apparently taken leave of his academic position to accept a position with the Bush administration, the polling cited above was not conducted on behalf of President Bush or any other political partisan.  It was academic research funded by the National Science Foundation and the Carnegie Corporation and released into the public domain.  Both Christopher Gelpi and Jason Reifler remain in academia. 

Posted by Mark Blumenthal on July 6, 2005 at 05:53 PM in Polls in the News | Permalink | Comments (0)

July 01, 2005

Iraq the Vote 2: CBS Takes up the Challenge

A quick update on yesterday's "Iraq the Vote" post.  Kathy Frankovic, polling director for CBS News, emails with two questions they asked on a survey last year at the suggestion of Peter Feaver, then a Political Science professor at Duke.  Here are the results and the full text of the questions (n=1,042 adults, conducted 4/23-27/2004):

56. Looking back, do you think the United States did the right thing in taking military action against Iraq, or should the U.S. have stayed out?

47% - Right thing
46% - Stay out

57.  Regardless of whether you think taking military action in Iraq was the right thing to do, would you say that the U.S. is very likely to succeed in establishing a democratic government in Iraq, somewhat likely to succeed, not very likely to succeed, or not at all likely to succeed in establishing a democratic government there?

10% - Very likely
40% - Somewhat likely
31% - Not very likely
15% - Not at all likely

Frankovic points out this survey occurred during a low ebb in support for the Iraq war.  At the time, only 38% thought things were going well in Iraq while Bush's overall job rating was net negative:  46% approved and 47% disapproved of his performance as President.

Posted by Mark Blumenthal on July 1, 2005 at 08:48 AM in Polls in the News | Permalink | Comments (17)