
September 30, 2004

Debates II: Instant Polls

As I mentioned in the last post, history shows that the media's coverage of presidential debates typically has more impact on voter preference than the debate itself. So, as is often the case, we will not know for sure what impact tonight's debate has had until we get results from new surveys over the next week. However, if you are reading these words and you're a political junkie like me, you just can't wait until next week. Are the instant polls done by the networks tonight worthy of our attention? My take follows on the jump page.

Instant polls done to assess the impact of the debate face two big challenges.

The first was anticipated by this set of questions posed by astute commenter Simka, who asked:
· What about people who work at night when pollsters call?
· What about people who go to school at night?
· What about people who work two jobs?
· What about people who work late?
· What about people who work swing shifts?
· What about people who are traveling? How many Americans are on the road at any given time for work or family?

These questions identify one of the biggest sources of non-response (the other being those who simply hang up): at any given time, not everyone is home. Pollsters have to be persistent, calling over multiple nights at different times to reach a reasonable number of those who are not always home. Although the specific procedures vary, most pollsters call at least four times over at least 2-3 evenings. This doesn't get everyone, but it gets close. Without rigorous "call-backs," the sample will be biased against people who are often away from home. That is why you rarely see one-night polls done nationally.
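To make the intuition about call-backs concrete, here is a minimal simulation. The contact rates are invented for illustration and do not reflect any pollster's actual experience; the point is simply that a one-night poll under-represents people who are rarely home, while repeated attempts close much of the gap.

```python
import random

# Illustrative assumptions only: "homebodies" are reachable on 80% of call
# attempts, "busy" respondents on just 20%. Each group is half the population.
random.seed(0)
population = ([{"type": "homebody", "p_home": 0.8} for _ in range(500)] +
              [{"type": "busy", "p_home": 0.2} for _ in range(500)])

def share_busy_reached(max_attempts):
    """Fraction of completed interviews that come from 'busy' respondents."""
    reached = []
    for person in population:
        if any(random.random() < person["p_home"] for _ in range(max_attempts)):
            reached.append(person["type"])
    return reached.count("busy") / len(reached)

for attempts in (1, 2, 4):
    print(f"{attempts} call attempt(s): busy share = {share_busy_reached(attempts):.0%}")
# One attempt badly under-represents the busy half of the population (true
# share: 50%); four attempts over several evenings gets considerably closer.
```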

That problem is greater for a post-debate poll since, duh, the people at home tonight will be more likely to have watched the debate. As a result, most of the surveys done tonight will concentrate on those who actually watched it.

The second problem is one of interpretation. More often than not, the debate itself serves to reinforce voters' preexisting preferences. In other words, all things being equal, Bush supporters will come away more impressed with Bush, and Kerry supporters more impressed with Kerry. Thus, if Bush goes into the debate with a five-point margin among those in the sample, and the debate does nothing to change initial preferences, expect the question that asks who "won" the debate to show a similar five-point margin.

The most sophisticated way to try to measure the impact of the debate itself would be to contact a random sample just before the debate, ask about their vote preference and attitudes toward the candidates, then call back just after the debate, repeat the same questions and ask the respondents to judge each candidate’s performance. Such a design allows the pollster to compare the reactions of pre-debate Kerry supporters to pre-debate Bush supporters and measure whether either candidate gained or lost support among individual respondents.

That is exactly the design used by the ABC News Polling Unit for all four of the presidential and vice-presidential debates in 2000. Here are a few examples of their results: The first debate looked like a dead heat (Gore fell in the polls in the week that followed due to coverage of the debate, but reaction to the debate itself was more favorable). Although slightly more Gore supporters (79%) than Bush supporters (70%) thought their man won, Bush's support increased by one percentage point during the debate. After the second debate, 76% of Bush supporters judged their man the winner vs. 63% of Gore supporters who thought their man won, and Bush's margin grew from 10 to 13 points during the debate. (Press releases for all four of these surveys are still available online here, here, here and here.)
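Here is a rough sketch of how such a pre/post comparison gets tabulated. The handful of records and the field names are hypothetical, not ABC's actual data; the logic is simply a cross-tab of post-debate "who won" judgments by pre-debate vote preference.

```python
from collections import Counter

# Hypothetical panel records: (pre-debate vote preference, post-debate judgment
# of who won). A real survey would carry many more respondents and questions.
panel = [
    ("Bush", "Bush"), ("Bush", "Kerry"), ("Kerry", "Kerry"),
    ("Kerry", "Kerry"), ("Bush", "Bush"), ("Kerry", "Bush"),
]

def reactions_by_predebate_group(records):
    """Tabulate who each pre-debate group judged the winner of the debate."""
    table = {}
    for pre_pref, judged_winner in records:
        table.setdefault(pre_pref, Counter())[judged_winner] += 1
    return table

for group, judgments in reactions_by_predebate_group(panel).items():
    total = sum(judgments.values())
    detail = ", ".join(f"{cand}: {n / total:.0%}" for cand, n in judgments.items())
    print(f"Pre-debate {group} supporters judged the winner to be: {detail}")
```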

Of course, one drawback of ABC's approach is that, even with respondents waiting by the phone for the second interview, it takes at least an hour or so to complete the calls and tabulate the results. Some respondents may also watch the commentary following the debate before doing the second interview.

The CBS polling unit tried a different approach four years ago. My sources tell me they will use the same approach tonight. In 2000, CBS conducted a survey online with a company called Knowledge Networks, which maintains a nationally representative "panel" of households that agree to do surveys on the Internet. What makes Knowledge Networks unique is that it recruits members to its panel with traditional random digit dial (RDD) sampling methods, and when a household without Internet access agrees to participate, it provides that household with free access to the Internet via Web TV. So in theory, at least, this approach yields a random sample of all US households.

The advantage of the KN online poll is that every selected respondent receives a pre-debate invitation, so they can log on and fill out the survey immediately after the debate ends. In 2000, for example, 617 surveys had been completed within 15 minutes of the conclusion of the debate. (See the releases from 2000 by CBS, here and here, and by Knowledge Networks. Also, the raw data from these surveys are available to scholars here.)

One disadvantage, at least in the way the CBS survey was done four years ago, was that they either did not do a pre-debate interview or did not report the post-debate results that way. Perhaps the design this year will be different.

The bottom line: If you are willing to be patient, the best methodology is the one that ABC News used four years ago, which interviews voters before and after the debate and compares results among pre-debate Kerry supporters and pre-debate Bush supporters.


Posted by Mark Blumenthal on September 30, 2004 at 02:48 PM in Debates | Permalink | Comments (6)

Debates: Focus Groups as Reality TV

Ok, so apparently MSNBC has pulled the plug on its plan to have Republican pollster Frank Luntz conduct a live focus group of undecided voters before and after tonight's debate. According to Roll Call (subscription required), the network has abandoned the focus group plan altogether. For the reasons I'll provide on the jump, that's probably a good thing. Nonetheless, since another network may well be planning something similar, it's probably worth providing a little background on focus groups, their limitations, and why I don't have much faith in focus groups as reality TV. Click for more:

Few tools of survey research are as widely used, as much derided and as frequently misunderstood as the focus group. A focus group is a free-flowing, in-person discussion among 8-12 participants led by a specially trained moderator. Researchers usually conduct focus groups in special rooms equipped with microphones and a one-way mirror that allow others to watch and record the discussion without being seen (though all of this is always fully disclosed to the participants beforehand).

The advantage of a focus group is that it allows the researcher to go beyond the limits of standardized survey questionnaires. Participants can speak freely, and the moderator can improvise, probing unexpected issues as they arise. Because of the in-person format, focus groups also allow for show-and-tell. Market researchers use focus groups to test every imaginable consumer product, while political pollsters use them to gauge reactions to television, radio and direct mail advertising.

Focus groups do have important limitations that are not well understood. Although focus group recruiters try to make the participants as representative as possible, a focus group is not a projectable random sample like a poll. Participants usually live near the facility. Because response rates would be minuscule given the time commitment, participants usually receive monetary incentives (typically $50-$75) to encourage participation. Recruiters also seek to fill quotas for particular demographic characteristics (a mix of ages, for example). Thus, we simply cannot count answers in a focus group to estimate the reactions of a larger population. In other words, if 20 of 30 "undecided voters" react a certain way to the debate tonight, we cannot conclude that two-thirds of all undecided voters nationally feel the same way.

A second limitation is what researchers call the "group dynamic." In a focus group, participants are often influenced or cowed by the opinions of others in the group. If one dominant personality loudly stakes out a position, others tend to hide or modify their contradictory views. To control this dynamic, we typically ask participants to write down their impressions at intervals throughout the session, anchoring them in their initial reactions. Moreover, the role of the moderator is critical in countering the loudest and most vocal participants while encouraging the more timid to share their true feelings.

Finally, the artificial nature of the focus group often makes it a poor way to judge how information from advertising (or the fallout of a debate) will be processed in the real world. For example, focus group participants often express genuine antipathy toward negative advertisements and reject the information contained in them as false and unfair. Yet in the real world, as the recent campaign has demonstrated powerfully, such advertising can still communicate negative information with ruthless effectiveness. People no doubt also watch advertising much more closely and critically in a focus group than in their living rooms. And as has been noted about debates in recent days (here, here and here), the media coverage in the days after the debate typically has more impact on the race than initial reactions to the debate itself.

All of this brings us back to the growing practice of conducting focus groups as live reality TV. Unlike the traditional focus group, the networks put people on a soundstage in a brightly lit studio, where the participants surely know they are on live television. I am not aware of any formal research on TV focus groups, but if peer pressure from ten strangers leads a participant to hide or alter an opinion, what is the effect of speaking to several million viewers? I would also imagine that those willing to participate in a live broadcast differ from those who might shy away. Having watched these in the past, I have noticed that many of the participants come ready to perform rather than simply share their opinions. They often seem to be imitating the schtick of the cable news pundits, staking out a position and arguing it, rather than reacting as they might while watching the debate at home.

The appeal of these groups to the television producers is obvious. The groups allow for immediate reactions, unlike instant polls that take hours to conduct and analyze. And I suppose it is better to get immediate reactions from ordinary people, no matter how artificial the setting, than to listen to journalist/pundits prattle on and on.

Later today, some additional thoughts on other ways you might see the networks try to poll debate reaction tonight…

Posted by Mark Blumenthal on September 30, 2004 at 11:20 AM in Debates | Permalink | Comments (1)

September 29, 2004

MoveOn vs. Gallup

I have to admit I didn't see the MoveOn.org newspaper ad attacking Gallup until very late last night. When I did, I was somewhat taken aback by its ferocity, but given the subject matter of this blog, a response is obviously in order.

Please remember as you read what follows that I, too, have questions about Gallup's likely voter model. I doubt that their selection procedure – which includes probes of knowledge of polling place location, interest in the campaign and reports of past voting – is as appropriate in July and August as it may be a week before the election. Gallup's methods are also worthy of scrutiny given the disproportionate attention their surveys receive due to their prominence on CNN and in USA Today.

It is also obvious that Gallup's likely voter results in August and early September had Bush farther ahead than other polls of likely voters conducted in the same period, although MoveOn picked the week when that difference was greatest. Gallup's survey of likely voters conducted September 9-14 did show a 14-point Bush lead on the three-way vote (54%-40%), while the seven survey organizations that reported over a roughly comparable period (IBD/TIPP, CBS/New York Times, Pew, Harris, Democracy Corps, New Democratic Network/Penn Schoen Berland and ICR) averaged a 48%-43% Bush lead among likely voters on the three-way vote. Note that two of these (Democracy Corps and NDN) were conducted by Democratic polling firms.

So yes, it is appropriate to question Gallup's likely voter model, and likely voter models generally, but the tone and substance of the MoveOn advertisement go too far. If Ruy Teixeira dances on the line between spin and demagoguery in his daily calls for weighting by party, this attack by MoveOn leaps across it.

Whatever doubts I have about Gallup's model, I don't believe for a minute that they are intentionally "Gallup-ing to the Right," as MoveOn loudly charges. They say Gallup "refuses to fix a longstanding problem with their likely voter methodology" and imply that weighting by party is the fix, never mind that most of the "other publicly available national likely voter polls" they tout to counter Gallup do no such thing. And then they slime everyone involved by implying that the company slants its surveys to suit the whims of George Gallup Jr., an evangelical Christian no longer involved in Gallup's political polling operation. According to the MoveOn ad, Gallup Jr. called his polling work a "kind of a ministry" whose "most profound purpose…is to see how people are responding to God."

Call me a partisan, but I always thought that this sort of guilt-by-association smear of someone based on the exercise of a constitutional right – no matter how disagreeable – was something that liberals fought against.

Also, consider this observation by Richard C. Rockwell, a professor of sociology at the University of Connecticut (and former director of the Roper Center), who posted the following on a listserv of survey researchers (quoted with permission):

The Gallup Organization has historically been among the most forthcoming of all polling organizations about their methods and about any problems that might arise from those particular methods. This goes back to the 1940s, when Gallup (i.e., George [Sr.]) was among the founders of AAPOR. Moreover, the Gallup Organization makes its data available for public inspection through the archives of the Roper Center for Public Opinion Research at the University of Connecticut - the raw data, not just the tabular reports. Anyone can check out these data for any evidence of error or bias. You can even re-weight the data as you wish. The Gallup archives go back to the 1930s. Given the public availability of their data on a site not owned or controlled by the Gallup Organization, it would be extraordinarily difficult for Gallup to mess with the data for political or any other reasons.
I may not always agree with the decisions of the methodologists at Gallup, but I have no doubt they are professionals who exercise their best objective judgment in an atmosphere of intellectual freedom. We should respect that.

Finally, let me take off my survey research hat for a moment and put on my Democratic Party hat. I have admired MoveOn's efforts, but I have to ask: is it now so flush with cash that it can afford to buy a full-page ad in the New York Times a few weeks before "the most important election of our lifetimes" attacking a polling company? Do swing or less-than-likely voters really care? Wouldn't it be better to spend that money, say, making a case against George Bush or just turning out the vote?

Advice from one Democrat to another: let’s keep our eyes on the prize.

Posted by Mark Blumenthal on September 29, 2004 at 01:47 PM in Likely Voters, Weighting by Party | Permalink | Comments (30)

Why & How Pollsters Weight, Part I

I want to return to the issue of weighting, beginning with a question from commenter Ted H:

What puzzles me about the idea of weighting by party ID is that surveys are supposedly designed to sample from the population randomly. This assumption is also the basis of the margin of error they calculate. So if you sample randomly but then adjust the results because you didn't get the percentages of D and R that you expected, you have thrown out the design of the survey.

Unfortunately, as many of you guessed, perfect random samples are impossible under real-world conditions. Although we begin with randomly generated telephone numbers that constitute a true random sample of all US telephone households, some potential respondents are not home when we call. Others use answering machines or Caller ID to avoid incoming calls, and increasingly large numbers simply refuse to participate. All of these unreachable respondents create the potential for what methodologists call "non-response bias." (This is a major topic for further discussion – if you can't wait, this review of the vexing issue by ABC Polling Director Gary Langer is one of the best available anywhere.)

Telephone surveys also exclude the small percentage of households lacking home telephone service. Unfortunately, that percentage is growing rapidly as more households (an estimated 2-5%) drop wired phone service altogether in favor of cell phones (another important topic that I am purposely glossing over for now – we'll definitely come back to it).

These missing respondents can cause error – also known as bias – only if they differ from those who are interviewed. Pollsters weight their data as a strategy to reduce the resulting bias.
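A toy example with made-up numbers shows why: suppose two groups hold different opinions and respond at different rates. The raw, unweighted sample then misstates the population figure, and that gap is exactly what weighting tries to close.

```python
# Made-up illustration of non-response bias. Group A: 60% hold opinion X and
# respond at 70%. Group B: 40% hold opinion X and respond at only 30%.
# Each group is half of the population.
groups = [
    {"share": 0.5, "pct_opinion_x": 0.60, "response_rate": 0.7},  # Group A
    {"share": 0.5, "pct_opinion_x": 0.40, "response_rate": 0.3},  # Group B
]

true_pct = sum(g["share"] * g["pct_opinion_x"] for g in groups)

responding_share = sum(g["share"] * g["response_rate"] for g in groups)
observed_pct = sum(
    g["share"] * g["response_rate"] * g["pct_opinion_x"] for g in groups
) / responding_share

print(f"True population figure:   {true_pct:.0%}")      # 50%
print(f"Unweighted sample figure: {observed_pct:.0%}")  # 54% -- tilted toward Group A
```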

Pollster strategies for weighting seem to fall into three general categories. I’ll take up the first tonight and discuss the others in subsequent posts over the next few days.

The first is the classic strategy used by most of the major national media surveys, including CBS/New York Times, ABC/Washington Post,* Gallup/CNN/USA Today, Newsweek, Time, the Pew Research Center and the Annenberg National Election Survey, among others. They begin by interviewing randomly selected adults in a random sample of telephone households. Even if they ultimately report results for only registered voters, they ask demographic questions of all adults. They then typically weight the results to match the estimates provided by the U.S. Census for gender, age, race, education and usually some geographic classification. This weighting eliminates demographic bias on those characteristics, including chance variation due to sampling error.

The key point is that they weight only by attributes that are fixed at any given moment, easily described by respondents and matched to bulletproof Census estimates using language that typically replicates the Census questions.
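As a rough sketch of what that weighting step looks like – one variable and invented numbers; the surveys above typically adjust on several Census characteristics at once – each respondent's demographic group receives a weight equal to its Census share divided by its share of the completed sample.

```python
# One-variable post-stratification sketch with invented numbers. Real media
# polls weight on several characteristics (gender, age, race, education,
# geography), often with iterative raking rather than a single adjustment.
census_targets = {"men": 0.48, "women": 0.52}     # population shares
sample = ["women"] * 560 + ["men"] * 440          # hypothetical respondents

sample_shares = {g: sample.count(g) / len(sample) for g in census_targets}
weights = {g: census_targets[g] / sample_shares[g] for g in census_targets}
print(weights)  # men get a weight above 1, women a weight below 1

# Any estimate then becomes a weighted average over respondents. With
# hypothetical candidate support by group:
support = {"men": 0.55, "women": 0.45}
unweighted = sum(sample_shares[g] * support[g] for g in census_targets)
weighted = sum(sample_shares[g] * weights[g] * support[g] for g in census_targets)
print(f"unweighted estimate: {unweighted:.1%}, weighted estimate: {weighted:.1%}")
```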

Finally, to be clear, none of these organizations weights by party! Contrary to what I have seen written elsewhere, neither CBS/New York Times, ABC/Washington Post,* Gallup/CNN/USA Today, Time, Newsweek, the Pew Research Center nor the Annenberg National Election Survey weights its results by party ID. [UPDATE: ABC News does weight its October tracking survey of likely voters by party (but not registered voters, and not surveys prior to October).]

What about CBS? I've read several posts – including one in the comments section of this blog – that repeat the myth that CBS News weights by party. The proof? The table below (which I reproduced from the last page of this release), which shows weighted and unweighted sample sizes for Democrats, Republicans and Independents. CBS provides this data because their report includes tabulations by party for each question, and they are disclosing their sub-sample sizes exactly as required by the disclosure standards of the National Council on Public Polls. They even go one step further, providing unweighted counts to assist those who want to calculate sampling error.

                                UNWEIGHTED    WEIGHTED
Total Respondents                    1083
Total Republicans                     377         339
Total Democrats                       376         381
Total Independents                    330         364
Registered Voters                     931         898
Reg. Voters -- Republicans            343         323
Reg. Voters -- Democrats              324         332
Reg. Voters -- Independents           264         246

So why does it appear that Republicans have been weighted down? Among total respondents, for example, the share of Republicans is higher in the unweighted sample (35%, or 377/1083) than in the weighted sample (31%, or 339/1083). The reason is that respondents are less cooperative in urban and suburban areas and more cooperative in rural areas. Presumably, when CBS corrects the sample to match verifiable census data, the weighting indirectly – but appropriately – alters the party balance.
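For anyone who wants to check the arithmetic, those shares come straight from the counts in the table; the short sketch below just reproduces the division.

```python
# Party shares computed from the counts in the CBS table above.
counts = {
    "Republicans":  {"unweighted": 377, "weighted": 339},
    "Democrats":    {"unweighted": 376, "weighted": 381},
    "Independents": {"unweighted": 330, "weighted": 364},
}
total = 1083  # total respondents; the weighted counts sum to roughly the same base

for party, c in counts.items():
    print(f"{party:>12}: unweighted {c['unweighted'] / total:.0%}, "
          f"weighted {c['weighted'] / total:.0%}")
# Republicans: 35% unweighted vs. 31% weighted -- the difference discussed above.
```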

If I’m getting this wrong and anyone at CBS is reading this, please do not hesitate to email me with a correction.

Tomorrow, those who weight by exit polls.

9/29: Typo corrected above

[Continue with Why & How Pollsters Weight, Part II]

Posted by Mark Blumenthal on September 29, 2004 at 12:06 AM in Weighting by Party | Permalink | Comments (8)

September 28, 2004

Good Analysis of Post/ABC Poll

Unfortunately, the crush of time has prevented me from doing much analysis of the recent surveys beyond the horse race numbers, and yet I've also argued that we all need to pay less attention to horse race numbers. Not very helpful, I know.

So rather than try to dig into the numbers myself, I want to recommend this morning's poll analysis story by the Washington Post's Dan Balz and Vanessa Williams. It nicely describes the ongoing decision-making process among swing voters conflicted between "deep concerns about Iraq and the pace of the economic recovery" and the perception that George Bush is "a stronger leader with a clearer vision." Two key paragraphs:

Bush's relentless attacks on Kerry have badly damaged the Democratic nominee, the survey and interviews showed. Voters routinely describe Kerry as wishy-washy, as a flip-flopper and as a candidate they are not sure they can trust, almost as if they are reading from Bush campaign ad scripts. But Kerry's problems are also partly of his own making. Despite repeated efforts to flesh out his proposals on Iraq, terrorism and other issues, he has yet to break through to undecided voters as someone who has clear plans for fixing the country's biggest problems…

Among those voters who dislike Bush's policies and are still making up their minds, the three presidential debates may offer Kerry his last opportunity to show them that he has what they are looking for in a president.


My only quarrel with the Balz/Williams analysis is their use of "solid" to describe Bush's current lead. ABC Polling Director Gary Langer, in a separate analysis of the same numbers, concludes:

That result [Bush’s current lead] is not predictive — the race has been tied and it can be again. But these results present three prime worries for the Kerry camp. One is that, unlike Kerry, Bush has maintained his immediate post-convention gains (the candidate evaluations in this ABC/Post survey are little changed from those in the last). A second is Kerry's weak personal position, which sends him into the debates with a certain lack of good will. And the third is a very broad sense that Kerry hasn't enunciated a clear message; registered voters by 2-1 say Bush has taken clearer stands on the issues.
Taken together, that analysis seems on target. Bush has a small and important advantage, but the debates provide Kerry with a potential opening.


Posted by Mark Blumenthal on September 28, 2004 at 12:32 PM in Interpreting Polls | Permalink | Comments (13)

How do pollsters select "likely voters?”

I am currently at work on a detailed series of postings on how pollsters select likely voters.

Clicking on this link will display short excerpts of all of the posts concerning likely voters in reverse chronological order (most recent first).

Posted by Mark Blumenthal on September 28, 2004 at 09:15 AM | Permalink | Comments (0)

Why are polls showing contradictory results?

In a sense, every post on this site seeks to answer the question of why polls seem contradictory. A good place to start is with the following three posts on the random variation inherent in all sample surveys and what a consumer can do to minimize it (each post has a link at the end connecting to the next post in the series):
Divergent Polls
More Divergent Than They Should Be?
So What Should a Junkie Do?

Beyond that, much of the variation in political polls comes from the way the pollster defines a "likely voter." These two questions on the FAQ list deal with two issues that explain much of the systematic variation between polls.

Finally, at the most basic level, political campaign polls typically seek to accomplish three tasks. Click on any of these links to display short excerpts of all of the posts in the category in reverse chronological order (most recent first):

Posted by Mark Blumenthal on September 28, 2004 at 09:01 AM in Divergent Polls, FAQ | Permalink | Comments (0)

What does the "margin of error" mean?

A good place to start is with the following three posts on the random variation inherent in all sample surveys and what a consumer can do to minimize it (each post has a link at the end connecting to the next post in the series):
Divergent Polls
More Divergent Than They Should Be?
So What Should a Junkie Do?

Also, click on this link to display short excerpts of all of the posts concerning sampling error in reverse chronological order (most recent first).

Posted by Mark Blumenthal on September 28, 2004 at 08:59 AM in FAQ, Sampling Error | Permalink | Comments (0)

How do pollsters select "likely voters?"

I have done a lot of posting on this topic. If you want a very thorough reading, start with Part I and continue through the series. You might also want to skip to Part VIII, which is something of a guide to the likely voter models used by all the major polling organizations:

Also, you can click on this link to display short excerpts of all of the posts concerning likely voters in reverse chronological order (most recent first).

Posted by Mark Blumenthal on September 28, 2004 at 08:58 AM in FAQ, Likely Voters | Permalink | Comments (0)

Should pollsters weight by party identification?

The debate over whether pollsters should statistically adjust (or weight) their samples by party identification has been heated during campaign 2004. Here is a sequence of posts that covered this topic exhaustively:

It is worth noting that the debate about weighting by party is really a part of a larger debate also raging in 2004 about how pollsters define likely voters – click this link for more information.

Finally, click on this link to display short excerpts of all of the posts concerning weighting by party in reverse chronological order (most recent first).

Posted by Mark Blumenthal on September 28, 2004 at 08:57 AM in FAQ, Weighting by Party | Permalink | Comments (0)