March 31, 2005
Disclosing Party ID: American Research Group
Here is another response to my query asking pollsters who do not typically report party identification in their online releases to explain that policy. Today we hear from Dick Bennett, president of the American Research Group (ARG):
We have provided party registration or party ID for almost all of our political surveys posted online and because of your post, we will include it for all our political surveys posted online.
Our interviewing system has a look-up function by area code which places a party registration question ("Are you currently registered to vote as ...") in the screen (at the beginning of the survey) for states with party registration and a party ID question ("Do you consider yourself to be ...") in the screen for the other states. Party registration and party ID from this process get combined in national surveys.
We are currently asking a political party point-of-view question near the end of surveys and the responses to that question show much more survey-to-survey movement than the registration question (which is very stable) and the ID question (less stable). I can't tell you if it is the question or question order.
Thank you, ARG.
It is important to note that what ARG does, as explained above, is very different from the way other pollsters ask about party ID. ARG asks about party registration in some states, party identification in others, and then combines the two results into a single variable. Whatever the merits of this approach, the results will not be comparable to those of other polling organizations.
Party registration is not the same as party identification - respondents will sometimes provide a different answer when asked how they are registered as compared to which party they feel closer to. In states that require it, some voters may choose a party affiliation in order to cast a ballot in a contested primary when they "consider themselves" independent or even closer to the other party. In southern states, this phenomenon has a name - the Dixiecrat - which describes those who register as Democrats in order to vote in local primaries in areas where Democrats almost always win local general elections.
My own firm often asks about both party registration and party ID, often in the same survey (e.g., Are you registered? [If yes] Are you registered as a Republican, Democrat or independent? Now regardless of how you are registered, do you consider yourself...?). The results are often quite different. Those tempted to weight by party identification to match statistics for party registration provided by election officials risk introducing serious bias into their results.
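To make concrete why weighting self-reported party ID to official registration figures can mislead, here is a minimal sketch in Python. All of the shares and targets below are hypothetical numbers invented for illustration; this is not any firm's actual weighting procedure.

```python
# Illustrative sketch (hypothetical numbers): weighting self-reported
# party ID to match official *registration* statistics.

sample_id_shares = {"Democrat": 0.34, "Republican": 0.30, "independent": 0.36}
registration_targets = {"Democrat": 0.41, "Republican": 0.33, "independent": 0.26}

# A simple cell-weighting adjustment: each respondent's weight is the
# target share for their group divided by that group's sample share.
weights = {party: registration_targets[party] / sample_id_shares[party]
           for party in sample_id_shares}

for party, w in sorted(weights.items()):
    print(f"{party}: weight = {w:.2f}")

# Because registration and identification measure different things,
# forcing ID to registration targets distorts the sample: here every
# self-identified Democrat counts about 1.21 times, every independent
# only about 0.72 times, for no defensible reason.
```

The arithmetic is trivial; the point is that the adjustment is only as sound as the assumption that the two quantities measure the same thing, which, as argued above, they do not.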
Party Disclosure Archive:
March 30, 2005
Disclosing Party ID: Fox/Opinion Dynamics
As a follow-up to the queries made of representatives of Gallup, the Pew Research Center and Time/SRBI, MP made similar requests of other public polling organizations that do not typically release results for party identification, asking them to explain their policy. Here is the response from John Gorman, president of Opinion Dynamics, the company that conducts the Fox News Poll:
Frankly, we have always treated party as a crosstab variable and not as something interesting in itself. It varies a little bit from survey to survey, but the short-term shifts have never been that interesting. We have received requests for the answers from time to time and have supplied them. More often we receive requests for a particular question broken down by party and we normally supply that to whomever asks.
As you know, we don't weight by party and as I understand what he is saying from your email, I generally agree with Frank Newport. (I'll wait to see his AAPOR paper before endorsing his position 100%.)
I've discussed your inquiry with Dana Blanton at Fox News Channel in New York and we see no problem with adding the party breakdown to the standard poll releases.
[Emphasis and links added]
Thank you John Gorman.
MP has already heard from about a half dozen other pollsters and will post their comments in the order received over the next week or so. For some guidance on what to make of data on party identification, those new to this subject may wish to consult yesterday's post as well as MP's FAQ on weighting by party ID.
March 29, 2005
Realignment or a "Crappy" Poll?
Last Friday, in Slate's "Today's Papers" feature, Eric Umansky observed:
USA Today goes Page One with a poll showing President Bush's rating at a record-low 45 percent, seven points below what the paper had last week. USAT notes (with a straight face) that the "poll also found an increased number of Democrats," from 32 percent last week to 37 percent this week. TP is no pollster, but which is more likely: 1) an enormous political realignment over the past seven days, or 2) a crappy poll?
"Crappy" is not the word MP would ordinarily choose to describe that statistically inevitable one-poll-in-twenty that will produce, by random chance alone, results outside the margin of error. Gallup had the misfortune to field such a poll in early February (or so it seemed to MP). While time will tell if Gallup's latest effort falls into the same category, the most likely answer to Umansky's question is probably "neither," given other surveys released over the last week or so.
As always, more explanation is in order. First, let's acknowledge something significant. Not only did Gallup, as promised, include the party ID numbers as part of their official release (subscription required), but those results also made it into USA Today's very brief (~400 word) poll story. That's progress. Much credit is due to Gallup and USA Today, of course, but also to the bloggers who have long pushed for this sort of disclosure and coverage, especially Chris Bowers, Steve Soto and Ruy Teixeira.
Now that more survey organizations are including party identification data in their standard releases, we need to talk a bit about how best to interpret that data. MP worries that in pushing for more disclosure he has implicitly endorsed the notion of treating party ID as an overall measure of survey quality, as if it were analogous to a car tire pressure gauge. If your tire pressure is low, you add air. It is certainly helpful to understand the level of party ID for any given survey, but if Republican (or Democratic) identification seems low, I would not advise automatically adding more Republicans (by weighting) or concluding that the survey is "crap." As noted here many times, party ID is an attitude, highly resistant to change to be sure, but still capable of short term variation.
A first step in analyzing the latest Gallup data would be to apply Professor M's advice about questionnaire wording to the composition of the recent Gallup sample. On their most recent survey (conducted March 21-23), 37% of respondents described themselves as Democrats (up a statistically significant five points in a week) and 32% as Republicans (down three points). Are other surveys showing a similar trend?
- The Pew Research Center survey conducted March 17-21 shows a smaller (but non-significant) shift in the same direction. They currently have party ID among US adults at 34% Democrat, 30% Republican. The four point edge is slightly higher than in February (32%D-31%R) or than their average throughout 2004 (33%D-30%R).
- The Time/SRBI poll conducted March 22-24 showed an eight point Democratic advantage (33%D-26%R), up from a four point advantage measured a week earlier (33%D-29%R March 15-17). The difference, of course, was a statistically insignificant three point drop in Republicans.
- The CBS News survey conducted March 28, 2005 showed a five point Democratic edge (32%D-27%R). Although CBS had more independents than usual, the five point Democratic advantage was about the same as on the average of the three surveys conducted with the New York Times in 2005 (35%D-30%R) and the average of all surveys in 2004 (34% D-30% R - click the Complete Results link on the right column of the poll story for full CBS/NYTimes results).
So the answer is mixed. The Pew and Time surveys show movement in the same direction but not nearly the same magnitude as Gallup; CBS shows virtually no change.
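For readers who want to check a shift like Gallup's five-point Democratic gain themselves, a standard two-sample test for independent proportions will do. The sketch below assumes roughly 1,000 adults per survey, a typical national sample size but an assumption here; the exact n is in Gallup's release.

```python
from math import sqrt

def two_sample_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# 37% Democratic identification this week vs. 32% a week earlier,
# assuming about 1,000 respondents per survey (an assumption).
z = two_sample_z(0.37, 1000, 0.32, 1000)
print(f"z = {z:.2f}")  # about 2.4, past the 1.96 cutoff for p < .05
```

At those sample sizes a five-point shift clears the conventional significance threshold, which is consistent with calling the Gallup change statistically significant while the smaller shifts elsewhere are not.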
But what about the trend for the President's job rating? The big news in the Gallup survey was the seven-point decline (from 52% to 45%), "the lowest such rating Bush has received since taking office" (though statistically equivalent to the 46% measured in July 2004). Are other surveys showing the same pattern?
Here the answer is an unequivocal yes. As the table above shows, the Time/SRBI survey shows an even larger drop in the Bush job rating over the same time period. Surveys released by Pew, Newsweek and the American Research Group show similar declines since earlier in the year. The most recent Pew survey shows only a one percentage point drop, although their most recent approval rating (45%) represents a five-point drop from 50% in January (not included in the table). The robo-polls conducted by Rasmussen show little variation, though note that Rasmussen is the only pollster on the list that regularly weights his data by party ID. But even Rasmussen has Bush down over the last week, "the first time since mid-February that the President's Approval rating has been below the 50% mark on four consecutive days."
So why might Gallup show a bigger shift in party ID than other pollsters? There are several reasons that fall between random sampling error and "realignment." For today, let's consider one I neglected to raise in my long series of posts on this subject last fall: Gallup uses a variant of the party identification question that tends to produce more short term variation.
The version originally developed for the National Election Studies at the University of Michigan (and currently used by ABC/Washington Post, CBS/New York Times, the National Annenberg Election Survey, NBC/Wall Street Journal, the Los Angeles Times, Time/SRBI and Quinnipiac) asks: "Generally speaking, do you consider yourself a Republican, a Democrat, an independent or what?"
An alternative developed by Gallup (and also used by the Pew Research Center and the Newsweek poll) asks: "In politics TODAY, do you consider yourself a Republican, Democrat, or Independent?" Note the obvious difference: the Michigan question focuses the respondent on politics "generally," while the Gallup question emphasizes "politics TODAY."
Two Political Science professors at Michigan State University -- Paul Abramson and Charles Ostrom -- reviewed the results for party ID over time as measured by Gallup and compared them to other organizations, including the CBS/New York Times survey. They found that Gallup has always shown more short term variation in Party ID than the other surveys. They then conducted a series of side-by-side experiments on surveys in the state of Michigan using both forms of the question and concluded: "the Gallup measure responds more to short-term political conditions and...is less stable over time" (citations on the jump).
Speaking at a recent seminar sponsored by DC-AAPOR, Gallup's Frank Newport said they have been conducting similar internal experiments recently that generally confirmed Abramson and Ostrom's findings. Newport plans to present these findings, among others, at the AAPOR annual meeting in May. He also shared an anecdote from Alec Gallup, son of the company's founder, who said that the Gallup question stressed "politics TODAY" with the express purpose of picking up more short term change.
There are other reasons why Gallup, or other pollsters, sometimes pick up small aggregate shifts in party ID that quickly recede, but I will save that discussion for another day. For now, I will conclude with the analysis by Gallup's David Moore in last week's release (available to Gallup subscribers only) that provides a reasonable answer to Eric Umansky's rhetorical question:
The factors contributing to increasing dissatisfaction with the way Bush is handling his job also appear to be causing some Americans to drift toward identification with the Democratic Party, at least temporarily. The poll shows that the percentages of Americans who say they identify "as of today" as either a Republican or an independent are down slightly, from 35% Republican in Gallup's last poll to 32% in this poll, and from 31% independent to 29% independent. Identification with the Democratic Party is up from 32% to 37%. These relatively slight changes do not suggest a fundamental shift in the partisan structure in America today so much as they reflect a more negative mood at the moment toward both the president and his party.
(References after the jump)
Abramson, Paul R., and Charles W. Ostrom Jr. 1991. Macropartisanship: An Empirical Reassessment. American Political Science Review, Vol. 85, pp. 181-92.
Abramson, Paul R., and Charles W. Ostrom Jr. 1994. Question Wording and Partisanship: Change and Continuity in Party Loyalties During the 1992 Election Campaign. The Public Opinion Quarterly, Vol. 58, No. 1, pp. 21-48. (JSTOR link)
March 25, 2005
Schiavo: The Return of Professor M
I thought that in light of the interest in the last post on the recent Schiavo polls it would be good to take a step back from the microanalysis and write more generally about how pollsters write questions about issues and public policy. I was pleasantly surprised to find that our old friend Professor M, a member of the Political Science faculty at a small midwestern college, had posted some comments that accomplished much of this task for me, and said it better than I would have. For those who do not browse the comments section, his comments are more than worthy of promotion to the main page. This is today's must read:
Mark, I think that your discussion here implicitly endorses a commonly held error about the best way to interpret polling data about matters of public interest. (And this error underlies the criticism of the ABC poll as well.)
The error is the incorrect belief that there is a "right" or "unbiased" way to ask a question about any given public issue. There is no such thing. Everyone who works within the polling field is well aware that small changes in wording can affect the ways in which respondents answer questions. This approach leads us into tortuous discussions of question wording on which reasonable people can differ. Further, as you have pointed out many times in the past, random variation in the construction of the sample or in response rates can skew the results of any single poll away from the true distribution of opinions in the population.
So how do we look at public opinion on an issue such as the Schiavo case? The answer is NOT to find a single poll with the "best" wording and point to its results as the final word on the subject. Instead, we should look at ALL of the polls conducted on the issue by various different polling organizations. Each scientifically fielded poll presents us with useful information. By comparing the different responses to multiple polls -- each with different wording -- we end up with a far more nuanced picture of where public opinion stands on a particular issue. If we can see through such comparisons that stressing different arguments or pieces of information produces shifts in responses, then we have perhaps learned something. Like our own personal opinions, public opinion is not some sort of simple yes/no set of answers; it is complex, and it can see both sides of complicated issues when presented with enough information.
If we were to lock pollsters of all partisan persuasions in a room and force them to pick the "best" question wording on the Schiavo issue, we might end up with everyone asking the same question, but overall we would end up with less information about public opinion, not more. We are better off having the wide variety of different polls, with questions stressing different points of view on the issues, and then comparing them all to one another. This is precisely what you do in your discussion of the ABC poll, but I think you are asking entirely the wrong question -- not "is the ABC wording defensible?" but rather "what does the ABC poll, when compared to other polls with different wording, add to our overall understanding of public opinion on this issue?"
Of course, this sort of contextualizing of polling results is exceedingly rare in the media. Much more common is the front page story saying "here is our poll, and here is what it found, and it is a true representation of public opinion" -- and by implication, no other poll matters. Intellectual honesty is trumped by competition. The best we usually get are vague generalizations of all of the polls lumped together ("polls have consistently shown disapproval of Congress' actions"), and even those generalizations almost never appear in the initial story trumpeting the "exclusive" poll fielded by the newspaper/network itself.
The end result is that even those who pay close attention to the news media and the chattering classes often have very little real understanding of how to interpret polls in a thoughtful way -- which is one of the reasons your blog is so valuable.
P.S. Polls which attempt to predict election results are a rather different kettle of fish, for two important reasons: (1) Pollsters have been experimenting with question wording for over 50 years and can keep wording the same regardless of the issues in a race; and (2) There is an actual real-world "check" on pollsters' work in the form of the actual election results. Neither of these characteristics applies to polling about issues of public interest.
For those who want to look at all the recent polls on the Schiavo case, the PollingReport provides a great compilation that includes complete wording, sample sizes, interview dates and margins of error.
I've got some additional thoughts...but it's late. More on this topic tomorrow.
March 23, 2005
Schiavo "Push Poll?"
[3/24 - 2:45 p.m. EST - posted additional updates below]
Another day, another polling controversy. The latest involves a survey released on Monday by ABC News that shows 63 to 28 percent support for removal of Terri Schiavo's feeding tube. The survey drew intense interest in Washington and immediate allegations of biased question wording from the blogosphere's right wing. Captain's Quarters called it a "push poll for euthanasia." Wizbang adds another adjective, calling it a "bogus push poll for euthanasia."
Do they have a point? The quick answer: The evidence of bias or deliberate untruth in the ABC poll is scant, though the issue raises some interesting questions about the appropriateness of "informed" questions.
Now here's the long version.
First, a plea for reporters, editors and bloggers of all ideologies: Can we please stop using the term "push poll" to describe every survey we consider objectionable? Yes, complain about bias when you see it, but the phrase push poll belongs to a higher order offense. To summarize the definitions posted online by the American Association for Public Opinion Research (AAPOR), The National Council on Public Polls (NCPP) and the Council for Marketing & Opinion Research (CMOR): A push poll is not a poll at all but rather a form of fraud - an effort to spread an untrue or salacious rumor under the guise of legitimate research. "Push pollsters" are not pollsters at all. They do not care about collecting data or measuring opinions (even in a "bogus" way). They only care about calling as many people as possible to spread a false or malicious rumor without revealing their true intent. Whatever complaint one might have about the wording or reporting of the ABC poll, it was certainly not a "push poll."
Now to the more debatable question of whether the ABC poll was biased or unfair. The complaints center mostly on the text of this question:
Schiavo suffered brain damage and has been on life support for 15 years. Doctors say she has no consciousness and her condition is irreversible. Her husband and her parents disagree about whether she would have wanted to be kept alive. Florida courts have sided with the husband and her feeding tube was removed on Friday. What's your opinion on this case - do you support or oppose the decision to remove Schiavo's feeding tube?
As noted above, 63% of the 501 adults surveyed on March 20 said they supported the decision, 28% opposed it and 9% had no opinion. Sampling error was reported as 4.5%.
The main objection seems to be the use of the term "life support" in the second sentence. Again, from Captain's Quarters:
Terri [Schiavo] has never been on life support. The only medical treatment Terri received for the past five years has been food and water through a feeding tube, which is nothing at all like artificial life support. Artificial life support consists of ventilation for people unable to breathe on their own. The question sets up a strawman argument that so completely contradicts reality that the entire poll must be considered invalid.
One test of this argument is a survey released by Gallup earlier this week (subscription only, also summarized here), conducted Friday through Sunday, which asked a similar but more concise question without the phrase "life support":
As you may know, on Friday the feeding tube keeping Terri Schiavo alive was removed. Based on what you have heard or read about the case, do you think that the feeding tube should or should not have been removed?
Fifty-six percent (56%) of the 909 Gallup respondents said the tube should be removed, 31% said it should not be removed and 13% had no opinion. Support for removing the tube is seven points lower than in the ABC poll, a difference near the edge of statistical significance.
The Fox News Poll also asked the following "informed" question on a survey conducted March 1-2:
Terri Schiavo has been in a so-called 'persistent vegetative state' since 1990. Terri's husband says his wife would rather die than be kept alive artificially and wants her feeding tube removed. Terri's parents believe she could still recover and want the feeding tube to remain. If you were Terri's guardian, what would you do? Would you remove the feeding tube or would you keep the feeding tube inserted?
Fifty-nine percent (59%) of Fox's sample of 900 registered voters would remove the feeding tube, 24% would keep it inserted and 17% were unsure. Note that the 35-point margin of support for removing Schiavo's feeding tube is the same as on the ABC survey.
It is also worth noting that the ABC poll was completed in a single evening. As the National Council on Public Polls (NCPP) points out: "Surveys conducted on one evening, or even over two days, have more sampling biases -- due to non-response and non-availability -- than surveys which are in the field for three, four or five days."
Between the sampling error and the vagaries of one-night samples, we cannot say conclusively that the ABC language produced more support for removing Schiavo's feeding tube. However, for the sake of argument, let's concede that the ABC informed question had such an effect. Was the language of their question defensible? [3/24 - On some reflection a better word here would be "fair" - see comments below]
According to an article on the issue in yesterday's New York Sun, ABC News Polling Director Gary Langer "said in an e-mail to the Sun that the descriptions were taken from an appellate court decision in Florida that described Mrs. Schiavo's condition." Here is one example from the Florida Supreme Court decision:
In this case, the undisputed facts show that the guardianship court authorized Michael to proceed with the discontinuance of Theresa's life support after the issue was fully litigated in a proceeding in which the Schindlers were afforded the opportunity to present evidence on all issues. (p. 15 - emphasis added)
Moreover, the contention that the phrase "life support" in the ABC question automatically conjures up images of an artificial respirator rather than a feeding tube, thus creating a "strawman" that "completely contradicts reality" (as Captain's Quarters put it), does not hold up. In fact, the ABC question uses the phrase "feeding tube" twice, ultimately asking whether respondents support "the decision to remove Schiavo's feeding tube." In Cruzan v. Director, the U.S. Supreme Court held that tube feeding was legally no different from other forms of life support (see also this article). Legality aside, it is hard to imagine that most respondents would interpret "the food, fluids or medical treatment necessary to sustain her life" (the language in the law enacted by Congress over the weekend) as meaning something other than "life support."
If the greater poll support for removing the feeding tube was more than random error, my hunch is that the effect had more to do with the statements that "Florida courts have sided with" the husband (they certainly did) and that "doctors say she has no consciousness and her condition is irreversible" (hard to quarrel with given reports like this one). It might have been better to first ask a question that presented less information (as Gallup did), but calling the ABC description "untrue" or "deliberately slanted" is quite a stretch.
This point leads to a more general objection that Rick Brady raised about the whole notion of "informed" questions:
Polling organizations like ABC News are not supposed to educate people regarding the issues they are polling. If a large portion of the public is not well informed on a subject matter related to an area in which they already have solidly formed opinions (I wouldn't want to be on "life support" or a "vegetable," therefore I think Terri's tube should be removed and Congress should stay out of it), in most cases, three sentences of preamble will not be sufficient to elicit a respondent's true opinion.
I have to disagree with Rick, though I think it is fair to say he speaks for a vocal minority of academic survey methodologists. Political pollsters frequently encounter complex issues about which the public lacks knowledge or "solidly formed opinions." The Schiavo case is a perfect example. Even after the blanket coverage of last weekend, nearly half of ABC's respondents said they had been following the case "not very closely" (16%) or "not at all" (28%).
As a result, we frequently ask questions that first provide a bit of information or context, especially when an issue is poised to get much greater attention or become the focus of a political campaign. I have written hundreds, perhaps thousands of such questions, and 99% of the time the results are not intended for public consumption. Our goal is not to create propaganda but to accurately gauge how opinions might develop with more information, and we struggle to find language that simulates the dialogue that will ultimately play out in the media. As the Schiavo example proves, this task is not easy.
Republican pollster John McLaughlin told the NY Sun that he "would have worded the [Schiavo] questions differently." I am sure that's true - I probably would have taken a different tack as well. However, as with recent "informed" questions on Social Security, if you give this task to 20 pollsters, you will likely get 20 different questions. It is easy to quibble with ABC's approach but the charge from Captain's Quarters that they were either "incompetent" or "attempted to fool their viewers and readership with false polling that essentially lies about the [Schiavo] case" is grossly unfair.
UPDATE: Gallup released a subsequent one-night poll this morning. Among the questions asked:
As you may know, Terri Schiavo is a Florida woman in a persistent vegetative state who was being kept alive through the use of a feeding tube. The feeding tube was removed on Friday, an action that will result in her death within about two weeks. A federal judge made a ruling in the case today. First: Do you agree with the federal judge's decision that resulted in the feeding tube being left unattached, or do you disagree and think the federal judge should have ordered the feeding tube to be re-attached?
52% agree with judge
39% disagree with judge
UPDATE II: Our friend Mickey Kaus disagrees, to put it mildly. The crux of his argument is the notion that the public perceives "life support" to mean a respirator and a patient who will stop breathing within minutes of its removal, a condition considerably worse than Terri Schiavo's. If the public perceived it that way, the term "life support" would likely bias the results. The question is whether we have any evidence of such a perception.
The best place to look is the other polls that make no reference to "life support" or to what "doctors say" about Schiavo's state of consciousness or chances of survival. Fox showed 59% support for removing the tube, Gallup showed 58% support. Thus, Mickey points out that "no other poll has as large an anti-tube majority (63%) as ABC's." But consider this sequence from the latest CBS News poll:
Q13. Terri Schiavo has been in a persistent vegetative state since 1990. Terri's husband says his wife would not want to be kept alive under these circumstances and he wants her feeding tube removed. Terri's parents believe her condition could improve and they want the feeding tube to remain. How closely have you been following news about the case -- have you been following it very closely, somewhat closely, not too closely, or not at all?
32% very closely
44% somewhat closely
17% not too closely
6% not at all
1% don't know
Q14. What do you think should have happened in this case -- should the feeding tube have been removed or should it have remained?
61% Should be removed
28% Should remain
11% Don't know
Q19. What should happen now? Should the feeding tube be re-inserted, or not?
7% Don't know
These questions appear to indicate an "anti-tube" majority of roughly the same size as indicated by the ABC poll without any mention of "life support" or what "doctors say" about Schiavo's condition.
However, I’ll hedge for the time being, only because the CBS release is a bit confusing. First, the heading "Partial Sample" that appears over the results for Q14 seems to apply to Q19 as well. This would usually imply that the sample of 737 adults was randomly divided in two, with half the respondents hearing Q14 and half Q19. That would normally make sense, although asking “what should happen now” in Q19 without the introduction from Q14 seems odd. Adding to my confusion is the jump in the labeling from Q14 to Q19, followed by Q15 through Q18. Hopefully, someone from CBS will help clarify the mechanics.
UPDATE III: I spoke with Kathy Frankovic at CBS who helped clarify their release. The two questions on whether the Schiavo feeding tube should have been removed or reattached were labeled "partial sample" because they were only asked on the second night of interviewing. The survey had originally included a different version of Q14 that had been written erroneously in the future tense (e.g. What do you think should happen in this case -- should the feeding tube be removed or should it remain?). Since that language was inaccurate (the tube had already been removed), they decided to replace that question with the two-question sequence above for the second night of interviewing.
As a result, these two questions (Q14 & Q19) were asked of fewer respondents (n=321 unweighted) than the full sample (n=737) and thus had a bigger margin of error (+/-6%) than the full sample (+/- 4%) . Also, results for these two questions, like the survey done by ABC earlier in the week, are subject to the same caveats about one night polls described above.
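For reference, the reported margins of error track the standard formula for a 95% confidence interval on a proportion. A quick sketch, assuming simple random sampling (news polls typically round up and may also build in a design effect for weighting):

```python
from math import sqrt

def moe_95(n, p=0.5):
    """Half-width of a 95% confidence interval for a proportion,
    at the most conservative value p = 0.5."""
    return 1.96 * sqrt(p * (1 - p) / n)

for n in (321, 737):
    print(f"n = {n}: +/- {moe_95(n) * 100:.1f} points")
# n = 321 -> about +/- 5.5 points; n = 737 -> about +/- 3.6 points.
# The reported 6% and 4% figures are consistent with these once
# rounded up and/or adjusted for the design effect of weighting.
```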
According to Frankovic, the second new question was labeled Q19 because they typically leave gaps in question numbering for insertions of new questions when these situations arise. The questions were asked in the order they appear in the release. Note that the PDF document has no Q7, Q11 or Q12.
Frankovic also provided the party identification results for the weighted data: 27% Republican, 32% Democrat, 41% other or don't know. Some may have seen different numbers posted in error today on DailyKos.
ONE LAST THOUGHT: After reflecting on the comments on this post, there is one word I wish I had written differently: “defensible” (as in, “was the language of their question defensible?”). A better word would have been “fair” or as Kaus put it, “reasonably calculated to produce an accurate poll of what people think.”
Gerry Daly, who also had problems with my use of “defensible,” wrote: “We should strive for them to conduct polls that have fair wording and that provide the most bias-free read of the public that is reasonably attainable.” No argument there.
My point about the court documents and the legal definition stemming from the Cruzan decision was not just about technical definitions but about language: This is a matter of opinion (for now), but I doubt that ordinary Americans are making the same distinctions regarding “life support” and “tube feeding” as those who are passionately interested in this story. Remove the feeding tube and Terri Schiavo dies. As a matter of language and plain meaning, how is that not “life support?”
So in that regard, I think that while far from perfect, the ABC question was fair. Others -- obviously -- disagree. Read the comments for a sampling.
March 21, 2005
Tired of Exit Polls Yet?
For at least a month, I have intended to do a wrap-up post (or posts) on what the exit polls can (and cannot) tell us about allegations of fraud in the 2004 elections. I keep putting off this task for a number of reasons, not the least of which is a concern that I have already devoted too much time to this subject. Most of what I would write I have already written in some form. I wonder whether regular readers are already bored with it, or soon would be.
So I'd like to hold an informal vote: Should MP continue to focus on the exit poll controversy? If you have feelings on this question either way, please EMAIL me with your opinion.
I ask for email because I want to try to adhere to the notion of "one reader, one vote." I will ignore messages from addresses that bounce back, should I reply. The comments section is open, and thoughts are welcome there as always, but I will not count votes from phony email addresses. As always, signed responses are greatly appreciated and possibly given greater weight. I promise I will not quote you without permission, and MP never sends unsolicited email to anyone.
Hello Exit Polls My Old Friend
The release of several new papers on the 2004 exit poll controversy brings me back to this familiar topic. The first, from a team of academics with considerable survey expertise, breaks little new ground but serves as an excellent overall primer on the controversy. The second, by frequent Mystery Pollster commenter Rick Brady, goes further, taking on those whose widely circulated Internet postings proclaim evidence of fraud in the exit polls.
The first, a "working paper" released on March 11 by the Social Science Research Council (SSRC) of the National Research Commission on Elections and Voting, is noteworthy for the expertise of its authors. Michael Traugott of the University of Michigan, Benjamin Highton of the University of California (Davis) and Henry Brady of the University of California (Berkeley) are political scientists with scores of journal articles on voting behavior and survey methodology to their names. Traugott, the principal author, is a past president of the American Association for Public Opinion Research (AAPOR) and the co-author or editor (with Paul Lavrakas) of several books on survey methodology, including the very accessible The Voter's Guide to Election Polls (which MP would include on a list of recommended books, if he ever got around to putting such a list together). Full disclosure: twenty years ago, Traugott was the third reader on MP's undergraduate honors thesis.
In short, these guys know what they are talking about.
Yet for all the academic firepower that Traugott and his colleagues bring to the exit poll debate, they break little new ground. They do present a balanced and thorough summary of the short history of the controversy and its key issues and include the most complete bibliography on the issue (including URL links) MP has seen to date. If nothing else, the Traugott paper is an excellent starting point for anyone grappling with this issue for the first time.
Traugott and his colleagues also make a very important point about the key issue that continues to frustrate those seeking an "explanation" from the exit pollsters for the discrepancy between the exit polls and the final results: When it comes to "nonrespondents" -- those who refuse to participate in a survey -- "proof" is inherently elusive. In reviewing the report from the National Election Pool (NEP) released earlier this year, they write:
[The report] is complicated in a way that many post-survey evaluations are by the fact that some information is essentially unknowable. This is especially true when one of the concerns is nonresponse, and there is no information from the nonrespondents to analyze. As a result, there are some sections of the report in which there is an extremely detailed level of disclosure about what the exit poll data show, but in other parts of the report there are only hypotheses about what might have been the cause for a particular observation. These hypotheses can guide future experiments in exit polling methodology or even direct changes in the methods, but they cannot explain in a strict causal sense what happened in the 2004 data collection (emphasis added, pp. 8-9).
A second paper, posted over the weekend by our friend Rick Brady of the blog Stones Cry Out, is a point-for-point rebuttal of the final version of Stephen Freeman's well-known paper, The Unexplained Exit Poll Discrepancy (MP reviewed the first version of the paper back in November). Brady, who has been studying graduate-level statistics on the way to a master's degree in public planning, assails every statistical weakness in Freeman's thesis. Many of the issues Brady raises will be familiar to MP's readers, but he does an excellent job of putting it all together, and he raises some statistical issues not covered in the Traugott paper.
For MP, the most interesting aspect of Brady's review is his discussion of a subsequent paper by a team of PhDs (including Freeman) affiliated with the organization US Count Votes. Kathy Dopp, the President of US Count Votes (USCV), issued a public challenge "for any PhD level credentialled (sic) statistician who is affiliated with any university in America to find any statements in our 'Response to Edison/Mitofsky Report' that they believe are incorrect and publicly refute it."
Brady may be just a master's degree candidate, but he steps up to the challenge, essentially picking up where the Traugott paper leaves off. He observes:
The US Count Votes authors conclude that only one of two hypotheses are worthy of exploration: 1) the exit polls were subject to a consistent bias of unknown origin; or 2) the official vote count was corrupted. The question then becomes; did the NEP Report substantiate the first hypothesis? [p. 12]
Reviewing the NEP report, Brady concludes:
Given the number of NEP Report conclusions that included qualifiers such as "likely," "may," and "could," I understand how US Count Votes is concerned with the analysis. In effect, the NEP Report never (from what I can tell) rejected the null hypothesis in a classical sense. However, the contention that "[no] data in the report supports the hypothesis that Kerry voters were more likely than Bush voters to cooperate with pollsters" is not in the least bit accurate. The NEP Report presented volumes of information that most analysts agree "suggests" support for the hypothesis that differential non-response was the cause of the observed bias in the exit polls [pp. 13-14, emphasis added].
The paper has much more. Brady has been a loyal FOMP (Friend of Mystery Pollster), so I may be accused of some bias on this score. Yet I hope other prominent observers will agree: Brady's paper is a must read for those still genuinely weighing the arguments on the exit poll controversy.
March 16, 2005
Disclosing Party ID: Time/SRBI
As promised, here are comments from Mark Schulman, founder and president of Schulman, Ronca & Bucuvalas, Inc. (SRBI), in response to my query about the disclosure of party ID results on the Time magazine survey.
I should note that in my original email to Schulman, I wrongly concluded that the Time/SRBI poll "routinely" omitted the results for party ID, because those results were not reported on the release for the most recent survey conducted in January. In fact, a review of the marginal results from their 2004 pre-election studies available for download at the SRBI archives confirms that they regularly included full results for party ID (see the link for "election trend frequencies" at the bottom of each poll analysis).
Schulman clarifies their policy:
Our policy is to post full marginals for each study. Thanks for noting that the demographics are missing from the January survey, just an oversight. Deadlines are very tight for the Time surveys. We're often scrambling to get everything done in a very short time. Looks like the demographics didn't get posted for that survey. I'll make sure that full marginals are posted for that survey.
SRBI did report party ID distributions for registered and likely voters starting with the early September Time Poll and we continued reporting these distributions through the election. We posted the party data on our releases to anticipate queries about these distributions and in the interest of full disclosure to interested parties, since it had become a contentious issue.
You'll remember that the party ID weighting issue surfaced with a bang in early September 2004 when we and some other media polls reported major Kerry horse-race slippage. At least some of the partisan pollsters pooh-poohed the reported slippage, arguing that we should reweight based upon party. They believed that we had somehow oversampled Republicans, hence the Kerry drop. I understand that the reweighting by party ID virtually wiped out the Bush surge...problem solved. I should add that some weighting schemes [used by other pollsters] do attach a minor weight to party ID based upon smoothing the estimation over time. We don't do that, but I don't have a real problem with that, since the impact is minor.
By way of background, our Time polls do collect full demographic data on the entire sample, registered voter or not, and we do weight the entire sample by multiple census demographics, adults in households, and number of phone lines.
Several of us challenged the party ID reweighting strategy on AAPORNET and on several blogs. In my postings, I warned that reweighting by party ID "can result in serious distortion." In media interviews, my stance was, if you think all is well with the Kerry campaign and that the slippage was just an artifact of 'too many Republicans,' then it should be "business as usual" for Kerry. However, if you believe the Kerry drop that we reported, then the Kerry campaign needs to rethink its strategy. (I was grilled on this point in an interview with Air America Radio, for example.)
Schulman also sent along a longer article he posted on the subject of weighting by party ID that originally appeared on the members-only AAPOR listserv in September. The full text appears after the jump.
Party ID weighting....September 11 Posting on various websites....Mark Schulman
Since we released last week's poll with the Bush bounce, we've gotten lots of inquiries about why our poll and many of the other media polls differ from some of the partisan polls, particularly Zogby, which found little bounce. (I have not actually seen the Zogby poll, but have gotten second-hand reports.) The major reason for the disparity is that most of the media polls, including ours, weight by Census data. Zogby and some others weight on party ID. I just penned a response to some academic folks who raised the weighting issue. I've attached it below, fyi. Please feel free to discuss.
Weighting by party ID can result in serious distortion of the horserace numbers. Here's why:
1. As an observer of party identification tallies day after day on our election surveys, it's clear that we're not measuring a constant factor. It varies slightly, sometimes even significantly, day by day, week by week.
2. Why does it vary? Most polls place the party ID question near the end of the questionnaire, so that it does not interact with or contaminate the horserace measure and any other head-to-head candidate comparisons. David Moore has an excellent piece on the Gallup web site discussing the likely impact of question order on party ID measurement. The horserace always takes priority over party ID in question order, since that's the topline number we report. As a result, respondents may tend to bring their party ID in line with their partisan choice, particularly after having gone through an extensive battery of election items. It's simply "cognitive consistency." Hence, a Bush surge, for example, might elevate the number of voters later in the survey identifying themselves as Republicans.
3. Since party ID is a "variable" and not an enduring constant, as is age or gender, it varies!
4. Voting behavior literature from the 1950s and 1960s (Campbell, Converse, Miller and Stokes, The American Voter, for example) used to posit party ID as anchoring partisan choice, as if it were a constant. It's likely that party ID was a more enduring "constant" in the 1950s, but that was then, and this is now. Voters are just not as tied to party as in the past. Let's get over this likely out-of-date notion that party ID is a constant that anchors the vote. The causal arrows here are unclear: which influences which? We can construct several models of party ID as both a dependent and an independent variable. The traditional model posits party ID as an independent variable. We now see it likely as both an independent and dependent variable, with all sorts of interactions.
5. Hence, weighting by party ID, and it's party ID, not party registration, can seriously distort the horserace data. Weighting by party ID would damp down the Bush surge over the past few weeks. Yes, there may be some "at home" selection bias when we interview during party convention periods. However, not all that many folks watch the conventions and the networks provide little convention coverage.
Finally, my choice, and the choice of most of the major media polls, is to weight by factors that we know are real, such as age, gender, region, education, number of adults in household, number of voice phone lines, etc. While you can argue about the reliability of Census data, I'll place my bets with the Census rather than party ID.
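Schulman's distortion argument can be made concrete with a little arithmetic. The sketch below uses entirely hypothetical numbers (the sample shares, the weighting target, and the within-party support rates are all invented for illustration, not taken from any actual poll): if a genuine Bush surge shows up partly as more respondents calling themselves Republicans, then forcing the sample back to a fixed party-ID target mechanically shrinks the surge.

```python
# Hypothetical illustration of how weighting to a fixed party-ID target
# moves the horse-race number. All figures below are invented.
sample_share = {"Rep": 0.38, "Dem": 0.33, "Ind": 0.29}   # unweighted sample
target_share = {"Rep": 0.33, "Dem": 0.36, "Ind": 0.31}   # assumed "true" party split
bush_support = {"Rep": 0.90, "Dem": 0.08, "Ind": 0.48}   # Bush share within each group

def bush_total(shares):
    """Overall Bush share as a weighted average over party groups."""
    return sum(shares[group] * bush_support[group] for group in shares)

print(f"unweighted:     {100 * bush_total(sample_share):.1f}% Bush")
print(f"party-weighted: {100 * bush_total(target_share):.1f}% Bush")
```

With these made-up inputs the party-ID weighting wipes roughly three points off the Bush number, which is the "problem solved" effect Schulman describes: the adjustment assumes away exactly the movement the poll may have been detecting.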
David Moore has a good discussion of this issue as well on the Gallup web site.
That's the short version of my views. I really do believe that we need to put this issue to rest and stop pretending that there's legitimacy to party ID weighting. I look forward to further comment!
Very best wishes, Mark Schulman Schulman, Ronca & Bucuvalas, Inc.
March 15, 2005
Disclosing Party ID: Pew
Scott Keeter, Associate Director of the Pew Research Center, also responded to my question about why his organization does not typically include results for party identification for each survey. Like Gallup, the Pew Center has disclosed party results upon request. Pew also frequently prepares summary reports on trends in party ID, including their must-read report last September on the pitfalls of weighting by party, which included data from other organizations.
MP asked -- as explained in full in the previous post -- why not include results for party ID in each survey release? Here is Keeter's answer:
You correctly note that the Pew Research Center freely reports party identification marginals for any survey upon request. And of course we have written several reports in which we present the trends in party identification and offer an analysis of the changes.
Given the evolution of the dialogue on the subject - for which MysteryPollster deserves a lot of credit -- and the greater understanding among political observers regarding the perils of weighting party ID to an arbitrary parameter (clearly illustrated by the party ID distribution on Election Day 2004), we will begin posting party ID and its trend in our toplines in future survey releases [emphasis added].
Thank you Dr. Keeter! We now have two prominent survey organizations committing to release information on the party leanings of each survey. That's one small step for consumers of political data everywhere. Hopefully other organizations will follow the lead of Gallup and Pew.
Tomorrow: The response from SRBI's Mark Schulman (and it should be noted again that the Time/SRBI poll included results for party identification in each pre-election survey released last fall).
March 14, 2005
Disclosing Party ID: Gallup
Party identification, the question that measures whether Americans consider themselves Democrats, Republicans or Independents, got a lot of attention in the 2004 campaign. Some pollsters choose to weight by party ID -- that is they statistically adjust the number of self-identified Republicans and Democrats in their samples -- but most public pollsters do not. Last week, MP attended a seminar on pre-election polling in 2004, sponsored by the DC chapter of AAPOR (the American Association for Public Opinion Research). Not surprisingly, the party identification debate was a central topic.
After the seminar, I posed a question to the participants - Frank Newport of the Gallup Organization, Scott Keeter of the Pew Research Center and Mark Schulman of Schulman, Ronca, and Bucuvalas (SRBI), the firm that conducts the Time Magazine poll - about their willingness to disclose party identification, and received some truly unexpected responses that are good news for those of us pushing for greater transparency in public polling. I want to share those with you this week, starting with Gallup's Editor in Chief Frank Newport.
Some background. At the seminar, Newport said he is in the midst of "zero-basing everything we know about party [identification] in an election year." Newport is preparing a paper on the topic to be delivered at AAPOR's May conference, and he devoted much of his presentation to a preliminary discussion of the topic.
While Newport has not reached final conclusions, he did reiterate that he considers party identification more a "survey statistic" than a "stable population parameter," meaning that party is a potentially changeable attitude that surveys measure rather than a stable demographic like race or gender. Pew's Scott Keeter agreed, describing party identification as a measure that "provides useful information about the political climate." MP generally agrees with this philosophy, as explained in these two posts, but like many political data consumers, is often frustrated by the habit of many public pollsters of withholding party identification when releasing results online. So after the seminar, I sent Newport, Keeter and SRBI's Schulman the following question via email (edited slightly to correct an error in the original):
If party identification is a "survey statistic" and not a "stable population parameter" (as Frank Newport put it), if it provides important and useful information on the changing political environment (to paraphrase Scott Keeter -- and I agree on both counts), why do your organizations routinely exclude results for party identification from survey releases?
To be more specific, I cannot find results for party identification in the otherwise excellent and comprehensive questionnaire/reports put out for the most recent surveys for Pew, Time/SRBI and Gallup (neither in USAToday nor to paid subscribers on Gallup.com), nor in other such releases from Gallup or Pew for the last six months or so (though Time/SRBI did routinely report party ID results on surveys released last fall).
Now, obviously, I am aware that your organizations have disclosed much party ID data at academic conferences, in selected special reports online (especially this superb summary from Pew last September) or through raw data made available to the Roper Center. I also know from personal experience that your organizations have been very responsive to specific requests for such data -- even from your harshest critics. You deserve praise for doing so. But given that you routinely put out long summary documents online, again, why are such specific requests necessary? Why not include the results for party ID in each release?
I also do not want to pick on Gallup, Pew and Time/SRBI alone. Other organizations that -- as far as I can tell -- routinely omit party ID from releases include the American Research Group, Fox/Opinion Dynamics, Harris, LA Times, Marist, Mason Dixon, Newsweek, Quinnipiac and Washington Post/ABC News (although ABC News occasionally reports on the partisanship of its samples in written releases). Although Zogby and Rasmussen weight by party, they do not routinely disclose their weight targets in their releases or on their web sites.
The noteworthy exceptions -- those that routinely include party ID results in online releases -- are CBS/New York Times, NBC/Wall Street Journal, AP/IPSOS (albeit to paid subscribers only) and SurveyUSA.
Here is the response from Gallup's Frank Newport:
As you can tell from that meeting and my forthcoming paper at the national AAPOR conference in May, party ID is a matter of significant interest to us here at Gallup. I think there's been a good deal of misinformation and misunderstanding about what party ID measures and what its importance is -- as you gathered from the paper I gave in Washington. Your commentary on your site has been very helpful in clarifying this area to those who don't know a lot about it. Gallup has always been happy to give party ID figures for any survey to anyone who wants them. In addition, we write articles and reviews about party ID when we think the aggregated trends are showing something significant (e.g., http://www.gallup.com/poll/content/login.aspx?ci=14347 [subscription required]), and have discussed party ID and its implications at great length in our Gallup editors' blog [free to all] on our website.
As far as I know, Gallup has no history over the last 70 years of routinely posting the party ID composition of each survey we conduct, just as we routinely don't report ideology and a lot of other measures regularly asked in each survey. As noted, we send the party ID composition percentages to anyone who is interested (actually, we really don't get that many requests for them). But since this seems to be an area in which there is perhaps burgeoning interest, we'll probably start posting them on our website for each survey, along with rolling trends and some explanations of how Gallup measures party ID and what its significance is [emphasis added].
On behalf of political survey data consumers everywhere, let me say, thank you Dr. Newport! If Gallup, the most important brand name in survey research, is willing to take this step, other survey organizations will likely follow its lead to greater transparency.
Next up: Responses from Pew's Scott Keeter and SRBI's Mark Schulman.