March 25, 2005
Schiavo: The Return of Professor M
I thought that, in light of the interest in the last post on the recent Schiavo polls, it would be good to take a step back from the microanalysis and write more generally about how pollsters write questions about issues and public policy. I was pleasantly surprised to find that our old friend Professor M, a member of the Political Science faculty at a small midwestern college, had posted some comments that accomplished much of this task for me, and said it better than I would have. For those who do not browse the comments section, his comments are more than worthy of promotion to the main page. This is today's must-read:
Mark, I think that your discussion here implicitly endorses a commonly held error about the best way to interpret polling data about matters of public interest. (And this error underlies the criticism of the ABC poll as well.)
The error is the incorrect belief that there is a "right" or "unbiased" way to ask a question about any given public issue. There is no such thing. Everyone who works within the polling field is well aware that small changes in wording can affect the ways in which respondents answer questions. This approach leads us into tortuous discussions of question wording on which reasonable people can differ. Further, as you have pointed out many times in the past, random variation in the construction of the sample or in response rates can skew the results of any single poll away from the true distribution of opinions in the population.
So how do we look at public opinion on an issue such as the Schiavo case? The answer is NOT to find a single poll with the "best" wording and point to its results as the final word on the subject. Instead, we should look at ALL of the polls conducted on the issue by various different polling organizations. Each scientifically fielded poll presents us with useful information. By comparing the different responses to multiple polls -- each with different wording -- we end up with a far more nuanced picture of where public opinion stands on a particular issue. If we can see through such comparisons that stressing different arguments or pieces of information produces shifts in responses, then we have perhaps learned something. Like our own personal opinions, public opinion is not some sort of simple yes/no set of answers; it is complex, and it can see both sides of complicated issues when presented with enough information.
If we were to lock pollsters of all partisan persuasions in a room and force them to pick the "best" question wording on the Schiavo issue, we might end up with everyone asking the same question, but overall we would end up with less information about public opinion, not more. We are better off having the wide variety of different polls, with questions stressing different points of view on the issues, and then comparing them all to one another. This is precisely what you do in your discussion of the ABC poll, but I think you are asking entirely the wrong question -- not "is the ABC wording defensible?" but rather "what does the ABC poll, when compared to other polls with different wording, add to our overall understanding of public opinion on this issue?"
Of course, this sort of contextualizing of polling results is exceedingly rare in the media. Much more common is the front page story saying "here is our poll, and here is what it found, and it is a true representation of public opinion" -- and by implication, no other poll matters. Intellectual honesty is trumped by competition. The best we usually get are vague generalizations of all of the polls lumped together ("polls have consistently shown disapproval of Congress' actions"), and even those generalizations almost never appear in the initial story trumpeting the "exclusive" poll fielded by the newspaper/network itself.
The end result is that even those who pay close attention to the news media and the chattering classes often have very little real understanding of how to interpret polls in a thoughtful way -- which is one of the reasons your blog is so valuable.
P.S. Polls which attempt to predict election results are a rather different kettle of fish, for two important reasons: (1) Pollsters have been experimenting with question wording for over 50 years and can keep wording the same regardless of the issues in a race; and (2) There is a real-world "check" on pollsters' work in the form of the actual election results. Neither of these characteristics applies to polling about issues of public interest.
For those who want to look at all the recent polls on the Schiavo case, the PollingReport provides a great compilation that includes complete wording, sample sizes, interview dates and margins of error.
I've got some additional thoughts...but it's late. More on this topic tomorrow.
March 23, 2005
Schiavo "Push Poll?"
[3/24 - 2:45 p.m. EST - posted additional updates below]
Another day, another polling controversy. The latest involves a survey released on Monday by ABC News that shows 63 to 28 percent support for removal of Terri Schiavo's feeding tube. The survey drew intense interest in Washington and immediate allegations of biased question wording from the blogosphere's right wing. Captain's Quarters called it a "push poll for euthanasia." Wizbang adds another adjective, calling it a "bogus push poll for euthanasia."
Do they have a point? The quick answer: The evidence of bias or deliberate untruth in the ABC poll is scant, though the issue raises some interesting questions about the appropriateness of "informed" questions.
Now here's the long version.
First, a plea to reporters, editors and bloggers of all ideologies: Can we please stop using the term "push poll" to describe every survey we consider objectionable? Yes, complain about bias when you see it, but the phrase "push poll" belongs to a higher order of offense. To summarize the definitions posted online by the American Association for Public Opinion Research (AAPOR), the National Council on Public Polls (NCPP) and the Council for Marketing & Opinion Research (CMOR): A push poll is not a poll at all but rather a form of fraud - an effort to spread an untrue or salacious rumor under the guise of legitimate research. "Push pollsters" are not pollsters at all. They do not care about collecting data or measuring opinions (even in a "bogus" way). They only care about calling as many people as possible to spread a false or malicious rumor without revealing their true intent. Whatever complaint one might have about the wording or reporting of the ABC poll, it was certainly not a "push poll."
Now to the more debatable question of whether the ABC poll was biased or unfair. The complaints center mostly on the text of this question:
Schiavo suffered brain damage and has been on life support for 15 years. Doctors say she has no consciousness and her condition is irreversible. Her husband and her parents disagree about whether she would have wanted to be kept alive. Florida courts have sided with the husband and her feeding tube was removed on Friday. What's your opinion on this case - do you support or oppose the decision to remove Schiavo's feeding tube?
As noted above, 63% of the 501 adults surveyed on March 20 said they supported the decision, 28% opposed it and 9% had no opinion. The margin of sampling error was reported as plus or minus 4.5 percentage points.
The main objection seems to be the use of the term "life support" in the second sentence. Again, from Captain's Quarters:
Terri [Schiavo] has never been on life support. The only medical treatment Terri received for the past five years has been food and water through a feeding tube, which is nothing at all like artificial life support. Artificial life support consists of ventilation for people unable to breathe on their own. The question sets up a strawman argument that so completely contradicts reality that the entire poll must be considered invalid.
One test of this argument is a survey released by Gallup earlier this week (subscription only, also summarized here), conducted from Friday to Sunday, which asked a similar but more concise question without using the phrase "life support."
As you may know, on Friday the feeding tube keeping Terri Schiavo alive was removed. Based on what you have heard or read about the case, do you think that the feeding tube should or should not have been removed?
Fifty-six percent (56%) of the 909 Gallup respondents said the tube should be removed, 31% said it should not be removed and 13% had no opinion. Support for removing the tube is seven points lower than in the ABC poll, though the difference is not quite statistically significant.
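For readers who want to check comparisons like this one themselves, the standard back-of-the-envelope method computes each poll's margin of error and then the margin of error of the difference between the two. Here is a minimal Python sketch, assuming simple random samples at the worst case p = 0.5; keep in mind that real telephone polls carry design effects from weighting and clustering, so their true margins run wider than these figures:

```python
import math

def moe(n, z=1.96, p=0.5):
    """95% margin of error in percentage points, simple-random-sample assumption."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

def moe_of_difference(n1, n2):
    """Margin of error of the gap (p1 - p2) between two independent polls."""
    return math.hypot(moe(n1), moe(n2))

print(round(moe(501), 1))                     # ABC, n=501: 4.4 points (reported as 4.5)
print(round(moe(909), 1))                     # Gallup, n=909: 3.3 points
print(round(moe_of_difference(501, 909), 1))  # MOE of the gap: 5.5 points
```

Because the simple-random-sample assumption understates the real uncertainty of weighted, one-night telephone samples, gaps near these thresholds are read cautiously rather than declared significant or not on the raw arithmetic alone.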
The Fox News Poll also asked the following "informed" question on a survey conducted March 1-2:
Terri Schiavo has been in a so-called 'persistent vegetative state' since 1990. Terri's husband says his wife would rather die than be kept alive artificially and wants her feeding tube removed. Terri's parents believe she could still recover and want the feeding tube to remain. If you were Terri's guardian, what would you do? Would you remove the feeding tube or would you keep the feeding tube inserted?
Fifty-nine percent (59%) of Fox's sample of 900 registered voters would remove the feeding tube, 24% would keep it inserted and 17% were unsure. Note that the 35-point margin of support for removing Schiavo's feeding tube is the same as in the ABC survey.
It is also worth noting that the ABC poll was completed in a single evening. As the National Council on Public Polls (NCPP) points out: "Surveys conducted on one evening, or even over two days, have more sampling biases -- due to non-response and non-availability -- than surveys which are in the field for three, four or five days."
Between the sampling error and the vagaries of one-night samples, we cannot say conclusively that the ABC language produced more support for removing Schiavo's feeding tube. However, for the sake of argument, let's concede that the ABC informed question had such an effect. Was the language of their question defensible? [3/24 - On reflection, a better word here would be "fair" - see comments below]
According to an article on the issue in yesterday's New York Sun, ABC News Polling Director Gary Langer "said in an e-mail to the Sun that the descriptions were taken from an appellate court decision in Florida that described Mrs. Schiavo's condition." Here is one example from the Florida Supreme Court decision:
In this case, the undisputed facts show that the guardianship court authorized Michael to proceed with the discontinuance of Theresa's life support after the issue was fully litigated in a proceeding in which the Schindlers were afforded the opportunity to present evidence on all issues. (p. 15 - emphasis added)
Moreover, the contention that the phrase "life support" in the ABC question automatically conjures up images of an artificial respirator rather than a feeding tube, thus creating a "strawman" that "completely contradicts reality" (as Captain's Quarters put it) does not hold up. In fact, the ABC question uses the phrase "feeding tube" twice, ultimately asking whether respondents support "the decision to remove Schiavo's feeding tube." In Cruzan v. Director, the U.S. Supreme Court held that tube feeding was legally no different from other forms of life support (see also this article). Legality aside, it is hard to imagine that most respondents would interpret "the food, fluids or medical treatment necessary to sustain her life" (the language in the law enacted by Congress over the weekend) as meaning something other than "life support."
If the greater support for removing the feeding tube in the ABC poll was more than random error, my hunch is that the effect had more to do with the statements that "Florida courts have sided with" the husband (they certainly did) and that "doctors say she has no consciousness and her condition is irreversible" (hard to quarrel with given reports like this one). It might have been better to first ask a question that presented less information (as Gallup did), but calling the ABC description "untrue" or "deliberately slanted" is quite a stretch.
This point leads to a more general objection that Rick Brady raised about the whole notion of "informed" questions:
Polling organizations like ABC News are not supposed to educate people regarding the issues they are polling. If a large portion of the public is not well informed on a subject matter related to an area in which they already have solidly formed opinions (I wouldn't want to be on "life support" or a "vegetable," therefore I think Terri's tube should be removed and Congress should stay out of it), in most cases, three sentences of preamble will not be sufficient to elicit a respondent's true opinion.
I have to disagree with Rick, though I think it is fair to say he speaks for a vocal minority of academic survey methodologists. Political pollsters frequently encounter complex issues about which the public lacks knowledge or "solidly formed opinions." The Schiavo case is a perfect example. Even after the blanket coverage of last weekend, nearly half of ABC's respondents said they had been following the case "not very closely" (16%) or "not at all" (28%).
As a result, we frequently ask questions that first provide a bit of information or context, especially when an issue is poised to get much greater attention or become the focus of a political campaign. I have written hundreds, perhaps thousands of such questions, and 99% of the time the results are not intended for public consumption. Our goal is not to create propaganda but to accurately gauge how opinions might develop with more information, and we struggle to find language that simulates the dialogue that will ultimately play out in the media. As the Schiavo example proves, this task is not easy.
Republican pollster John McLaughlin told the NY Sun that he "would have worded the [Schiavo] questions differently." I am sure that's true - I probably would have taken a different tack as well. However, as with recent "informed" questions on Social Security, if you give this task to 20 pollsters, you will likely get 20 different questions. It is easy to quibble with ABC's approach but the charge from Captain's Quarters that they were either "incompetent" or "attempted to fool their viewers and readership with false polling that essentially lies about the [Schiavo] case" is grossly unfair.
UPDATE: Gallup released a subsequent one-night poll this morning. Among the questions asked:
As you may know, Terri Schiavo is a Florida woman in a persistent vegetative state who was being kept alive through the use of a feeding tube. The feeding tube was removed on Friday, an action that will result in her death within about two weeks. A federal judge made a ruling in the case today. First: Do you agree with the federal judge's decision that resulted in the feeding tube being left unattached, or do you disagree and think the federal judge should have ordered the feeding tube to be re-attached?
52% agree with judge
39% disagree with judge
UPDATE II: Our friend Mickey Kaus disagrees, to put it mildly. The crux of his argument is that the public perceives "life support" to mean a respirator and a patient who will stop breathing within minutes of its removal, a condition he argues is considerably worse than Terri Schiavo's. If the public did perceive it that way, the term "life support" would likely bias the results. The question is whether we have any evidence of such a perception.
The best place to look is the other polls that make no reference to "life support" or what "doctors say" about Schiavo's state of consciousness or chances of survival. Fox showed 59% support for removing the tube, Gallup showed 58% support. Thus, Mickey points out that "no other poll has as large an anti-tube majority (63%) as ABC's." Now comes a new CBS News poll, which asked the following:
Q13. Terri Schiavo has been in a persistent vegetative state since 1990. Terri's husband says his wife would not want to be kept alive under these circumstances and he wants her feeding tube removed. Terri's parents believe her condition could improve and they want the feeding tube to remain. How closely have you been following news about the case -- have you been following it very closely, somewhat closely, not too closely, or not at all?
32% very closely
44% somewhat closely
17% not too closely
6% not at all
1% don't know
Q14. What do you think should have happened in this case -- should the feeding tube have been removed or should it have remained?
61% Should be removed
28% Should remain
11% Don't know
Q19 What should happen now? Should the feeding tube be re-inserted, or not?
7% Don't know
These questions appear to indicate an "anti-tube" majority of roughly the same size as indicated by the ABC poll without any mention of "life support" or what "doctors say" about Schiavo's condition.
However, I'll hedge for the time being, only because the CBS release is a bit confusing. First, the heading "Partial Sample" that appears over the results for Q14 seems to apply to Q19 as well. This would usually imply that the sample of 737 adults was randomly divided in two, with half the respondents hearing Q14 and half Q19. That would normally make sense, although asking "what should happen now" in Q19 without the introduction from Q14 seems odd. Adding to my confusion is the jump in the labeling from Q14 to Q19, followed by Q15 through Q18. Hopefully, someone from CBS will help clarify the mechanics.
UPDATE III: I spoke with Kathy Frankovic at CBS who helped clarify their release. The two questions on whether the Schiavo feeding tube should have been removed or reattached were labeled "partial sample" because they were only asked on the second night of interviewing. The survey had originally included a different version of Q14 that had been written erroneously in the future tense (e.g. What do you think should happen in this case -- should the feeding tube be removed or should it remain?). Since that language was inaccurate (the tube had already been removed), they decided to replace that question with the two-question sequence above for the second night of interviewing.
As a result, these two questions (Q14 & Q19) were asked of fewer respondents (n=321 unweighted) than the full sample (n=737) and thus carry a bigger margin of error (+/-6%) than the full sample (+/-4%). Also, results for these two questions, like the ABC survey earlier in the week, are subject to the same caveats about one-night polls described above.
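The two reported margins track the standard formula. A quick check in Python (again assuming simple random sampling at the worst case p = 0.5; the reported figures appear to be rounded up, and may also reflect a design effect):

```python
import math

def moe_points(n, z=1.96, p=0.5):
    # 95% half-width in percentage points under a simple-random-sample assumption
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (321, 737):
    print(n, round(moe_points(n), 1), "-> reported", math.ceil(moe_points(n)))
# n=321 works out to about 5.5 points, rounding up to the reported 6;
# n=737 works out to about 3.6 points, rounding up to the reported 4
```

Halving a sample inflates the margin of error by roughly the square root of two, which is why split-sample questions like Q14 and Q19 are noticeably noisier than full-sample results.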
According to Frankovic, the second new question was labeled Q19 because they typically leave gaps in question numbering for insertions of new questions when these situations arise. The questions were asked in the order they appear in the release. Note that the PDF document has no Q7, Q11 or Q12.
Frankovic also provided the party identification results for the weighted data: 27% Republican, 32% Democrat, 41% other or don't know. Some may have seen different numbers posted in error today on DailyKos.
ONE LAST THOUGHT: After reflecting on the comments on this post, there is one word I wish I had written differently: “defensible” (as in, “was the language of their question defensible?”). A better word would have been “fair” or as Kaus put it, “reasonably calculated to produce an accurate poll of what people think.”
Gerry Daly, who also had problems with my use of “defensible,” wrote: “We should strive for them to conduct polls that have fair wording and that provide the most bias-free read of the public that is reasonably attainable.” No argument there.
My point about the court documents and the legal definition stemming from the Cruzan decision was not just about technical definitions but about language: This is a matter of opinion (for now), but I doubt that ordinary Americans are making the same distinctions regarding “life support” and “tube feeding” as those who are passionately interested in this story. Remove the feeding tube and Terri Schiavo dies. As a matter of language and plain meaning, how is that not “life support?”
So in that regard, I think that while far from perfect, the ABC question was fair. Others -- obviously -- disagree. Read the comments for a sampling.
March 21, 2005
Tired of Exit Polls Yet?
For at least a month, I have intended to do a wrap-up post (or posts) on what the exit polls can (and cannot) tell us about allegations of fraud in the 2004 elections. I keep putting off this task for a number of reasons, not the least of which is a concern that I have already devoted too much time to this subject. Most of what I would write I have already written in some form, and I wonder whether regular readers are growing bored with it.
So I'd like to hold an informal vote: Should MP continue to focus on the exit poll controversy? If you have feelings on this question either way, please EMAIL me with your opinion.
I ask for email because I want to try to adhere to the notion of "one reader, one vote." I will ignore messages from addresses that bounce back, should I reply. The comments section is open, and thoughts are welcome there as always, but I will not count votes from phony email addresses. As always, signed responses are greatly appreciated and possibly given greater weight. I promise I will not quote you without permission, and MP never sends unsolicited email to anyone.
Hello Exit Polls My Old Friend
The release of several new papers on the 2004 exit poll controversy brings me back to this familiar topic. The first paper, from a team of academics with considerable survey expertise, breaks no new ground but provides an excellent overall primer on the controversy. The second, by frequent Mystery Pollster commenter Rick Brady, goes further, taking on those whose widely circulated Internet postings proclaim evidence of fraud in the exit polls.
The first, a "working paper" released on March 11 by the Social Science Research Council (SSRC) of the National Research Commission on Elections and Voting, is noteworthy for the expertise of its authors. Michael Traugott of the University of Michigan, Benjamin Highton of the University of California (Davis) and Henry Brady of the University of California (Berkeley) are political scientists with scores of journal articles on voting behavior and survey methodology to their names. Traugott, the principal author, is a past president of the American Association for Public Opinion Research (AAPOR) and the co-author or editor (with Paul Lavrakas) of several books on survey methodology, including the very accessible The Voter's Guide to Election Polls, which MP would include on a list of recommended books if he ever got around to putting such a list together. (Full disclosure: twenty years ago, Traugott was the third reader on MP's undergraduate honors thesis.)
In short, these guys know what they are talking about.
Yet for all the academic firepower that Traugott and his colleagues bring to the exit poll debate, they break little new ground. They do present a balanced and thorough summary of the short history of the controversy and its key issues and include the most complete bibliography on the issue (including URL links) MP has seen to date. If nothing else, the Traugott paper is an excellent starting point for anyone grappling with this issue for the first time.
Traugott and his colleagues also make a very important point about the key issue that continues to frustrate those seeking an "explanation" from the exit pollsters for the discrepancy between the exit polls and the final results: When it comes to "nonrespondents" -- those who refuse to participate in a survey -- "proof" is inherently elusive. In reviewing the report from the National Election Pool (NEP) released earlier this year, they write:
[The report] is complicated in a way that many post-survey evaluations are by the fact that some information is essentially unknowable. This is especially true when one of the concerns is nonresponse, and there is no information from the nonrespondents to analyze. As a result, there are some sections of the report in which there is an extremely detailed level of disclosure about what the exit poll data show, but in other parts of the report there are only hypotheses about what might have been the cause for a particular observation. These hypotheses can guide future experiments in exit polling methodology or even direct changes in the methods, but they cannot explain in a strict causal sense what happened in the 2004 data collection (emphasis added, pp. 8-9).
A second paper, posted over the weekend by our friend Rick Brady of the blog Stones Cry Out, is a point-for-point rebuttal of the final version of Stephen Freeman's well-known paper, The Unexplained Exit Poll Discrepancy (MP reviewed the first version of the paper back in November). Brady, who has been studying graduate-level statistics on the way to a master's degree in public planning, assails every statistical weakness in Freeman's thesis. Many of the issues Brady raises will be familiar to MP's readers, but he does an excellent job of putting it all together and raises some statistical issues not included in the Traugott paper.
For MP, the most interesting aspect of Brady's review is his discussion of a subsequent paper by a team of PhDs (including Freeman) affiliated with the organization US Count Votes. Kathy Dopp, the President of US Count Votes (USCV), issued a public challenge "for any PhD level credentialled (sic) statistician who is affiliated with any university in America to find any statements in our 'Response to Edison/Mitofsky Report' that they believe are incorrect and publicly refute it."
Brady may be just a Master's Degree candidate, but he steps up to the challenge, essentially picking up where the Traugott paper leaves off. He observes:
The US Count Votes authors conclude that only one of two hypotheses are worthy of exploration: 1) the exit polls were subject to a consistent bias of unknown origin; or 2) the official vote count was corrupted. The question then becomes: did the NEP Report substantiate the first hypothesis? [p. 12]
Reviewing the NEP report, Brady concludes:
Given the number of NEP Report conclusions that included qualifiers such as "likely," "may," and "could," I understand how US Count Votes is concerned with the analysis. In effect, the NEP Report never (from what I can tell) rejected the null hypothesis in a classical sense. However, the contention that "[no] data in the report supports the hypothesis that Kerry voters were more likely than Bush voters to cooperate with pollsters" is not in the least bit accurate. The NEP Report presented volumes of information that most analysts agree "suggests" support for the hypothesis that differential non-response was the cause of the observed bias in the exit polls [pp. 13-14, emphasis added].
The paper has much more. Brady has been a loyal FOMP (Friend of Mystery Pollster), so I may be accused of some bias on this score. Yet I hope other prominent observers will agree: Brady's paper is a must read for those still genuinely weighing the arguments on the exit poll controversy.