On Reporting Standards and “Scientific” Surveys


Two weeks ago, I posted a two-part series on an online spring break study conducted by the American Medical Association (AMA).  That discussion coincided with an odd confluence of commentary on online polls and the standards the news media use to report on (or ignore) them.  I want to review some of that commentary because it gets to the heart of the challenges facing the field of survey research and, not coincidentally, the central theme of much of what I write about on Mystery Pollster:  What makes for a "scientific" poll? 

Let’s start with the very reasonable point raised by MP reader JeanneB in the comments section of my second post on the online AMA spring break survey:

I’m disappointed that you seem to go easy on the media’s role in this. More and more news outlets have hired ombudsmen in recent years. They also need to add someone to filter polls and ensure they’re described accurately (if they get used at all) . . .

[The media] have a responsibility to police themselves and set some kind of standard as to which polls will be included in their coverage and how they will be described. Please don’t let them off the hook for swallowing whole any old press release disguised as a "poll".

I am sympathetic to the media in this case because of the way the AMA initially misrepresented its survey, calling it a "random sample" complete with a margin of error.  As one pollster put it to me in an email, "if the release has the wrong information (e.g., margin of error) it is very hard to expect the media to police that."

However, the media standards for poll reporting are certainly worth discussing.  Many do set rigorous standards for the sorts of poll results they will report on.  Probably the best example is the ABC News Polling Unit, which according to its web site,

vets all survey research presented to ABC News to ensure it meets our standards for disclosure, validity, reliability, and unbiased content. We recommend that our news division not report research that fails to meet these standards.

Thanks to the Public Eye — the online ombudsman-like blog site run by CBS News — and the pollsters at CBS, we can now read their internal "standards for CBS News poll reporting" (posted earlier today in response to an email I sent last week).  The Public Eye post is well worth reading in full, but here is a pertinent excerpt. The underlined sentence represents "new additions to the CBS News standards:"

Before any poll is reported, we must know who conducted it, when it was taken, the size of the sample and the margin of statistical error. Polling questions must be scrutinized, since slight variations in phrasing can lead to major differences in results. If all the above information is not available, we should be wary of reporting the poll. If there are any doubts about the validity, significance or interpretation of a poll, the CBS News director of surveys should be contacted. The CBS News Election and Survey Unit will maintain a list of acceptable survey practices and research organizations.

Other news organizations clearly apply standards for what they will air or publish, although those standards may vary.  Conveniently, just a few days after my posts on the AMA spring break survey, Chuck Todd, editor in chief of The Hotline, posted his explanation of the Hotline’s policy with regard to online and automated surveys on the On Call blog:

There are a bunch of new poll numbers circulating in a bunch of states, thanks to the release of the latest online polls Zogby Int’l conducts for the Wall Street Journal‘s web site. We don’t publish or acknowledge the existence of these numbers in any of our outlets because we are just not comfortable that online panels are reliable indicators.

Todd’s commentary also spurred Roll Call political analyst Stu Rothenberg to weigh in on online polls and other newer methodologies and the news media’s habits with regard to reporting them.  He agreed with Todd that "pollsters have not yet figured out how to conduct online polls in a way that’s accurate."  He continued:

Like the Hotline, both Roll Call and my newsletter, the Rothenberg Report, don’t report on online polls either.  Well-regarded pollsters are right to say their methodology is unproven.

Of course, we aren’t the only ones skeptical about the reporting about online polls.  Other media outlets, including CNN and the print editions of the Wall Street Journal, generally don’t report on online polls either.  Many media outlets also ignore polls taken by automated phone systems rather than real people, because of concerns about their accuracy.  Unfortunately, others in the news media aren’t as discriminating.

[Rothenberg goes on to recount his efforts to contact the Wall Street Journal web site regarding its reporting and sponsorship of Zogby’s online polls.  The Roll Call column will eventually be available to all on Rothenberg’s site; for now, it is available only to Roll Call subscribers.  I will definitely have more to say about Rothenberg’s column soon]. 

No one can argue with the need for standards in the way the news media reports polls.  News organizations have to determine which polls are newsworthy, just as they must judge the newsworthiness of any other story.  To use the language of computer programming, these judgments are inherently binary, all or nothing.  A poll is either worthy of publication or it isn’t. 

Unfortunately, the big challenge is finding the line that separates "scientific" surveys from lesser research, and that line is not always easy to draw.  A "scientific" survey based on a random sample is still subject to non-response, coverage, or measurement error, potential sources of error that have nothing to do with sampling and are not accounted for by the "margin of error."  In other words, not all "scientific" surveys are created equal.  We should not assume that the results of a poll are infallible (within the range of sampling error) simply because it started with a "scientific" random sample.
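To make that point about the "margin of error" concrete, here is a minimal sketch, in Python with purely illustrative numbers (no figures from any poll discussed above), of how that statistic is typically computed for a simple random sample.  Notice that nothing in the formula speaks to non-response, coverage, or measurement error; it reflects sampling variability alone.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a
    simple random sample.

    This captures sampling error only; it says nothing about
    non-response, coverage, or measurement error.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative example: a 50% result from a sample of 1,000 respondents.
moe = margin_of_error(0.50, 1000)
print(f"+/- {moe * 100:.1f} percentage points")  # roughly +/- 3.1 points
```

The same +/- 3.1 points would be printed whether the 1,000 interviews came from a carefully drawn random sample or a self-selected volunteer panel, which is exactly why the statistic, by itself, cannot certify a survey as "scientific."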

At the same time, it would be a mistake to dismiss all polls that fail to make the media cut as "cheap junk."  For example, as both Rothenberg and Todd point out, most major news organizations also refuse to report on automated polls that use a recorded voice rather than a live interviewer to ask questions and select respondents within each household.  Yet except for the recorded voice, these polls use the same methodology as other "scientific" polls, including random digit dial samples.  Yes, conventional pollsters have certainly raised "concerns about their accuracy."  But do these surveys deserve to be painted with the same broad brush as non-random samples drawn from internet-based volunteer panels?  The mainstream news media pollsters argue that they do – MP is less certain and generally less skeptical of the automated polls. 

The irony is that in the same week that some Mystery Pollster readers asked why the news media chose to report one particular online poll, a group of professional pollsters debated the merits of ignoring such polls.  Chuck Todd’s comments helped set off an unusually heated discussion when they were posted to the members-only LISTSERV of the American Association for Public Opinion Research (AAPOR – MP is a member and regular reader).  While most were skeptical of online polls (and universally critical of the AMA for misreporting its methodology), some questioned the all-or-nothing ban on reporting of non-probability samples.  Here is one especially pertinent example: 

What I do find inexcusable is the attitude that only surveys that claim (however tenuously) to use a probability sample can be reported on. The only reason for this is that it is easier to pretend that the sampling error is the only survey error and that therefore any survey that is not based on a probability sample is "junk." That is simply not true.

This debate is far bigger and more important than a single blog post.  In many ways, it gets to the underlying theme of most of the controversies discussed on Mystery Pollster:  What makes for a "scientific" poll?  More specifically, at what point do low rates of coverage and response, and deviations from pure probability methods, so degrade a random sample as to render it less than "scientific?"  Can non-random Internet panel studies ever claim to "scientifically" project the attitudes of some larger population?

Because these are the most important questions facing both producers and consumers of survey data, this site will continue to take a different approach.  With respect to automated and internet polls, MP will certainly "acknowledge their existence" and, when appropriate, report their results.  But I hope to go further, explaining their methodology and putting their results under a microscope in search of empirical evidence of their accuracy and reliability (or the lack thereof).   MP’s approach will be to immerse rather than ignore, because as noted academic survey methodologist Mick Couper put it a few years ago, "every survey can be seen as an opportunity, not only to produce data of substantive relevance, but also to advance our knowledge of surveys."

PS:  My recent article in Public Opinion Quarterly included some pertinent advice for consumers on what to make of automated and Internet polls.  To review it, go to this link and search on "procure and consume."

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.