An Online Poll on an Online Activity

Filed under: Internet Polls, Legacy blog posts, Sampling Error, Sampling Issues

One of the gratifying things about writing this blog is the collective power of Mystery Pollster readers.  Last week, I emailed some questions to Zogby International about the methodology they used for a recent poll conducted on behalf of the online gambling industry. The poll had been the subject of a column by Carl Bialik for the Wall Street Journal Online, something I discussed in a post last Friday.  Zogby’s spokesman ignored my emails.  However, over the weekend MP reader Ken Alper reported in a comment that he had been a respondent to the Zogby gambling poll. He also confirmed my hunch: Zogby conducted the survey online. That fact raises even more questions about the potential for bias in the Zogby results.

Why is it important that the survey was conducted online?

1) This survey is not based on a “scientific” random sample — The press release posted on the web site of the trade group that paid for the poll makes the claim that it is a “scientific poll” of “likely voters.”   As we have discussed here previously, we use the term scientific to describe a poll based on a random probability sample, one in which all members of a population (in this case, all likely voters) have an equal or known chance of being selected at random. 

In this case, only individuals who had previously joined the Zogby panel of potential respondents had that opportunity.  As this article on Zogby’s web site explains, their online samples are selected from “a database of individuals who have registered to take part in online polls through solicitations on the company’s Web site, as well as other Web sites that span the political spectrum.”  In other words, most of the members of the panel saw a banner ad on a web site and volunteered to participate.  You can volunteer too – just use this link.

Zogby claims that “many individuals who have participated in Zogby’s telephone surveys also have submitted e-mail addresses so they may take part in online polls.”  Such recruitment might help make Zogby’s panel a bit more representative, but it certainly does not transform it into a random sample.  Moreover, Zogby tells us nothing about the percentage of such recruits in his panel or the percentage of telephone respondents who typically submit email addresses.  Despite Zogby’s bluster, this claim does not come close to making his “database” a projectable random sample of the U.S. population.
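To make the distinction concrete, here is a toy sketch in Python contrasting a true probability sample with an opt-in panel. The population size and the banner-ad and volunteer rates below are pure inventions for illustration; in a real opt-in panel, those self-selection probabilities are exactly what nobody knows.

```python
import random

# Hypothetical frame of likely voters (toy IDs, invented size).
population = list(range(100_000))

# Probability sample: every member has the same known chance of
# selection (n/N), which is what makes sampling error calculable.
probability_sample = random.sample(population, k=1_000)

# Opt-in panel: inclusion depends on self-selection. The rates below
# are assumptions; in practice they are unknown and unequal.
def joined_panel(person_id: int) -> bool:
    saw_banner_ad = random.random() < 0.02   # assumed rate
    volunteered = random.random() < 0.10     # assumed rate
    return saw_banner_ad and volunteered

opt_in_panel = [p for p in population if joined_panel(p)]
```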

2) The survey falsely claims to have a “margin of error” — Specifically, the gambling survey press release reports a margin of error of 0.6 percentage points.  That happens to be exactly the margin you get when you plug the sample size (n=30,054) into the formula for a confidence interval that assumes “simple random sampling.”  In other words, to have a “margin of error,” the survey has to be based on a random probability sample.  But see #1.  This is not a random sample.
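For readers who want to check the arithmetic, here is a minimal sketch of the simple-random-sampling formula the release appears to have used. Plugging in n=30,054 does reproduce the 0.6-point figure, but the formula’s validity rests on an assumption this survey does not meet.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error under simple random sampling;
    p=0.5 is the conservative (widest-interval) choice."""
    return z * math.sqrt(p * (1 - p) / n)

# Reproduces the release's figure, but only under the (false here)
# assumption of a random probability sample.
print(round(100 * margin_of_error(30054), 1))  # -> 0.6 percentage points
```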

Several weeks ago, I wrote about an online panel survey conducted on behalf of the American Medical Association that similarly claimed a “margin of error.”  But in that case, the pollster quickly corrected the “inadvertent” error when it was brought to his attention:

We do not, and never intended to, represent [the AMA spring break survey] as a probability study and in all of our disclosures very clearly identified it as a study using an online panel.  We reviewed our methodology statement and noticed an inadvertent declaration of sampling error.

I emailed Zogby spokesman Fritz Wenzel last Thursday to ask how they justified the term “scientific” and the claim of a “margin of error.”  I have not yet received any response.

3) The survey press release fails to disclose that it was conducted online — Check the standards of disclosure of the National Council on Public Polls, standards adhered to by most of the major public pollsters.  They specifically require that “all reports of survey findings” include a reference to the “method of obtaining the interviews (in-person, telephone or mail).”  Obviously, Zogby’s gambling poll release includes no such reference.

Now, it is certainly possible that the press release in question was authored by the client (the gambling trade group) and not by Zogby International.  My email to Fritz Wenzel included this question.  The subsequent silence of the Zogby organization on this issue is odd, since most pollsters, including my own firm, reserve the right (usually by contract) to publicly correct any misrepresentation of data made by their clients.

4) This online survey concerned the regulation of online activity — Even surveys conducted using random sampling are subject to other kinds of errors.  Specifically, when those not covered by the sample or those who do not respond to the survey have systematically different opinions than those included in the survey, the results will be biased.  At a minimum, Zogby’s methodology can include only Americans who are online.  More important, it does not randomly sample online Americans.  Rather, it samples from a “database” of individuals who opted in, many because they saw a banner advertisement on a web page.  As such, these individuals are almost by definition among the heaviest and most adventurous of online users.
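The arithmetic of coverage bias is worth spelling out. Here is a back-of-the-envelope sketch, with every number invented for illustration: if the people a survey cannot reach hold different opinions than the people it can, the estimate is off by the uncovered share times the opinion gap, and no sample size fixes it.

```python
# Toy coverage-bias arithmetic; all figures are invented.
covered_share = 0.70       # e.g., share of adults the frame can reach
support_covered = 0.40     # support among the covered
support_uncovered = 0.60   # support among the uncovered

true_value = (covered_share * support_covered
              + (1 - covered_share) * support_uncovered)
survey_estimate = support_covered          # what a covered-only poll sees
print(round(survey_estimate - true_value, 2))  # -> -0.06: six points off
```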

While MP is intrigued by new methodologies that claim to manipulate the selection or weighting of results from such a sample to resemble the overall population, he warns readers of what should be obvious:  The potential for bias using such a technique will be greatest when the survey topic is some aspect of online behavior or the Internet itself.  With these topics, the differences between the panel and the population of interest are likely to be greatest. 
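For the curious, one common version of such a technique is post-stratification weighting. What follows is a generic sketch with invented respondents and census targets, not a description of any particular firm’s method; note that matching demographics does nothing about attitudes tied to heavy Internet use itself.

```python
# Minimal post-stratification sketch; respondents and targets invented.
respondents = [
    {"age": "18-34", "answer": "oppose"},
    {"age": "18-34", "answer": "oppose"},
    {"age": "18-34", "answer": "support"},
    {"age": "65+", "answer": "support"},
]
population_share = {"18-34": 0.30, "65+": 0.70}  # assumed census targets

# Share of the sample in each age cell.
sample_share = {}
for r in respondents:
    sample_share[r["age"]] = sample_share.get(r["age"], 0.0) + 1 / len(respondents)

# Weight each respondent so the weighted sample matches the targets.
for r in respondents:
    r["weight"] = population_share[r["age"]] / sample_share[r["age"]]

support = sum(r["weight"] for r in respondents if r["answer"] == "support")
print(round(100 * support / sum(r["weight"] for r in respondents), 1))  # 80.0
```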

I searched for but could not find a random sample survey that could show the relationship between attitudes on online gambling and time spent online.  However, I did find data on potential government restrictions that allow for such an analysis, from a survey conducted in the summer of 2002 by the Pew Internet & American Life Project.  The survey asked a question about government monitoring of email and also asked respondents how often they went online.  A cross-tabulation shows an unsurprising pattern.  Those who went online “daily” opposed government monitoring of email by a margin of twelve points (42% yes, 54% no).  Those who were offline altogether supported monitoring by ten points (49% yes, 39% no).  Heavy online users are more skeptical of government regulation of the Internet than the population as a whole.
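Here is that cross-tab reduced to code, using only the percentages reported above (row percentages from the 2002 Pew survey, not raw counts):

```python
# Reported percentages from the 2002 Pew Internet & American Life
# survey question on government monitoring of email.
crosstab = {
    "goes online daily": {"yes": 42, "no": 54},
    "not online at all": {"yes": 49, "no": 39},
}

for group, pcts in crosstab.items():
    margin = pcts["yes"] - pcts["no"]
    verdict = "supports" if margin > 0 else "opposes"
    print(f"{group}: {verdict} monitoring by {abs(margin)} points")
# goes online daily: opposes monitoring by 12 points
# not online at all: supports monitoring by 10 points
```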

Now obviously, we can only speculate whether such a pattern might apply to the regulation of Internet gambling, but common sense suggests that it is a strong possibility.  And it provides yet another reason for skepticism about the results of this particular Zogby poll. 

So to sum up what we have learned:

In Carl Bialik’s column, American Association for Public Opinion Research (AAPOR)  President Cliff Zukin described the survey questions as “leading and biased.”  Further, the survey release failed to disclose that it was conducted online and made the statistically indefensible claim to be a “scientific poll” with a “margin of error.”   The failure to disclose the use of sampling from an online panel is particularly deceptive given that an online activity was the focus of the survey.  Add that all up and you get one remarkably misleading poll release. 

This story also presents a tough question for Carl Bialik’s editors at the Wall Street Journal Online:   Is it statistically defensible to report a “margin of error” for this non-probability sample?  And if not, why does the Wall Street Journal Online allow Zogby to routinely report a “margin of error” for the Internet panel surveys that the Journal sponsors?

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations, candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.