Corrections!: An AMA Spring Break Update

Internet Polls, Legacy blog posts, Polls in the News, Sampling Error, Sampling Issues

We have had some new developments over the last few days regarding the online Spring Break study conducted earlier this year by the American Medical Association (AMA).  The story, as long-time readers will recall, involved an AMA release that initially misrepresented the study, calling it a "random sample" complete with a margin of error and implying in some instances that results from a small subgroup of women who had actually gone on Spring Break trips represented the views of all the women in the survey.  While my posts on the subject received a fair amount of attention in the blogosphere, the mainstream media — including outlets that had reported the misleading survey — largely ignored the controversy.  This week that changed.

Here are details and links:

Although I had missed it, the New York Times did make a formal correction to a Week in Review story that cited results of the poll, soon after American Association for Public Opinion Research (AAPOR) President Cliff Zukin wrote the Times to complain.  The correction now appears at the end of versions of the story available on the Web or through a Nexis search:

For the Record

A chart on March 19 about the history of spring break referred incompletely to an American Medical Association survey of female college students and graduates on vacation behavior. It was conducted online and involved respondents who volunteered to participate; they were not chosen at random.

Earlier this week, the Washington Post’s Howard Kurtz devoted his Media Notes column to the story.   Kurtz reviewed some of the most colorful headlines and quotations from the initial media coverage.   "At the risk of spoiling the fun," he concluded, "it must be noted that this poll had zero scientific validity."

Kurtz also quotes Richard Yoast, the director of the AMA’s Department of Alcohol, Tobacco and Other Drug Abuse, as saying,

[H]is organization posted a correction on its Web site to note that this was not a nationwide random sample and should not have included a margin of error, as in standard polls. "In the future, we’re going to be more careful," he says.

While they are at it, the AMA might want to be a bit more careful about the way they post corrections.  As noted in my original post on this subject, the AMA did correct the methodology blurb in their online release, but the corrected version includes neither a trace of the original misrepresentation nor any statement that the current version corrects the original.  Also, as Kurtz points out, the corrected AMA release continues to highlight statistics based on "only the 27 percent of the 644 respondents who said they had actually been on spring break," yet still "make[s] no distinction between those who have taken such trips and those who haven’t"  (see this post for details). 

The appearance of the Kurtz item may have been the reason that the Associated Press issued this correction just yesterday: 

Correction: Spring Break Risks story

Eds: Members who used BC-Spring Break Risks, sent March 7 under a Chicago dateline, are asked to use the following story.

05-31-2006 15:23

  CHICAGO (AP) _ In a March 7 story about an American Medical Association survey on spring break drinking and debauchery among college women and graduates, The Associated Press, using information provided by the AMA, erroneously reported how the results were obtained. The AMA now says participants were members of an online panel, not a random sample.

Finally, today’s Numbers Guy column by the Wall Street Journal’s Carl Bialik takes a close look at the story and the new communications initiative that AAPOR will undertake to try to react to stories like this more quickly:

Sixty years after its founding, a key association of professional pollsters is dismayed with all the bad survey numbers in the press. In an overdue response, the group is seeking new ways to curtail coverage of faulty research…

"Our ability to conduct good public opinion and survey research is under attack from many sides," the group’s long-range planning committee wrote in a May report. As part of its response, Aapor, as the group is known, plans to hire a staffer to spot and quickly respond to faulty polls.

If Aapor does come down hard, and quickly, on bad research, it could drive pollsters to do better work and disclose their methods more fully, and perhaps even introduce higher standards to what is today an unruly industry. However, a solitary staffer will be hard-pressed to improve the treatment of polls by a numbers-hungry print and electronic press.  [link added]

[Interests declared:  I am an active AAPOR member and was recently elected to serve a two-year term as associate chair and chair of AAPOR’s Publications and Information Committee, and as such will help oversee AAPOR’s new communications initiative].

Bialik’s column is worth reading in full, as always, and Bialik and his editors are to be commended for their thorough and thoughtful treatment of this complex subject.  However, as long as we are on the topic, the Journal’s editors might want to consider some questions about the paper’s own reporting of online surveys:

  • If it was appropriate for the AMA to quickly correct the "troubling" assertion that its online study had a margin of error, why does the Journal continue to report a margin of error of "between 3.2 and 4.3 percentage points" for the online Zogby statewide polls that it sponsors (see the "methodology" link)?
  • Regarding the surveys conducted for the Journal by Harris Interactive, why does this March poll story refer to the survey as a "telephone poll" in the third paragraph, but as an "online survey" in the methodology blurb?  And why did the Journal’s Harris tracking polls shift from being consistently labeled "online surveys" in January and February to being identified as "telephone polls" in April and May?  And why do the tables in the April and May stories compare results across all of these studies without any reference to the apparent switch from the online mode to the telephone?
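The margin-of-error question above is worth unpacking: the familiar figure comes from a formula that assumes a simple random sample, which is precisely what a volunteer online panel is not. Here is a minimal sketch of that standard calculation; the sample sizes shown are my own back-of-the-envelope illustrations of what a "between 3.2 and 4.3 percentage points" range would imply for a true random sample, not figures from either survey:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.

    Uses the standard formula z * sqrt(p * (1 - p) / n); p = 0.5 gives
    the most conservative (largest) value. The formula's sampling-theory
    justification does not apply to self-selected volunteer panels.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative random-sample sizes that would produce margins of
# roughly 3.2 and 4.3 percentage points:
print(round(margin_of_error(938), 3))  # 0.032, i.e., 3.2 points
print(round(margin_of_error(519), 3))  # 0.043, i.e., 4.3 points
```

The point is not the arithmetic but its precondition: without random selection, there is no principled way to attach any such number to the results, which is why AAPOR objected to the AMA's original release.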

Perhaps more corrections are in order.

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.