April 28, 2006
An "Immigration-Enforcement" Third Party?
Yesterday, our friend Mickey Kaus highlighted a question from a recent Rasmussen automated survey worth examining a bit more closely. The question asks voters to choose between "generic" Republican and Democratic candidates (with no stated immigration position) and a third party candidate that takes a hard-line anti-immigration position. The third-party candidate gets 30% of the vote, leading both Rasmussen and Kaus to speculate about the potential power of immigration to reshape our politics. Let me suggest an alternative: It may simply confirm the desire for a third party (at least in theory) by a large number of Americans regardless of the issues involved.
Courtesy of Scott Rasmussen, here is the full text of the two questions at issue (and remember that Rasmussen currently weights his survey a few points more Republican than other national samples of adults):
If the 2008 Presidential Election were held today, would you vote for the Republican candidate or the Democratic candidate?
17% Not sure
Suppose a third party candidate ran in 2008 and promised to build a barrier along the Mexican border and make enforcement of immigration law his top priority. Would you vote for the Republican, the Democrat, or the third party candidate?
30% Third party/other
18% Not sure
On his web site, Scott Rasmussen concludes:
This result probably reflects unhappiness with both parties on the immigration issue rather than a true opportunity for a third party. Historically, issues that drive third party candidates get co-opted by one of the major parties as they demonstrate popular appeal.
Blogging at RealClearPolitics, he adds that the result should "be taken as an indication of the [immigration] issue's power rather than a literal projection of election outcomes."
Fair enough. And while there is good evidence elsewhere (especially here) that the immigration issue produces more division within the two political parties than between them, let me suggest another reason to be careful about reading too much into this particular question. It may tell us as much about the strong general desire for a third party candidate as it does about the power of the immigration issue specifically.
Consider this result from the just-released NBC/Wall Street Journal poll, which shows 45% favor the idea of a "new independent political party" and 29% oppose:
Tell me whether you would strongly favor, mildly favor, feel neutral about, mildly oppose, or strongly oppose this change: Build a new independent political party to run a credible candidate for president.
31% strongly favor
14% mildly favor
24% feel neutral
12% mildly oppose
17% strongly oppose
2% not sure
Or consider these questions asked in Gallup surveys in 2003:
In your view, do the Republican and Democratic parties do an adequate job of representing the American people, or do they do such a poor job that a third major party is needed? (10/10-12/2003, n=1,004)
56% Do an adequate job
40% Third party needed
4% Don't know/refused
Have you ever voted for an independent or a third party candidate for president, that is, a candidate for president who was not either a Republican or a Democrat? (9/19-21/2003, n=1,003)
28% Yes, have
71% No, have not
1% Don't know/refused
It is also worth expanding on one of Kaus' caveats: "Candidates with appealing specifics often beat undefined, generic party choices." That is true, in that questions that inform respondents about the specific issue positions of specific named candidates typically get more of a response (e.g. fewer undecideds) than questions posing only "generic" choices. However, Rasmussen's question is a bit unusual in that it includes both types of choices on the same question. To be honest, I have not seen that done before and am not quite sure what to make of the result.
Again, I do not want to minimize the potential for the immigration issue to divide the bases of both parties, particularly the conservative Republican base. And Rasmussen reports that his hypothetical tough-on-immigration third party candidate divides self-identified conservatives, getting 35% to the generic Republican's 36%, while liberals still overwhelmingly prefer the Democrat (65%) to the third party candidate (19%).** That result is worth pondering, even though, as Rasmussen appropriately warns, we should not consider it "a literal projection of election outcomes."
**Although note that self-identified conservatives outnumber liberals in Rasmussen's sample by roughly two to one (34% to 17%).
April 26, 2006
An Online Poll on an Online Activity
One of the gratifying things about writing this blog is the collective power of Mystery Pollster readers. Last week, I emailed some questions to Zogby International about the methodology they used for a recent poll conducted on behalf of the online gambling industry. The poll had been the subject of a column by Carl Bialik for the Wall Street Journal Online, something I discussed in a post last Friday. Zogby's spokesman ignored my emails. However, over the weekend MP reader Ken Alper reported in a comment that he had been a respondent to the Zogby gambling poll. He also confirmed my hunch: Zogby conducted the survey online. That fact raises even more questions about the potential for bias in the Zogby results.
Why is it important that the survey was conducted online?
1) This survey is not based on a "scientific" random sample -- The press release posted on the web site of the trade group that paid for the poll makes the claim that it is a "scientific poll" of "likely voters." As we have discussed here previously, we use the term scientific to describe a poll based on a random probability sample, one in which all members of a population (in this case, all likely voters) have an equal or known chance of being selected at random.
In this case, only individuals who had previously joined the Zogby panel of potential respondents had that opportunity. As this article on Zogby's web site explains, their online samples are selected from "a database of individuals who have registered to take part in online polls through solicitations on the company's Web site, as well as other Web sites that span the political spectrum." In other words, most members of the panel saw a banner ad on a web site and volunteered to participate. You can volunteer too - just use this link.
Zogby claims that "many individuals who have participated in Zogby's telephone surveys also have submitted e-mail addresses so they may take part in online polls." Such recruitment might help make Zogby's panel a bit more representative, but it certainly does not transform it into a random sample. Moreover, he tells us nothing about the percentage of such recruits in his panel or the percentage of telephone respondents that typically submit email addresses. Despite Zogby's bluster, this claim does not come close to making his "database" a projectable random sample of the U.S. population.
2) The survey falsely claims to have a "margin of error" -- Specifically, the gambling survey press release reports a margin of error of 0.6 percentage points. That happens to be exactly the margin you get when you plug the sample size (n=30,054) into the formula for a confidence interval that assumes "simple random sampling." In other words, to have a "margin of error," the survey has to be based on a random probability sample. But see #1. This is not a random sample.
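For the curious, here is the arithmetic. A quick sketch of the standard 95% confidence-interval formula for simple random sampling (the textbook formula, not anything specific to Zogby's internal procedures) reproduces the release's 0.6-point figure exactly:

```python
import math

def moe_srs(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, under simple
    random sampling at proportion p (worst case is p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Plugging in the reported sample size of n=30,054:
print(round(moe_srs(30054), 1))  # → 0.6
```

The formula yields 0.6 only because it assumes every member of the population had a known chance of selection -- precisely the assumption an opt-in panel violates.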
Several weeks ago, I wrote about an online panel survey conducted on behalf of the American Medical Association that similarly claimed a "margin of error." But in that case, the pollster quickly corrected the "inadvertent" error when brought to his attention:
We do not, and never intended to, represent [the AMA spring break survey] as a probability study and in all of our disclosures very clearly identified it as a study using an online panel. We reviewed our methodology statement and noticed an inadvertent declaration of sampling error.
I emailed Zogby spokesman Fritz Wenzel last Thursday to ask how they justified the term "scientific" and the claim of a "margin of error." I have not yet received any response.
3) The survey press release fails to disclose that it was conducted online -- Check the standards of disclosure of the National Council on Public Polls, standards adhered to by most of the major public pollsters. They specifically require that "all reports of survey findings" include a reference to the "method of obtaining the interviews (in-person, telephone or mail)." Obviously, Zogby's gambling poll release includes no such reference.
Now, it is certainly possible that the press release in question was authored by the client (the gambling trade group) and not by Zogby International. My email to Fritz Wenzel included this question. The subsequent silence of the Zogby organization on this issue is odd since most pollsters, including my own firm, reserve the right (usually by contract) to publicly correct any misrepresentations of data made by our clients.
4) This online survey concerned the regulation of online activity -- Even surveys conducted using random sampling are subject to other kinds of errors. Specifically, when those not covered by the sample or those who do not respond to the survey have systematically different opinions than those included in the survey, the results will be biased. At a minimum, Zogby's methodology can include only Americans who are online. More important, it does not randomly sample online Americans. Rather, it samples from a "database" of individuals who opted in, many because they saw a banner advertisement on a web page. As such, these individuals are almost by definition among the heaviest and most adventurous of online users.
While MP is intrigued by new methodologies that claim to manipulate the selection or weighting of results from such a sample to resemble the overall population, he warns readers of what should be obvious: The potential for bias using such a technique will be greatest when the survey topic is some aspect of online behavior or the Internet itself. With these topics, the differences between the panel and the population of interest are likely to be greatest.
I searched for but could not find a random sample survey that could show the relationship between attitudes on online gambling and time spent online. However, I did find data on potential government restrictions that allow for such analysis, in a survey conducted in the summer of 2002 by the Pew Internet & American Life Project. The survey asked about government monitoring of email and also asked respondents how often they went online. A cross-tabulation shows an unsurprising pattern: Those who went online "daily" opposed government monitoring of email by a margin of twelve points (42% yes, 54% no), while those who were offline altogether supported monitoring by ten points (49% yes, 39% no). Heavy online users are more skeptical of government regulation of the Internet than the population as a whole.
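The net margins above can be tallied directly from the toplines (a sketch; the group labels are my shorthand, the percentages are those reported from the Pew survey):

```python
# Support for government monitoring of email, by internet use
# (percentages as reported from the 2002 Pew Internet survey)
monitoring = {
    "online daily": {"yes": 42, "no": 54},
    "offline":      {"yes": 49, "no": 39},
}

# Net support = percent in favor minus percent opposed
for group, r in monitoring.items():
    print(f"{group}: net support {r['yes'] - r['no']:+d}")
# → online daily: net support -12
# → offline: net support +10
```

A 22-point swing in net support between the heaviest and lightest internet users is exactly the kind of gap that makes an opt-in online panel suspect on this topic.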
Now obviously, we can only speculate whether such a pattern might apply to the regulation of Internet gambling, but common sense suggests that it is a strong possibility. And it provides yet another reason for skepticism about the results of this particular Zogby poll.
So to sum up what we have learned:
In Carl Bialik's column, American Association for Public Opinion Research (AAPOR) President Cliff Zukin described the survey questions as "leading and biased." Further, the survey release failed to disclose that it was conducted online and made the statistically indefensible claim to be a "scientific poll" with a "margin of error." The failure to disclose the use of sampling from an online panel is particularly deceptive given that an online activity was the focus of the survey. Add that all up and you get one remarkably misleading poll release.
This story also presents a tough question for Carl Bialik's editors at the Wall Street Journal Online: Is it statistically defensible to report a "margin of error" for this non-probability sample? And if not, why does the Wall Street Journal Online allow Zogby to routinely report a "margin of error" for the Internet panel surveys that the Journal sponsors?
April 25, 2006
MP on WTWP
Short notice, I know, but yours truly will be a guest on Washington Post Radio this afternoon at about 2:30, along with Post pollsters Richard Morin and Claudia Deane. Those in the DC area can tune in at 1500 AM or 107.7 FM. You can also listen to the show live via streaming audio at this link.
April 24, 2006
CNN & ORC: The Real "Breaking News?"
But wait: despite yet another "new low" story (or perhaps because of it), MP readers may be less interested in the results of the survey than in the organization that conducted it: The Opinion Research Corporation (ORC). Remember that CNN recently severed its long-time survey partnership with the Gallup Organization and USAToday. Could ORC be the new partner and Gallup's replacement? It sure looks that way.
One who knows tells MP that ORC is one of the original "big four" of American commercial polling firms, along with Gallup, Harris and Roper. Like Roper, they have focused for most of their history on surveys conducted for corporate clients. Their web site tells us that ORC is an international "research and consulting firm," founded in 1938, with clients in "both the public and private sectors." A Google News search turns up a sampling of some of their recent projects and clients.
A warning to MP readers: Be careful of making too much of the "new low" comparisons to other recent surveys, particularly the most recent surveys conducted by Gallup and the now defunct CNN/USAToday/Gallup partnership. CNN is still the sponsor, and while the wording may be the same, the pollster and calling centers are different. As we have discussed previously (especially here and here), different polls can have different house effects that make for slightly but consistently different results. See especially the posts on this topic by Charles Franklin and Robert Chung.
On this point, note that in its PDF release, CNN appears to be separating the latest results from the "CNN/USAToday/Gallup Trends."
UPDATE (4/25): As he notes in the comments, Charles Franklin has posted his own thoughts on the new CNN data collected by ORC as well as yet another update of his job approval graphic. He also notes that the new CNN pdf release fails to include results for demographic items and makes the following point, with which I totally agree:
EVERY reputable pollster should be willing to release the topline results for their ENTIRE survey, not just the items they include in their story. It is crucial for credibility and for more informed interpretation of the poll results. (I'm not talking about embargoing results for later stories which is fine-- the demographics don't fall under any embargo and should be released immediately.)
April 21, 2006
The Question That Answers Itself
A few weeks ago, our friend Mickey Kaus described a question asked on a recent Time Magazine poll as having "comically biased wording." I was not ready to be quite so harsh about that particular poll. Well, this week courtesy of Carl Bialik of the Wall Street Journal Online, we have a different poll conducted by Zogby International whose questions and their wording truly meet the "comical" standard.
The second half of Bialik's weekly Numbers Guy column looks at a recent Zogby survey on online gambling sponsored by the online gambling industry. "It appears that the sponsor of the poll influenced the way it was conducted," Bialik writes, "particularly in the way the questions were phrased." He is putting it mildly.
Here is the most brazen of the questions used in the survey press release to support the assertion that "Americans overwhelmingly do not want" federal laws restricting online gambling:
More than 80% of Americans believe that gambling is a question of personal choice that should not be interfered with by the government. Do you agree or disagree that the federal government should stop adult Americans from gambling with licensed and regulated online sports books and casinos based in other countries?
Yes, you're reading that right. The text of the Zogby question actually answers itself. Or, to be more precise, it tells the respondent what "80% of Americans believe" about government regulation of gambling just before asking them what they believe about such regulation. It is thus not exactly surprising, as Bialik put it,
that after being told that most Americans don't want the government to interfere, some 71% of the respondents to this question signaled they, too, were against a government ban.
To be serious for a moment, the issue here is that the poll press release makes the following claim:
[The poll] establishes that Americans overwhelmingly do not want the federal government enacting laws that restrict a recreational activity such as online gambling.
No. At best, these results establish that a pollster can push respondents to oppose such restrictions. The obviously leading nature of the questions cited in the release makes them of little value in measuring the opinions Americans currently hold about online gambling. It is one thing to design "projective" questions in order to "see how different arguments play" (as Humphrey Taylor of Harris Interactive puts it in the Bialik article). It is quite another to try to pass off such projective questions as a "fair and balanced" reading on what Americans currently think, which is exactly what this press release does.
There is much more in Bialik's piece, including reaction from AAPOR President Cliff Zukin and a response from Zogby spokesman Fritz Wenzel. It is definitely worth reading in full.
However, MP has a hunch there is more to this story.
I am doing some additional digging, but here's a hint: The press release describes the study as a "scientific poll of over 30,000 likely voters" interviewed over a two-week period with a "margin of error" of "0.6 percentage points." Moreover, according to Bialik's column, the survey sponsor claims they paid "less than $10,000" for the survey.
I'll put it this way: I'm aware of no pollster or calling center that will complete a telephone survey of 30,000 likely voters for less than 33 cents an interview.
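The back-of-the-envelope arithmetic behind that figure, taking the two numbers from the press release and Bialik's column at face value:

```python
budget = 10_000      # reported maximum survey cost, in dollars
interviews = 30_054  # reported sample size

# Cost ceiling per completed interview, in dollars
print(round(budget / interviews, 2))  # → 0.33
```

Thirty-three cents is an order of magnitude below any plausible per-complete cost for live telephone interviewing, which is what first suggested the survey was done some other way.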
More to come...
April 20, 2006
Salience of Immigration Rising
Gallup brings us the polling story of the day with a report of a "significant" increase in the percentage of Americans who consider immigration and gasoline prices the most important issues facing the country. Gallup reports that mentions of immigration on this open-ended question (in which respondents answer in their own words) have increased from 6% in March to 19% in April. Only Iraq ranks higher at 25% (up slightly from 20% in March). Mentions of fuel and oil prices have increased from 5% to 11%. A complete analysis of these data by Gallup's Jeffrey Jones is available to non-subscribers for today only.
Note what Jones has to say about how the results vary by party identification:
Democrats and Republicans disagree over the nation's top problem. For Democrats, it is Iraq. Thirty percent say so, compared with 21% of independents and 15% of Republicans. Among Republicans, immigration is seen as the most pressing concern. Thirty percent of Republicans cite immigration as the most important problem, compared with 16% of independents and 11% of Democrats.
A Gallup analysis from just two weeks ago (now available to subscribers only) pooled data for this question from the first three months of 2006. It found a similar pattern. Only 4% of Americans named immigration as the most important problem, although mentions were more frequent among Republicans (6%) and independents (5%) than among Democrats (2%).
In the first quarter of 2006, Gallup also found that Democrats were more likely to mention Iraq (31%), general dissatisfaction with government (8%) and unemployment and jobs (5%) as the most important problems. Republicans were more likely to mention terrorism (14%), ethics and morality (9%), national security (6%) and again, immigration (6%).
UPDATE: In the comments, Tom Riehle reminds us that the most recent AP/IPSOS poll showed a similar result. Open-ended mentions of immigration increased from 3% to 13% from January to April.
UPDATE II: Results from the surveys released today by the Pew Research Center and Harris Interactive also confirm the trend. Harris shows immigration increasing from 4% to 19% between March and April as the most important problem. Pew shows an increase from 1% to 15% between March and April in the percentage who name immigration or border issues as the "FIRST news story that comes to mind when you think about what's been in the news lately."
UPDATE III: In the comments section, Andrew Tyndall speculates that the increased salience may be coming from those who fear "legislation may turn many immigrants into criminals, prohibit social services from helping them and obstruct plans to grant a path to citizenship." While we lack data directly on point, the fact that salience tends to be higher among Republicans than Democrats tends to undermine his argument. We could evaluate this hypothesis directly with a cross-tabulation of any of the "most important problem" questions above by attitudes on immigration. Unfortunately, as far as I can see, none of the releases do so.
However, ABC News provided something pretty close in a news release on April 9 (which uncharacteristically does not appear on their "Poll Vault" web page -- though you can see the text of the questions via the Washington Post). The ABC release included a chart (reproduced below) showing that those with "less tolerant views" on immigration -- chiefly Republicans and conservatives -- tended to rate immigration as most important to their Congressional vote:
Among the nearly two-thirds who favor [a program that would lead to citizenship], 54 percent say it'll be a very important issue in their vote, comparatively few. Among those who favor a temporary guest worker program, more, 62 percent, call it very important. And among those who favor felony status for immigrants with no work program - disproportionately Republicans and conservatives - far more, 79 percent, call it a top issue in their vote.
But be careful of jumping to the conclusion that an anti-immigrant policy rallies all of the GOP base. The Pew Research Center has presented strong evidence that attitudes on immigration tend to divide key coalitions within both parties. Check their Political Typology report from 2005 and their mega-immigration report released a few weeks ago (especially the table reproduced below). They show that opposition to immigration tends to be greatest among socially conservative and less-well-educated Republicans, but that upscale, well-educated, pro-business Republicans tend to be more ambivalent. A similar divide separates well-educated liberals (who tend to be most pro-immigration) from more downscale, African American or moderate Democrats.
April 19, 2006
Rasmussen and Party ID - Part II
One of the challenges in evaluating the data available from Rasmussen Reports is that Scott Rasmussen's methodology involves two significant departures from most conventional surveys. He uses an automated recorded voice technology (rather than live interviewers) to select and interview respondents, and he routinely weights his data by party identification. In Part I of this post, we looked at patterns in Rasmussen's unweighted party ID data. Today, let's consider Rasmussen's weighting procedure and its implications.
I originally asked Scott Rasmussen to explain his party weighting and likely voter selection procedure a few weeks before the 2004 election (as part of a larger summary of similar information on 24 other public polls). Here is his verbatim answer:
Our base model is 35% R 39% D and 26% other. However, once the sample is weighted to that model, responses that indicate likelihood of turnout can adjust that a bit. As a practical matter, our samples never vary more than 2 percentage points from that base model (and rarely by that much).
We believe that Party ID is something like loyalty to a sports team. Although its intensity and enthusiasm may ebb and flow, the party ID stays with an individual and is not subject to whims of the moment. Obviously, there are some changes over time. Changes in the partisan make-up of the Electorate are more likely the result of turnout and enthusiasm rather than people changing their minds.
A few months ago, I emailed Rasmussen with more questions and he further clarified their procedure. During 2004 Rasmussen always began with an initial screening question about voting history (that eliminated some self-described non-voters from the sample). He then weighted the initial sample so that partisanship was always 39% Democrat, 35% Republican and 26% with no party affiliation. Like a lot of other pollsters, Rasmussen tinkered with his likely voter "model," making it progressively tougher as the campaign progressed (adding other questions to the mix such as political interest). The different models allowed the ultimate party mix to vary from the initial weight target. As Rasmussen explains it:
By Election Day , our baseline was still 35-39-26 but our Likely Voters sample had just over 36% R and just under 38% D. If we [had] adjusted to 37-37-26, we [would have] nailed the actual election results even more closely (our final projection before Election 2004 was within half a point of each candidate's actual total).
In 2004, Rasmussen continued to report on the Bush job rating among voters through mid-December. After a brief hiatus in January, he resumed tracking in February 2005 with a sample of all adults and a different weighting target. Again, Scott Rasmussen explains:
For 2005, we adopted the 37[R]-37[D]-26[unaffiliated] model and have held steady with it mainly to provide a solid trendline. I do not believe party affiliation changes much over time and that this is a preferable approach. I do not claim it to be perfect, merely a reasonable decision in an off-year.
In other words, since February 2005, the Rasmussen daily tracking samples of adults have been weighted so that their party balance is 37% Republican, 37% Democrat and 26% with no party affiliation.
Rasmussen says the desire for continuity was his primary rationale for the way he has weighted his data. Moreover, he explains that the Bush job rating is a relatively small part of a larger daily tracking survey that Rasmussen uses to create indexes of consumer and investor confidence. He considers the economic tracking survey more important to his business. Again, he explains:
I was, and remain, quite surprised by the intensity of discussion around Bush [job approval]. From my perspective, that was just a residual part of the data. Generally speaking, for [Rasmussen Reports], the most important part of the tracking data was the economic data. In that analysis, stability and trends matter most.
The problem in all of this is that for the last 14 months, Rasmussen Reports has been weighting samples of adults to match a snapshot of likely voters taken on Election Day 2004. The result is that Rasmussen has been weighting up Republicans slightly throughout 2005 and 2006, even though Republican identification has dropped roughly two percentage points in his unweighted data over the course of the last year.
Plotting the weighted party ID result on Professor Franklin's graphic helps put all of this into perspective. Considering the data reported during 2005 and so far in 2006, the weighting makes Rasmussen about two points more Republican and a few tenths of a percentage point more Democratic. I assume it is no coincidence that the red weighted Rasmussen point in the graphic below exactly matches the point for the Battleground survey (conducted jointly by Republican Ed Goeas of The Tarrance Group and Democrat Celinda Lake of Lake Snell Perry and Associates) -- the only other pollsters in the graph that routinely weight their data by party ID.
[Click on the graph to see a larger version - you may need to click again to magnify to full size].
This weighting helps explain why Rasmussen shows a slightly but consistently higher job approval rating for George Bush than most of the other pollsters, as explained in this post and depicted on the graphic reproduced below. Not surprisingly, the Battleground poll is an exception (see slide #12).
The main point for readers to take away is that Rasmussen's data more closely resembles a sample of likely voters than all adults. While the decision to weight every survey to 37% Republican, 37% Democrat is not one I would have made, it is arguably similar to the approach adopted by the Battleground survey. Both Rasmussen and the Battleground pollsters can point to the closeness of their final 2004 poll results to the actual election returns as vindication. My main gripe with Rasmussen on this score is that his own web site ought to do a better job disclosing his weighting targets and procedures. A reader should not need to come to Mystery Pollster to learn that the weight targets are 37D-37R-26I.
Further, the weighting information that Rasmussen does provide on his web site is a bit misleading. The methodology page says their weighting procedure "insure[s] that the sample reflects the overall population in terms of age, race, gender, political party, and other factors." But Rasmussen derived his weight target from estimates of likely voters, not the "overall population." One (hopefully) constructive suggestion: Why not just apply a likely voter screen to every survey?
One important development is that Rasmussen plans to shift within the next few weeks to the sort of "dynamic weighting" scheme championed by Alan Abramowitz, Ruy Teixeira, Alan Reifman and others. The new approach, according to Rasmussen, "will apply a party weight to the data based upon a rolling average of the past three months."
If Rasmussen were to make that change right now, it would likely reduce the Bush job approval rating by roughly two percentage points. On Monday, his daily update reported Bush's overall job approval at 39% and also reported the results by party: 70% among Republicans, 16% among Democrats and 26% among those not affiliated with either party (see my screen shot of Monday's report). Readers are free to check my math, but if I put these numbers in a spreadsheet and adjust by the average of Rasmussen's unweighted party ID results for January, February and March 2006 (36.4% Democrat, 34.1% Republican, 29.5% other), the Bush job approval number drops from 39% to 37%.
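For readers who want to check the math without a spreadsheet, the calculation is just a weighted average of the party subgroup ratings -- a sketch using the figures quoted above (this approximates the effect of the weighting; Rasmussen's actual procedure weights individual respondents, not published subgroup toplines):

```python
# Bush approval by party, from Monday's Rasmussen daily update
approval = {"R": 70, "D": 16, "other": 26}

# Fixed weight target in use since February 2005
old_w = {"R": 0.37, "D": 0.37, "other": 0.26}
# Average of Rasmussen's unweighted party ID, Jan-Mar 2006
new_w = {"R": 0.341, "D": 0.364, "other": 0.295}

def overall(rates, weights):
    """Overall rate as the party-weighted average of subgroup rates."""
    return sum(rates[g] * weights[g] for g in weights)

print(round(overall(approval, old_w)))  # → 39
print(round(overall(approval, new_w)))  # → 37
```

Shifting roughly three points of weight from the 70%-approving Republicans to the 16%-approving Democrats is what accounts for nearly all of the two-point drop.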
The debate over party weighting is one we have long followed, and MP will not attempt to resolve it in one post. Those new to the issue may want to review the posts in the party weighting FAQ. For today, here is Rasmussen's position, from another of his recent emails to me:
As I indicated, there is room to disagree with my particular targets on party weighting over the past year... However, I believe that my approach is far less misleading than polls reporting wild swings in party affiliation from poll-to-poll. The evidence that you have posted on your own site suggests strongly that party affiliation changes slowly at most.
PS: After reading my post last week, Scott Rasmussen chided me for focusing solely on data from 2005 and 2006 in my comparison of his party ID results to those of other pollsters. I did so largely because Franklin's data for other pollsters mostly dates from 2005 on; many public pollsters only began including party ID in their public releases a year ago.
However, Rasmussen's summary raises an intriguing possibility. He notes a trend in his data since 2004 that roughly matches the pattern identified in a recent analysis of Gallup data (also discussed here): Independent identification declined in 2004 and has increased since, with most of the corresponding movement toward Republican identification during 2004 and away from it since. Rasmussen passes along the speculation of an unnamed Republican analyst who sees a seasonal pattern in which GOP identification surges in election years, when Republican campaigning "diminish[es] the impact" of the media which "favors the Democrats." It is an interesting theory that I want to evaluate by looking at past data for party ID. But not today.
April 17, 2006
Monday Morning Link Roundup
It's Monday morning, and I'm still reeling from a week of trying to balance blogging, Passover travel and the ever-present day job. But here is a quick roundup of things worth reading this morning from the world of political polling:
- The Hotline's (subscription only) "Wake-Up Call" is reporting this morning on a press release from Gallup that shows President Bush's job approval rating returning "to last month's all-time low": 36% approve, 59% disapprove. Presumably, we will see more from Gallup later today.
- Gallup's David Moore has also posted some free-for-today analysis on recent trends in opinion on the Iraq War, particularly their measures of whether Americans consider the war a mistake and whether they view it as winnable. Read the analysis now, before it disappears behind the Gallup subscription wall, although the data from the April 7-9 Gallup survey is available to all via the Polling Report. For background, see my earlier post on the ongoing academic debate behind these numbers.
- A front page story this morning by the Washington Post's Charles Babington (with an assist by MP friend Chris Cillizza) asks the $64,000 question with respect to the 2006 mid-term elections: Will "intense and widespread opposition to President Bush" as measured in recent surveys translate into "a turnout advantage over Republicans for the first time in recent years?" Even GOP pollster Glen Bolger seems to agree that:
"Angry voters turn out and vote their anger . . . Democrats will have an easier time of getting out their vote because of their intense disapproval of the president. That means we Republicans are going to have to bring our 'A' turnout game in November."
- On the Post's op-ed page, Polling Director Richard Morin tabulates recent survey data by state and finds that:
States that were once reliably red are turning pink. Some are no longer red but a sort of powder blue. In fact, a solid majority of residents in states that President Bush carried in 2004 now disapprove of the job he is doing as president . . . According to the latest Post-ABC News poll, Bush's overall job approval rating now averages 43 percent in the states where he beat Democratic nominee John Kerry two years ago, while 57 percent disapprove of his performance.
Morin's analysis confirms the trend in evidence in the thematic map originally created a few weeks ago by this DailyKos diarist using the 50-state data from SurveyUSA (hat tip to DemFromCT and Andrew Sullivan):
April 13, 2006
Rasmussen and Party ID - Part I
And speaking of putting the results of the new "automated" surveys under a microscope, we have some new data this week on party identification from automated pollster Scott Rasmussen. These data provide us with another opportunity to compare Rasmussen's results to those from other pollsters. While Rasmussen is certainly not an outlier in terms of party identification, there are subtle differences that suggest his polls reach a slightly more partisan universe than other surveys of American adults.
Before diving into the data, however, we should remember two things: First, Rasmussen typically weights his data by party identification. As should be obvious, though, the just-released party data are not weighted by party. Rasmussen did adjust these data so that demographic variables (age and gender) match U.S. Census estimates for the adult population. Rasmussen's usual procedure of weighting by party is worth discussing, but I'll take it up in a subsequent post.
Second, in comparing Rasmussen to other pollsters we should consider differences other than the mode of interviewing (recorded voice vs. live interviewer). Rasmussen asks a different party ID "question." Also, some pollsters screen for just registered voters or likely voters rather than all adults. I'll consider those differences in the context of the data.
So with those caveats, let's look at the data. Rasmussen conducts roughly 15,000 automated interviews of adults each month, in which respondents hear a recorded voice and answer by pressing the buttons on their touch-tone phones. His release provides aggregated monthly party ID results going back to January 2004. For March 2006, Rasmussen's sample was 36.7% Democrat, 34.0% Republican, 29.3% other. The following table shows how that result compares to other public polls conducted during the same period (the results for Gallup and Time are the averages of multiple polls; data courtesy of Charles Franklin).
The biggest difference is that Rasmussen has fewer respondents in the independent and other categories (29%) than most of the other polls of adults (an average of 39%). However, Rasmussen's independent/other result is very close to the average of recent surveys that screened for registered voters or likely voters (LV - an average of 30%).
We can see this difference more clearly in a chart specially prepared for MP by Professor Charles "Political Arithmetik" Franklin that shows the party ID result for every public poll conducted since January 2005 (click on the image to see a larger version):
As Franklin noted in describing the original version of this chart (sans Rasmussen): "The most compelling point of the figure above is that polls from a single polling organization tend to cluster, but that the organizations tend to differ substantially." Note that the Rasmussen points (dark orange) cluster more tightly due to the significantly larger sample size (n=15,000) for each point. Note also that the Rasmussen points tend to fall in the upper right quadrant along with the polls of Greenberg/Democracy Corps (likely voters) and Fox News (registered voters).
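The tighter clustering of the Rasmussen points is just what the standard sampling-error formula predicts: the margin of error shrinks with the square root of the sample size. As an illustrative sketch (a textbook simple-random-sample calculation, not Rasmussen's published methodology), compare a typical 1,000-interview poll to a 15,000-interview monthly aggregate:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95% sampling margin of error for a proportion p in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-adult poll vs. a 15,000-interview monthly aggregate:
print(round(100 * moe(1000), 1))   # ~3.1 percentage points
print(round(100 * moe(15000), 1))  # ~0.8 percentage points
```

With roughly a quarter the random noise per point, month-to-month wobble in Rasmussen's party ID numbers should be visibly smaller than in conventional polls even before any weighting is applied.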
We can see this relationship a bit more clearly in a second chart prepared by Franklin which shows one point on the same graph for each pollster based on their average party ID result since January. On this graph Franklin adds points for the surveys conducted by the Battleground Poll (Tarrance & Lake) and NPR (Greenberg & POS), both of which also screen for likely voters rather than adults. The only other poll that releases party ID results for registered voters only (rather than adults) is AP/IPSOS, and their average just misses the upper right quadrant. The surveys of adults tend to cluster in the lower left quadrant, indicating more independents but generally the same ratio of Republicans to Democrats.
We should also consider the differences in question wording. Here, courtesy of Scott Rasmussen, is the way his organization asks about party affiliation:
If you are a Republican, press 1
If a Democrat, press 2
If you belong to some other political party, press 3
If you are independent, press 4
If you are not sure, press 5
Compare that to the questions asked by other pollsters. I have copied a full listing of the wording used by the different pollsters on the jump page, but there are essentially two main variants. The classic University of Michigan National Election Studies version asks:
Generally speaking, do you usually think of yourself as...a Democrat, a Republican, an Independent, or what? (ABC/WP, CBS/NYT)
The classic Gallup version is more temporal, asking about "politics today":
In politics today, do you consider yourself a Republican, Democrat, or Independent? (Gallup, Pew)
An important point: Unlike Rasmussen, the three pollsters that consistently show roughly the same low level of independents - Fox, Democracy Corps and NPR - omit the prompt for the "independent" category.
Also note that the more abrupt Rasmussen question asks respondents what they "are" or what party they "belong to," while the other pollsters typically ask respondents whether they "think of" or "consider" themselves as partisans or independents. I have not seen any controlled experiments on this issue, but I would expect more Americans to say they "think of" themselves as partisans than would say they "belong to" a particular party (Franklin also has more discussion on question wording and other potential "house effects" on the measurement of party ID).
Let's consider one more comparison. Does Rasmussen pick up the same trends in party ID as other pollsters? The answer is a qualified yes. Franklin created another chart comparing Rasmussen to other pollsters in terms of the trends since January 2005 in each category of party ID. The chart below shows an average trend line for Rasmussen (in dark orange) and the other polls (in grey, with a control for "house effects" -- click on the image to see a larger version). As the chart shows, Rasmussen picks up the same downward trend in Republicans and the same upward trend in the independent category as other pollsters. The only difference is that Rasmussen shows slightly less trend - slightly less decline in Republican identification and slightly more growth in the independent category. (See Franklin's original post for far more on these and other graphic depictions of the trends in party ID).
So what can we say about these differences? Generally speaking, Rasmussen is not an "outlier" in terms of the unweighted party ID result, although there are some small but intriguing differences. Given that we have variation in mode (interviewer or automated), sample (adults or registered/likely voter) and question wording (on different dimensions), it is impossible to determine precisely the source of slight but consistent variation in the party ID results charted above.
However, we can speculate a bit. MP can see three reasons to hypothesize that Rasmussen's party result should have fallen in the lower left quadrant of the first two charts above (fewer partisans, more independents): He samples adults rather than registered or likely voters, he prompts for independents and he asks respondents what they "are" or what party they "belong to" rather than what party they "consider" themselves closer to. Instead, the Rasmussen result ends up in the upper right quadrant (slightly more partisans and fewer independents) along with polls of registered and likely voters whose party questions do not always prompt for independents. His surveys also show less movement in party ID, which would suggest that he is polling a slightly more partisan universe. All of which adds up to samples that look a bit more partisan than other surveys of adults.
To speculate even more: MP's hunch is that Rasmussen's implementation of the automated methodology gets a lower response rate than the other polls, and that those who choose to participate tend to be more partisan and politically interested than those sampled by conventional telephone surveys. Thus, Rasmussen's surveys show more resistance to change and short-term trends than other polls of adults, because his respondents are a bit more politically partisan and interested. And that is *before* Rasmussen weights by party, something I want to discuss more in the next post.
I should conclude by saying that Scott Rasmussen deserves credit for releasing his party ID data, as do many of the public pollsters who began regularly releasing party identification results in early 2005. We hope that Rasmussen will make the release of party ID results a regular and recurring feature of his free website. We also hope that Rasmussen and other pollsters will release more data on their response rates. The more we know about who participates in these surveys, the better we will be able to evaluate them.
The complete wording of the party identification question (as collected by Charles Franklin) follows after the jump:
Variations on the Michigan party question:
Generally speaking, do you usually think of yourself as...a Democrat, a Republican, an Independent, or what? (ABC/WP, CBS/NYT)
Generally speaking, do you usually think of yourself as...a Republican, a Democrat, an Independent, or something else? (Time/SRBI)
Generally speaking, do you think of yourself as a Democrat, a Republican or what? (NPR/Greenberg/POS, Democracy Corps/Greenberg)
Variations on the Gallup party question:
In politics today, do you consider yourself a Republican, Democrat, or Independent? (Gallup, Pew, PSRA)
Regardless of how you might have voted in recent elections, in politics today, do you consider yourself a Republican, Democrat, or Independent? (Newsweek)
Do you consider yourself a Democrat, a Republican, an Independent or none of these? (AP)
When you think about politics, do you think of yourself as a Democrat or a Republican? (Fox)
April 10, 2006
On Reporting Standards and "Scientific" Surveys
Two weeks ago, I posted a two-part series on an online spring break study conducted by the American Medical Association (AMA). That discussion coincided with an odd confluence of commentary on online polling and the standards used by the news media to report on or ignore them. I want to review some of that commentary because it gets to the heart of the challenges facing the field of survey research, and not coincidentally, the central theme of much of what I write about on Mystery Pollster: What makes for a "scientific" poll?
Let's start with the very reasonable point raised by MP reader JeanneB in the comments section of my second post on the online AMA spring break survey:
I'm disappointed that you seem to go easy on the media's role in this. More and more news outlets have hired ombudsmen in recent years. They also need to add someone to filter polls and ensure they're described accurately (if they get used at all) . . .
[The media] have a responsibility to police themselves and set some kind of standard as to which polls will be included in their coverage and how they will be described. Please don't let them off the hook for swallowing whole any old press release disguised as a "poll".
I am sympathetic to the media in this case because of the way the AMA initially misrepresented its survey, calling it a "random sample" complete with a margin of error. As one pollster put it to me in an email, "if the release has the wrong information (eg margin of error) it is very hard to expect the media to police that."
However, the media standards for poll reporting are certainly worth discussing. Many do set rigorous standards for the sorts of poll results they will report on. Probably the best example is the ABC News Polling Unit, which according to its web site,
vets all survey research presented to ABC News to ensure it meets our standards for disclosure, validity, reliability, and unbiased content. We recommend that our news division not report research that fails to meet these standards.
Thanks to the Public Eye -- the online ombudsman-like blog site run by CBS News -- and the pollsters at CBS, we can now read their internal "standards for CBS News poll reporting" (posted earlier today in response to an email I sent last week). The Public Eye post is well worth reading in full, but here is a pertinent excerpt. The underlined sentence represents "new additions to the CBS News standards:"
Before any poll is reported, we must know who conducted it, when it was taken, the size of the sample and the margin of statistical error. Polling questions must be scrutinized, since slight variations in phrasing can lead to major differences in results. If all the above information is not available, we should be wary of reporting the poll. If there are any doubts about the validity, significance or interpretation of a poll, the CBS News director of surveys should be contacted. The CBS News Election and Survey Unit will maintain a list of acceptable survey practices and research organizations.
Other news organizations clearly apply standards for what they will air or publish, although those standards may vary. Conveniently, just a few days after my posts on the AMA spring break survey, Chuck Todd, editor in chief of The Hotline, posted his explanation of the Hotline's policy with regard to online and automated surveys on the On Call blog:
There are a bunch of new poll numbers circulating in a bunch of states, thanks to the release of the latest online polls Zogby Int'l conducts for the Wall Street Journal's web site. We don't publish or acknowledge the existence of these numbers in any of our outlets because we are just not comfortable that online panels are reliable indicators.
Todd's commentary also spurred Roll Call political analyst Stu Rothenberg to weigh in on online polls and other newer methodologies and the news media's habits with regard to reporting them. He agreed with Todd that "pollsters have not yet figured out how to conduct online polls in a way that's accurate." He continued:
Like the Hotline, both Roll Call and my newsletter, the Rothenberg Report, don't report on online polls either. Well-regarded pollsters are right to say their methodology is unproven.
Of course, we aren't the only ones skeptical about the reporting about online polls. Other media outlets, including CNN and the print editions of the Wall Street Journal, generally don't report on online polls either. Many media outlets also ignore polls taken by automated phone systems rather than real people, because of concerns about their accuracy. Unfortunately, others in the news media aren't as discriminating.
[Rothenberg goes on to recount his efforts to contact the Wall Street Journal web site regarding its reporting and sponsorship of the Zogby online polls. The Roll Call column, otherwise available only to Roll Call subscribers, is for now available to all on Rothenberg's site. I will definitely have more to say about Rothenberg's column soon].
No one can argue with the need for standards in the way the news media report polls. News organizations have to determine which polls are newsworthy, just as they must judge the newsworthiness of any other story. To use the language of computer programming, these judgments are inherently binary, all or nothing: A poll is either worthy of publication or it isn't.
Unfortunately, the big challenge is finding the line that separates "scientific" surveys from lesser research. It is not always easy. A "scientific" survey based on a random sample is still subject to non-response, coverage or measurement error (potential sources of error that have nothing to do with sampling error are not accounted for by the "margin of error"). In other words, not all "scientific" surveys are created equal. We should not assume that the results of a poll are infallible (within the range of sampling error) simply because it started with a "scientific" random sample.
At the same time, it would be a mistake to dismiss all polls that fail to make the media cut as "cheap junk." For example, as both Rothenberg and Todd point out, most major news organizations also refuse to report on automated polls that use a recorded voice rather than a live interviewer to ask questions and select respondents within each household. Yet except for the recorded voice, these polls use the same methodology as other "scientific" polls, including random digit dial samples. Yes, conventional pollsters have certainly raised "concerns about their accuracy." But do these surveys deserve to be painted with the same broad brush as non-random samples drawn from internet-based volunteer panels? The mainstream news media pollsters argue that they do - MP is less certain and generally less skeptical of the automated polls.
The irony is that in the same week that some Mystery Pollster readers asked why the news media chose to report one particular online poll, a group of professional pollsters debated the merits of ignoring such polls. Chuck Todd's comments helped set off an unusually heated discussion when they were posted to the members-only LISTSERV of the American Association for Public Opinion Research (AAPOR - MP is a member and regular reader). While most were skeptical of online polls (and universally critical of the AMA for misreporting their methodology), some questioned the all-or-nothing ban on reporting of non-probability samples. Here is one especially pertinent example:
What I do find inexcusable is the attitude that only surveys that claim (however tenuously) to use a probability sample can be reported on. The only reason for this is that it is easier to pretend that the sampling error is the only survey error and that therefore any survey that is not based on a probability sample is "junk." That is simply not true.
This debate is far bigger and more important than a single blog post. In many ways, it gets to the underlying theme of most of the controversies discussed on Mystery Pollster: What makes for a "scientific" poll? More specifically, at what point do low rates of coverage and response and deviations from pure probability methods so degrade a random sample as to render it less than "scientific?" Can non-random Internet panel studies ever claim to "scientifically" project the attitudes of some larger population?
Because these are the most important questions facing both producers and consumers of survey data, this site will continue to take a different approach. With respect to automated and internet polls, MP will certainly "acknowledge their existence" and, when appropriate, report their results. But I hope to go further, explaining their methodology and putting their results under a microscope in search of empirical evidence of their accuracy and reliability (or the lack thereof). MP's approach will be to immerse rather than to ignore, because, as noted academic survey methodologist Mick Couper put it a few years ago, "every survey can be seen as an opportunity, not only to produce data of substantive relevance, but also to advance our knowledge of surveys."
PS: My recent article in Public Opinion Quarterly included some pertinent advice for consumers on what to make of automated and Internet polls. To review it, go to this link and search on "procure and consume."