
January 31, 2006

SOTU: On Bounces and Spin

From the polling perspective, if recent history repeats itself, we can assume two things will occur after the President's State of the Union (SOTU) speech tonight.  First, CNN/USAToday/Gallup and perhaps CBS News will conduct instant reaction polls among speech watchers who will express great enthusiasm for the President and his address.  Second, the traditional poll of all Americans conducted in the weeks following the speech will show little or no "bump" in the President's job approval ratings. 

MP did some research on this question over the weekend and was somewhat surprised to see a bit of a spin war develop over it.  According to The Hotline, the Republican National Committee distributed an email yesterday from Bush strategist and senior RNC advisor Matthew Dowd that attempts to lower expectations of a Bush bump:

In looking at poll movement before and after State of the Union addresses, the average over the last fifty years is actually a slight drop (-0.2%).  President Bush's average change is also a drop (-0.4%).  Only one of his SOTU addresses showed positive movement (2005), which is likely attributed to the intervening events of the 2005 Inaugural and January 2005 Iraqi elections.

In reaction, the Democratic National Committee (DNC) put out a counter release claiming that yes-there-is-so a SOTU bounce: 

[Dowd's] assertions are hardly credible given the press accounts following the last four Bush State of the Union addresses...

President "Always" Gets A Bump From State Of The Union. During an appearance on CNN, Rick Dunham, White House correspondent for Business Week Magazine, noted that, "And if you look at the CNN- Gallup polls, (the) president always gets a big bump out of State of the Union. There hasn't been a bad State of the Union speech, according to instant reaction, for -- I mean, for all these years." (CNN, Reliable Sources, 2/3/02)

2005: "Poll: Bush wins converts among speech-watchers" (CNN.com, 2/2/05)

2004: "Speech Watchers React Positively to Bush's Message" (Gallup Poll News Service, 1/21/04)

2003: "Poll: Bush Gets Boost From Speech" (CBS, 1/29/03)

2002: "Bush, sustaining stratospheric poll ratings after Tuesday's State of the Union address?" (Washington Post, 2/2/02)

So who's right?  Well, in a sense, both.  Here's the story: 

Dowd's numbers come from a very helpful analysis released just before last year's speech by Gallup analyst Jeff Jones.  Gallup today released an updated version (which should be free to non-subscribers for the next few days) that shows that Presidents historically get little or no "bump" in their approval ratings following a State of the Union address:

Historical Gallup findings dating back to President Jimmy Carter's administration indicate that presidents rarely are able to increase their popularity following a State of the Union address. George W. Bush may have done so temporarily with his speech last year, but his public standing was largely unchanged following his three prior State of the Union addresses.

Here is a slightly compressed version of the numbers in the Gallup analysis.  Changes of four or fewer points would not be statistically significant:

[Table: Gallup approval ratings before and after State of the Union addresses]
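
For readers who want to see why a four-point change falls within sampling noise, here is a minimal sketch of the arithmetic, assuming typical national samples of about 1,000 adults per poll (the table above does not list the sample sizes):

```python
import math

def moe(p, n, z=1.96):
    """Approximate 95 percent margin of error for a single proportion."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_of_change(p1, n1, p2, n2, z=1.96):
    """Approximate 95 percent margin of error for the change between two independent polls."""
    return z * math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Hypothetical before/after readings, each from a separate poll of ~1,000 adults.
print(round(100 * moe(0.50, 1000), 1))                        # ~3.1 points for one poll
print(round(100 * moe_of_change(0.50, 1000, 0.54, 1000), 1))  # ~4.4 points for the before/after change
```

In other words, a before/after difference needs to be larger than roughly four points before it can be distinguished from random sampling variation.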


MP has one minor quibble:  The apparent six-point gain in Bush's job rating last year looked more like a statistical outlier than either a "temporary" bump or, as Dowd put it, something "attributed to the intervening events."  Check out the Prof. Pollkatz graphic summary of all Bush job ratings (you will need to click to enlarge and see the detail; click here for the Prof. Pollkatz original).  I circled the period just before and after the 2005 SOTU.  The little purple diamond that stands out above all the other points is the Gallup 2/4-6/2005 survey that showed Bush's approval rating at 57%.  Put simply, no other poll conducted at the same time, including the survey that Gallup conducted immediately afterwards, showed Bush's rating to be quite that high (the 2/7-10/2005 Gallup poll had Bush's approval rating at 49%).

[Chart: Prof. Pollkatz summary of Bush job approval ratings, with the period around the 2005 SOTU circled]


The one immediate and sustained bump in the Gallup data was for Bill Clinton's 1998 SOTU address.  Check out the Clinton '98 bump in another Pollkatz graphic just below (original here), but keep in mind why that was a unique event.  The Monica Lewinsky story had broken just a few days before.  The day before that speech, Bill Clinton faced the cameras and delivered his infamous "I did not have sexual relations with that woman" denial.  MP cannot find the ratings for that speech, but interest in it was certainly high.  Ironically, the reaction to Clinton's performance - seemingly unfazed by the scandal erupting around him - helped boost his numbers in a way that persisted until the impeachment trial ended with an acquittal.

[Chart: Prof. Pollkatz summary of Clinton job approval ratings around the 1998 SOTU]


So what generated all those bump headlines in the DNC release?  Those came mostly from the instant reaction polls conducted by Gallup and CBS News.  Consider how these are done.  Gallup included two questions on their recent surveys of adults that presumably look something like these from 2003:

As you may know, President Bush will present his State of the Union address in a speech to Congress next Tuesday night, January 28th. The speech is scheduled to begin at (time) in your area and will be shown on all the major television networks. Do you think you will watch that speech, or not?

[IF YES, ASK:] Gallup will be conducting a very short survey next Tuesday night, immediately after President Bush's speech. If we wanted to include you in the survey after the speech, would it be all right to call you back at that time?

Then after the speech tonight they will call back all those who answered "yes" on both questions and interview those who watched.  In recent years, CBS News has used essentially the same approach, except they interview speech watchers before and after with the "projectable" Knowledge Networks panel (which we have discussed previously in the context of post-debate polls and last fall's elections in California).

The results on the Gallup survey have been very consistent over the years.  As the table below shows, virtually all speech viewers have a positive reaction to the speech:

[Table: Gallup instant-reaction ratings of recent State of the Union speeches]


Also, agreement that the president's policies are "moving the country in the right direction" always increases by at least ten percentage points (more complete data can be found in the report on last year's survey by Gallup's David Moore).

[Table: agreement that the president's policies are "moving the country in the right direction," before and after recent SOTU speeches]


So why a "bump" in the instant reaction surveys but not in more rigorous polls of all Americans?  A big clue comes from the self-reported partisanship of those surveyed immediately after the speech (as reported in Jones' analysis).  It always seems to attract a disproportionate number of partisans from the president's party.  The 2005 speech had the most Republican leaning audience yet.  MP can only speculate, but my guess is that this change had mostly to do with the ever shrinking network audience.  As interest in the SOTU falls, the President's fans are increasingly the only ones who make the effort to tune in. 

[Table: partisan composition of the instant-reaction samples of SOTU speech watchers]
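
To see how audience composition alone can produce an apparent "bump" in an instant-reaction poll, here is a minimal sketch; the approval rates and partisan mixes below are made up for illustration, not Gallup's figures:

```python
# Approval of the president within each partisan group (hypothetical, held constant).
approval = {"Republican": 0.90, "Independent": 0.40, "Democrat": 0.10}

def overall_approval(mix):
    """Overall approval implied by a given partisan mix (shares sum to 1)."""
    return sum(share * approval[party] for party, share in mix.items())

# A roughly balanced national sample vs. a speech audience tilted toward
# the president's party -- both hypothetical mixes.
national_mix = {"Republican": 0.33, "Independent": 0.34, "Democrat": 0.33}
audience_mix = {"Republican": 0.45, "Independent": 0.30, "Democrat": 0.25}

print(f"All adults:      {overall_approval(national_mix):.0%}")  # ~47%
print(f"Speech watchers: {overall_approval(audience_mix):.0%}")  # ~55%
# No individual changed their mind; the apparent "bump" comes entirely from who tuned in.
```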


The instant polls that will be done tonight will no doubt provide interesting results.  Because Gallup and CBS can compare those results to similar polls done with comparable methodologies in past years, there may be findings of interest.  However, before jumping to conclusions about the real impact on public opinion, consider this advice from ABC polling director Gary Langer quoted in today's The Note:

1. "Partisans watch these things; rather than torturing themselves, people who don't like the guy can just turn to another of their 100 channels. When we polled on the SOTU in 2003, we found that the president's approval rating among speech watchers was 70 percent, versus 47 percent among those who didn't watch. As we put it at the time: 'Simply put, people who don't like a particular president are considerably less apt to tune him in.'"

2. "These speeches tend to be composed of poll-tested applause lines, so the people who watch are already predisposed to like what they hear."

"We haven't done immediate post-SOTU reax polls in years (pre-war 2003 was an exception) because, given 1 and 2 above, they are so dreadfully predictable."

So here is a bit of advice to those who spin:  The instant analysis polls may show a positive reaction, but they always do.  Such reaction is not evidence of a "bump."  To know if this year's speech breaks precedent and moves public opinion in a significant and lasting way, we will just have to be patient and wait.

UPDATE:   See my subsequent posts on this year's results, the way the networks characterized those results and what the Nielsen ratings had to say about the audience size. 

Posted by Mark Blumenthal on January 31, 2006 at 03:45 PM in Instant Reaction Polls, Polls in the News, President Bush | Permalink | Comments (2)

January 30, 2006

Pre-SOTU Polls

The last week  brought a batch of new surveys from the national pollsters.  While these include new questions that go into great depth on perceptions of the state of the union, corruption in DC and NSA wiretapping, on balance they do not indicate any significant shift in overall views of George Bush's job performance during January.  The President will address a nation that rates his performance in office roughly ten points lower than when he won reelection in November 2004. 

Here is a complete list of the most recent polls, along with links to all available data summaries and results for the President's job approval rating: 

  • Time/SRBI: 41% approve, 55% disapprove (1002 adults interviewed 1/24-26,  article, SRBI summary and results)
  • Fox News/Opinion Dynamics: 41% approve, 51% disapprove (900 registered voters interviewed 1/24-25, summary and results)
  • Cook/RT Strategies : 47% approve, 50% disapprove (1,000 adults interviewed 1/22-25, Cook's analysis and results)
  • LA Times/Bloomberg: 43% approve, 54% disapprove (1,555 adults interviewed 1/22-25, results)
  • CNN/USAToday/Gallup:  43% approve, 54% disapprove (1,006 adults interviewed 1/20-22, CNN story, USAToday story & results, Gallup summary [subscription only]). 

Six of these surveys were in the field in both early December and last week.  All but Cook/RT Strategies also fielded a poll in early January.  Although the individual polls show what appears to be random variation, the wider view continues to be one of stability, at least with respect to Bush's overall job rating.  By averaging across these comparable polls, we have an "apples to apples" average of 43% in early December and 43% as of mid last week.

[Table: Bush job approval by pollster, early December vs. late January]


PS:  For the table above, I averaged results for the Rasmussen automated tracking survey for the six days (1/21-26) that were closest to the field dates of the other late January polls.  Today, Rasmussen reports a 50% positive job rating for the interviews conducted over the last three nights (1/27-29).  It is worth noting that Rasmussen also reported a three-night result of 50% during the week between Christmas and the New Year.   Whether real or just random statistical noise, that "bump" faded quickly.
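
For anyone who wants to reproduce the "apples to apples" comparison, the method is simple: average only the pollsters that fielded surveys in both periods, then compare the two averages. A minimal sketch, with placeholder numbers rather than the actual table entries:

```python
# Numbers below are illustrative placeholders, not the actual December or January readings.
december = {"Pollster A": 42, "Pollster B": 44, "Pollster C": 41, "Pollster D": 45}
late_january = {"Pollster A": 41, "Pollster B": 43, "Pollster C": 41, "Pollster D": 47}

common = december.keys() & late_january.keys()   # same pollsters in both periods
avg_dec = sum(december[p] for p in common) / len(common)
avg_jan = sum(late_january[p] for p in common) / len(common)

print(f"December average:     {avg_dec:.1f}%")
print(f"Late January average: {avg_jan:.1f}%")
print(f"Change:               {avg_jan - avg_dec:+.1f} points")
```

Restricting the average to the same pollsters in both periods keeps "house effects" from masquerading as a trend.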

UPDATE:  Gallup has new pre-State of the Union analysis online that should be available to non-subscribers for the next 24 hours.

UPDATE II:  Franklin updates his graphics and includes this bottom line on the recent trend (or lack thereof):

While some individual polls have registered relatively high approval ratings, these have not been sustained in the same poll, which in January have shown results consistent with the range of other polls, and an overall approval estimate of 42.5% as of January 26 polling.

Posted by Mark Blumenthal on January 30, 2006 at 02:34 PM in Polls in the News, President Bush | Permalink | Comments (6)

January 29, 2006

Update: The MyDD Poll

The MyDD survey that we discussed last week is now complete, and the site is rolling out the results over the weekend (here, here and here so far, with a detailed description of the methodology here).  To be sure, this is a survey with a partisan sponsor and some of the questions it asks reflect the activist-liberal views of the MyDD readership.  Those with a different point of view may well see no added value in the MyDD survey, or see some biased intent in some of the questions.  Such is the challenge of surveys with partisan sponsorship.  However, blogger Chris Bowers has endeavored to conduct a credible survey using a professional pollster, a traditional methodology and a sample that represents all Americans.  He has also tried hard to not only meet the usual standards for disclosure but to significantly exceed them.  Whatever you think of the substance of the MyDD questions, they deserve credit for their commitment to professionalism and transparency. 

Bowers and Wright briefed me on their efforts on Friday and, not surprisingly, we discussed very little that they have not subsequently disclosed online.  The only exceptions are the remaining results that they will post online over the next few days. 

Here are two quick methodological notes of interest to MP readers:  First, MyDD pollster Joel Wright had considered an unusual method to weight the data by self-reported party registration (rather than identification).  After much consideration, Wright decided against weighting by party, because his method would have raised the percentage of self-identified registered Democrats from 33% to 40% of his sample.  Wright explains in a comment posted on MyDD:

There's a decent case to make that this [40% Democrat weighting] is correct, from the data I compiled and also a few other sources. However, most other polls don't show that figure for Dems. That would have caused a controversy. It would have misdirected discussion about the poll into a discussion about proper proportions, etc. We would have been talking about weights instead of data and findings. Accused of partisanship when it's not so in the least. All in the first time out for the poll. So I made a command decision, a very hard decision to make, late last night to go with conventional method in the best interest of the poll. This time.

Wright has a point about the potential for controversy.  MP certainly would have had some concerns, but as Wright is headed back to the drawing board on his weighting methodology, I will save my questions for another day.

Separately, Wright says that MyDD plans to make the raw data available for downloading within about a month, so that those interested in doing their own tabulations or analysis can have at it. 

Finally, let me close with some general comments about partisan polls.  In my last post on the MyDD poll, I wrote that "polls sponsored by partisan groups are nothing new."  Regular MP reader Robert Chung (creator of this worthy page on survey house effects) asked in a comment if I meant that "sponsorship means the results are not reliable?"  That was not my meaning, but he raises a good question nonetheless.  How reliable are partisan sponsored polls?

Although most of the polls reported on in mainstream media are sponsored by the media outlets themselves, political parties, candidates and interest groups also routinely conduct their own surveys.  Readers should remember that MP earns his living conducting polls for Democratic political candidates, and as such, may not be the most impartial source on this question.  However, my perspective is that my "partisan" clients expect, first and foremost, that we provide accurate and reliable results, so our professional duty is to get the numbers right, without regard to their potential propaganda value.   As such, data from the internal polls that drive campaign strategy rarely see the light of day.    

However, as many of you know, partisan pollsters do sometimes release survey results selectively when the results cast our clients in a favorable light.  Academics who analyze public polls systematically (such as this recently published study, and also this paper) find that releases from partisan pollsters show a consistent bias.  That is, publicly released pre-election polls from Democratic pollsters tend to be a few points more favorable to Democrats, and polls from Republican pollsters tend to be a few points better for Republicans.  Obviously, pollsters debate the reasons for this pattern, but most believe "selection bias" explains a lot of it.  Consider it this way:  Random sampling error means that all polls have a range of variation.  If partisans release only those results that fall on the positive half of the bell curve (from their perspective), the released results will show a consistent bias.  Either way, data consumers should take public releases of partisan results with the appropriate grain of salt.
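
A quick simulation makes the selection-bias point concrete. This is a sketch with made-up parameters (a tied race, a two-point standard deviation of poll error), not a model of any particular pollster:

```python
import random

random.seed(0)
TRUE_MARGIN = 0.0   # assume the race is actually tied (Dem minus Rep, in points)
POLL_SD = 2.0       # rough standard deviation of poll error, in points

polls = [random.gauss(TRUE_MARGIN, POLL_SD) for _ in range(10_000)]

all_polls_avg = sum(polls) / len(polls)
released = [m for m in polls if m > 0]          # release only Dem-favorable results
released_avg = sum(released) / len(released)

print(f"Average of all polls taken: {all_polls_avg:+.1f} points")  # ~0.0
print(f"Average of released polls:  {released_avg:+.1f} points")   # ~+1.6
# Even with unbiased polling, cherry-picking which results to publish
# produces a consistent tilt toward the sponsor's side.
```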

The usual pattern of cherry-picking results is one reason to be encouraged by the precedent the MyDD poll has set in putting out so much detail about the survey in advance.  They announced ahead of time that they were fielding the poll, released several drafts of the questionnaire for reader review and comment and announced when their survey went into the field.  MP can think of no other news media or partisan poll that discloses so much in advance.  MyDD's only hesitation was in releasing the final version of their questionnaire in advance, out of a fear that the wording would be criticized before they had a chance to release data.  MP hopes that in future surveys they will take that extra step, because it would help boost the credibility of their claim to release all data, warts and all.

Yes, of course, MyDD is a liberal activist site, and the content of their survey reflects that world view.  Conservatives may well take issue with some of the question wording or see bias in their analysis.  That is the nature of partisan polling.  However, I hope all will appreciate the professionalism and commitment to transparency in the MyDD approach.  They are setting a good precedent for others to follow.  MP sincerely hopes a site on the conservative side of the blogosphere follows their example.   

Posted by Mark Blumenthal on January 29, 2006 at 12:33 PM in Polling & the Blogosphere | Permalink | Comments (7)

January 27, 2006

Palestinian Exit Polls

Wednesday's Palestinian elections once again highlight the shortcomings of exit polls as a tool for projecting election results.  One of the many myths perpetuated by the seemingly endless debate over the 2004 U.S. exit polls involves the supposed infallibility of exit polls, especially in Europe and other countries.  In reality, as noted here more than a year ago, an election information project funded by the United Nations and the US Agency for International Development concluded in 1999 that when it came to projecting winners, "the majority of exit polls carried out in European countries over the past years have been failures."  The Palestinian elections are the latest example of this fallibility.

Our friend Professor Franklin has been all over the Palestinian polling lately, and his blog is the must-read on this subject.  At least two organizations conducted pre-election polling in Palestine, and Franklin's chart shows a wide lead by the ruling Fatah party that closed to roughly ten points as Election Day approached.

Yesterday, three different exit polls all indicated a narrow win by Fatah. Here is Franklin's summary of the "party list ballot" vote estimates:

There were three exit polls... done for the Palestinian Legislative Council elections on January 25th, one by the Development Studies Programme at Bir Zeit University, another by the Palestinian Center for Policy and Survey Research (PSR) and the third poll was done by An-Najah University in Nablus. The results for the party list ballot were:

  • DSP/Bir Zeit: Fatah 46.4%, Hamas 39.5%
  • PSR: Fatah 42%, Hamas 35%
  • An-Najah: Fatah 46%, Hamas 40%

The exit pollsters used these results to estimate the share of parliamentary seats won by each party and each projected Fatah winning more seats than Hamas, although their estimates ranged between 58 and 63 seats for Fatah and 46 to 58 for Hamas. 

This morning brought very different news.  Hamas had taken 76 seats to 43 for Fatah.  Obviously the early forecasts of a Fatah victory were off the mark, but what is unclear - at least from the reports I have seen - is whether the discrepancies were in the estimates of the party-list ballot or in the estimated allocation of parliamentary seats.

A brief article from the Associated Press reports that exit pollsters were "at a loss to explain their failure" yet unwilling to comment for the record.  The article goes on to speculate:

The discrepancy may have been caused by reluctance of voters to admit to pollsters that they were abandoning the ruling party. The polling errors appeared especially glaring in district races, where smaller numbers of voters were surveyed.

Half the seats in Wednesday's parliamentary vote were chosen on a national list and the other half by districts. While the national voting appeared to be close, election officials said Hamas had won a large majority in the district races.

Hamas apparently took advantage of divisions in Fatah, which had fielded multiple candidates in many districts, splitting the Fatah vote while Hamas' support remained united.

As my knowledge of Palestinian elections and exit poll methodologies is cursory at best, I defer to Franklin:

The problem of estimating winners in multimember districts with from one to nine members (averaging 4.1) is a daunting problem for any exit poll, even ignoring any response bias problems...We'll need the district level vote data to know how close these district races were-- my guess is that many were way too close to possibly be called by an exit poll at the district level (where the margin of error would be quite substantial.) But until the CEC releases the preliminary counts, we can't do more than speculate about this. (Plus we don't have access to exit poll results at the district level, at least not yet.) What will be telling is if the exit polls estimates of the party list shares for Hamas and Fatah were close to right but the district level results were poorly estimated.

Franklin, in turn, links to the cautions issued by UCSD Political Science Professor Matthew Shugart about the perils of Palestinian exit polling before the announcement of the contrary election results:

I would be really cautious with exit polls in an electoral system like this-even if it were a 'normal' environment in which people felt free to talk to people on the street asking them how they just voted. By that I mean that this electoral system-multi-seat plurality, plus list PR in parallel-means the pollster needs to know:

(1) whether the voter used all his/her votes in the nominal tier (the local multi-seat district);

(2) the identities of all the candidates he or she voted for;

(3) and the party list the voter checked.

That's a lot of moving parts for each interviewee. And then the exit-polling company has to extrapolate from a sample and somehow generate a national allocation. That involves lots of assumptions about how completely other similar voters filled out their slate of candidates in the nominal tier. In general, multi-seat plurality races are very hard to predict because small vote shifts for individual candidates can make substantial differences in the outcome of the election in a district. It is not as though the outcome can be extrapolated just from knowing the party a voter preferred when the voter has more than one vote and can use all or none of them and spread them out on candidates of multiple parties or concentrate them all on one party.

UPDATE:  Shugart has more to say here.

UPDATE (1/28):  Franklin has two new posts up.  In the first, he goes into detail on what he describes as the near "impossible task" of using an exit poll to pick winners in the many multi-member voting districts in the Palestinian elections.  In the second, he compares how well each exit poll did at projecting the "national list vote":

The three Palestinian Legislative Council election exit polls tended to overestimate Fatah's vote in the national party list voting but they all seriously underestimated Hamas' strength. The result is that all got the leader wrong, and this was beyond the margin of error of two of the three surveys (and at the extreme end of the MOE of the third, and least precise, survey.) The bottom line: the preliminary vote count produces results that fall outside the margin of error of the exit polls.
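
As a rough sketch of the kind of check Franklin describes, here is the arithmetic for the gap between the two parties in each exit poll; the sample sizes and the "actual" gap are assumptions for illustration, since neither appears in this post:

```python
import math

def margin_moe(p_a, p_b, n, z=1.96):
    """Approximate 95 percent MOE on the gap (p_a - p_b) within a single poll."""
    # Variance of a difference of two shares under multinomial sampling.
    var = (p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n
    return z * math.sqrt(var)

# Party-list estimates quoted above; sample sizes are assumed for illustration.
exit_polls = {
    "DSP/Bir Zeit": (0.464, 0.395),
    "PSR":          (0.420, 0.350),
    "An-Najah":     (0.460, 0.400),
}
ASSUMED_N = 8000            # hypothetical exit poll sample size
ASSUMED_ACTUAL_GAP = -0.03  # placeholder for the preliminary count (Hamas ahead)

for name, (fatah, hamas) in exit_polls.items():
    gap = fatah - hamas
    moe = margin_moe(fatah, hamas, ASSUMED_N)
    verdict = "outside" if abs(gap - ASSUMED_ACTUAL_GAP) > moe else "within"
    print(f"{name}: Fatah lead {gap:+.1%} +/- {moe:.1%} -> {verdict} MOE of assumed count")
```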

He goes on to speculate about three potential reasons for the discrepancy, a greater reluctance of Hamas voters to participate, a greater reluctance of Hamas voters to reveal their preference and the difficulty in constructing the exit poll sample given the lack of previous comparable elections to use a model.   Here is his bottom line:

[T]he exit poll errors cannot be explained by random variability due to sampling. Systematic response errors, turnout estimation, or non-response are likely culprits in this case. In principle, an exit poll should have been able to detect the Hamas lead. With the sample designs used here, and their associated margins of error, it is unlikely any of them could have concluded that Hamas' lead was statistically significant. But getting the direction right was a possibility.

One refreshing aspect of these exit poll problems is that they do not easily lend themselves to the conspiracy theory interpretations common after the U.S. 2004 presidential elections. With ballot counting conducted under the Palestinian Authority, any fraudulent counting would seem more likely to favor Fatah than Hamas. Sometimes the exit polls are just wrong. We should all remember that lesson (and continue to strive to improve the science of exit polls.)

For the graphs, details, links and more, read it all.

PS:  As of last night "final" vote data were still not available.  Franklin's analysis is based on the preliminary count which, I'm told, should be pretty close to final.   

Posted by Mark Blumenthal on January 27, 2006 at 07:30 AM in Exit Polls | Permalink | Comments (1)

January 26, 2006

Party ID Updates

Here is another quick update on some interesting releases over the last week or so on the subject of party identification from Harris and Gallup.   Two new reports - each based on a full year's worth of data - show a slightly greater Democratic advantage in 2005 than in 2004; however, similar data from the Pew Research Center show no such trend.

  • Harris Interactive released their annual review of the long term trend in party identification and self-reported ideology.   Their conclusion, based on rolling together data from 4,945 US adults interviewed by telephone during 2005:  36% of Americans identified as Democrats (up from 34% in 2004) and 30% identified as Republicans (down from 31% in 2004).  The six point Democratic edge is "the largest lead since 2000."

The table showing 36 years of data on party identification is well worth the click, and worth comparing to a similar table of party ID results from the American National Election Studies conducted in even-numbered years since 1952 by the University of Michigan.

  • Gallup released their own compilation based on their massive pool of 42,431 interviews conducted among US adults during 2005 (and the report appears to be free to all).  The report provides the results to the root party identification question for 2005 only (33% Republican, 33% Democrat).  They also provide trended data based on the combined percentage of partisans plus those who initially identify as independents but say they "lean" to one of the parties on a follow-up question. 

Among those who identify with or lean to one of the parties, their results also show a growing Democratic advantage in 2005. The Democratic edge increased from a margin of 2.7 percentage points in 2004 (47.9% to 45.2%) to 4.5 points (47.7% to 43.2%) in 2005.  Jeff Jones' report also provides party identification data for all 50 states.  (See the sketch following this list for how the "leaned" measure is combined.)

  • The Pew Research Center also provides annualized results for party identification on the "topline" questionnaire it releases with every survey, but only for the root party ID question (not for "leaners"). Their results show no change in party identification from 2004 to 2005.  In both years, 33% identified as Democrats, 30% as Republicans.
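
As mentioned in the Gallup item above, the "leaned" measure combines the root question with the follow-up asked of independents. Here is a minimal sketch of that combination; the respondent records are invented:

```python
# Each respondent has an answer to the root party ID question and, if they
# answered "Independent", a leaner follow-up. Data below are made up.
respondents = [
    {"root": "Democrat",    "lean": None},
    {"root": "Independent", "lean": "Democrat"},
    {"root": "Republican",  "lean": None},
    {"root": "Independent", "lean": "Republican"},
    {"root": "Independent", "lean": None},        # a "pure" independent
    {"root": "Democrat",    "lean": None},
]

def leaned_party(r):
    """Partisans keep their root answer; independents take their lean, if any."""
    return r["root"] if r["root"] != "Independent" else (r["lean"] or "Independent")

counts = {}
for r in respondents:
    party = leaned_party(r)
    counts[party] = counts.get(party, 0) + 1

n = len(respondents)
for party, c in counts.items():
    print(f"{party} (incl. leaners): {c / n:.0%}")
```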

Keep in mind that the Michigan/Harris version of the party ID question differs slightly from the one used by Gallup.

Michigan and Harris ask:  Generally speaking, do you consider yourself a Republican, a Democrat, an independent or what?

Gallup asks:  In politics, as of today, do you consider yourself a Republican, a Democrat, or an Independent?

As noted here before, although political scientists continue to debate the issue, some have produced evidence that the Gallup version is more sensitive to short-term variation.

PS:  Occasional MP commenter DemFromCt has posted thoughts on the Next Hurrah about what all of this might mean for Democrats.

Posted by Mark Blumenthal on January 26, 2006 at 01:02 PM in Weighting by Party | Permalink | Comments (1)

January 23, 2006

The MyDD Poll

While we're on the topic of poll questions commissioned by web sites on the blogosphere's left wing, here is a truly innovative development:  The blog known as MyDD has decided to go beyond simply placing a few questions on a larger omnibus poll and is now fielding the first of a "semi-annual series of netroots commissioned polls" in collaboration with its readers that seeks to ask questions that mainstream news polls "seem unable to ask themselves."

Here are the details.  The project is the brainchild of MyDD blogger Chris Bowers, a name that should be familiar to regular MP readers because of his frequent commentary on mainstream media polling.  Bowers was a vocal critic of the Gallup likely voter model during the 2004 campaign, and shared his concerns in a speech (delivered via video) to a roundtable at last summer's conference of the American Association for Public Opinion Research (AAPOR).  MP may not always agree with Bowers' conclusions, but I find his comments to be typically thoughtful, constructive and well intentioned.

Although polls sponsored by partisan groups are nothing new, the MyDD project has taken a unique approach by involving its readers not only as sponsors but as participants in the process of drafting questions.  A little over two weeks ago,  Bowers announced the project along with a rough draft of questions he had in mind (mostly on Iraq, domestic spying and impeachment) and then invited readers to "critique these drafts, and to offer your own questions for possible inclusion."   That first draft received more than seventy comments.   Three days later, Bowers posted an updated draft of questions and again invited comments and critiques.  This post drew another 50 or so comments, including a critique from occasional MP correspondent "Professor M."

Meanwhile, Bowers and his colleagues hired a professional pollster – Joel Wright of Wright Consulting – partnered with a non-profit and made several appeals to raise the roughly $16,000 to cover the cost of fielding the survey.  The most recent update, posted on MyDD by Wright himself (blogging under the nom de Internet “Sun Tzu”), indicates that the poll is now in the field, and they expect calling to be completed by Tuesday or Wednesday night. 

MP has been kicking himself for not noticing the MyDD polling project until last week, after they finalized the questionnaire and put the survey into the field.  Had I noticed it earlier, I might have made a few suggestions about the questions, but more importantly, would have invited MP's knowledgeable readers to do the same.  You may wonder, as I do, what questions (and what language) Bowers and the others at MyDD settled on.  For example, the second draft posted by Bowers included essentially the same Zogby impeachment question I critiqued here last week.  I emailed Bowers last week, and he let me know that the final version differs from Draft Two and the Zogby version of the impeachment question had been dropped. 

Bowers and Wright obviously prefer to hold the final version of their questionnaire until the conclusion of the survey.  That seems fair given that no mainstream media poll I am aware of discloses in advance when it is fielding a new poll, much less the wording of the questions it plans to ask.  Given Bowers' passionate commitment to greater transparency in polling, I think we can expect an unusual level of disclosure in the days and weeks ahead. 

It will be interesting to watch.  Whatever your views of MyDD's politics, we should thank them for providing, at very least, a great new opportunity to learn something about the survey process.

PS:  Happy Birthday Chris.

Posted by Mark Blumenthal on January 23, 2006 at 02:41 PM in Innovations in Polling, Polling & the Blogosphere, Pollsters | Permalink | Comments (10)

January 21, 2006

Diageo/Hotline Updates

A very busy end of the week left me with only enough time to note the latest Diageo/Hotline poll, released on Thursday (press release, results, presentation), which helps bring us up to date on recent MP posts on outliers and Hillary Clinton's ratings among Democrats.

This most recent Diageo/Hotline poll puts George Bush's job rating at 44% approve and 55% disapprove.  Regular MP readers will remember that the last Diageo/Hotline poll, released in mid-December, appeared to be a bit of an outlier in terms of the Bush job rating (50% approve, 41% disapprove - though results on other questions were generally in line with other surveys).  This latest result looks like a classic example of regression to the mean, as other polls have generally shown no significant trend over the last month.  Note also that the Hotline samples registered voters (rather than adults), which may explain why it typically gets a slightly higher Bush job rating than other surveys. 

Though not included in the presentation online, pollster Ed Reilly noted in a poll briefing on Thursday that the percentage of Democrats giving Hillary Clinton an unfavorable rating has been increasing in recent months, from 11% in October to 19% on this most recent survey.  Her favorable percentage among Democrats remains high (at 75%) but has also fallen slightly in the last month (from 79%). 

[Chart: Hillary Clinton favorable/unfavorable ratings among Democrats, Diageo/Hotline polls]


The Hotline's polling editor Aoife McCarthy also posts to On-Call about the 2008 presidential vote trial heats on this survey matching up Hillary Clinton and John McCain against each other and against "generic" candidates from each party. 

Finally, the survey takes an in-depth look at the Abramoff scandal and perceptions of corruption in government.  One fascinating slide from the poll presentation shows that registered voters divide evenly on whether the terms honest, ethical or immoral apply better to the Democratic Party or the Republican Party, with slight pluralities saying the terms apply to both equally.  However, when asked about "corruption," twice as many say Republicans are corrupt (24%) as say the same of Democrats (12%), but far more say both are corrupt, and only 5% were unsure. 

[Chart: perceptions of party corruption, Diageo/Hotline poll]


Posted by Mark Blumenthal on January 21, 2006 at 12:24 PM in Hillary Clinton, Polls in the News, President Bush | Permalink | Comments (2)

January 19, 2006

Polling on Impeachment - Part 1

At a town hall meeting in San Francisco on Monday, according to a report in the LA Times, Democratic Minority Leader Nancy Pelosi had to shout "to be heard above the boos and catcalls" when she "rejected calls for President Bush's impeachment."  Last week, Elizabeth Holtzman, a former member of Congress who served on the House Judiciary Committee when it took up articles of impeachment against President Nixon, authored a cover story in The Nation that makes a detailed case for the impeachment of President Bush.  Near the end of her piece, Holtzman included this passage:

Organizations like AfterDowningStreet.org and ImpeachPac.org, actively working on a campaign for impeachment, are able to draw on a remarkably solid base of public support. A Zogby poll taken in November--before the wiretap scandal--showed more than 50 percent of those questioned favored impeachment of President Bush if he lied about the war in Iraq [emphasis added]. 

Daunting though it may be, MP hopes to avoid the debate about the merits of impeaching Bush and to focus narrowly on what polls can tell us about the "base of support" for impeachment.  This issue actually raises a number of questions appropriate for this forum:  What is the best way to ask about attitudes regarding impeachment?  What standards should "mainstream" media pollsters use to determine which questions to include in their polls?  Just how many Americans really want to impeach Bush?   How do attitudes about impeachment and Bush now compare to attitudes about impeachment and Bill Clinton in 1998?

I will not try to address all of these questions in one post (though I hope to come back to this thread a few times over the next week or so).  Today I want to start by looking closely at the polling question that Holtzman cited, first asked by pollster John Zogby.  While it does indicate considerable discontent with the President, I believe it falls short as a measure of the "base of support" for impeachment.

Here's the background.  Back in June 2005, pollster John Zogby included the following question on a national survey of "likely voters":

Do you agree or disagree that if President Bush did not tell the truth about his reasons for going to war with Iraq, Congress should consider holding him accountable through impeachment?

The Zogby survey found that 42% agreed and 55% disagreed.  He discussed the results on Keith Olbermann's "Countdown" on MSNBC and a few days later the "Politics" column in the Sunday Washington Post picked up the results.  A subsequent blog item by the Post's Dan Froomkin wondered why "only three mainstream outlets" had made "even cursory mention" of the result. 

Several web sites - particularly the sites AfterDowningStreet.org and Democrats.com (an independent group not to be confused with the official Democratic Party site, Democrats.org) - started an ongoing campaign to bombard various media pollsters, reporters and editors with emails "demanding more polls on impeachment" (yet another topic for a future post). In September, the site AfterDowningStreet.org set about raising money to run their own questions on various "omnibus" polls.   They placed the Zogby impeachment question on a poll by Ipsos Public Affairs in early October (changing only the words "through impeachment" to "by impeaching him").  They also paid Zogby to track his original question again on a survey in early November.  The results were essentially consistent:  On the Ipsos survey, 50% agreed and 44% disagreed.  On the Zogby update in November, 53% agreed and 42% disagreed.

Just last week, AfterDowningStreet commissioned yet another question on a Zogby poll, this one focused on the NSA wiretap controversy:

If President Bush wiretapped American citizens without the approval of a judge, do you agree or disagree that Congress should consider holding him accountable through impeachment.

The results were consistent with the previous versions of the Zogby impeachment question:  52% agreed, 43% disagreed and 6% were not sure or declined to answer. 

Now I'll cut to the chase.  I am not a fan of the Zogby impeachment questions, largely because they tell us very little about what Americans think right now about the merits of impeaching President Bush or removing him from office.  I am also dubious about their value in projecting how Americans might react if the issue were widely debated. 

My skepticism is rooted in the complex nature of the impeachment process.  As we all should have learned in high school civics, impeachment is the process of bringing charges - analogous to a criminal indictment - and is initiated in the House of Representatives.  Once a president is "impeached," a trial is held in the Senate.  If a two-thirds majority of Senators votes to convict, the President is removed from office.

The details of this process are hazy for many Americans.  Late in 1998, as the House was about to vote on whether to impeach President Clinton, CBS News twice asked Americans about the meaning of the term impeachment:

As far as you know, if the full US House of Representatives eventually votes to impeach President Clinton, does that automatically mean that President Clinton will be removed from office, or not?

In October 1998 (on a poll conducted jointly with the New York Times), only 46% offered the correct "no" answer, while more than half either said yes (33%) or were unsure.  Two months later, on the eve of the impeachment vote, CBS obtained a roughly similar result - 54% said no, 30% yes and 16% unsure.

Given this confusion, a pollster should not just throw out the term "impeachment" and assume that all respondents understand it.  Rather, the pollster should word the question so that the meaning of the term is clear and easy to understand and interpret, both for the respondents and for those of us who use the data.  Unfortunately, rather than clarifying the meaning of "impeachment," the Zogby question makes it even more vague: "Should Congress consider holding him accountable through impeachment?" 

What does that mean?  Is the question asking whether Congress should remove Bush from office, begin the process of removing him from office or just "consider" doing so?   Do respondents hear this as a question about the formal process of impeachment or just about "holding him accountable?" If the meaning of the question is unclear to us, it was certainly unclear to the respondents.  So how do we interpret the results? 

Yes, the results do indicate considerable discontent with the president and, as John Zogby himself put it, "just how badly divided this country is over the war."  However, we have many other poll measures on that score.  The most recent ABC/Washington Post poll, for example, found that 39% strongly disapprove of Bush's performance and 43% strongly believe the Iraq War was not worth fighting.  Recent polls by CBS, Time Magazine and Fox News put the number of Americans who believe Bush intentionally or deliberately "misled" the country in making the case for war at somewhere between 44% and 52%.   But discontent is not the same as favoring impeachment or an early removal from office.  If the Zogby question cannot help us distinguish between the two, it is not very useful.

Consider the result of another impeachment question paid for by AfterDowningStreet.org and placed on an automated poll conducted by Rasmussen Reports in October.  To the credit of Rasmussen and the sponsors, this question was much clearer and more direct:  "Should President Bush be impeached and removed from office?"  Nearly a third (32%) said yes, 56% no and 12% were not sure.

So how does that 32% result compare to similar questions asked about previous Presidents?  And what about the "if" clause in the Zogby question ("if President Bush did not tell the truth about his reasons for going to war with Iraq")?  Does that give it value in projecting what public opinion might be in the future?  I'll take up those questions in subsequent posts. 

Posted by Mark Blumenthal on January 19, 2006 at 07:43 AM in Impeachment, Polls in the News, President Bush | Permalink | Comments (10)

January 17, 2006

More Bush Job Rating Polls

I just have time to post links to a few new surveys that include the Bush job rating, either ones I had not seen last week or ones released since.  These results are consistent with those summarized here last week.  I see no lasting or statistically significant change in Bush's overall job rating since early December:

  • SurveyUSA has just posted Bush job rating results from a new round of automated surveys conducted in all 50 states.  It includes a combined weighted national average (see the sketch after this list) that shows 41% approve and 56% disapprove of his job performance.    The results were one point different in each direction in December:  40% approve, 57% disapprove.
  • AP-IPSOS released results last week from a survey conducted January 3-5.  It showed the Bush job rating at 40% approve, 59% disapprove; down two points from their last survey in early December (42% approve, 57% disapprove). 
  • Harris Interactive released results from a telephone survey conducted January 6-9 that shows 43% rating Bush's performance as excellent or good, 56% as "only fair or poor."  That marks a significant increase since their last survey in November, when 34% were positive and 65% negative.  Unlike the other surveys we have been discussing, Harris did not poll on Bush in December. 
  • Also, a Zogby poll conducted Jan 9-12 shows Bush's job rating at 39% excellent or good, 60% fair or poor.   Zogby's last telephone poll, conducted December 6-8, showed a slightly lower approval rating (38% excellent/good, 62% fair/poor). 
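
The SurveyUSA item above mentions a combined weighted national average. Here is a minimal sketch of that kind of population-weighted roll-up, using placeholder state figures rather than SurveyUSA's actual results:

```python
# Combine state-level approval into a national figure by weighting each state
# by its share of the adult population. All numbers below are placeholders.
states = {
    #          (approve %, adult population in millions)
    "State A": (35, 27.0),
    "State B": (52, 17.0),
    "State C": (44, 14.0),
    "State D": (39, 9.5),
}

total_pop = sum(pop for _, pop in states.values())
national = sum(appr * pop for appr, pop in states.values()) / total_pop
print(f"Weighted national approval: {national:.1f}%")
```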

Posted by Mark Blumenthal on January 17, 2006 at 04:39 PM in Polls in the News, President Bush | Permalink | Comments (3)

January 16, 2006

On "Oversamples" and the AP-IPSOS MLK Holiday Poll

The Associated Press, in its usual partnership with IPSOS, released a survey yesterday (story, results & methodology) on attitudes on today's holiday honoring Martin Luther King, Jr. and his dream of racial equality:

Three-quarters of those surveyed say there has been significant progress on achieving King's dream. But only 66 percent of blacks felt that way...

Only 23 percent of respondents say they will do anything to commemorate the national holiday that took effect in 1986 after a lengthy campaign in Congress to honor King. A solid majority of blacks, 60 percent, say they will get involved in holiday activities.

If you do nothing else to commemorate the holiday, read all of Will Lester's AP story on the poll and its implications. 

To allow for a sufficiently large sample of African Americans, AP-IPSOS conducted an "oversample." That term often confuses readers and deserves a bit more explanation.  The AP methodology summary tells us that they interviewed 1,242 adults, including 312 blacks, and adds this sentence: 

The total sample includes an oversample of blacks, completed in part by interviewing people who self-identified as black in previous Ipsos telephone surveys.

Do you find that explanation a bit confusing?  If so, here's how it works: AP-IPSOS typically conducts a representative sample of about 1,000 adults.  I am going to assume they did the same on this poll, and guess at the other numbers involved.  A base sample of 1,000 would have included a representative sample of African Americans.  In this case it appears to have been roughly 70 interviews.**  Media pollsters typically shy away from reporting on subgroups that small since they have very large statistical sampling error (for n=70 it would be at least ±11 percentage points).   So they sometimes conduct an "oversample" of additional interviews among respondents in the subgroup of interest to increase the sample size.  In this case AP-IPSOS conducted (again, my guess) roughly 242 additional interviews among "people who self-identified as black in previous Ipsos telephone surveys."
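
Here is a minimal sketch of that sampling-error arithmetic; the n=70 figure is MP's guess, as noted above:

```python
import math

def moe(n, p=0.5, z=1.96):
    """95 percent margin of error, in points, at the most conservative p=0.5."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(f"n=1,000 (full sample):       +/- {moe(1000):.1f} points")  # ~3.1
print(f"n=70   (base-sample blacks): +/- {moe(70):.1f} points")    # ~11.7
print(f"n=312  (with oversample):    +/- {moe(312):.1f} points")   # ~5.5
```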

In order to report results among all adults, the pollsters weight the total pool of interviews (n=1,242 in this case) so that the proportion of African Americans in the weighted sample matches Census estimates for the U.S. population (roughly 11%).  This is an important point: Results reported for "all adults" in this AP-IPSOS poll come from a statistically adjusted sample that "weights down" the African-American oversample to its appropriate size in the U.S. adult population. 
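
And a minimal sketch of the "weighting down" step itself; the 11% target is the Census figure cited above, while the split of the 312 interviews follows MP's guesses:

```python
TOTAL_INTERVIEWS = 1242
BLACK_INTERVIEWS = 312          # ~70 from the base sample + ~242 oversample (MP's guess)
CENSUS_BLACK_SHARE = 0.11       # target share of U.S. adults

unweighted_share = BLACK_INTERVIEWS / TOTAL_INTERVIEWS            # ~25% of interviews
black_weight = CENSUS_BLACK_SHARE / unweighted_share              # ~0.44
other_weight = (1 - CENSUS_BLACK_SHARE) / (1 - unweighted_share)  # ~1.19

print(f"Unweighted black share of interviews: {unweighted_share:.1%}")
print(f"Weight applied to black respondents:  {black_weight:.2f}")
print(f"Weight applied to everyone else:      {other_weight:.2f}")
# Results for "all adults" are then computed from the weighted interviews, so the
# oversample improves precision for the subgroup without distorting the total.
```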

Most pollsters will also use Census estimates to weight on other demographic factors to correct for small differences resulting from sampling error or non-response bias.  In this case, AP-IPSOS tells us that they also weighted by other factors "such as age, sex, region and income."

**Again, if I'm guessing right, the AP-IPSOS base sample included roughly 70 interviews, or 7%, among African Americans.  A perfectly representative sample of U.S. adults would have been roughly 11% African-American.  Why the (apparent) difference?  Response rates tend to be lower in urban areas, and as a result, unweighted national samples typically under-represent African Americans.  As noted here previously, most national pollsters typically weight African-Americans up slightly to match US Census estimates.

Posted by Mark Blumenthal on January 16, 2006 at 10:46 AM in Polls in the News | Permalink | Comments (2)