« June 2006 | Main | August 2006 »

July 28, 2006

Bush Job Approval Update

This week brought a slew of new national surveys, but the trend in the Bush approval rating appears to be essentially flat since mid-June. 

The new surveys are from Gallup/USA Today (story, results, Gallup reports), NBC News/Wall Street Journal (story, results), CBS/New York Times (CBS story, Mideast results, Bush/Congress results, NYT story, results), Diageo/Hotline/Financial Dynamics (release, results, slides).  The table below compares this week's results to comparable surveys conducted in mid-June, and also includes results from automated pollster Rasmussen.  This nearly apples-to-apples comparison shows two polls slightly up, two slightly down, all within sampling error, and the overall average unchanged (38% approve, 58% disapprove).

Bush_job_approval
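
For readers who want to replicate this sort of apples-to-apples averaging, here is a minimal sketch.  The poll numbers in the code are placeholders, not the actual figures from the table above; only the averaging logic matters.

```python
# Sketch of an apples-to-apples comparison: average the approve/disapprove numbers
# across pollsters for two rounds of surveys. All values below are hypothetical.
latest = {"Pollster A": (37, 59), "Pollster B": (39, 56), "Pollster C": (36, 58),
          "Pollster D": (40, 57), "Pollster E": (40, 58)}
previous = {"Pollster A": (38, 58), "Pollster B": (37, 58), "Pollster C": (37, 58),
            "Pollster D": (39, 58), "Pollster E": (39, 58)}

def average(polls):
    approve = sum(a for a, _ in polls.values()) / len(polls)
    disapprove = sum(d for _, d in polls.values()) / len(polls)
    return round(approve), round(disapprove)

print("previous round average:", average(previous))
print("latest round average:  ", average(latest))
```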


As usual, Professor Franklin takes a far more sophisticated and graphical approach and reaches the same conclusion.  The direction of the most recent trend depends on how sensitively he sets his regression model to catch short-term changes.  His standard and more stable blue trend line shown below (which did not include the latest Hotline poll) shows the recent increase in Bush's approval rating continuing but flattening, while the more sensitive red line shows it down slightly since mid-June:

Franklin_currentbushapproval20060725b


Franklin updates the chart to include the Hotline numbers and concludes:

[Bush's job approval] looks like it has been pretty stable since June 11, despite three polls that reached 40% or more. While my standard blue trend sees some slight continuing increase, the more responsive red line estimator sees flat and a very slight recent decline. But I mean VERY slight. None of these estimates is clear enough to be willing to declare a trend has begun either way.
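
Franklin's exact model is not reproduced here, but a local regression (lowess) fit with two different bandwidths captures the distinction he describes: a wide smoothing window behaves like the stable blue line, a narrow one like the more responsive red line.  The approval series in the sketch below is simulated, not real polling data.

```python
# Why trend "sensitivity" matters: the same (simulated) approval series smoothed with
# a wide lowess window (stable trend) and a narrow one (responsive trend).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
days = np.arange(60)                                          # days since some starting point
approval = 34 + 0.08 * days + rng.normal(0, 1.5, days.size)   # simulated poll readings

stable = lowess(approval, days, frac=0.6)      # wide window: slow-moving "blue line"
responsive = lowess(approval, days, frac=0.2)  # narrow window: catches short-term wiggles

print("latest stable estimate:     %.1f" % stable[-1, 1])
print("latest responsive estimate: %.1f" % responsive[-1, 1])
```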

Posted by Mark Blumenthal on July 28, 2006 at 06:47 AM in President Bush | Permalink | Comments (0)

July 27, 2006

Even More on Measurement Error

Tuesday's post on a question wording experiment by Rasmussen Reports yielded some interesting comments and email.  Here is a sample.

The Rasmussen question poses four answer categories (strongly approve, somewhat approve, somewhat disapprove or strongly disapprove) while the traditional job approval question poses just two (approve or disapprove).   According to their report, the Rasmussen experiment showed that the four-category version produced a smaller "don't know" response and a higher approval rating. 

My observation that the word "somewhat" softens the choice and often makes the "somewhat" positive choice (somewhat approve, somewhat favor, etc.) more appealing provoked an excellent question from reader Jon Willits in the comments section:

Couldn't this work both ways? Might there be some people who would feel troubled saying "Disapprove", but would feel ok about saying "Somewhat Disapprove?" Or is there some psychological reason to think this would only work in the positive direction?

The short answer is, yes, there is a psychological reason why "somewhat approve" may be more attractive than "somewhat disapprove," particularly among respondents who are expending minimal effort to answer the question.  Survey methodologists have long noticed a phenomenon we call "acquiescence," which is the tendency to choose the positive response regardless of the content of the question.   See this 1999 article by Prof. Jon Krosnick from the Annual Review of Psychology for a brief review of the "voluminous and consistently compelling" evidence as well as the psychological context.   

Now, the theory behind acquiescence does not guarantee that asking respondents to choose between four categories of approval will always yield a bigger approval percentage than asking them to choose between just approve or disapprove.   From Rasmussen's description, it appears it did in one experiment, but we have not seen the actual data. Moreover, a survey-based experiment -- like any survey -- is always prone to random variation. 

Along those lines, I also heard from Doug Rivers at Polimetrix, who reports on several similar wording experiments conducted using their online panel.  They ran a very similar test and found that a version of the approval question virtually identical to Rasmussen's resulted in a "not sure" response of less than one percentage point, compared to 4% for the classic two-way choice ("approve or disapprove").  In this case, the lower "not sure" response did not translate into a stronger "approval" rating.   In fact, both the approve and disapprove categories were slightly higher on the four-way choice.   

Finally, I also heard from Emory political science Prof. Alan Abramowitz, an occasional contributor to DonkeyRising, who reports discovering a seemingly similar effect in the 2004 exit polls.  Abramowitz was trying to compare self-reported ideology among primary and general election voters.  He describes his discovery:

The questions were slightly different [see reproductions below].  The general election exit poll simply asked respondents to describe themselves as liberal, moderate, or conservative.  The primary exit polls give them five categories to choose from: very liberal, somewhat liberal, moderate, somewhat conservative, and very conservative.  The percentage of moderates was considerably higher in the general election exit poll than in the primary exit polls for the same parties from the same states, but I think that this is probably not because Democratic primary voters are more liberal and Republican primary voters more conservative than Democratic and Republican general election voters, but because of the differences between the 3-option and 5-option questions [emphasis added].

Abramowitz may be right, of course, but the difference in the question makes it impossible to know for sure.  And that is the point.  Differences in question language or in the number or language of the answer choices can produce different results.  Caveat pollster.   

***

The exit poll ideology question from the 2004 primaries:

Exit_ideol_pr


The exit poll ideology question from the 2004 general election:

Exit_ideol_gen


Posted by Mark Blumenthal on July 27, 2006 at 08:43 AM in Measurement Issues | Permalink | Comments (0)

July 25, 2006

Rasmussen Update: A Lesson in Measurement Error

Although still playing catch-up on the "day job," I want to highlight something that has appeared without fanfare over the last week on the Bush job approval page of the Rasmussen Reports website.  Rasmussen, as most regular readers know, conducts surveys using an automated methodology that asks respondents to answer questions by pushing buttons on their touch-tone telephones.  I have looked closely at the Bush job rating as reported by Rasmussen and noted that their surveys report an approval percentage that is consistently 3 to 4 percentage points higher than the results of other national surveys of adults (see the graphic below, produced by Charles Franklin).   Last week, Rasmussen offered this explanation: 

When comparing Job Approval ratings between different polling firms, it's important to focus on trends rather than absolute numbers. One reason for this is that different firms ask Job Approval questions in different ways. At Rasmussen Reports, we ask if people Strongly Approve, Somewhat Approve, Somewhat Disapprove, or Strongly Disapprove of the way the President is performing his job. This approach, in the current political environment, yields results about 3-4 points higher than if we simply ask if people if they approve or disapprove (we have tested this by asking the question both ways on the same night). Presumably, this is because some people who are a bit uncomfortable saying they "Approve" are willing to say they "Somewhat Approve." It's worth noting that, with our approach, virtually nobody offers a "Not Sure" response when asked about the President.

Although I had not considered this possibility before, Rasmussen's finding makes a lot of sense.  In my own experience, the word "somewhat" (as in somewhat agree, somewhat favor, somewhat approve, etc.) softens the choice and makes it more appealing. 

Rasvsallpolls200406


[Click the graphic for a full size version]

It is worth noting that virtually all of the other national public pollsters begin with a question posing the simple two-way choice: "Do you approve or disapprove of the way George W. Bush is handling his job as president?"   A few then follow up with a second question, such as the one used by the ABC News/Washington Post poll:   "Do you approve/disapprove strongly or somewhat?"  AP-IPSOS uses a similar follow-up, as does LA Times/Bloomberg, Diageo/Hotline and Cook/RT Strategies.

The key point:  On the Rasmussen surveys, respondents choose between four categories:  strongly approve, somewhat approve, somewhat disapprove and strongly disapprove.   On all of the other conventional surveys -- including the ones listed above that later probe how strongly respondents approve -- respondents initially choose between just two categories:  approve and disapprove.**   They will only hear the intensity follow-up (strong or somewhat) after answering the first part of the question.   

So except for the pollsters (Harris and Zogby) that use entirely different answer categories (excellent, good, fair or poor), Rasmussen is the only national pollster I am aware of that presents an initial choice involving more than "approve" or "disapprove."  As Rasmussen argues, the consistently higher percentage he gets for the Bush job approval rating may well be an artifact of the way he asks the question.   If so, the difference in the graph above may be due to what pollsters call "measurement error."

I hope Scott Rasmussen will consider releasing the data gathered in their experiment that "tested this by asking the question both ways on the same night."   This sort of side-by-side controlled experimentation (where the pollster randomly divides their sample and asks slightly different versions of the same question on each half) is a great example of the scientific approach to questionnaire design and survey analysis.  We would all benefit by learning more.
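
As an illustration of how one might analyze such a split-sample experiment, here is a minimal sketch of a pooled two-proportion z-test comparing the approval percentages from the two half-samples.  The counts are hypothetical, not Rasmussen's actual data.

```python
# Two-proportion z-test for a split-sample question wording experiment.
# Counts below are hypothetical.
from math import sqrt
from scipy.stats import norm

n1, approve1 = 500, 215   # half-sample asked the four-category version
n2, approve2 = 500, 195   # half-sample asked the simple approve/disapprove version

p1, p2 = approve1 / n1, approve2 / n2
pooled = (approve1 + approve2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - norm.cdf(abs(z)))          # two-sided test

print(f"four-category: {p1:.0%}, two-category: {p2:.0%}, z = {z:.2f}, p = {p_value:.3f}")
```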

**All of the pollsters mentioned above allow for a volunteered "don't know" response in some form, although the procedures that determine how hard live interviewers press uncertain respondents for an answer may vary among polling organizations. 

Posted by Mark Blumenthal on July 25, 2006 at 04:24 PM in IVR Polls, Measurement Issues | Permalink | Comments (7)

July 21, 2006

Quinnipiac's Latest Connecticut Poll

It’s been a busy week, with far more interesting topics than I have had time to blog about.  Let's start with the Quinnipiac Poll released yesterday that puts Ned Lamont ahead of Sen. Joe Lieberman among likely Democratic primary voters in Connecticut by a "razor thin" margin (51% to 47%).   The Democratic primary results are part of a larger survey of 2,502 registered voters in Connecticut that includes many more questions, including hypothetical general election match-ups for both Senate and Governor.   But the Lieberman-Lamont race is the one everyone seems most interested in, so let me add a few comments. 

Regular readers will remember the recent post about the difficulty of polling in this race. This latest release helps clarify a few things, at least with respect to the Quinnipiac poll, which uses a methodology similar to that used by the national public pollsters.  Doug Schwartz, director of the Quinnipiac poll, confirms by email that they used a random digit dial (RDD) methodology to draw a sample covering every household in Connecticut with a working landline telephone and then interviewed 2,502 respondents who self-identified as registered voters.  Of these, 962 (or 38%) identified themselves as registered Democrats. 

If I'm reading the statistics correctly, the comparable percentage of Democrats among "active" registered voters reported by the Connecticut Secretary of State last year was lower (33%).  But keep in mind both the anecdotal reports of unaffiliated voters switching their registration to Democrat in recent weeks and the possibility of over-reporting of partisan registration due to the sort of "social discomfort" that often leads some respondents to say they voted when they didn't.

From the 962 self-identified registered Democrats, Quinnipiac identified 653 as "likely Democratic primary voters."  According to Schwartz, that process involved "questions that measured intention to vote, interest in the election, and interest in politics."  In other words, those indicating the greatest interest and likelihood to vote in the primary were designated likely voters.  "Our likely voter selection was guided by what has worked well for us in the past," Schwartz added.  "We used screens that have done a good job in predicting past elections.  They are not meant to try to predict the voter turnout."

That last point is important.  As I argued just before the 2004 elections, the process of calibrating a poll's likely voter model to match a specific level of turnout is inexact and involves far more art than science.  The Quinnipiac pollsters made an educated guess about turnout and adjusted their models accordingly.  But we should not assume that the "cut off" they used (68% of registered Democrats) amounts to a prediction of the level of turnout. 
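
For the record, here is the arithmetic behind those screening figures, using the numbers reported above:

```python
# Back-of-the-envelope check of the Quinnipiac screening figures cited above.
registered_voters = 2502       # self-identified registered voters interviewed
registered_dems = 962          # of whom self-identified as registered Democrats
likely_primary_voters = 653    # designated "likely Democratic primary voters"

print(f"Democrats as share of sample: {registered_dems / registered_voters:.0%}")              # ~38%
print(f"likely-voter cut-off among Democrats: {likely_primary_voters / registered_dems:.0%}")  # ~68%
```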

Although most pollsters will tell you to be cautious about surveys just before this sort of election, there are two sets of conclusions we can draw from these data:

  • First, whatever we might think about how well the Quinnipiac poll models turnout, they have done so consistently.  So the trends in the survey are meaningful and indicate that Lamont has clearly made significant gains since early June, when Lieberman led by 15 points (55% to 40%).
  • Second, the Quinnipiac poll shows that the level of turnout will matter.  For example, among the most likely Democratic primary voters, Lieberman gets a net negative rating (35% favorable, 39% unfavorable).   But among all Democratic identifiers,** Lieberman is more popular  (40% favorable, 29% unfavorable).   Similarly, likely Democratic primary voters are closely divided on whether Lieberman "deserves to be reelected" (46% yes, 45% no), while all Democrats are more positive (51% yes, 37% no).

All of this provides an important warning to all those scrutinizing the polls that may be coming out of Connecticut over the next few weeks.  Be careful about comparing results from the Quinnipiac poll to those we may see from other pollsters, and vice versa.  Poll-to-poll variation across surveys done by different polling organizations may have more to do with differences in the way they define likely voters than with real trends. 

Finally, both campaigns have their own pollsters and are presumably conducting their own tracking polls.  It would be truly interesting to see how those results compare and contrast with the public polls, since I assume (though do not know for certain) that they are drawing samples from registered voter lists rather than using the RDD methodology.  Unfortunately, at this stage campaigns usually keep their internal surveys under tight wraps.  Another topic for another day. 

** The Quinnipiac release includes tabulations of the results by party.  According to Doug Schwartz, those tabulations are based on a question about party identification ("Generally speaking, do you consider yourself a Republican, a Democrat, an independent, or what?") rather than the question about party registration they use to help identify likely voters ("Are you registered as a Republican, Democrat, some other party or are you not affiliated with any party?").

Posted by Mark Blumenthal on July 21, 2006 at 10:51 PM in Likely Voters, The 2006 Race | Permalink | Comments (2)

July 19, 2006

New Pew Blogger Study

The Pew Internet and American Life project today released a fascinating study on Americans who consider themselves bloggers -- that is, who say they maintain "a web log or 'blog' that others can read on the web."  A summary is available here, the full report and questionnaire here.  I have not had a chance to read it all yet, but have heard rumors about it and have been eagerly awaiting its release for months.   I have seen other surveys of bloggers, but none I am aware of were based on a truly projectable random sample survey. 

The relatively small size of the blogger population makes the mechanics of such a survey difficult.  In this case, the researchers identified bloggers using the random sample surveys of all Americans conducted by the Pew Internet project in 2004 and 2005.  Respondents who identified themselves as bloggers in the first interview were called again (sometimes many months later) and asked to complete a second survey on blogging.  So the good news is, this survey is based on a random sample of all adults.  The bad news is, respondents had to agree to be interviewed twice, and the combination of the relatively small size of the blogger population and attrition between the first and second interviews makes for a small sample size -- 233 self-identified bloggers.  Thus the important disclaimer in the methodology section: "The low number of respondents is a significant limitation to this study." 
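
To put that small sample in perspective, the conventional 95% margin of error for a simple random sample of 233 -- ignoring any design effects from weighting and the two-stage selection -- works out to roughly plus or minus six and a half points:

```python
# Conventional 95% margin of error for a proportion near 50% with n = 233.
# Ignores design effects, so treat it as a floor on the real uncertainty.
from math import sqrt

n = 233
moe = 1.96 * sqrt(0.5 * 0.5 / n)
print(f"approximate margin of error: +/- {moe:.1%}")   # about +/- 6.4 points
```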

Nonetheless, the report appears to provide a very comprehensive look at the population of those who blog.  I'm looking forward to reading it in full, and may update with more comments in a few days. 

One tantalizing bit of news in the summary caught my eye:

Related surveys by the Pew Internet & American Life Project found that the blog population has grown to about 12 million American adults, or 8% of adult internet users and that the number of blog readers has jumped to 57 million American adults, or 39% of the online population.

I had reported on these statistics previously.  Pew's estimate of blog readers would certainly represent a big jump, from 27% of adult Internet users in May/June 2005 to 39% in January 2006.  However, note footnote #2 in their latest report on Internet activities. The wording of the question changed slightly.  In earlier studies, they asked:  "Do you ever use the internet to read someone else's web log or blog?"  In this latest study, they asked:  "Do you ever use the internet to read someone else's online journal, web log or blog? [emphasis added]"   

As I read it, absent a side-by-side experiment testing the two versions of the question, we cannot be absolutely certain that the increase is real and not the result of the change in wording.   However, trend aside, if you consider "online journal, web log or blog" a reasonable definition, then the number of Americans who have "ever" read a blog -- 57 million, or 28% of the population (if I'm doing the math correctly) -- is quite large. 
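
Working backward from the figures quoted above, the implied population totals are easy to check; this is just arithmetic on the numbers in the Pew summary:

```python
# Back out the totals implied by the Pew figures quoted above.
blog_readers = 57_000_000          # "57 million American adults"
share_of_online = 0.39             # "39% of the online population"
bloggers = 12_000_000              # "12 million American adults"
share_of_internet_users = 0.08     # "8% of adult internet users"

print(f"implied online population: {blog_readers / share_of_online / 1e6:.0f} million")         # ~146 million
print(f"implied adult internet users: {bloggers / share_of_internet_users / 1e6:.0f} million")  # ~150 million
```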

Posted by Mark Blumenthal on July 19, 2006 at 04:00 PM in Polling & the Blogosphere | Permalink | Comments (3)

July 17, 2006

A Million Visits

Without any fanfare about a week ago, this site quietly surpassed the mark of one million unique visits since October 2004.  While the most popular sites typically get that level of traffic in a week or less, the one million visit milestone remains one for which I am very grateful.  Thank you to all who read regularly for your continuing interest and confidence.

It is a little early for a formal announcement, but exciting changes are in store for MP in the very near future.  I will be at work on these over the next few weeks, which may slow the pace of posting somewhat.  But be patient, as things are going to get a lot busier and more interesting in the next few months.

Posted by Mark Blumenthal on July 17, 2006 at 03:55 PM in MP Housekeeping | Permalink | Comments (2)

July 14, 2006

Lieberman Push Polls?

While public polls have been few and far between in the Connecticut Democratic primary, reports of "push polling" have been bubbling up through the blogosphere.  Some of the most recent have been quite detailed and worthy of further discussion, if only because, from what I can tell, these do not deserve the "push poll" label.  Rather, the calls described appear to be internal campaign polls testing negative messages. 

I have seen two sets of reports.  The first round drew the usual over-the-top rhetorical blasts.  Calls received in Connecticut in late June were described by supporters of Ned Lamont as "Lieberman push polling" (here, here and here), as well as "Lieberman's Latest Dirty Trick," and "the sleaziest of campaign tactics" (by Kos himself).   

The most recent and interesting report comes from a correspondent of BranfordBoy on the blog My Left Nutmeg.  The respondent took detailed notes on all of the questions and concluded, "today, I received my first recognizable Push Poll."   The report is worth reading in full, because this call was almost certainly an internal campaign poll and not something deserving of the label "push poll" (a point echoed -- to their credit -- by both My Left Nutmeg and the Connecticut Blog).

I have discussed push polling in more detail previously, but the defining characteristic of a true "push poll" is fraud.  It is not a poll at all -- not an effort to measure either existing opinions or reactions to political argument -- but rather an attempt to spread a rumor under the guise of a survey.  True "push polls," those that are aptly described as "the sleaziest of campaign tactics," typically involve untrue or outrageous attacks that the purveyors do not dare communicate openly.  The true "push poll" is usually just a question or two:  A question about vote preference, the scurrilous attack and then a quick goodbye.  Since the "push pollster" does not care about measuring or counting anything, they do not waste time on questions about other issues or demographics. 

In this case, the poll described by BranfordBoy's correspondent has all the hallmarks of an internal campaign poll, most likely conducted on behalf of the Lieberman campaign.  It asks some questions to identify likely primary voters, a job rating for Lieberman, favorable ratings for Bush, Dodd, Lamont and Lieberman, vote questions on the gubernatorial and Senate primaries, a "certainty" follow-up regarding the Senate vote (which the pollster uses to identify soft or "persuadable" supporters of each candidate), a battery of five questions measuring whether various traits apply to Lieberman or Lamont, and a set of questions to gauge reactions to the recent campaign debate.   Finally, just before asking a series of demographic questions, the survey poses two negative arguments (or "messages") about Lamont.  As reported by the blog correspondent:

Do the following questions make you feel a little less, more much less comfortable with Ned Lamont:

He refuses to release his tax information. At this point, I told the pollster that the statement was incorrect, and that she was acting unethically by repeating it. She asked the question and I told her it made me feel much less comfortable with Joe Lieberman that people would be repeating such false information.

She went on to ask about how Ned Lamont's claim that he would outlaw all earmarks made me feel. I repeated that this was false information and it made me much less comfortable with Joe Lieberman. I urged her to stop repeating false information.

Note two things about these questions.   First, they come at the end of the survey, after questions that measure current vote preference or perception of the candidates.  This is the standard format commonly used by campaign pollsters, including yours truly. 

Second, consider the content of the questions. I certainly do not want to get into an argument about how fair or appropriate these charges may be, but both questions involve arguments that Joe Lieberman made openly in the Lamont-Lieberman debate.  Lieberman challenged Lamont to release his tax returns, and when Lamont did not answer directly, Lieberman concluded, "he is saying he will not release his returns."  According to the Stamford Advocate, however, the next day Lamont "changed his mind" and "decided to release his 2005 tax return 'upon filing.'"      

When asked by Lieberman during the debate whether he would support "earmarks that are good," Lamont replied: "I think we should outlaw these earmarks. I think they corrupt the political process. I think they are written by lobbyists and they're wrong."

Although the first round of reports on calls labeled as "push polls" did not have the same question-by-question specificity as the report from the My Left Nutmeg correspondent, most mention questions describing Lamont's wealth and background and claims that he voted with Republicans as a Greenwich Selectman.  Again, I will leave it to others to debate the accuracy of those claims, but they closely resemble arguments made by Lieberman in the debate and in his paid advertising.

My point is that the "arguments" tested mirror the campaign rhetoric of the Lieberman campaign and appear designed to test reactions to that rhetoric.  As such, they deserve the same level of scrutiny as any charge or statement made in the political realm.  Blatantly untrue statements are unethical, whether part of a poll, a campaign mailer or a television ad.  But as one learns from following debates in the blogosphere, truth on such questions is often in the eye of the beholder.  Those on opposite sides of an issue have a way of reaching very different conclusions about the same set of objective "facts."   Such disagreements are often what politics is all about.  Negative rhetoric alone does not deserve to be labeled a "dirty trick," nor does the testing of such rhetoric constitute a "push poll."

Finally, a point of clarification:  I saw several comments in these reports speculating about legal requirements for pollsters to identify themselves.  Federal law does make it illegal to conduct fundraising or telemarketing under the guise of a survey, but as far as I know, no federal or state law requires survey researchers using live interviewers to identify themselves or their clients.  Automated calls appear to be an exception.  Also, most research firms consider it good practice to identify the name of the call center on request.  The ethical code "Respondent Bill of Rights" of the Council for Marketing and Opinion Research (CMOR) -- an organization that includes many of the large survey call centers -- ~~obligates its members to~~ recommends that its members identify the research company's name and the nature of the survey (but not the identity of the client) to respondents on request.

CORRECTION:  The "annonymous" commenter below is right, at least about Wisconsin.  Although I had checked on Friday, I was unaware of the Wisconsin law that obligates those who conduct surveys in that state on behalf of political campaigns to disclose who paid for the poll to the respondents on request.  I am also told that a similar law exists in Virginia, although I have not yet been able to locate the text of any such law. 

Further, as should be evident from the strike-thru corrections above, I misstated the nature of the CMOR Respondent Bill of Rights.  CMOR is not a standard-setting organization (like AAPOR, MRA, ESOMAR or CASRO), and thus recommends, but does not require, that its members abide by the terms of the Respondent Bill of Rights.   

My apologies to all. 

Posted by Mark Blumenthal on July 14, 2006 at 05:06 PM in Push "Polls" | Permalink | Comments (19)

July 13, 2006

The Fix: IVR Surveys & Response Rates

Yesterday (just before the Typepad outage that prevented me from posting all day), Chris Cillizza's The Fix blog at WashingtonPost.com took a helpful look at the "Pros and Cons of Auto-Dialed Surveys" and said some kind words about MP in the process.  Thanks Chris!

In the process, Cillizza made a quick reference to the issue of response rates:

A traditional live interview telephone poll has a response rate of roughly 30 percent -- meaning that three out of every ten households contacted participate in the survey. The polling establishment has long held that people are less likely to respond to an automated survey than a call from a real person, meaning that auto-dialed poll have even lower response rates and therefore a higher possibility of bias in the sample. Neither Rasmussen nor Survey USA makes their response rates public, although, in fairness, neither do most media outlets or major partisan pollsters.

A few additional points:

First, Cillizza's quick definition of response rates is close (and arguably close enough for his article), but not exactly right.  Generally speaking, the response rate has two components:  (1) the contact rate, or the percentage of sampled households that the pollster is able to reach during the course of the study, and (2) the cooperation rate, or the percentage of contacted households that agree to complete the survey rather than hanging up.  So the response rate tells us the percentage of eligible sampled households with which the pollster is able to make contact and complete an interview.   
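
As a rough illustration of how the two components combine -- the rates below are made up, not from any particular survey -- the overall response rate is approximately the product of the contact rate and the cooperation rate:

```python
# Rough decomposition: response rate ~= contact rate x cooperation rate.
# The rates below are illustrative only.
contact_rate = 0.60        # share of eligible sampled households ever reached
cooperation_rate = 0.50    # share of contacted households that complete the interview

response_rate = contact_rate * cooperation_rate
print(f"approximate response rate: {response_rate:.0%}")   # 30%
```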

Second, "typical" response rates are difficult to boil down to a single number, as they vary widely depending on the organization that does the survey, how they do it -- and perhaps most important -- how they calculate the response rate.  The calculation gets complicated because random digit dial (RDD) samples include some ultimately uknown portion of non-working numbers that ring as if they are live.  Another problem is that some of the sampled numbers reach businesses, government offices, fax machines or other numbers that are not eligible for the survey and, therefore, should not be included in the response rate calculation.  The pollster rarely knows precisely how many numbers are ineligible, and must use some estimate to calculate the response rate.  The pollster also needs to decide how to treat partial interviews -- those where the respondent answers some questions, but hangs up before completing the interview. 

The American Association for Public Opinion Research (AAPOR) publishes a set of standard definitions for calculating response rates (and a response rate calculator spreadsheet), but the various technical issues outlined above make the calculations amazingly complex.  The AAPOR definitions currently include over 30 pages of documentation on how to code the final "disposition" of each call to facilitate six different ways to calculate a response rate.  Gary Langer, the ABC News polling director, addressed many of the technical issues of response rate calculations in an article available on the ABC web site. 
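
For the curious, here is a minimal sketch of AAPOR's Response Rate 3 -- the version discussed below -- which discounts cases of unknown eligibility by an estimated eligibility rate "e."  The call dispositions in the sketch are hypothetical.

```python
# AAPOR Response Rate 3 (RR3): completed interviews divided by all estimated eligible
# cases, with unknown-eligibility cases discounted by an estimated eligibility rate e.
# Disposition counts below are hypothetical.
I  = 400    # complete interviews
P  = 50     # partial interviews
R  = 600    # refusals and break-offs
NC = 500    # non-contacts (eligible households never reached)
O  = 50     # other eligible non-interviews
U  = 900    # cases of unknown eligibility (ring-no-answer, always busy, etc.)
e  = 0.40   # estimated share of unknown-eligibility cases that are actually eligible

rr3 = I / ((I + P) + (R + NC + O) + e * U)
print(f"RR3 = {rr3:.1%}")
```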

The most comprehensive report on response and cooperation rates for news media polls I am aware of was compiled in 2003 by three academic survey methodologists:  Jon Krosnick, Allyson Holbrook and Alison Pfent.  In a paper presented at the 2003 AAPOR Conference, Krosnick and his colleagues analyzed the response rates from 20 national surveys contributed by major news media pollsters.  They found response rates that varied from a low of 4% to a high of 51%, depending on the survey and method of calculation.  The values of AAPOR's Response Rate 3 (which aims to estimate the unknown eligible numbers) ranged from 5% to 39% with an average of 22% (see slides 8-9).

But keep in mind that many of these surveys were conducted by national media pollsters (such as CBS News/New York Times and ABC News/Washington Post) whose familiar brand names typically help increase participation.  Surveys by pollsters without well-known brand names -- such as those conducted by yours truly -- tend to get lower cooperation rates.  Also, the data from Krosnick et al. are already three years old.  Current response rates are probably a bit lower. 

Finally, we have the issue of how response rates for automated polls compare to those using live interviewers.  Cillizza is right that most public pollsters -- including Rasmussen and SurveyUSA -- do not routinely publish response rate statistics along with survey results.  However, SurveyUSA has posted a chart on their web site that shows eight years of response and refusal rates, although they have not updated the graphic with surveys conducted since 2002. 

For 2002, Survey USA's graph indicates a response rate -- using AAPOR's Response Rate 4 (RR4) -- of roughly 10%.   Krosnick's 2003 report showed an average RR4 of 22% for national media polls, with a range between 5% and 40%.  But keep in mind that Krosnick's data were for national surveys.  Virtually all of the polls by SurveyUSA in that period were statewide or local.

Posted by Mark Blumenthal on July 13, 2006 at 06:47 AM in IVR Polls, Response Rates, Sampling Issues | Permalink | Comments (1)

July 11, 2006

Gallup: Bush Job Approval at 40%

Gallup reported on their latest national survey today, which shows the Bush job approval rating bumping up to 40% (the Gallup release is free to all for today, to subscribers only after that). 

Charles Franklin plots the Gallup numbers, which closely parallel his regression line, a trend estimate based on all available public polls.  Both now show the Bush job rating at just over 40%.   Thus, looking at all available polls, rather than just one at a time, we can clearly see that the Bush rating has increased significantly since hitting bottom in mid-May, but it remains a good ten points lower than Bush's ratings in early 2005. 

Franklin_currentbushapproval20060709


The Gallup report indicates that the improvement has come mostly from Republicans and independents:

The president's job approval ratings currently stand at 78% among Republicans, 36% among independents, and lower still among Democrats, at 10%. In early May, Bush's average support was 68% among Republicans, 26% among independents, and 4% among Democrats.

I did not have time to create a blog-worthy table, but checking the online releases by the Fox poll indicates a similar pattern on their surveys:  They show the Bush approval rating among Republicans rising from 66% in late April and early May to 79% on their most recent survey in late June.   The same Fox surveys show no significant change among Democrats or independents. 

PS:  Alert reader JB notes a minor typo in the Gallup release.  The June 9-11 data add up to only 96%.  It appears that a typo turned the 6% for no opinion (as indicated in a release by USA Today) into 2% on the most recent release.  When it comes to proofreading, you just can't beat MP readers.

Posted by Mark Blumenthal on July 11, 2006 at 04:42 PM in President Bush | Permalink | Comments (3)

July 10, 2006

Connecticut Primary Polls: No Easy Task

For all the attention paid lately to Connecticut's upcoming Democratic primary election between Senator Joseph Lieberman and challenger Ned Lamont, public polls on the race remain few and far between.  That scarcity may owe something to the huge challenge of selecting "likely voters" for a rare summer primary in Connecticut, where turnout is largely unknown.   This is the sort of race that gives pollsters nightmares. 

Contested statewide primaries in Connecticut are relatively rare, as party nominees are typically chosen by state party conventions.  Another twist, the Stamford Advocate explains, comes from the recent move of Connecticut's primary from September to August.   The most recent "hotly contested" primary, at least according to the Advocate, was a September 1994 primary in which 26% of Connecticut's registered Democrats cast ballots.  There is no recent contested August primary, though Genghis Conn's compilation of past turnout statistics on the Connecticut Local Politics blog argues that the date alone may not make much difference. 

The likely turnout on August 8 is anyone's guess at this point, but to try to understand the challenge facing the pollsters, consider a few numbers: 

Connecticut has a voter-eligible population of roughly 2.4 million.  The Connecticut Secretary of State reports 1.95 million active registered voters as of last year, of whom 653,055 were registered as Democrats.   Just as a benchmark, consider that if 25% of registered Democrats cast a ballot on August 8, that level of turnout would amount to roughly 7% of the voter-eligible population and roughly 8% of all registered voters. 
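
Here is that benchmark arithmetic spelled out:

```python
# The turnout benchmark spelled out, using the figures cited above.
voter_eligible_population = 2_400_000
active_registered_voters = 1_950_000
registered_democrats = 653_055

dem_primary_ballots = 0.25 * registered_democrats     # a 25% Democratic primary turnout
print(f"ballots cast: {dem_primary_ballots:,.0f}")                                                   # ~163,000
print(f"share of voter-eligible population: {dem_primary_ballots / voter_eligible_population:.0%}")  # ~7%
print(f"share of all registered voters: {dem_primary_ballots / active_registered_voters:.0%}")       # ~8%
```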

Now consider the polls released so far.  I am aware of three public pollsters that have released Connecticut results in the last few months: 

  • Quinnipiac University -- Their most recent poll, taken between May 31 and June 6, sampled 2,114 "registered voters," of whom 751 (36%) were registered Democrats and 465 (22%) were identified as "likely Democratic primary voters."   Although the Quinnipiac release does not specify how they sampled voters for this poll, they typically use a random digit dialing (RDD) methodology, which would require asking respondents how they are registered. 
  • Rasmussen Reports - Rasmussen conducted a one-night automated telephone poll on June 12 among just 212 respondents identified as "likely voters" for the August 8 Democratic primary.  The respondents appear to be a subgroup of a survey of 800 "likely" general election voters surveyed by Rasmussen the same night.  So Rasmussen's most recent primary sample was 27% of its general election sample.  Consider that a 25% turnout among Connecticut's registered Democrats would represent roughly 11% of the 1.4 million ballots cast in the last off-year election in 2002.
  • SurveyUSA - Although SurveyUSA has not yet fielded a poll that asks the primary vote question, their 50-state tracking polls provide results for Joe Lieberman's job rating among "Democrats."  Their most recent survey of 600 adults in Connecticut was fielded from June 9 to June 11.  The subgroup of Democrats (presumably based on a party identification question rather than a question about party registration) was 42% of the sample, or roughly 250 respondents. 

What should be clear from the above is that all of the samples are small and those that report on "likely voters" appear to select and define them differently.  Neither Quinnipiac nor Rasmussen explains how they identify "likely voters" (although the methods they described in October 2004 are probably close).  And all of these surveys are a bit dated, at least for the moment, as much has transpired in the Lieberman-Lamont race over the last three or four weeks. 

The one thing we do know with some confidence, thanks to the Quinnipiac poll, is that the assumptions the pollster makes about turnout matter a great deal to the results.  Or at least they did in early June, when Quinnipiac had Lieberman leading by 25 percentage points (57% to 32%) among all Democrats, but by only 15 points (55% to 40%) among the smaller subgroup of "likely voters."

Presumably, we will see more polls released in Connecticut over the next four weeks.  Hopefully, the pollsters will tell us a bit more about how they select "likely voters" and about how those likely voters compare to all registered Democrats in their samples. 

P.S.   Charles Franklin's charting and analysis of the Connecticut surveys got me thinking about the challenge of polling in this race.  His analysis is well worth reading, as always, but keep in mind that the results in his charts are for all Democrats, not "likely voters."

Posted by Mark Blumenthal on July 10, 2006 at 04:13 PM in The 2006 Race | Permalink | Comments (1)