Should Jeb Run…Or Not?


The Hotline (subscription required) caught an intriguing polling conflict yesterday.  A just-released Quinnipiac poll of 1,007 Florida "voters," conducted February 18-22, showed 25% said yes when asked, "Would you like Jeb Bush to run for President in 2008?"  A survey of "registered voters" conducted February 16-20 by Strategic Vision showed 57% saying yes on a virtually identical question.

How could that be?   Let’s take a closer look.
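For scale, ordinary sampling error comes nowhere close to explaining a gap like this.  Here is a quick back-of-the-envelope sketch (my own arithmetic using the standard formula for a simple random sample; neither pollster published this calculation) of the 95% margin of error on a poll of roughly 1,000 respondents:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Quinnipiac: 25% "yes" among 1,007 respondents
moe = margin_of_error(0.25, 1007)
print(f"+/- {moe:.1%}")  # roughly +/- 2.7 points
```

Even at the worst case (p = 50%), the margin of error on either poll is about three points, an order of magnitude smaller than the 32-point gap.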

First, Quinnipiac and Strategic Vision typically sample different target populations, drawn from what survey methodologists call the "sample frame."  In its 2004 pre-election surveys, Quinnipiac sampled all households with a working telephone using a random-digit-dial (RDD) methodology; it started with a sample of all adults and screened for "likely voters."  Strategic Vision is one of the few public polling organizations that draws its sample from lists of registered voters.

(An aside:  I italicized "typically" above because neither organization makes the sample frame clear in its press releases.  Quinnipiac says only that it surveyed "voters"; Strategic Vision says it surveyed "registered voters."  Both organizations responded to my requests for information back in October, when I put together a lengthy summary of likely voter selection procedures.  Were I a reporter, I would contact both organizations before posting this, and I am confident that both Quinnipiac and Strategic Vision would be responsive.  But I am a blogger, and I am approaching this issue from the perspective of the consumer.  We are in the dark.  I cannot understand why public pollsters cannot include such basic sampling information in press releases that span 10 pages.  I will contact both pollsters and pass along more information as they make it available.)

Update/Clarification:  Strategic Vision is a public relations firm that polls for Republican candidates.  When I first drafted this post, I had them confused with another public polling organization that I did contact last October.  Nonetheless, David Johnson of Strategic Vision emailed to confirm that their Florida sample was in fact drawn from a list of registered voters.  I have copied his comments below.  Doug Schwartz of the Quinnipiac poll emailed from his vacation to confirm their survey was based on an RDD sample, but they also screened to include only self-identified registered voters in the final sample.  Thus, both surveys aimed for the same target population — registered voters.

The debate about the use of voter lists is interesting and complex (and MP is reminded that he needs to weigh in on it).  The key point here is that RDD and registered voter list methodologies sample different target populations:  First, the population of registered voters is obviously smaller than the population of all adults.  Second, a list of voters will miss those with unlisted telephone numbers and those whose numbers cannot be appended from available databases. Are these differences enough to explain a 32 percentage point difference on the question of whether Jeb Bush should run?

Another question I had was about the questions that preceded the item about Jeb Bush running.  Is it possible that these created a bias?

So, with the help of the releases (thanks to the folks at the Hotline) let’s look at the questions asked by both organizations.  I am assuming that the order of questions in the release matches the order in which they were asked.  If I’m assuming correctly, the Should-Jeb-Run item was near the beginning of the questionnaire:

Quinnipiac:

  • Do you approve or disapprove of the way Jeb Bush is handling his job as Governor? (52% approve)
  • Do you approve or disapprove of the way the state legislature is handling its job? (46% approve)
  • Do you approve or disapprove of the way George W. Bush is handling his job as President? (44% approve)
  • Would you like Jeb Bush to run for President in 2008 or not?  (25% yes)

Strategic Vision:

  • Do you approve or disapprove of President Bush’s overall job performance?  (56% approve)
  • Do you approve or disapprove of President Bush’s handling of the economy? (55% approve)
  • Do you approve or disapprove of President Bush’s handling of Iraq? (57% approve)
  • Do you support or oppose President Bush’s Social Security reform? (42% support)
  • Do you approve or disapprove of Governor Jeb Bush’s job performance? (62% approve)
  • Would you like to see Governor Jeb Bush run for President in 2008? (57% yes)

Thus, the questions that precede the Should-Jeb-Run item are very similar.  Quinnipiac asks about Jeb Bush and the Florida legislature before asking about the President’s job rating.  Strategic Vision asks about the President’s job rating and about three specific issue ratings before asking about Governor Bush’s performance.  I cannot see any obvious reason why these subtle differences would explain the big difference on the Should-Jeb-Run item.

The overall job ratings are higher in the Strategic Vision survey – 12 points higher for the President (56% vs. 44%) and ten points higher for his brother (62% vs. 52%).  It looks to me like the registered voter list sample frame explains the difference on both items.  The Strategic Vision sample is probably more Republican than Quinnipiac’s.

But the Should-Jeb-Run item is 32 points different, so even a 10-12 point difference in partisanship between the two surveys would not come close to explaining it.  This is quite a puzzle.
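To see why, here is a hypothetical illustration (my numbers, invented for the sketch; neither poll released partisan breakdowns on this item) of how much a partisanship difference between samples could plausibly move the topline:

```python
# Hypothetical: suppose Republicans say "yes" to a Jeb run at 60% and
# everyone else at 20% -- a generous 40-point partisan gap.
rep_yes, other_yes = 0.60, 0.20

def topline(rep_share):
    """Overall 'yes' percentage given the Republican share of the sample."""
    return rep_share * rep_yes + (1 - rep_share) * other_yes

# Shifting the sample from 35% Republican to 47% Republican (a 12-point swing)
shift = topline(0.47) - topline(0.35)
print(f"{shift:.1%}")  # moves the topline only about 4.8 points
```

Even under these generous assumptions, a 12-point partisanship difference moves the overall number by roughly five points, nowhere near 32.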

Other than an unlikely tabulation or programming error, I have one other admittedly far-fetched theory:  The two Should-Jeb-Run questions are not exactly alike.  The Quinnipiac version includes two important words: "or not" (as in, "would you like Jeb Bush to run for President in 2008 or not?").  Those words are there for a reason.  Some questions are prone to a phenomenon that survey methodologists call "positivity bias," which is related to the concept of the "non-attitude" that I discussed a few weeks back.  When a respondent doesn’t have a strong opinion but does not want to admit it, they sometimes give a positive response.  Thus, Quinnipiac includes the words "or not" to tell respondents it is OK to say no.

Frankly, I would not expect positivity bias alone to account for a 32 percentage point difference (or even the roughly 20 point difference that would remain if we assumed that the differences in sample frame contributed 10-12 points).   However, stranger things have happened.  One big clue is that while Strategic Vision reports that 57% of registered voters say they would like to see Jeb run, only 34% of Republicans are ready to support him.  That suggests that the 57% number, if correct, is really soft and might come out differently with subtle changes in wording.

This theory invites a simple experimental test.  Another pollster could ask two versions of the question (one with "or not," the other without) of two half samples in a single poll.  Is anyone out there going into the field in Florida anytime soon?
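If someone does run the experiment, the analysis is straightforward.  A sketch (with made-up results, purely for illustration) of the standard two-proportion z-test comparing the two half samples:

```python
import math

def two_prop_z(yes_a, n_a, yes_b, n_b):
    """Two-proportion z statistic for the difference between two half samples."""
    p_a, p_b = yes_a / n_a, yes_b / n_b
    p = (yes_a + yes_b) / (n_a + n_b)                  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    return (p_a - p_b) / se

# Hypothetical: 40% "yes" without "or not" vs. 25% "yes" with it,
# on two half samples of 500 respondents each
z = two_prop_z(200, 500, 125, 500)
print(f"z = {z:.1f}")  # well beyond the 1.96 threshold for significance
```

With half samples of 500 each, even a wording effect far smaller than the one sketched here would be easily detectable.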

One more thought:  If there were ever a time for pollsters to disclose more about the partisanship, demographics and regional composition of their samples, this is it. In this case, the two sample frames appear to be different (although even that is not clear).  The pollsters owe it to the consumers of their data to describe their samples as much as possible:  What was the demographic composition in terms of age, gender, race, Cuban or other Hispanic origin, geographic regions and partisanship?  And with respect to partisanship, did you ask respondents to report their party registration or identification?  What was the text of your party question? 

These are basic questions that need to be more available to consumers of data if we are to make sense of polls that show such divergent results.


Corrected grammar and typos  – 2/25

Update:    Doug Schwartz of the Quinnipiac Poll emailed from vacation.  He promises to try to answer some of the questions raised in this post next week.

David Johnson of Strategic Vision emailed with the following comments:

Strategic Vision interviews registered voters who have indicated that they voted in the 2002 and 2004 primaries and general elections.  It is weighted to reflect voter registration numbers throughout the state according to region, and actually had slightly more Democrats than Republicans; this is because areas like North Florida may vote Republican while their registration numbers indicate party ID as Democrat.  We draw our samples from lists of registered voters.

I believe one aspect of the differences in the two polls may be the wording, as you stated, but that alone does not explain 32%.  Another aspect may also be the timing of the two surveys.  We were concluding as the Schiavo case was again re-entering the news; in the past we have seen majorities oppose Governor Bush’s involvement in the case, and while we had polled many before this, with the re-emergence of the Schiavo case negative feelings toward Governor Bush re-manifested.  That also would not explain it.  Finally, our numbers also diverge on the gubernatorial race but are more consistent with private polling that I have seen.  And also we polled about 100 more voters.

UPDATE II (2/28):

Late Friday afternoon, the folks at Quinnipiac sent over demographic results on every item I asked about:   age, gender, race, Hispanic origin, geographic regions and partisanship.  They also sent the text of their party ID question.  I’m still waiting to receive the same from Strategic Vision and will post a comparison when that data arrives.

David Johnson, are you still out there?

UPDATE III (3/1):  I posted Quinnipiac’s numbers here.

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates and the public at-large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic party candidates.