Why & How Pollsters Weight, Part III

I have so far discussed two different philosophies of weighting by party. The purist model — still the choice of most media pollsters — weights samples of adults by demographics but never weights by party identification. An alternative, advocated by pollster John Zogby, weights every 2004 survey so that it matches the estimate of party ID in the exit polls taken among voters in November 2000.
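To make the mechanics concrete, here is a minimal sketch (in Python) of what weighting to a fixed party-ID target looks like. The sample and target shares below are invented for illustration; they are not the actual 2000 exit poll figures.

```python
# A minimal sketch of party-ID weighting in the spirit of the Zogby
# model. All shares are hypothetical, not real exit-poll numbers.

sample_shares = {"Dem": 0.33, "Rep": 0.38, "Ind": 0.29}  # unweighted sample
target_shares = {"Dem": 0.39, "Rep": 0.35, "Ind": 0.26}  # fixed benchmark

# Each respondent's weight is the ratio of the target share to the
# sample share for his or her party group (simple post-stratification).
weights = {party: target_shares[party] / sample_shares[party]
           for party in sample_shares}

for party, w in weights.items():
    print(f"{party}: weight = {w:.3f}")
```

Respondents from the underrepresented party get weights above 1.0, those from the overrepresented party get weights below 1.0, and the weighted sample matches the target exactly.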

I promised last time to discuss those who pursue a “third way,” somewhere between the Purist and Zogby models. Toward that end I did something bloggers are not supposed to do. I picked up a telephone and called some of my colleagues. In the process, I actually learned something: More media pollsters are now weighting by party than I had realized, a fact that illustrates the ongoing tension between art and science in pre-election polling.

Here is a partial sampling:

Investor’s Business Daily/Christian Science Monitor/TIPP — This survey weights by party identification using a method that Charlie Cook advocated in a recent column, dubbed “dynamic weighting” by either Cook or Ruy Teixeira. TIPP weights every survey it conducts during an election year by party ID, using a rolling six-month average of data pooled from previous surveys (the combined data, which include nearly 10,000 cases, are weighted by demographics only).

Raghavan Mayur, president of TIPP, says he weights this way because “I wanted to be consistent in what I do” during an election year. He told me he believes that party ID is “stable at the aggregate level” during any given three-month period and that any short-term changes, even if real, are “fleeting.” (Mayur also commented on the IBD methodology in this article.)
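Here is a rough sketch of how a rolling party-ID target might be computed, assuming one survey per month and invented Democratic shares. TIPP’s actual procedure pools the raw data and weights the combined file demographically; this toy version only illustrates the rolling-average idea.

```python
# A sketch of a "dynamic weighting" target: the party-ID benchmark is a
# rolling average of recent surveys rather than a fixed number.
# All monthly shares below are invented for illustration.

from collections import deque

WINDOW = 6  # pool roughly six months of prior surveys

recent_dem_shares = deque(maxlen=WINDOW)  # oldest month drops off automatically

def dynamic_target(new_share):
    """Add this month's Dem share to the pool and return the
    rolling-average target used to weight the next survey."""
    recent_dem_shares.append(new_share)
    return sum(recent_dem_shares) / len(recent_dem_shares)

for month, share in enumerate([0.37, 0.39, 0.36, 0.38, 0.40, 0.37, 0.35], 1):
    target = dynamic_target(share)
    print(f"Month {month}: raw Dem share {share:.2f} -> target {target:.2f}")
```

The appeal of this approach is that one unusual sample moves the target only a little, while a genuine, sustained shift in party ID gradually works its way into the benchmark.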

NBC/Wall Street Journal/Hart-McInturff — Two campaign pollsters administer this survey: Democrat Peter D. Hart and Republican Bill McInturff (who takes the place of the late Robert Teeter, Hart’s longtime partner in this venture). In an interview, Hart confirmed that he sometimes weights the NBC/WSJ samples by party ID, using an internal database of past survey results as a guide. Hart’s technique might be dubbed “dynamic weighting in reserve,” since his decision to weight is discretionary. Like other pollsters, Hart will correct any imbalances in race, gender or geography in his raw data. Then, if the party ID numbers seem “off” compared to previous surveys, he will look for other demographic anomalies (age, education and so on) that might explain the difference. If weighting by these characteristics still leaves a “substantial difference” in party ID compared to the most recent NBC/WSJ poll, he will weight by party to bring the sample back into balance.

What difference qualifies as “substantial”? Hart suggested that while he would not be concerned about changes of a few percentage points either way, he would weight to correct a shift from, hypothetically, an even distribution of Democrats and Republicans to a 10-point advantage either way. “I am not a believer,” he said, “that party ID changes [that much] on a monthly basis.”
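Hart’s rule of thumb lends itself to a simple sketch. The 10-point threshold comes from his hypothetical above; the party shares are invented, and the real decision is a judgment call, not a fixed cutoff.

```python
# A sketch of "dynamic weighting in reserve": weight by party only when
# the demographically weighted party margin moves substantially from the
# prior survey. Threshold and shares are illustrative assumptions.

SUBSTANTIAL = 0.10  # a 10-point shift in the Dem-minus-Rep margin

def should_weight_by_party(current, previous):
    """current/previous: party shares after demographic weighting."""
    margin_now = current["Dem"] - current["Rep"]
    margin_prev = previous["Dem"] - previous["Rep"]
    return abs(margin_now - margin_prev) >= SUBSTANTIAL

prev = {"Dem": 0.37, "Rep": 0.37}  # roughly even last month
curr = {"Dem": 0.44, "Rep": 0.32}  # 12-point Dem edge this month

if should_weight_by_party(curr, prev):
    print("Substantial shift: weight party ID back toward the prior survey.")
else:
    print("Within normal variation: leave party ID unweighted.")
```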

Fox/Opinion Dynamics — I hesitate to include the Fox survey here because they DO NOT WEIGHT BY PARTY. However, their unique sampling methodology puts them somewhere between the purists and John Zogby in terms of how they control for random variation in the partisanship of their samples.

Virtually all of the other surveys (including IBD and NBC/WSJ) begin with a sample of telephone numbers that represents all households with a working phone, then screen down to registered and likely voters. The Fox/Opinion Dynamics survey does something a bit more complicated. Although they also call a sample of all households with phones, they “stratify” their sample regionally, setting sample quotas for geographic regions that reflect likely turnout.

To be more specific, the Fox survey divides the country by state into 18 regions, then subdivides each into urban, suburban and rural counties. For surveys of likely voters, they use registration and past voting statistics to set quotas for each sub-region that reflect the national distribution of likely voters. They then draw a random sample of telephone numbers using a random digit dial (RDD) methodology that essentially fills the quota within each sub-region. This sampling methodology resembles what political pollsters like me use for internal campaign polls in statewide races.
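As a rough illustration, here is a sketch that allocates a fixed number of interviews across strata in proportion to each stratum’s share of likely-voter turnout. The strata and shares below are invented, and the real design uses 18 regions each subdivided by county type.

```python
# A sketch of regional stratification: interview quotas are set in
# proportion to each stratum's (hypothetical) share of likely voters.

TOTAL_INTERVIEWS = 900

# Hypothetical stratum -> share of national likely-voter turnout.
turnout_shares = {
    ("Northeast", "urban"): 0.08,
    ("Northeast", "suburban"): 0.07,
    ("Northeast", "rural"): 0.03,
    ("Midwest", "urban"): 0.06,
    ("Midwest", "suburban"): 0.08,
    ("Midwest", "rural"): 0.05,
    # ... the remaining strata would bring the shares to 1.0
}

quotas = {stratum: round(TOTAL_INTERVIEWS * share)
          for stratum, share in turnout_shares.items()}

for stratum, quota in sorted(quotas.items()):
    print(f"{stratum[0]:>9} / {stratum[1]:<8}: {quota} interviews")
```

Within each stratum, the RDD sample is then dialed until its quota is filled, so the geographic mix of completed interviews matches the turnout model rather than whoever happens to answer the phone.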

Why go to all this trouble? Because where we live is highly predictive of how we vote. Democrats are more likely to live in urban centers, Republicans in rural areas. Moreover, as John Gorman, president of Opinion Dynamics, pointed out in an email, “the demographics of non-response” make it “harder to get an interview in a Northeastern urban center than it is in a rural area…thus stratification is necessary to even out the response rates, and that’s why we use it.”

I believe this regional stratification is one reason why the Fox/Opinion Dynamics results have lately been nearly as stable as those of the pollsters who weight by party. For my money, this methodology reduces swings in the partisan composition of their polls using hard, defensible data (turnout statistics) rather than softer attitudinal data (party identification).

But, how defensible is this method given the flood of new registrants? Won’t these skew Gorman’s stratification model? Here is his answer:

We have decided at this time not to try to reflect the new registrations. Our reasoning is twofold. First it is hard to accumulate accurate data…we fear introducing error rather than improvement. Second, while registering may be easy, getting out to vote is harder and we are not confident that these new registrants will actually vote.

If I were polling in a state like Ohio or Florida, where registration activity has been intense, I would be very concerned about the new registrations throwing off the old models. On the national level, however, I tend to agree with Gorman. Remember, in this instance, the issue is not the level of turnout but the possibility that turnout will grow disproportionately in Democratic rather than Republican areas (or vice versa).

UPDATE (10/13): Democracy Corps, the polling entity of Democrats Stan Greenberg and James Carville, uses a sampling methodology comparable to Fox/Opinion Dynamics. Karl Agne of Democracy Corps tells me that they stratify regionally based on past turnout statistics and do NOT weight by party.

There is one more national survey to discuss that now weights by party: The ABC News/Washington Post tracking survey. I’ll take that up — as well as telling you which approach I prefer — in Part IV.

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations and candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.