UK Polls: Anthony Wells


Unfortunately, I’m at least a week late with this recommendation, but for true political polling junkies it may be better late than never.  For MP-like commentary on political polling in the United Kingdom, no one does it better than Anthony Wells at UK Polling Report (actually, it might be more appropriate to describe MP as "Wells-like," since he was blogging long before I started).  His site includes an impressive archive of British political polling, and his review of the performance of the polls in last week’s parliamentary election is especially noteworthy, since their accuracy came as something of a surprise. 

Here is Wells’ post-election take on the UK pre-election polls: 

So, with pretty much everything except Harlow counted, how well did the pollsters do? The bottom line is that everyone got it right – trebles all round! While NOP take the prize, having got the result exactly spot on, not only did all the pollsters get within the 3% margin of error, they all got every party’s share of the vote to within 2%. Basically, it was a triumph for the pollsters.

RESULT – CON 33.2%, LAB 36.2%, LD 22.7%

[Update 5/10: These results were posted on Wells’ blog on 5/7.  As of today, the BBC is reporting Conservative 32.3%, Labor 35.2%, Liberal Democrat 22.0%.  I am leaving Wells’ error estimates in place, although the reader should note that they would need to be recalculated against the updated figures.]

NOP/Independent – CON 33%(-0.2), LAB 36%(-0.2), LD 23%(+0.3). Av. Error – 0.2%
MORI/Standard – CON 33%(-0.2), LAB 38%(+1.8), LD 23%(+0.3). Av. Error – 0.8%
Harris – CON 33%(-0.2), LAB 38%(+1.8), LD 22%(-0.7). Av. Error – 0.9%
BES – CON 32.6%(-0.6), LAB 35%(-1.2), LD 23.5%(+0.8). Av. Error – 0.9%
YouGov/Telegraph – CON 32%(-1.2), LAB 37%(+0.8), LD 24%(+1.3). Av. Error – 1.1%
ICM/Guardian – CON 32%(-1.2), LAB 38%(+1.8), LD 22%(-0.7). Av. Error – 1.2%
Populus/Times – CON 32%(-1.2), LAB 38%(+1.8), LD 21%(-1.7). Av. Error – 1.6%

The other two pollsters, Communicate Research and BPIX, conducted their final polls too early to be counted as proper eve-of-poll predictions, but, for the record, both their final polls were also within the standard 3% margin of error. Their average errors were 0.9% for BPIX and 2.1% for Communicate.

The British Polling Council have a press release out with the same information (although they include the “others” in the average and use rounded figures for the results, hence the slightly different figures; it doesn’t change the result – everyone was right and NOP did best).
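
As an aside, the 3% figure that Wells treats as the standard margin of error corresponds roughly to the 95% confidence half-width for a party share near 50% in a simple random sample of about 1,000 respondents.  Here is a minimal sketch of that arithmetic; the sample sizes are illustrative, not the pollsters’ actual ones, and real polls use quota and weighting designs that this simple formula ignores:

```python
# Rough sketch of the conventional "3% margin of error": the 95% confidence
# half-width for a proportion in a simple random sample. Sample sizes below
# are illustrative only; actual UK polls use quotas and weighting.
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Return the 95% margin of error, in percentage points, for share p and sample size n."""
    return 100 * z * sqrt(p * (1 - p) / n)

print(f"n=1000: +/-{margin_of_error(0.5, 1000):.1f} points")  # about 3.1
print(f"n=2000: +/-{margin_of_error(0.5, 2000):.1f} points")  # about 2.2
```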

On Election Day, Wells posted the results of the national exit polls, which did even better:

MORI/NOP’s exit poll shows a share of the vote of CON 33%, LAB 37%, LD 22%.  It predicts 44 Conservative gains and 2 Liberal Democrat gains for a Labour majority of 66.

Of course, the actual share of the vote was Conservative 33.2%, Labor 36.2% and Liberal Democrat 22.0%, translating into a 67-seat Labor majority.

[Update: Again, these results were posted on Wells’ blog on 5/7.  As of today (5/10), the BBC is reporting Conservative 32.3%, Labor 35.2%, Liberal Democrat 22.0%.  I have changed the error computations below to reflect the updated results.  Also, a reader emails to advise that the final Labor majority will end up being 66 seats, not the current 67, after a special by-election to be held soon to replace a recently deceased LD MP.  The district should go Conservative in the by-election, so within a few weeks the Labor majority will be 66 seats, exactly matching the projection from the exit polls.]

So does the UK experience show, as some would seem to believe, that exit polls are flawless elsewhere and troubled only in the US?  Hardly.  In fact, much of Wells’ enthusiasm stems from the previously problematic performance of the UK pre-election and exit polls.  Let’s start with the exit polls.  The error on the margin in last week’s MORI/NOP exit poll was only 1.1 percentage points in Labor’s favor.  As Wells reports in a summary, the NOP/BBC exit polls showed a much greater skew to Labor in 2001 (2.7), and especially in 1997 (5.2) and 1992 (3.6).    

What about the pre-election polls?  The average error per party estimate last week across all the polls reported by Wells (I did the math) was 1.1 percentage points.  The comparable average error was higher in 2001 (1.7), 1997 (2.5) and 1992 (3.3).  The error on the margin has shown a consistent skew to the Labor party, overestimating its lead over the Conservatives in 2001 (3.5), 1997 (4.0) and especially 1992 (8.3).  The slight error in Labor’s favor this time (1.76) is obviously small by comparison. 
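
For those who want to check the arithmetic, here is a minimal sketch of the two calculations: the average absolute error across the three major parties for each final poll, and the signed error on the Labor–Conservative margin.  The poll figures come from Wells’ table above and the actual shares from the updated BBC figures, so small rounding differences in the underlying results may shift the output slightly from the numbers quoted in the text:

```python
# Sketch of the two error measures discussed above: average absolute error
# per poll and the signed error on the Labor-Conservative margin.
# Poll shares are from Wells' table; actual shares are the updated BBC figures.
actual = {"CON": 32.3, "LAB": 35.2, "LD": 22.0}

final_polls = {
    "NOP/Independent":  {"CON": 33.0, "LAB": 36.0, "LD": 23.0},
    "MORI/Standard":    {"CON": 33.0, "LAB": 38.0, "LD": 23.0},
    "Harris":           {"CON": 33.0, "LAB": 38.0, "LD": 22.0},
    "BES":              {"CON": 32.6, "LAB": 35.0, "LD": 23.5},
    "YouGov/Telegraph": {"CON": 32.0, "LAB": 37.0, "LD": 24.0},
    "ICM/Guardian":     {"CON": 32.0, "LAB": 38.0, "LD": 22.0},
    "Populus/Times":    {"CON": 32.0, "LAB": 38.0, "LD": 21.0},
}

actual_margin = actual["LAB"] - actual["CON"]  # Labor's lead over the Conservatives

margin_errors = []
for name, poll in final_polls.items():
    # Average absolute error across the three parties' vote shares
    avg_error = sum(abs(poll[p] - actual[p]) for p in actual) / len(actual)
    # Signed error on the margin: positive means the poll overstated Labor's lead
    margin_error = (poll["LAB"] - poll["CON"]) - actual_margin
    margin_errors.append(margin_error)
    print(f"{name:<18} avg error {avg_error:.2f}  margin error {margin_error:+.1f}")

print(f"Mean error on the margin: {sum(margin_errors) / len(margin_errors):+.2f}")
```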

What is interesting about the consistent skew to Labor in recent polling is that pollsters have a name for it: "Shy Tories," or the "Spiral of Silence" effect (the latter after the book of the same name by Elisabeth Noelle-Neumann).  As defined by the BBC guide to polling methodology, it refers to "people who do not like to admit they support a certain party but who vote for them nonetheless."   Some UK pollsters now reallocate undecideds to counter this phenomenon, although Wells’ wrap-up notes that "the spiral of silence adjustment made Populus’s final poll less accurate."
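
To make the reallocation idea concrete, here is a purely hypothetical sketch of shifting a fraction of undecided respondents back to the party they recall voting for previously.  The 50% fraction, the raw shares, and the undecideds’ past-vote breakdown are all invented for illustration; they are not any pollster’s published method or numbers:

```python
# Hypothetical illustration of a "spiral of silence" style adjustment:
# reallocate a fraction of undecided respondents to the party they say they
# voted for last time. Every number here is invented for illustration.
REALLOCATION_FRACTION = 0.5     # assumed share of undecideds reallocated
UNDECIDED_PCT_OF_SAMPLE = 10.0  # undecideds as a percentage of the whole sample

raw_shares = {"CON": 31.0, "LAB": 37.0, "LD": 23.0, "OTH": 9.0}           # among decided respondents
undecided_past_vote = {"CON": 40.0, "LAB": 35.0, "LD": 20.0, "OTH": 5.0}  # undecideds' recalled past vote

adjusted = {}
for party, share in raw_shares.items():
    # Weight of decided respondents, plus the reallocated slice of undecideds
    decided_weight = share * (100 - UNDECIDED_PCT_OF_SAMPLE) / 100
    realloc_weight = (REALLOCATION_FRACTION * UNDECIDED_PCT_OF_SAMPLE
                      * undecided_past_vote[party] / 100)
    adjusted[party] = decided_weight + realloc_weight

# Renormalize so the published shares sum to 100%
total = sum(adjusted.values())
adjusted = {party: round(100 * value / total, 1) for party, value in adjusted.items()}
print(adjusted)  # the Conservative share edges up, countering a pro-Labor skew
```

In this made-up example the adjustment nudges the Conservative share upward because the undecideds lean Conservative in their recalled past vote, which is the direction a "Shy Tory" correction is meant to work.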

It is also interesting that there may be remnant traces of the Labor skew in this year’s result. Five of the seven pre-election polls and the exit poll overestimated the Labor vote by one percentage point or more (though my favorite binomial calculator tells me there is a 23% probability of tossing a coin seven times and having it come up heads at least five times).   Wells — who identifies himself as a supporter of the Conservative Party — concludes that the "lingering bias" is so "small it is hardly worth worrying about." 
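
For what it is worth, the coin-flip figure checks out: with a fair coin, the chance of at least five heads in seven tosses is about 23%.  A quick verification:

```python
# Quick check of the binomial figure above: probability of at least five
# heads in seven tosses of a fair coin.
from math import comb

n, p = 7, 0.5
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5, n + 1))
print(f"P(at least 5 heads in 7 tosses) = {prob:.3f}")  # about 0.227, i.e. roughly 23%
```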

One characteristic that the UK polls shared with their US counterparts is the way widely varying results tended to converge in the final week.  More from Wells:

What is interesting is the comparison between the final result and the polls during the campaign – the results from YouGov during the campaign were pretty close to the final result throughout, especially after the first few polls that showed the parties neck and neck. In contrast, during the campaign the phone pollsters showed some whopping great Labour leads that disappeared in their final polls – of all the phone polls during the campaign, only one (MORI/Observer, published on the 1st May) did not report a Labour lead larger than the 3% they finally achieved. Of YouGov’s last 10 polls of the campaign, 8 showed a Labour lead of 3 or 4 percent. It doesn’t, of course, necessarily mean that YouGov were right – the “real” Labour lead at that time could have been larger, only to be reduced by a late swing to the Lib Dems – hence the reason why we only compare the eve-of-poll predictions to the final result.

[Note:  YouGov draws samples from a panel of pre-recruited respondents and conducts interviews online.] 

Finally, back to exit polls:  It is noteworthy that the UK had no controversy about "leaks" of "early" exit polls and no dark suspicions about the re-weighting of the exit polls to match the actual count, because neither phenomenon occurred.  The British exit pollsters release their "final" results as the polls close, for all to see.  It helps, of course, that Britain has a uniform poll closing time, but survey research in the UK seems to have survived the release of imperfect results.  In fact, it is possible that this year’s improvements resulted from the public disclosure of those previously problematic surveys.  Hopefully Wells will devote some future posts to discussing what, if anything, the UK exit pollsters did differently this time. 

Also consider that the previous problems with the UK exit polls did not produce election fraud conspiracy theories.  Why not?  One reason, as our friend Elizabeth Liddle points out, is that the count in the UK is "utterly transparent."  Paper ballots are sorted and counted in public at a centrally located place, open to all who wish to observe.  People have faith in the result because of this transparency.  It is also, as Liddle puts it, another reason "why auditable elections are so important."   

Hear, hear!

5/10 – Update:  Anthony Wells emails with a postscript on the real reason why exit poll leaks are so rare in the UK.  They are illegal.  Leakers are subject to a fine of up to £5,000 or 6 months in prison.  Here is the text of the law:

No person shall, in the case of an election to which this section applies, publish before the poll is closed—

(a) any statement relating to the way in which voters have voted at the election where that statement is (or might reasonably be taken to be) based on information given by voters after they have voted, or

(b) any forecast as to the result of the election which is (or might reasonably be taken to be) based on information so given.

Why no such law in the US?  Well, there’s that funny thing called the First Amendment.

Mark Blumenthal

Mark Blumenthal is the principal at MysteryPollster, LLC. With decades of experience in polling using traditional and innovative online methods, he is uniquely positioned to advise survey researchers, progressive organizations, candidates, and the public at large on how to adapt to polling’s ongoing reinvention. He was previously head of election polling at SurveyMonkey, senior polling editor for The Huffington Post, co-founder of Pollster.com, and a long-time campaign consultant who conducted and analyzed political polls and focus groups for Democratic Party candidates.