By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). The homepage of The Polling Observatory can be found here, and you can read more posts by The Polling Observatory here.
This post is part of a long-standing series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short-term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here.
It is now six months since the television headlines rolled at 10pm on May 7th, with the exit poll dropping the bombshell that the polls had got it badly wrong. The election forecasters fared little better, including us: even though our vote model had predicted a Conservative lead of 2-3 points, our seat prediction was nowhere close to the majority achieved by David Cameron. It is with a little trepidation, then, that the Polling Observatory team returns to provide its assessment of the state of public opinion in late 2015.
As regular readers will know, we pool all the information that we have from current polling to estimate the underlying trend in public opinion, controlling for random noise in the polls. Our method controls for systematic differences between pollsters – the propensity for some pollsters to produce estimates that are higher/lower on average for a particular party than other pollsters. While we can estimate how one pollster systematically differs from another, we have no way of assessing which is closer to the truth.
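As a loose illustration of why pooling helps (a toy sketch with invented numbers, not the authors’ actual model): many noisy polls of the same underlying level of support average out to something far closer to that level than any single poll.

```python
import random

random.seed(1)  # make the toy example reproducible

true_support = 37.0  # hypothetical underlying vote share, in percent
# Each poll reads the truth plus random sampling noise (sd of ~2 points).
polls = [true_support + random.gauss(0, 2) for _ in range(30)]

# The pooled estimate averages the noise away.
pooled = sum(polls) / len(polls)
print(f"single poll: {polls[0]:.1f}, pooled: {pooled:.1f}")
```

The real method additionally models trends over time and house effects, but the noise-cancelling logic is the same.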
One possibility with this method is to use the result of the last election to ‘anchor’ our estimates of bias in the polls. This treats the election result as if it were produced by a pollster with no systematic error, and lets us estimate each pollster’s systematic difference from this hypothetical perfect pollster. For example, if pollster X produces results that are systematically 2 percentage points higher for the Conservatives than this perfect pollster would, we would interpret a poll from pollster X indicating 40% support for the Conservatives as 38% support. This approach can be useful where there are recurring historical patterns (such as the tendency of the polls to overestimate the Labour vote and underestimate the Conservative vote), and might allow us to control for systematic bias in the polls.
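The worked example above amounts to subtracting an estimated house effect from a pollster’s raw figure. A minimal sketch of that arithmetic (the pollster name and the house-effect value are invented for illustration):

```python
# Estimated house effects relative to a hypothetical "perfect" anchor
# pollster: a positive value means the pollster runs high for that party.
house_effects = {"pollster_x": {"con": 2.0}}  # invented numbers

def adjust(pollster: str, party: str, raw_share: float) -> float:
    """Subtract the pollster's estimated house effect for a party.

    Pollsters or parties with no estimated effect are left unchanged.
    """
    return raw_share - house_effects.get(pollster, {}).get(party, 0.0)

# A 40% Conservative reading from pollster X is interpreted as 38%.
print(adjust("pollster_x", "con", 40.0))
```

In practice the house effects are themselves estimated with uncertainty, rather than being fixed constants as in this sketch.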
We have chosen, for now, to anchor our estimates on the average pollster, so the results presented here are those of a hypothetical pollster that, on average, falls in the middle of the pack. We anchor on this middle pollster rather than on the election result because we believe that the inaccuracies and biases revealed in the polls in May will differ from those of the current election cycle. All of the pollsters have been undertaking reviews of their methods following the big polling miss in May, and the biases in polling are unlikely to be unaffected by the changes they are gradually introducing. Because of this, we offer our estimates of party support with an important caveat: while our method accounts for the uncertainty due to random fluctuation in the polls and for differences between polling houses, we cannot be sure that the average polling house is free of systematic bias – the industry as a whole could be wrong once again. It may also be that a pollster producing figures higher or lower than the average is more accurately reflecting the state of support for the parties than its competitors. Our estimates cannot adjudicate on whether figures on the high or the low side for a party better reflect the underlying preferences of the electorate; the only test is on Election Day. Fortunately, none of this prevents us from identifying and reporting on the underlying trends over time.
In terms of the overall story, there has been little apparent change in vote preferences since the election in May. This is despite the triumphant budget announced by George Osborne, the surprise ascent of Jeremy Corbyn to the leadership of the Labour Party (and the onslaught on him and his team from outside and inside the party), and the tax credits row that has quickly taken the shine off the government’s honeymoon period. Unlike in the last election cycle, there has been no sudden flight of voters from one party to another, as occurred with the collapse of Liberal Democrat support in the first six months after the Coalition government was formed.
Our estimates suggest that Conservative support has slipped slightly since the heady days of May and June, from around 40% to closer to 37% at the start of November. Despite Labour being divided and in some disarray over its direction, it has made slight gains, from around 30% to 32%. This upward drift in the polls largely occurred before the election of Jeremy Corbyn as Labour leader, so cannot be attributed to a Corbyn effect. Whether these gains will persist as the election nears and PM Corbyn becomes a possibility is of course open to debate. At present, though, there is no sign of Mr Corbyn’s election having any impact on his party’s overall support. UKIP support has remained steady at around 13%, and the party shows no signs of going away – even with its own internal conflicts following Nigel Farage’s “unresignation” in the summer. Lagging somewhat behind, the Liberal Democrats continue to flat-line at just under 7%. One of the patterns of the last parliament was the stubborn immovability of Liberal Democrat support, and new party leader Tim Farron has much work on his hands to win back voters; so far there are no green shoots for the party in our estimates. Finally, speaking of the Greens, their support appears to have been squeezed since Labour elected Jeremy Corbyn, falling around 1.5 points since the summer – perhaps because voters attracted by their distinctly left-wing platform now feel more at home in the Labour Party. Our estimates for all the parties suggest that the electorate has yet to make up its mind on both the new government and the fragmented and much-changed opposition. But there are some big events on the horizon, in particular the EU referendum, which may yet provide a shock that moves political support in one direction or the other.
One of the reasons why the polling miss back in May came as such a shock was that by election eve there was broad consensus among the pollsters about the level of support for the parties (though of course we noted house effects earlier in the campaign). In the period since May, however, the polling has been characterised by much more variation in the standing of the parties, as the figure above reveals. The confidence intervals for our estimates in the period since the election (an average of 2.3 points) are more than twice as wide as those for the 2010-15 election cycle or for the month just before the start of the short campaign (each an average of 1.1 points). This indicates a much higher level of uncertainty about the state of public opinion today. Part of this could be due to the much lower frequency of polling since May – election watchers used to multiple daily polls now have to accept a more meagre diet of one or two polls a week – or to greater variation in polling methodologies, as pollsters take different approaches in response to May’s polling miss. The greater uncertainty may, however, also reflect something more fundamental: genuine uncertainty, and hence greater volatility, in the minds of the electorate. Voters are faced with an unexpected Conservative majority government and an unfamiliar and polarising opposition leader who attracts widely varying reactions in the media and within his own party. In such circumstances many may be genuinely unsure of their preferences. Only time will tell whether this uncertainty lasts until the next general election. For now, it provides an important reminder of the need to treat single poll results with a degree of caution.
That is, for each party, the pollsters producing systematically higher estimates than this middle pollster differ from it, on average, by the same amount as those producing systematically lower estimates.
 We came to a similar conclusion during the last election cycle when it became apparent that our method of anchoring on the election result was excessively reducing the estimated level of support for the Liberal Democrats.