Will turnout weighting prove to be the pollsters’ Achilles heel in #GE2017?


By Patrick Sturgis and Will Jennings, University of Southampton.


The 2017 election campaign has confounded expectations in many ways, none more so than Labour’s continuing surge in the opinion polls. From an average vote share of around 26% at the start of the campaign, they now stand at an average of 36% in the polls conducted over the past week. It is fair to say that few, if any, commentators expected Labour’s support to be at this level as we head into the final week of the campaign.

One of the theories advanced to explain Labour’s unexpectedly lofty position is that the opinion polls are, once again, wrong; their historical tendency to over-state Labour support has not been adequately addressed by the pollsters since the debacle of 2015. Key to this line of thinking is that Labour’s support appears to be ‘soft’, in the sense that those who say they will vote Labour in the polls are more likely to also report that they may change their mind before election day, compared to Conservative ‘intenders’. Labour’s core support is also concentrated within demographic groups that are, historically at least, less likely to cast a ballot, particularly younger voters.

Patterns of turnout across demographic groups will, of course, be key to determining the outcome of the election. But might turnout – and how pollsters deal with it – also be the cause of another polling miss on June the 8th?

Who will turn out and who won’t?

Adjusting for turnout is one of the most difficult tasks a pollster must confront. Polls work by collecting samples of individuals and weighting them to match the general public on characteristics such as age, gender, region, and education, for which the population distribution is known. But around a third of any representative sample of eligible voters will not vote, so an additional adjustment has to be made to filter out likely non-voters from the vote share estimate. The problem here is that there is no entirely satisfactory way of doing this.

The most obvious approach to determining whether poll respondents will vote or not is to ask them. This is indeed the way that the vast majority of polls in the UK have approached turnout weighting in previous elections. In order to allow respondents to express some level of uncertainty, pollsters usually ask them to rate their probability of voting on a 1 to 10 scale (where 1 = certain not to vote and 10 = certain to vote). The problem with this approach is that, for a variety of reasons, people are not very good at estimating their probability of voting. So turnout weights based on self-report questions tend to have high error rates, mainly of the ‘false-positive’ variety. Some pollsters use additional questions on turnout at previous elections to produce a turnout probability but these also suffer from problems of recall and socially desirable responding.
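To make the mechanics concrete, here is a minimal sketch of how a self-report turnout weight can be applied. The respondents, the mapping from the 1–10 scale to a probability (simply score/10), and the column names are illustrative assumptions, not any particular pollster’s procedure:

```python
import pandas as pd

# Illustrative poll sample: stated vote intention plus self-rated
# likelihood to vote (1 = certain not to vote, 10 = certain to vote).
poll = pd.DataFrame({
    "vote":       ["CON", "LAB", "LAB", "CON", "LAB", "OTH"],
    "likelihood": [10, 9, 5, 10, 3, 8],
})

# One stylised approach: treat the stated score as a turnout
# probability and weight each respondent by it.
poll["turnout_weight"] = poll["likelihood"] / 10

unweighted = poll["vote"].value_counts(normalize=True) * 100
weighted = (poll.groupby("vote")["turnout_weight"].sum()
            / poll["turnout_weight"].sum()) * 100

print(unweighted.round(1))  # shares before the turnout filter
print(weighted.round(1))    # shares after down-weighting doubtful voters
```

Pollsters differ in practice: some count only respondents scoring 10/10, others weight by the score as above, and others fold in questions about turnout at past elections.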

A second approach is to use historical survey data containing a measure of actual turnout (either self-reported after the election or via validation of actual votes using the electoral register). Such data is used to build a statistical model which predicts turnout on the basis of demographic characteristics of respondents. This ‘historical’ model can then be applied to current polling data in order to produce turnout probabilities based on actual turnout patterns from the previous election. While this gets round the problems with faulty reporting by respondents, with this approach we must believe that patterns of turnout haven’t changed very much since the previous election, an assumption which cannot be tested at the time the estimates are required. And, as the EU referendum showed, sharp changes in patterns of turnout from one election to another can and do arise.
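A stylised sketch of this model-based approach, assuming a past post-election survey with a validated turnout measure and a current poll sharing the same demographic variables; all data and variable names here are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical post-election survey: demographics plus validated turnout.
past_survey = pd.DataFrame({
    "age":      [22, 34, 45, 58, 67, 71, 29, 50],
    "graduate": [1, 0, 1, 0, 0, 1, 1, 0],
    "voted":    [0, 0, 1, 1, 1, 1, 0, 1],
})

# Fit turnout on demographics using the previous election's patterns.
model = LogisticRegression().fit(past_survey[["age", "graduate"]],
                                 past_survey["voted"])

# Apply the historical model to current poll respondents; the resulting
# probabilities become turnout weights in the vote share estimate.
current_poll = pd.DataFrame({"age": [19, 40, 70], "graduate": [1, 0, 0]})
current_poll["turnout_prob"] = model.predict_proba(
    current_poll[["age", "graduate"]])[:, 1]
print(current_poll)
```

The assumption flagged above is visible in the code: the coefficients are estimated entirely from the previous election, so any genuine change in turnout patterns between elections never enters the model.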

In sum, turnout weighting is an essential component of accurate polling but there is no failsafe way of doing it.

The inquiry into the 2015 election polling concluded that, although the turnout probabilities used by the pollsters in that election were not very accurate, there was little evidence to suggest these were the cause of the polling errors. Might inaccuracies in the turnout weights be more consequential in 2017?

Effect of turnout weighting on vote intention estimates

We can get some handle on this by comparing the poll estimates of the Conservative-Labour margin before and after turnout weights have been applied. The table below shows estimated Conservative and Labour vote shares before and after turnout weighting for eleven recently published polls. It is clear that the turnout weights have a substantial effect on the size of the Conservative lead. Without the turnout weight (but including demographic and past-vote weights), the average Conservative lead over Labour is 5 percentage points. This doubles to 10 points after turnout weights have been applied.

 

| Pollster | Fieldwork end date | CON % (with turnout weight) | LAB % (with turnout weight) | CON lead (with turnout weight) | CON % (no turnout weight) | LAB % (no turnout weight) | CON lead (no turnout weight) |
|---|---|---|---|---|---|---|---|
| ORB/Sunday Telegraph | 4th June | 46 | 37 | 9 | 44 | 38 | 6 |
| Ipsos MORI/Standard | 1st June | 45 | 40 | 5 | 40 | 43 | -3 |
| Panelbase | 1st June | 44 | 36 | 8 | 40 | 39 | 1 |
| YouGov/Times | 31st May | 42 | 39 | 3 | 41 | 39 | 2 |
| Kantar | 30th May | 43 | 33 | 10 | 40 | 34 | 6 |
| ICM/Guardian | 29th May | 45 | 33 | 12 | 41 | 38 | 3 |
| Survation (phone) | 27th May | 43 | 37 | 6 | 43 | 37 | 6 |
| ComRes/Independent | 26th May | 46 | 34 | 12 | 43 | 38 | 5 |
| Opinium | 24th May | 45 | 35 | 10 | 42 | 36 | 6 |
| Survation (internet) | 20th May | 46 | 34 | 12 | 43 | 33 | 10 |
| GfK | 14th May | 48 | 28 | 20 | 45 | 29 | 16 |
| Mean |  |  |  | 10 |  |  | 5 |
| S.D. |  |  |  | 4.5 |  |  | 4.9 |

Particularly notable are the Ipsos MORI estimates, where turnout weighting turns a 3-point Labour lead into a 5-point lead for the Conservatives. Similarly, ICM’s turnout adjustment turns a 3-point Conservative lead into a 12-point one. It is also evident that pollsters using some form of demographic modelling to produce turnout probabilities tend to have somewhat higher estimates of the Conservative lead. For this group (Kantar, ICM, ORB, Opinium, ComRes), the turnout weight increases the Conservative lead by an average of 5.4 points, compared to 3.7 points for those relying on self-report questions only.

It is also worth noting that the standard deviation of the Conservative lead is actually slightly lower with the turnout weights (4.5) than without (4.9). So, the turnout weighting would not appear to be the main cause of the volatility between the polls that has been evident in this campaign.

This pattern represents a substantial change in the effect of the turnout weights compared to polls during the 2015 campaign, where the increase in the Conservative lead due to turnout weighting was less than one percentage point (for the nine penultimate published polls conducted by members of the British Polling Council).

Why is turnout weighting having a bigger effect now than it did in 2015? One reason is that many pollsters are applying more aggressive procedures than they did in 2015, with the aim of producing implied turnout in their samples that is closer to what it will actually be on election day. While there is a logic to this approach it seems, in effect, to rely on getting the turnout probabilities wrong in order to correct for over-representation of likely voters in the weighted samples.

A second reason turnout weighting matters more in this election is that the age gap in party support has increased since 2015, with younger voters even more likely to support Labour and older voters to support the Conservatives. Thus, any adjustment that down-weights younger voters will have a bigger effect on the Conservative lead now than it did in 2015.

Corbyn-mania among younger voters?

Another idea that has been advanced in some quarters is that young voters are over-stating their likelihood to vote in this election even more than they did in 2015. Come election day, these younger voters will end up voting at their recent historical levels and Labour will underperform their polling as a result.

We can obtain some leverage on this by comparing the distributions of self-reported likelihood to vote for young voters, aged 18-24, in 2015 and 2017 (the 2017 figures are from the polls in the table above, the 2015 estimates are taken from the penultimate published polls in the campaign). We also present these estimates for the oldest age category (65+). There is no evidence here that younger voters are especially enthused in 2017 compared to 2015. And, while the implied level of turnout is substantially too high for both age groups, the 20 point gap between them is broadly reflective of actual turnout in recent elections.

[Figure: distributions of self-reported likelihood to vote among the 18-24 and 65+ age groups, 2015 and 2017]

The inquiry into the 2015 polling miss found that representative sampling was the primary cause of the under-statement of the Conservative lead. The fact that implied turnout is still so high in the current polls suggests that the representativeness of samples remains a problem in 2017, on this measure at least. Turnout weighting is having a much bigger effect on poll estimates now than it did in 2015. This may be because the pollsters have improved their methods of dealing with the tricky problem of turnout weighting. However, it also suggests that getting turnout weighting right in 2017 is likely to be both more difficult and more consequential than it was in 2015.

Dragging Santa into Politics

By Will Jennings, Professor of Political Science and Public Policy at the University of Southampton (Academia.edu, Twitter). Read more posts by Will here.


[Image: upside-down union flag banner in Hedge End. Source: @john_neptune]

“An urgent message for Father Christmas: all we wish for is our country back!” was the slogan spotted emblazoned across an upside-down Union Jack in Hedge End, Hampshire this week. Poor old Santa is increasingly being dragged into the mud of partisan politics, on both sides of the Atlantic. Despite the plea for Father Christmas to return the UK to some rose-tinted age, British voters are not convinced that he is a Ukip supporter. Surveyed by YouGov last December, just 13% of people thought that Father Christmas would vote Ukip. This should perhaps not be a surprise given that, as has been pointed out, he “is effectively a foreigner doing a job at the expense of hard-working parents across the country.” The much more widespread view was that Santa is a Labour voter (the view of 27% of respondents), no doubt because of his obvious distributive politics, or a Green (23%), thanks to the low carbon emissions of his sleigh and his role in the conservation of reindeer. As British politics has become increasingly fragmented, and struggles to cope with a new era of multi-party politics, there is no clear consensus among the electorate on which side of the partisan divide Santa belongs.

[Figure: which party voters think Father Christmas would vote for]

These figures mask an underlying partisan divide in how voters view Santa. Once “don’t knows” and non-voters are excluded, 64% of Labour supporters think he votes Labour. Among Conservative supporters, 59% are of the view that Santa shares their political predilection. Meanwhile, 60% of Ukip supporters believe that Santa is a member of Nigel Farage’s purple army, despite his distinctly red suit and his careless disregard for EU border controls as he delivers presents across the world. Even when it comes to Santa, most people cannot put politics aside: they see him through a partisan prism.

The same pattern is observed in the US, where a Public Policy Polling survey in 2012 asked: “Do you think Santa Claus is a Democrat or a Republican?” In response, 44% of the US public thought Santa was a Democrat and 28% a Republican, with 28% unsure. Again, once we break the numbers down by partisanship we see starker divisions. Some 79% of Democrats believe Santa votes just like them, compared to 61% of Republicans – suggesting that even some partisan Republicans can recognise the distributive politics involved in giving large numbers of presents to children. Indeed, the evidence suggests that Santa has been a victim of growing political polarization in the US. Strikingly, 85% of Republicans thought that Santa was more likely to leave Obama a lump of coal than gifts, suggesting that the festive spirit has not exactly led Americans to put aside their partisan differences. Republicans are also marginally more likely than Democrats, by 49% to 41%, to tell daddy if they see mommy kissing Santa Claus, though the jury is still out on the political implications of this. The dragging of Santa into the world of partisan politics on both sides of the Atlantic offers a nice illustration of the degree to which partisanship structures so much of our everyday lives – even when it relates to (shhhhhhh!) fictional characters…

Polling Observatory #38: Polls may bounce, but public opinion usually doesn’t

This is the thirty-eighth in a series of posts that report on the state of the parties as measured by opinion polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates of public opinion can be found here.
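The intuition behind pooling can be conveyed with something far simpler than our full Bayesian model: a precision-weighted average, in which each poll counts in proportion to the inverse of its sampling variance. The polls below are invented for illustration:

```python
# Toy polls: (party share in %, sample size).
polls = [(34.0, 1000), (36.5, 1500), (33.0, 800), (35.0, 2000)]

def pooled_share(polls):
    """Weight each poll by 1/variance, where the sampling variance of a
    share p estimated from n respondents is roughly p * (1 - p) / n."""
    num, den = 0.0, 0.0
    for share, n in polls:
        p = share / 100
        var = p * (1 - p) / n
        num += share / var
        den += 1 / var
    return num / den

# Single-poll "bounces" get diluted by every other recent poll,
# which is why they mostly wash out in the aggregate.
print(f"Pooled share: {pooled_share(polls):.1f}%")
```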

[Figure: Polling Observatory estimates of underlying party support, anchored on the average pollster]

This was a month of bouncing polls. Early in June, several polls pointed to an unexpected rebound in Labour’s fortunes, leading to a brief flurry of speculation about a Labour surge. Then, right at the end of the month (and mostly outside the window that our latest estimates refer to), polls started to show a slight recovery for the Conservatives, which was immediately labelled a “Juncker bounce” by the media, particularly those parts of it that approved of David Cameron’s fruitless campaign to prevent former Luxembourg Prime Minister Jean-Claude Juncker from taking over as President of the European Commission.

The Polling Observatory’s method takes a more conservative view of movements in public opinion. Our estimates for the first of July put Labour at 34.6%, up 0.8 points on a month ago. While this is a modest rebound, it nonetheless represents a reversal of the downward trend evident for most of 2014 to date, and is the first significant up-tick in support for Labour since the autumn of last year. Conservative support is stable at 30.8%, down just 0.1 points on last month. However, this is without most of the alleged “Juncker bounce” polls collected in the first week of July, which, when added in, may push the Conservatives modestly higher than they were in late May – but this remains to be seen.

There is little evidence yet of a fall in UKIP support now that the European Parliament elections have passed, confounding the expectations of pundits who believed the European election victory was the “peak UKIP moment”. Our estimates have Farage’s party at 14.8%, down just 0.1 points on last month. The Liberal Democrats, however, continue to slide to new record lows. This month they register just 8.8%, down 0.5 points on last month and an all-time low under our new methodology.

While our model does register significant month-on-month, and even week-on-week, shifts in public opinion, these are never as dramatic as those shown in the polls which grab the most headlines. The truth is that such bounces are far too large to be plausible as real movements in public opinion — a 7 point swing, for example, would require 2 million people to change their vote preferences in a single week or month. This simply does not happen in the absence of a very powerful change in the political context. It is just not very plausible to believe that 2 million people switched to Labour at the beginning of the month, without any compelling reason to do so, or that a similar mass of voters were won over to the Conservatives by Cameron’s quixotic anti-Juncker campaign.
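The back-of-the-envelope arithmetic behind that figure, assuming roughly 30 million votes cast (in line with recent general election turnouts):

$$
7\% \times 30{,}000{,}000 \text{ votes} \approx 2{,}100{,}000 \text{ voters changing preference}
$$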

Once the polls are aggregated together, and the noise inevitably produced by random sampling variation is filtered out, the bounces in public opinion from month to month become much smaller. The largest weekly shifts in support we find to date in this Parliament mostly occur at the very beginning, when Lib Dem support fell by 1.5% in the second week after the general election, and then carried on falling at a similar rate for several weeks afterwards, while Labour support shifted upwards at a similar rate. In fact, almost half of the 20 largest weekly changes in public opinion in this Parliament are accounted for by the Lib Dems’ post-election collapse. This shift in preferences followed a hugely significant and largely unexpected event – the formation of a coalition between the Liberal Democrats and the Conservatives. Millions of Liberal Democrats who had regarded the party as an ideological stable mate of Labour saw their vote choices in a new light and changed their preferences accordingly.

| Party | Weeks since May 6, 2010 (week starting) | Weekly change in vote intentions (points) |
|---|---|---|
| Lib Dem | 2 (13/05/2010) | -1.5 |
| Lib Dem | 3 (20/05/2010) | -1.5 |
| Lab | 2 (13/05/2010) | 1.4 |
| Lab | 3 (20/05/2010) | 1.3 |
| Con | 101 (05/04/2012) | -1.2 |
| Lib Dem | 4 (27/05/2010) | -1.2 |
| Con | 86 (22/12/2011) | 1.2 |
| Lib Dem | 5 (03/06/2010) | -1.0 |
| Lib Dem | 9 (01/07/2010) | -0.9 |
| Lab | 4 (27/05/2010) | 0.9 |
| Con | 102 (12/04/2012) | -0.9 |
| Lib Dem | 6 (10/06/2010) | -0.9 |
| Lib Dem | 8 (24/06/2010) | -0.8 |
| Con | 100 (29/03/2012) | -0.8 |
| Con | 159 (16/05/2013) | -0.8 |
| Lib Dem | 7 (17/06/2010) | -0.8 |
| Lab | 34 (23/12/2010) | 0.8 |
| Lab | 86 (22/12/2011) | -0.8 |
| Lib Dem | 10 (08/07/2010) | -0.8 |
| Lab | 107 (17/05/2012) | 0.7 |

The other major shift in voters’ preferences during this parliament (aside from a sharp but short-lived rally in Conservative support immediately after David Cameron’s European summit veto in December 2011) came in the aftermath of the “omnishambles” budget of March 2012, with the Conservatives’ poll rating falling by around a percentage point three weeks in a row. Even then, there are several different explanations for this dramatic shift in preferences (as we discussed at the time here), which may have combined to make something of a perfect political storm – wrecking the Conservatives’ reputation for competence, alienating previously supportive groups, and reinforcing negative stereotypes about the ‘nasty party’.

Events such as the formation of a governing coalition between two parties that were not regarded as natural allies can produce large swings in the polls; so can highly visible examples of incompetence or economic crises. However, the vast majority of political events are nowhere near as significant. This is why the correct initial reaction to any headline of the form “x produces bounce in polls” is “let’s wait and see”. In the vast majority of cases, the apparent realignment of voters is swiftly revealed to be a statistical phantom.

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Student-Designed YouGov Poll on Aspirations of Young Adults

YouGov recently released results from a poll designed by students in PAIR2004: Research Skills in Politics and International Relations. One of the key findings?

“18-24 year olds are more likely to emphasise the importance of their career in the next 10 years – and much more likely to consider creating a bucket list than the older generations.”

Read the written report by Hazel Tetsill.

The Polling Observatory Forecast #2: Still A Dead Heat, Despite Recent Turbulence…

As explained in our inaugural election forecast last month, up until May next year the Polling Observatory team will be producing a long term forecast for the 2015 General Election, using methods we first applied ahead of the 2010 election (and which are also well-established in the United States). Our method involves trying to make the best use of past polling evidence as a guide to forecast the likeliest support levels for each party in next May’s election (see our previous research here), based on current polling, and then using these support levels to estimate the parties’ chances of winning each seat in the Parliament. We will later add a seat-based element to this forecast.

[Figure: Polling Observatory forecast of 2015 general election vote shares, 1 June 2014]

In light of the turbulence of the polls over the course of the European election campaign (with a Lord Ashcroft poll showing the Conservatives ahead for the first time since March 2012), inquests into the insipid performance of Labour and Ed Miliband, better-than-expected results for the Conservatives in local and European elections, and a disastrous showing by the Liberal Democrats, some might have expected a turning point or a step change in the predictions for May 2015, consistent with the tendency of governments to recover in the polls during the final year. However, some degree of recovery is already built into our model, and there is, as yet, no evidence that the Conservatives are outperforming the historical trend. Our forecast puts Labour and the Conservatives in a dead heat, as it did last month: we currently forecast both parties to receive 35.8% of the vote. In part this reflects the very recent uptick in Labour support following a decline over recent months. More significantly, though, it reflects the fact that both parties are polling well below their historical level, and we therefore expect both to make some recovery in the polls. However, the prospect of a recovery to the kind of levels seen by winners in past elections (40% or more) is tempered by the very low starting point for both main competitors. Both main parties are likely to put in weaker performances than in the past, even with a recovery from the current low ebb, but at present history continues to suggest a very tight race to the finish next spring.

Polling Observatory #37: No Westminster polling aftershock from European Parliament earthquake

This is the thirty-seventh in a series of posts that report on the state of the parties as measured by opinion polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. If there can ever be a definitive assessment of the parties’ standings, this is it. Further details of the method we use to build our estimates of public opinion can be found here.

[Figure: Polling Observatory estimates of underlying party support, anchored on the average pollster]

This month’s Polling Observatory comes in the aftermath of the European Parliament elections and the so-called UKIP earthquake in the electoral landscape. Despite much volatility in the polls ahead of those elections, with a few even putting the Conservatives ahead of Labour for the first time in over two years, underlying support for both main parties remained stable over the course of the month. Labour may have dipped early in the month in the run-up to the European elections, or the Conservative leads may have been the result of random variation. In any event, by the end of the month we had Labour polling at 33.8%, just 0.2 points down on their support a month ago. The Conservatives are also broadly flat at 30.9%, 0.3 points below their standing a month ago. The Lib Dems, on 9.3%, down 0.4 points, have suffered slightly more of a post-election hangover, perhaps set back by infighting over the botched coup by Lord Oakeshott and the widespread ridicule of the Clegg/Cable beer-pulling photo op. UKIP support remained stable at record high levels as they enjoyed a moment in the limelight around the European Parliament elections; we have them rising 0.2 points on last month to 14.9%, their highest support level to date. Note that all these figures are based on our adjusted methodology, which is explained in detail below.

It is noticeable that while Labour’s support has been in decline for the last six to nine months (having plateaued for a period before that), underlying Conservative support has remained remarkably stable around the 31% level. In fact, setting aside the slight slump around the time of the last UKIP surge at the 2013 local elections, their standing with the electorate has been flat since it crashed in April 2012 around the time of the ‘omnishambles’ budget. The narrowing of Labour’s lead over the past year is entirely the result of Labour losing support, not of the Conservatives gaining it. We have written at length previously about how the fate of the Liberal Democrats was sealed in late 2010, and as such it is remarkable that there has been so little movement in the polls for the parties in government this parliament. The prevalent anti-politics mood in the country and continued pessimism about personal and household finances have meant that neither of the Coalition partners has yet been able to convert the economic recovery into a political recovery. Instead, both are gaining ground only in relative terms, as the main opposition party also leaks support, perhaps likewise succumbing to the anti-Europe, anti-immigration, anti-Westminster politics of UKIP.

As explained in our methodological mission statement, our method estimates current electoral sentiment by pooling all the currently available polling data, while taking into account the estimated biases of the individual pollsters (“house effects”). Our method therefore treats the 2010 election result as a reference point for judging the accuracy of pollsters, and adjusts the poll figures to reflect the estimated biases in the pollsters’ figures based on this reference point. Election results are the best available test of the accuracy of pollsters, so when we started our Polling Observatory estimates, the most recent general election was the obvious choice to “anchor” our statistical model. However, the political environment has changed dramatically since the Polling Observatory began, and over time we have become steadily more concerned that these changes have rendered our method out of date. Yet changing the method of estimation is also costly, as it interrupts the continuity of our estimates and makes it harder to compare our current estimates with the figures we reported in our past monthly updates.

There were three concerns about the general election anchoring method. Firstly, it was harsh on the Liberal Democrats, who were over-estimated by pollsters ahead of 2010 but have been scoring very low in the polls ever since they lost over half their general election support after joining the Coalition. The negative public views of the Liberal Democrats, and their very different political position as a party of government, make it less likely that the current polls are over-estimating their underlying support. Secondly, a general election anchor provides little guidance on UKIP, who scored only 3% in the general election but now poll in the mid-teens, with large disagreements between pollsters over their estimated support (see the discussion of house effects below). Thirdly, the polling ecosystem itself has changed dramatically since 2010, with several new pollsters starting operations, and several established pollsters making changes to their methodology so significant that they are effectively new pollsters as well.

We have decided that these concerns are sufficiently serious to warrant an adjustment to our methodology. Rather than basing our statistical adjustment on the last general election, we now make adjustments relative to the “average pollster”. This assumes that the polling industry as a whole will not be biased. This assumption could prove wrong, of course, as it did in 2010 (and, in a different way, 1992). However, it seems pretty likely that any systematic bias in 2015 will look very different to 2010, and as we have no way of knowing what the biases in the next election might be, we apply the “average pollster” method as the best interim guide to underlying public opinion.
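A stripped-down illustration of the “average pollster” adjustment, assuming a simple table of polls grouped into shared fieldwork windows; our actual model estimates the house effects jointly within the Bayesian aggregation, so this is only a sketch of the idea:

```python
import pandas as pd

# Toy polls: fieldwork window, pollster, and reported UKIP share (%).
polls = pd.DataFrame({
    "window":   [1, 1, 2, 2, 3, 3],
    "pollster": ["A", "B", "A", "B", "A", "B"],
    "ukip":     [12.0, 16.5, 12.5, 17.0, 13.0, 17.5],
})

# The industry average within each fieldwork window stands in for the
# hypothetical "average pollster".
window_mean = polls.groupby("window")["ukip"].transform("mean")

# A house effect is a pollster's mean deviation from that average...
house_effect = (polls["ukip"] - window_mean).groupby(polls["pollster"]).mean()

# ...and adjusted figures simply have that deviation removed, pulling
# every house toward the industry consensus.
polls["ukip_adjusted"] = polls["ukip"] - polls["pollster"].map(house_effect)
print(house_effect)
print(polls[["pollster", "ukip", "ukip_adjusted"]])
```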

This change in our methodology has a slight negative impact on our current estimates for both leading parties. Labour would be on 34.5% if anchored against the 2010 election, rather than the new estimate of 33.8%, while the Conservatives would be on 31.5% rather than 30.9%. Yet as both parties fall back by the same amount, their relative position is unchanged. UKIP gain slightly from the new methodology: our new estimate is 14.9%, where under the old method they would score 14.5%. However, the big gainers are the Lib Dems, who were punished under our old method for their strong polling in advance of the 2010 general election. We now estimate their vote share at 9.3%, significantly above the anaemic 6.7% produced under the previous method. This is in line with our expectations from earlier discussions of the method in previous posts. It is worth noting that none of these changes affect the overall trends in public opinion that we have been tracking over the last few years, as is clear from the charts above.

The European Parliament elections prompted the usual inquest into which of the nation’s pollsters had the lowest average error in their final polls compared against the result (see here). We cannot simply extrapolate the accuracy of polling for the European elections to next year’s general election. For one thing, these sorts of ‘final poll league table’ are subject to sampling error, making it extremely difficult to separate the accuracy of the polls once this is taken into account (as we have shown here). Nevertheless, with debate likely to continue to rage over the extent of the inroads being made by UKIP as May 2015 approaches, some of the differences observed in the figures reported by the polling companies will come increasingly under the spotlight. These ‘house effects’ are interesting in themselves because they provide us with prior information about whether an apparently high or low poll rating for a party, reported by a particular pollster, is likely to reflect an actual change in electoral sentiment or is more likely to be down to the particular patterns of support associated with that pollster.

Our new method makes it possible to estimate the ‘house effect’ for each polling company for each party, relative to the vote intention figures we would expect from the average pollster. That is, it tells us simply whether the reported vote intention for a given pollster is above or below the industry average. This does not indicate ‘accuracy’, since there is no election to benchmark the accuracy of the polls against. It could be, in fact, that pollsters at one end of the extreme or the other are giving a more accurate picture of voters’ intentions – but an election is the only real test, and even that is imperfect.

In the table below, we report all current polling companies’ ‘bias’ for each of the parties. We also report details of whether the mode of polling is telephone or Internet-based, and adjustments used to calculate the final headline figures (such as weighting by likelihood to vote or voting behaviour at the 2010 election). From this, it is quickly apparent that the largest range of house effects come in the estimation of UKIP support, and seem to be associated with the method a pollster employs to field a survey. All the companies who poll by telephone (except Lord Ashcroft’s new weekly poll) tend to give low scores to UKIP. By contrast, three of the five companies which poll using internet panels give higher than average estimates for UKIP. ComRes provide a particularly interesting example of this “mode effect”, as they conduct polls with overlapping fieldwork periods by telephone and internet panel. The ComRes telephone-based polls give UKIP support levels well below average, while the web polls give support levels well above it. It is not clear what is driving this methodological difference – something seems to be making people more reluctant to report UKIP support over the telephone, more eager to report it over the internet, or both. The diversity of estimates most likely reflects the inherent difficulty of accurately estimating support for a new party whose overall popularity has risen rapidly, and where the pollsters have little previous information to use to calibrate their estimates.

| House | Mode | Adjustment | Prompt | Con | Lab | Lib Dem | UKIP |
|---|---|---|---|---|---|---|---|
| ICM | Telephone | Past vote, likelihood to vote | UKIP prompted if ‘other’ | 1.3 | -0.9 | 2.8 | -2.4 |
| Ipsos-MORI | Telephone | Likelihood (certain) to vote | Unprompted | 0.5 | 0.4 | 0.5 | -1.6 |
| Lord Ashcroft | Telephone | Likelihood to vote, past vote (2010) | UKIP prompted if ‘other’ | -0.7 | -0.8 | -1.2 | 0.9 |
| ComRes (1) | Telephone | Past vote, squeeze, party identification | UKIP prompted if ‘other’ | 0.6 | 0.0 | 0.2 | -2.5 |
| ComRes (2) | Internet | Past vote, squeeze, party identification | UKIP prompted if ‘other’ | 0.3 | -0.7 | -1.0 | 1.8 |
| YouGov | Internet | Newspaper readership, party identification (2010) | UKIP prompted if ‘other’ | 1.9 | 2.1 | -1.3 | -0.2 |
| Opinium | Internet | Likelihood to vote | UKIP prompted if ‘other’ | -0.8 | -0.9 | -2.3 | 3.0 |
| Survation | Internet | Likelihood to vote, past vote (2010) | UKIP prompted | -1.8 | -1.5 | -0.2 | 4.4 |
| Populus | Internet | Likelihood to vote, party identification (2010) | UKIP prompted if ‘other’ | 2.3 | 1.5 | 0.2 | -2.2 |

House effects are in percentage points relative to the ‘average pollster’.

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

The Polling Observatory Forecast #1: Lessons for 2015 from polling history

With a year to go, the Polling Observatory team launch their forecast for the 2015 general election…

Starting this month, the Polling Observatory team is joined by a new member: our old friend and colleague, Christopher Wlezien of the University of Texas at Austin, who will be helping us to produce a long term forecast for the 2015 General Election, using methods we first applied ahead of the 2010 election. Our method involves trying to make the best use of past polling evidence as a guide to forecast the likeliest support levels for each party in next May’s election, based on current polling, and then using these support levels to estimate the parties’ chances of winning each seat in the Parliament. In this first post, we introduce the poll-based element of this model; in later posts we will introduce and explain the seat-based element.

The past is, of course, an imperfect guide, as voters and parties change and each election is, to some extent, unique. However, this does not mean past polling tells us nothing. On the contrary, as we have shown in previous research (non-gated version here), careful analysis of past polling reveals common underlying trends and patterns in British public opinion. It is these trends and patterns which we use to estimate our forecast of the likely path of public opinion. In 2010, our method fared relatively well against the alternatives on offer, getting the overall outcome of a hung Parliament with the Conservatives as the largest party correct, and coming quite close on all three parties’ seat totals. The forecast performed well relative to other forecasting models published by colleagues, notably beating celebrated polling oracle Nate Silver’s prediction for the British election.

The method works in the following way. Thanks to the Fixed Term Parliaments Act, we know how many days remain until the next general election. For any given day, we can use all the polling data from past general election campaigns to estimate two things: how closely the current polling is likely to reflect the election outcome, and which direction public opinion is likely to move in between now and election day. We do this for each of the three main parties separately, seeing what polling history can tell us about their respective fates.

This is one of the simplest possible ways of forecasting how elections will turn out, and it leaves out an awful lot. We do not look at the impact of leader approval ratings, the objective state of the economy, or public economic perceptions – things which other models have used as forecasting tools. We simply take the best possible estimate of where public opinion is today (an estimate constructed using our poll aggregation method) and ask: How informative does history suggest this estimate will be as a prediction of the next election? Where does history suggest public opinion will move between now and election day?

Our method starts by considering the systematic and predictable ways in which the public’s intention to vote for parties varies over the election cycle – based on past evidence. Some shifts in public opinion are impossible to anticipate, such as in reaction to shocks or events. Other dynamics may be more predictable, however; for instance that pre-election poll leads tend to fade or that parties may benefit from ‘campaign effects’ (such as due to increased attention during the official election campaign). To forecast the election day vote share, we need to know the relationship between vote intention t days before the election and the vote share in past elections. Therefore, the first step in our forecasting procedure is to estimate the relationship between vote intention and vote share through a series of regressions – for each of the main parties – for each day of the election cycle. To do this, we use all available polling data since 1945 (more than 3,000 polls) – across seventeen elections. This allows us to determine both how well the polls predict the final outcome on a given day (unsurprisingly the polls become more predictive the closer we get to the election), and to determine whether support for a party is above or below its long-term equilibrium level – and is likely to gain or fade as the election approaches.
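Schematically, this first step looks like the sketch below, with random arrays standing in for the real historical polling; it illustrates the structure of the calculation rather than our estimation code:

```python
import numpy as np

# Stand-in history for one party: poll share t days before each of 17
# past elections (rows = elections, columns = days out), and the
# eventual vote share at each election.
rng = np.random.default_rng(0)
days_out = 365
poll_history = 35 + rng.normal(0, 3, size=(17, days_out))
final_share = 35 + rng.normal(0, 2, size=17)

# One regression per day of the cycle: final share on poll share.
slopes, intercepts = np.empty(days_out), np.empty(days_out)
for t in range(days_out):
    slope, intercept = np.polyfit(poll_history[:, t], final_share, 1)
    slopes[t], intercepts[t] = slope, intercept

# slopes[t] summarises how informative a poll t days out is: a slope
# near 1 means polls map directly onto votes; a slope below 1 means
# current leads tend to fade toward the long-run mean.
```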

Our past work, using this very simple method, suggests it has some useful lessons to teach us. Firstly, we find that the predictive power of polls evolves differently for different parties: polling of Conservative support becomes steadily more predictive from over a year out, while for Labour and the Liberal Democrats, the main improvement in accuracy comes in the last six months. Secondly, we find that support for Labour, the Liberal Democrats and the Conservatives tends to “regress to the mean” – if current support is above the long run average, it tends to fall; if it is below, then it tends to rise.

While the daily regressions teach us a lot, there is a fair bit of “noise” in the estimates, as each regression is based on only 17 data points (one per election). However, we have a separate regression for each day, and we know that the estimates for one day should differ only so much from those for the days immediately before and after it. Therefore, the second step in our forecast procedure is to reduce the noise by smoothing the regression estimates over time. The procedure we use is similar to the one a sound engineer would use to remove static from a sound recording.
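Continuing the sketch, the noisy day-by-day coefficients can be cleaned with a simple low-pass filter; a centred moving average is the crudest member of the family of smoothers we have in mind:

```python
import numpy as np

def smooth(series, window=31):
    """Centred moving average: damps day-to-day noise in the daily
    regression estimates while keeping the slow-moving trend, much as
    a low-pass filter removes static from an audio signal."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

# e.g. a noisy stand-in for the daily slope estimates above:
rng = np.random.default_rng(1)
noisy_slopes = 0.8 + rng.normal(0, 0.2, 365)
smoothed_slopes = smooth(noisy_slopes)
```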

The third step in our forecast is to use the smoothed regressions estimates to produce forecasts by plugging our daily estimates of vote intention into the smoothed regression equations. These vote intention estimates come from the same poll aggregation method we use in our monthly Polling Observatory updates – a Bayesian averaging of the polls.

In the final step, we pool the forecasts over 30-day intervals, so that the pooled forecast on each day is a Bayesian averaging of all forecasts made in the 30 days up to that day.
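Again schematically, with a plain rolling mean standing in for the Bayesian averaging of forecasts:

```python
import numpy as np

def pooled_forecast(daily_forecasts, window=30):
    """Average each day's forecast with all forecasts made over the
    preceding `window` days, damping day-to-day forecast noise."""
    f = np.asarray(daily_forecasts, dtype=float)
    return np.array([f[max(0, i - window):i + 1].mean()
                     for i in range(len(f))])

# Toy daily forecasts of one party's vote share:
daily = 36 + np.random.default_rng(2).normal(0, 0.5, 120)
print(f"Today's pooled forecast: {pooled_forecast(daily)[-1]:.1f}%")
```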

The predictions we get by applying these methods to current polling since March 2011 are shown in the figure below. Our forecast model has consistently predicted a very close result – the Conservative vote share is expected to recover from its current level of around 32%, rising to around 36%, within half a percentage point of Labour, whose poll share is not expected to change much from current levels. In vote share, the result is close to a dead heat – the Conservatives are currently forecast to have 36.1%, and Labour 36.5%. The Liberal Democrats are forecast to recover some ground from their current polling position, but still put in their weakest performance in decades, with a forecast vote share of 10.1%.

[Figure: Polling Observatory forecast of 2015 general election vote shares, 1 May 2014]

Forecasting vote shares can only take us so far, however. Westminster elections are won and lost in 650 separate battles for constituencies up and down the country. Aggregate vote shares are only an imperfect guide to the likely distribution of seats: our current forecast of a near dead heat in vote share, for example, would be likely to produce a clear Labour lead in seats. In future posts, we will employ the second part of our forecasting model to translate these vote shares into seat shares.

The seat-based section of the forecast also provides us with a mechanism to examine two of the big unknowns in our forecast – how the Liberal Democrats will perform after their first term in government for generations, and the impact of UKIP. Neither has any historical precedent, so we cannot model these effects in the historical part of the model. However, we can take an alternative approach, taking the baseline predictions from our historical model and applying different scenarios in the seat based part of the model. This will give us some sense of how sensitive the likely outcome is to changes in the fortunes of the two smaller parties. We will explore such scenarios in future posts.

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien.