Despite Trump, the United States Is Probably More Socially Liberal Than Ever

By Justin Murphy (@jmrphy), Lecturer in the Department of Politics and International Relations at the University of Southampton.


There is a lot of confusion about whether we’re seeing significant ideological change in the United States. With Trump and the re-appearance of white nationalism in the public spotlight, many people are wondering if conservative (right-wing) ideology is on the rise. One can find many influential outlets endorsing this notion: CBS, the BBC, Vox, and certainly others have all run articles suggesting this. On the other hand, many conservatives believe that “liberal” (left-wing in America) ideology is on the rise. There are good reasons for people to be confused, because the dynamics of ideology in the United States are confusing. To help clear up some of the confusion, I’ve written this guide to some of the basics of what political scientists know about the long-term historical dynamics of ideology in the United States, and how they shed light on what is happening, or not happening, right now.

If there is one substantial ideological shift in American public opinion in the post-war period, it is the dramatic and near-universal increase in social liberalism since the 1950s. There has not been a general shift to the left or right because economic conservatism has not changed much (although it has polarized on the left and right). There has been some cyclical, “thermostatic” movement in opinion (which is normal). There have been changes in symbolism (“liberalism” became stigmatized in the 1960s). And there have been some dramatic shifts in party identification (a pretty massive Republican resurgence with Reagan). Otherwise, one cannot say the American public has moved to the right or left as a whole, in any significant way, in the long-run or recently, except that it has become more socially liberal. There have been some interesting and substantial ideological shifts within groups, but that would need to be another post.

[Figure: Racial liberalism data from Atkinson et al. (2011)]

There is currently no good evidence I am aware of that overt racism or white nationalism is growing.¹ It likely appears larger than it is, especially to progressives, precisely because it has never been less common in American history. This says nothing about how such stupid and malicious groups should be dealt with.

This is my interpretation based on what we know about long-term ideological dynamics in the United States. For a more detailed tour of that data, see the post on my personal blog, “Are Americans becoming more conservative or liberal (right or left)?”

 


  1. The only exception I have found is the data on the number of “hate groups” collected by the Southern Poverty Law Center, which reveals an upward climb since 1999. I am not going to say it’s wrong in a dismissive footnote, because it would deserve more attention than that. But I am excluding it from consideration here for a few reasons. First, it includes a wide variety of groups well beyond explicitly racist or white nationalist groups, including black separatist groups. So in this sense it does not reflect what I am considering in this post. But also the SPLC has come under fire for being increasingly politicized and untrustworthy as a data source. See this article from Politico, for instance. My personal view is that there has been a tendency in recent years for progressive groups to lower their bar for what counts as a hate group, and at least a few cases on the SPLC’s list suggest to me this has occurred there, at least to some degree.

Polling Observatory campaign report #4: Unexpected but not unusual twists and turns in the campaign polls

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). You can read more posts by The Polling Observatory here.


This post is part of a long-running series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here and here.
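To see why pooling helps, here is a deliberately simplified illustration in R – not the model we actually use (see the method posts linked above), and with hypothetical poll figures. Each poll’s estimate of a party’s share is weighted by the inverse of its sampling variance, so that larger, less noisy polls count for more:

# Hypothetical example: five polls' estimates of one party's vote share
share <- c(0.43, 0.45, 0.42, 0.46, 0.44) # reported shares
n <- c(1000, 1500, 900, 2000, 1100) # sample sizes

v <- share * (1 - share) / n # sampling variance of each poll's estimate

# Inverse-variance pooled estimate and its standard error
pooled <- sum(share / v) / sum(1 / v)
pooled_se <- sqrt(1 / sum(1 / v))
round(c(estimate = pooled, se = pooled_se), 4)

The pooled standard error is smaller than that of any single poll, which is why the day-to-day movements of individual surveys should be heavily discounted.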

The general election is now just two days away, after a campaign that has defied pundits’ expectations of a walkover for Theresa May’s Conservatives, and seen both surprises and tragic events along the way. While the pollsters will likely deliver their final verdict on what voters are saying tomorrow, the Polling Observatory brings you its final roundup of the polls – as they stood up to Sunday night. We may yet see a late swing from the voters, as the choice between the parties becomes clearer in their minds. As such, our estimates remain ‘a snapshot, and not a prediction’.

In the main, there has been little change from the trends that we reported last week: the Conservatives retain a substantial lead in the polls, though are down from 44.5% to 43.8%, while Labour’s resurgence continues – now on 36.8%, up from 35.5% last week. Consequently, what was a 9-point gap (averaged across the pollsters) is now a 7-point gap. However, the change is within the error of most polls and there is considerable variation in the size of leads that pollsters are showing – in part due to the different turnout adjustments being applied. Based on the range of pollsters’ headline figures, the projected results include anything between a hung parliament and a Conservative landslide, hardly providing clarity on matters. Meanwhile, the Liberal Democrats, UKIP and Greens continue to endure a miserable campaign. Current trends suggest that the big two parties will be more politically dominant in this election than at any time for a generation.

[Figure: Polling Observatory estimates of party support to 4 June 2017, anchored on the average pollster]

The upturn in Labour’s support has led some to suggest this is the biggest shift in the polls during an election campaign since 1945. It is not entirely without precedent, though. In 2010, the surge in Liberal Democrat support following Nick Clegg’s highly effective performance in the first ever televised leaders’ debate – “Cleggmania” – was of similar scale to Labour’s gains in the polls in the 2017 election (around 10 points). That shift in the polls occurred over the course of just seven days, whereas during this campaign Labour’s poll numbers have risen steadily over a six-week period. Some of the Liberal Democrats’ gains in the polls after the 2010 debate dissipated in the subsequent weeks of the campaign, and most of the remaining effects vanished by the time people voted. This is shown below, where the blue, red and yellow markers indicate the actual election result for each of the parties in 2010 – with the Liberal Democrat line notably ending well above the marker indicating the party’s result. In contrast, the trends in party support during the 2017 campaign have been more gradual – with no sharp upticks or downticks for either the Conservatives or Labour. This may suggest there is less risk of pollsters overshooting in measuring the Labour surge, but only time will tell whether this is the case.

[Figure: poll trends during the 2010 campaign, with election results marked]

[Figure: poll trends during the 2017 campaign]

It is also possible to test this claim historically, based on the observed variance in all polls conducted over the campaign. For this, we use 574 polls conducted during the last thirty days of the campaign, for all elections between 1959 and 2017. The results are shown in the table below. What is striking from this analysis is that the variance of Labour’s poll numbers has been high by historical standards, but still lower than that of the Liberal Democrats’ polling in 2010, 1983 or February 1974, or of Labour’s polling in 2001 or 1983. The mean variance in the polls across the three parties is also not that much above the historical average (5.6 compared to 4.9). While 2017 has been a surprising and eventful campaign, it does not differ that much from past elections in terms of the variability of the polls. Indeed, it is apparent from the table that the 2015 campaign was quite anomalous in the stability of the polls, which may be influencing our perceptions of how volatile polls can be during UK elections.

Variance in all polls conducted in the last thirty days of each campaign

| Election | Conservatives | Labour | Liberals/SDP/Liberal Democrats | Mean | N of polls |
|---|---|---|---|---|---|
| 1959 | 3.70 | 2.89 | 1.85 | 2.81 | 10 |
| 1964 | 1.03 | 1.34 | 0.77 | 1.05 | 6 |
| 1966 | 0.78 | 1.32 | 0.28 | 0.79 | 6 |
| 1970 | 4.12 | 1.50 | 1.56 | 2.39 | 8 |
| 1974 (Feb) | 4.98 | 7.07 | 13.94 | 8.67 | 12 |
| 1974 (Oct) | 4.70 | 3.92 | 2.59 | 3.74 | 29 |
| 1979 | 11.69 | 5.73 | 5.49 | 7.64 | 24 |
| 1983 | 4.99 | 11.13 | 14.53 | 10.22 | 50 |
| 1987 | 2.16 | 4.31 | 5.00 | 3.82 | 32 |
| 1992 | 2.68 | 2.39 | 4.84 | 3.30 | 54 |
| 1997 | 4.19 | 8.80 | 5.43 | 6.14 | 39 |
| 2001 | 3.04 | 10.86 | 5.28 | 6.39 | 30 |
| 2005 | 4.28 | 3.17 | 1.91 | 3.12 | 58 |
| 2010 | 5.92 | 4.91 | 20.40 | 10.41 | 88 |
| 2015 | 2.10 | 2.29 | 1.08 | 1.82 | 82 |
| 2017 | 5.42 | 10.08 | 1.33 | 5.61 | 46 |
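These variance figures are mechanical to compute. As a minimal R sketch, assuming a hypothetical data frame campaign.polls with one row per poll and columns election, con, lab and lib holding each party’s reported share (the underlying historical polls are not reproduced here):

# Variance of each party's poll shares within each campaign
by.election <- aggregate(cbind(con, lab, lib) ~ election,
                         data = campaign.polls, FUN = var)

# Mean variance across the three parties (the penultimate column above)
by.election$mean <- rowMeans(by.election[, c("con", "lab", "lib")])

# Number of polls per campaign
by.election$n <- as.vector(table(campaign.polls$election)[by.election$election])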

Much commentary already seems to be preparing for another polling miss after the experiences of 2015 and 2016. Certainly, with current polling showing Conservative leads ranging from 1% to 12%, someone will be substantially wrong (and someone should be right). The lack of consensus in the polls provides an important reminder, though, that surveying the public on their voting intentions is a hard business at the best of times – and this task is made more difficult by the varied geographical picture that may well emerge on election night, with Labour well supported among younger, educated voters in cities, and the Conservatives making gains in regions and towns where the ‘working-class Tories’ of the 1980s are once again being drawn to the leadership of Theresa May in the aftermath of the Brexit vote. It is possible that Labour will end up with the highest vote share since 2005 or even 2001, but the lowest number of seats since 1935. In the British “first past the post” system, it is not just how many votes a party gets which counts, but where they are cast. The geography of Labour and Conservative support could be just as important as their overall popularity, but at present it is receiving much less attention.

 

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Will turnout weighting prove to be the pollsters’ Achilles heel in #GE2017?


By Patrick Sturgis and Will Jennings, University of Southampton.


The 2017 election campaign has confounded expectations in many ways, none more so than Labour’s continuing surge in the opinion polls. From an average vote share of around 26% at the start of the campaign, they now stand at an average of 36% in the polls conducted over the past week. It is fair to say that few, if any, commentators expected Labour’s support to be at this level as we head into the final week of the campaign.

One of the theories advanced to explain Labour’s unexpectedly lofty position is that the opinion polls are, once again, wrong; their historical tendency to over-state Labour support has not been adequately addressed by the pollsters since the debacle of 2015. Key to this line of thinking is that Labour’s support appears to be ‘soft’, in the sense that those who say they will vote Labour in the polls are more likely to also report that they may change their mind before election day, compared to Conservative ‘intenders’. Labour’s core support is also concentrated within demographic groups that are, historically at least, less likely to cast a ballot, particularly younger voters.

Patterns of turnout across demographic groups will, of course, be key to determining the outcome of the election. But might turnout – and how pollsters deal with it – also be the cause of another polling miss on June the 8th?

Who will turn out and who won’t?

Adjusting for turnout is one of the most difficult tasks a pollster must confront. Polls work by collecting samples of individuals and weighting them to match the general public on characteristics – such as age, gender, region, and education – for which the population distribution is known. But around a third of any representative sample of eligible voters will not vote, so an additional adjustment has to be made to filter likely non-voters out of the vote share estimate. The problem here is that there is no entirely satisfactory way of doing this.

The most obvious approach to determining whether poll respondents will vote or not is to ask them. This is indeed the way that the vast majority of polls in the UK have approached turnout weighting in previous elections. In order to allow respondents to express some level of uncertainty, pollsters usually ask them to rate their probability of voting on a 1 to 10 scale (where 1 = certain not to vote and 10 = certain to vote). The problem with this approach is that, for a variety of reasons, people are not very good at estimating their probability of voting. So turnout weights based on self-report questions tend to have high error rates, mainly of the ‘false-positive’ variety. Some pollsters use additional questions on turnout at previous elections to produce a turnout probability but these also suffer from problems of recall and socially desirable responding.
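To make the mechanics concrete, here is a generic sketch of self-report weighting in R – an illustration of the approach, not any particular pollster’s procedure – which treats each respondent’s answer on the 1-to-10 scale as a probability of voting:

# resp: hypothetical respondents with a vote intention and a 1-10
# likelihood-to-vote answer
resp <- data.frame(
  vote = c("Con", "Lab", "Con", "Lab", "Lab", "Con"),
  ltv = c(10, 9, 10, 5, 3, 8)
)

resp$w <- resp$ltv / 10 # self-report treated as a turnout probability

# Turnout-weighted vote shares
round(prop.table(xtabs(w ~ vote, data = resp)), 3)

Filtering the sample down to those who answer ‘10 – certain to vote’ is a common variant of the same idea; either way, the adjustment is only as good as respondents’ self-knowledge.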

A second approach is to use historical survey data containing a measure of actual turnout (either self-reported after the election or via validation of actual votes using the electoral register). Such data is used to build a statistical model which predicts turnout on the basis of demographic characteristics of respondents. This ‘historical’ model can then be applied to current polling data in order to produce turnout probabilities based on actual turnout patterns from the previous election. While this gets round the problems with faulty reporting by respondents, with this approach we must believe that patterns of turnout haven’t changed very much since the previous election, an assumption which cannot be tested at the time the estimates are required. And, as the EU referendum showed, sharp changes in patterns of turnout from one election to another can and do arise.
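The model-based alternative can be sketched in the same illustrative spirit, assuming hypothetical data frames past (a post-election survey with validated turnout and demographics) and current (this campaign’s poll respondents):

# Fit a turnout model on the previous election's validated data
turnout.model <- glm(voted ~ age_group + education + region,
                     data = past, family = binomial)

# Score current respondents with predicted turnout probabilities,
# used as weights in place of the self-report
current$w <- predict(turnout.model, newdata = current, type = "response")
round(prop.table(xtabs(w ~ vote, data = current)), 3)

The key assumption is visible in the code: the coefficients are estimated on the previous election’s turnout patterns and applied unchanged to this one.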

In sum, turnout weighting is an essential component of accurate polling but there is no failsafe way of doing it.

The inquiry into the 2015 election polling concluded that, although the turnout probabilities used by the pollsters in that election were not very accurate, there was little evidence to suggest these were the cause of the polling errors. Might inaccuracies in the turnout weights be more consequential in 2017?

Effect of turnout weighting on vote intention estimates

We can get some handle on this by comparing the poll estimates of the Conservative-Labour margin before and after turnout weights have been applied. The table below shows estimated Conservative and Labour vote shares before and after turnout weighting for eleven recently published polls. It is clear that the turnout weights have a substantial effect on the size of the Conservative lead. Without the turnout weight (but including demographic and past-vote weights), the average Conservative lead over Labour is 5 percentage points. This doubles to 10 points after turnout weights have been applied.

 

Vote estimates with and without turnout weighting

| Pollster | Fieldwork end date | CON (%), with weight | LAB (%), with weight | CON lead, with weight | CON (%), without weight | LAB (%), without weight | CON lead, without weight |
|---|---|---|---|---|---|---|---|
| ORB/Sunday Telegraph | 4th June | 46 | 37 | 9 | 44 | 38 | 6 |
| Ipsos MORI/Standard | 1st June | 45 | 40 | 5 | 40 | 43 | -3 |
| Panelbase | 1st June | 44 | 36 | 8 | 40 | 39 | 1 |
| YouGov/Times | 31st May | 42 | 39 | 3 | 41 | 39 | 2 |
| Kantar | 30th May | 43 | 33 | 10 | 40 | 34 | 6 |
| ICM/Guardian | 29th May | 45 | 33 | 12 | 41 | 38 | 3 |
| Survation (phone) | 27th May | 43 | 37 | 6 | 43 | 37 | 6 |
| ComRes/Independent | 26th May | 46 | 34 | 12 | 43 | 38 | 5 |
| Opinium | 24th May | 45 | 35 | 10 | 42 | 36 | 6 |
| Survation (internet) | 20th May | 46 | 34 | 12 | 43 | 33 | 10 |
| GfK | 14th May | 48 | 28 | 20 | 45 | 29 | 16 |
| Mean | | | | 10 | | | 5 |
| S.D. | | | | 4.5 | | | 4.9 |
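As a quick arithmetic check, a few lines of R reproduce the summary rows from the leads in the table:

lead.with <- c(9, 5, 8, 3, 10, 12, 6, 12, 10, 12, 20) # with turnout weight
lead.without <- c(6, -3, 1, 2, 6, 3, 6, 5, 6, 10, 16) # without

round(c(mean(lead.with), sd(lead.with)), 1) # 9.7 and 4.5
round(c(mean(lead.without), sd(lead.without)), 1) # 5.3 and 4.9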

 

Particularly notable are the Ipsos MORI estimates, which change a 3-point Labour lead into a 5-point lead for the Conservatives. Similarly, ICM’s turnout adjustment turns a 3-point Conservative lead into a 12-point one. It is also evident that pollsters using some form of demographic modelling to produce turnout probabilities tend to have somewhat higher estimates of the Conservative lead. For this group (Kantar, ICM, ORB, Opinium, ComRes), the turnout weight increases the Conservative lead by an average of 5.4 points, compared to 3.7 points for those relying on self-report questions only.

It is also worth noting that the standard deviation of the Conservative lead is actually slightly lower with the turnout weights (4.5) than without (4.9). So, the turnout weighting would not appear to be the main cause of the volatility between the polls that has been evident in this campaign.

This pattern represents a substantial change in the effect of the turnout weights compared to polls during the 2015 campaign, where the increase in the Conservative lead due to turnout weighting was less than one percentage point (for the nine penultimate published polls conducted by members of the British Polling Council).

Why is turnout weighting having a bigger effect now than it did in 2015? One reason is that many pollsters are applying more aggressive procedures than they did in 2015, with the aim of producing implied turnout in their samples that is closer to what it will actually be on election day. While there is a logic to this approach it seems, in effect, to rely on getting the turnout probabilities wrong in order to correct for over-representation of likely voters in the weighted samples.

A second reason turnout weighting matters more in this election is that the age gap in party support has increased since 2015, with younger voters even more likely to support Labour and older voters to support the Conservatives.  Thus, any adjustment that down-weights younger voters will have a bigger effect on the Conservative lead now than it did in 2015.

Corbyn-mania among younger voters?

Another idea that has been advanced in some quarters is that young voters are over-stating their likelihood to vote in this election even more than they did in 2015. Come election day, these younger voters will end up voting at their recent historical levels and Labour will underperform their polling as a result.

We can obtain some leverage on this by comparing the distributions of self-reported likelihood to vote for young voters, aged 18-24, in 2015 and 2017 (the 2017 figures are from the polls in the table above, the 2015 estimates are taken from the penultimate published polls in the campaign). We also present these estimates for the oldest age category (65+). There is no evidence here that younger voters are especially enthused in 2017 compared to 2015. And, while the implied level of turnout is substantially too high for both age groups, the 20 point gap between them is broadly reflective of actual turnout in recent elections.

[Figure: distributions of self-reported likelihood to vote, ages 18-24 and 65+, in 2015 and 2017]

The inquiry into the 2015 polling miss found that representative sampling was the primary cause of the under-statement of the Conservative lead. The fact that implied turnout is still so high in the current polls suggests that the representativeness of samples remains a problem in 2017, on this measure at least. Turnout weighting is having a much bigger effect on poll estimates now than it did in 2015. This may be because the pollsters have improved their methods of dealing with the tricky problem of turnout weighting. However, it also suggests that getting turnout weighting right in 2017 is likely to be both more difficult and more consequential than it was in 2015.

Polling Observatory campaign report #3: All changed, changed utterly

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). You can read more posts by The Polling Observatory here.


This post is part of a long-running series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here and here.

With just over a week to go until the general election, the campaign continues to take surprising twists and turns, not least the jaw-dropping projection of a hung parliament from YouGov for The Times.

When we first reported, shortly after the snap election was called, the Conservatives held a commanding lead and Labour seemed to be meandering towards electoral oblivion. This remained the scenario at the start of May, though the scale of UKIP’s collapse in the polls was starting to become clear – with the Conservatives seemingly the main beneficiary. Just four weeks on, the change in Britain’s political landscape is remarkable – even if the likely outcome of the election (a decent-sized Conservative majority at least) remains much the same. Labour has surged in the polls, now standing at 35.5% (up nearly eight points from 27.8%), while the Tories have fallen back from their initial bump after May called the election (at a still impressive 44.5%, down from the high of 45.6%). UKIP have continued to lose support at a rapid rate, with our estimates putting them at just 4.0% – less than a third of their vote in the 2015 general election just two years ago. The Liberal Democrats have also fallen back, to just 7.8% (which would be below their catastrophic performance at the last election). Barring an even larger polling miss than occurred in May 2015, the political landscape of Britain looks like it will be redrawn in unexpected ways. There continue to be good reasons to be cautious about what the polls are currently telling us – due to the wide range of Conservative leads being shown by different polling houses, and the possibility that Labour’s votes may stack up in city seats among younger and educated voters, where the party already tends to hold large majorities, while falling away in marginal seats elsewhere.

 

[Figure: Polling Observatory estimates of party support to 29 May 2017, anchored on the average pollster]

One of the features of our method is that it enables us to estimate the ‘house effect’ for each polling company for each party, relative to the vote intention figures we expect from the average pollster. That is, it tells us simply whether the reported vote intention for a given pollster is above or below the industry average. This does not indicate ‘accuracy’, since this will only be known on June 9th. It could be, in fact, that pollsters at one end of the extreme or the other are giving a more accurate picture of voters’ intentions. Indeed, in contrast to the 2015 election where there was convergence of the pollsters around the Conservative-Labour margin of zero, the most recent set of polls have shown Conservative leads ranging from as little as 5 points to as high as 14 points – outcomes that would have vastly different results in terms of a parliamentary majority for Theresa May.
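The logic of a house effect ‘relative to the average pollster’ can be illustrated with a deliberately simplified R sketch – not the pooling model we actually use (see the methods posts linked above) – assuming a hypothetical data frame polls with columns day (numeric), pollster (factor) and con (each poll’s reported Conservative share). With sum-to-zero contrasts, each pollster’s coefficient is its deviation from the cross-pollster average, net of a smooth trend in underlying support:

# Sum-to-zero contrasts: pollster effects are deviations from the average
polls$pollster <- C(polls$pollster, contr.sum)

# Smooth time trend plus pollster deviations from the average house
house.fit <- lm(con ~ poly(day, 3) + pollster, data = polls)

# Coefficients named pollster1, pollster2, ... are house effects relative
# to the average pollster; the omitted level's effect is minus their sum
coef(house.fit)[grep("^pollster", names(coef(house.fit)))]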

In the table below we report the ‘house effects’ towards or against each of the parties for all polling companies who have recently conducted surveys. We, of course, estimate separate effects where the same company uses different modes (e.g. where Survation have fielded polls using both online and telephone surveys). We also (where possible) create ‘new’ polling houses where pollsters have implemented significant changes to their method and weighting procedures, though these are not always easy to date precisely. Nevertheless, the estimates give a picture of which pollsters tend to show higher numbers for which party, and thus are a handy guide for reading the latest polls with a dose of caution.

Our estimates reveal a range of house effects – and some interesting patterns too. It is first of all apparent that ComRes and ICM stand out as tending to report higher numbers for the Conservatives (+1.6 points and +1.4 points respectively) and lower numbers for Labour (-1.6 and -1.4 points). In contrast, ORB, Survation (online) and SurveyMonkey are at the other end of the spectrum — in tending to show support for the Conservatives lower and Labour higher than the industry average. Interestingly, Ipsos MORI and Panelbase show both parties higher – due mainly to their tendency to put UKIP much lower (in the case of Ipsos MORI this is a substantial 4.5 points).

House effects, by pollster (standard errors in parentheses)

| Pollster | Mode | Turnout filter | Con | Lab | Lib Dems | UKIP | Green |
|---|---|---|---|---|---|---|---|
| YouGov | Online | Self-reported | -0.3 (0.2) | -1.0 (0.2) | +0.9 (0.1) | +0.4 (0.2) | -0.6 (0.1) |
| ComRes | Online | Turnout model | +1.6 (0.3) | -1.6 (0.3) | +1.3 (0.2) | -0.6 (0.3) | -0.3 (0.1) |
| Ipsos MORI | Telephone | Self-reported | +1.2 (0.4) | +2.0 (0.4) | +1.5 (0.2) | -4.5 (0.3) | -0.3 (0.2) |
| Survation | Online | Self-reported | -2.2 (0.5) | +0.5 (0.5) | +0.6 (0.3) | +1.0 (0.4) | -1.1 (0.2) |
| Survation | Telephone | Self-reported | 0.0 (0.8) | -0.4 (0.8) | -0.3 (0.5) | -0.9 (0.5) | -0.3 (0.3) |
| Panelbase | Online | Self-reported | +1.2 (0.7) | +0.4 (0.7) | -0.1 (0.4) | -0.9 (0.5) | -0.2 (0.3) |
| Kantar (TNS) | Online | Turnout model | -0.6 (0.6) | -3.2 (0.6) | +1.4 (0.4) | +0.7 (0.4) | +1.5 (0.3) |
| ORB | Online | Self-reported | -1.3 (0.6) | +1.7 (0.6) | -0.4 (0.4) | +1.6 (0.4) | +0.2 (0.2) |
| SurveyMonkey | Online | Unknown | -0.8 (0.7) | +0.9 (0.7) | -1.9 (0.4) | +1.1 (0.5) | +1.6 (0.3) |
| Opinium | Online | Self-reported | +0.1 (0.5) | +0.7 (0.5) | -0.5 (0.3) | +0.4 (0.3) | -0.5 (0.2) |
| ICM | Online | Turnout model | +1.4 (0.3) | -1.4 (0.2) | +0.3 (0.2) | +0.8 (0.2) | +0.1 (0.1) |

Pollsters have made many methodological changes since 2015, making it tricky to discern the causes of variation in these ‘house effects’. One notable feature of the methodology used by ComRes and ICM is the use of demographic turnout models to predict the propensity of individuals to vote. This has the consequence of down-weighting those respondents who have been less likely to vote in previous elections – giving rise to considerably lower Labour vote shares due to their current reliance on younger respondents and previous non-voters. In contrast, other firms such as Ipsos MORI and Opinium use self-reported likelihood to vote, giving rise to slightly higher vote shares for Labour. We will only know which of these adjustment procedures (if either) has been effective on June 9th, however.

While the Conservatives still hold a sizeable lead, the differences across pollsters could represent the difference between a huge working majority in parliament for Theresa May and an election that delivers few gains to the Conservatives contrary to all expectations.  Only time will tell who has got closest to the result.

 

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Polling Observatory #GE2017 campaign report #2

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). You can read more posts by The Polling Observatory here.


This post is part of a long-running series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here and here.

The Polling Observatory is now able to report on polling from the first fortnight of the election campaign. Our estimates show several notable changes since our last report. Firstly, the Conservatives gained nearly three points after the announcement of the general election, but in the last week this gain has stalled – with their support now standing at around 46%. The big losers in the polls so far are UKIP – who have dropped several points in just the last two weeks (now at 7%). Indeed, their support has almost halved since mid-February, pointing to a bleak electoral outlook for the party. As we noted last time, UKIP’s collapse has closely mirrored the surge in Conservative support. Contrary to expectations, Labour has gained in the polls – with its support now standing at 28%, two points higher than in our last report. In contrast, the Liberal Democrats have fallen back slightly, at 10% still only a couple of points higher than their disastrous performance in 2015.

So far, the polls tell a pretty clear and straightforward story: a towering Conservative lead over their main challengers Labour, the collapse of UKIP and marginal revival of the Liberal Democrats. Whether any surprises lie in wait for us in the next five weeks depends largely upon whether the early Conservative surge wears off at all and whether Corbyn’s Labour can muster further gains in support that would deny Theresa May the landslide that looks on the cards.

[Figure: Polling Observatory estimates of party support to 1 May 2017, anchored on the average pollster]

 

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Polling Observatory campaign report #1: reading polling tea leaves in the shadow of the bonfire of the experts

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). You can read more posts by The Polling Observatory here.


This post is part of a long-running series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here and here.

It has been almost 18 months since the Polling Observatory’s last investigation of the Westminster polls, though the intervening period has seen dramatic political events – Britain’s vote to leave the EU, a change in Prime Minister, and much more besides.

The surprise result of the 2015 general election prompted much reflection on the reliability of polling methodologies – most notably in the report of the official inquiry into the pre-election polls – as did the outcome of the 2016 referendum on Britain’s membership of the EU. The vanquishing of the polls, and election forecasts, has added fuel to the bonfire of the experts. To populists, the unpredictability of voters may serve to further undercut the authority of elites.

While the events of 2015 and 2016 provided a valuable reminder that a dose of caution is needed when digesting the latest polls, polls remain the best way of assessing relative shifts in public opinion.

As regular readers will know, we pool all the information that we have from current polling to estimate the underlying trend in public opinion, controlling for random noise in the polls. Our method controls for systematic differences between polling ‘houses’ – the propensity for some pollsters to produce estimates that are higher or lower on average for a particular party than other pollsters. While we can estimate how one pollster systematically differs from another, we have no way of assessing which is closer to the truth (i.e. whether the estimates are ‘biased’). This was where our election forecast came unstuck in 2015, as the final polls systematically over-estimated support for Labour and under-estimated support for the Conservatives.

Because most pollsters have made methodological adjustments since May 2015 – designed to address this over-estimation of Labour support – it is inappropriate to ‘anchor’ our estimates on their record at previous elections. Instead, we anchor our estimates on the average pollster. This means the results presented here are those of a hypothetical pollster that, on average, falls in the middle of the pack. It also means that while our method accounts for the uncertainty due to random fluctuation in the polls and for differences between polling houses, we cannot be sure that there is no systematic bias in the average polling house (i.e., the industry as a whole could be wrong once again).

Our latest analyses are based on polls up to April 18th, the day of the announcement of the general election to be held on June 8th. Since then, a number of polls have suggested an even larger Conservative lead – and it will be interesting to see if this is sustained in coming weeks of the campaign. The Polling Observatory’s headline figures currently put the Conservatives on 43%, far ahead of Labour on 25.7%. The Liberal Democrats at 10.5% have overtaken UKIP, at 9.8%, for the first time since December 2012. Meanwhile the Greens are lagging well behind at 4.3%.

[Figure: Polling Observatory estimates of party support to 19 April 2017, anchored on the average pollster]

Our estimates also provide insights on the trends in support for the parties since May 2015. Under David Cameron, support for the Conservatives had been slipping, especially in early 2016. It was only immediately following the EU referendum vote, and around the time that Theresa May took over as PM, that the party enjoyed a sharp rise in support. In contrast, Labour’s support has been steadily declining since April 2016 – from around the start of the EU referendum campaign. This is well before ‘the coup’ that some have blamed for Labour’s poor polling. We find no evidence to support those claims here.

While UKIP support rose steadily in the year following the 2015 general election, it slumped after the Brexit vote and has continued to decline since. It is too soon to write off UKIP for good, but it is clear that the party faces an uncertain future, threatened by an emboldened Conservative Party plotting Britain’s course out of the EU. By contrast, Brexit has given a renewed purpose to the Liberal Democrats, whose support has steadily been increasing since June 2016 – though hardly at a dramatic rate. The largely static support for the Greens highlights that Britain’s ‘progressive’ parties face an uphill battle to win back voters.

The trends since Brexit specifically point towards two gradual shifts: UKIP voters switching to the now more pro-Brexit Conservatives (with the blue and purple lines mirroring each other quite closely above), and the Liberal Democrats slowly recovering, seemingly at the expense of Labour who are slowly declining. The parties that appear to have benefited from Brexit are those now seen as the natural issue ‘owners’ of Leave and Remain.

So the two mainstream parties with clear Brexit positions are rising in the polls, while the one without a clear position (Labour) is declining steadily.

During the election campaign we will provide updates on the state of support for the parties. We will also be undertaking analyses of what ‘the fundamentals’ – such as party leader ratings and the state of the economy – tell us about the likely election result. Our aim will be to provide an assessment of election forecasts generated using different methods and data. After the experience of 2015, where the polling miss threw many forecasts off, we believe that this approach of triangulation may bolster confidence in expectations about the likely result – and also illuminate how different modelling choices and assumptions matter.

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Did Brexit increase hate crimes? Probably, yes.*

By Dan Devine. Dan is a PhD student in Politics at Southampton. He specialises in comparative politics, political attitudes and quantitative methods (@DanJDevine, personal website, Academia.edu).


TL;DR: Brexit probably caused an increase in hate crimes. I provide descriptive and statistical (linear regression and regression discontinuity) evidence for this claim, but the claim that there was a rise in reporting rather than in hate crimes per se is also plausible. It is also positive to see that this is not a lasting effect (at least in the data), although there is still an upward trend in hate crime since 2013.


In the wake of Brexit – when the UK voted to leave the European Union – there was a flurry of activity in newspapers and across the internet reporting a rise in racial tensions and hate crimes. In the following weeks and months, this was widely reported in the Guardian (a lot), the BBC, Human Rights Watch, Sky News, The Telegraph, and I’m sure some others that I’ve missed. Nevertheless, some individuals and outlets (such as Spiked and ConservativeHome) remain extremely sceptical of the claim that the vote to leave the EU was behind a rise in hate crime – and indeed, call into question the validity of the numbers at all.

As many outlets have picked up in the last week, the government have recently released the full hate crime figures covering the referendum and post-referendum days and months. This allows us a much closer look at what exactly was going on around that time (and gives me a chance to try out some new ideas for visualising data). Here, I take a look at these numbers, put them through some rough-and-ready statistical tests, and look at some other explanations of the findings. In general, though, the evidence is overwhelming that Brexit did cause a rise in hate crime. Nevertheless, it is encouraging that (at least according to the data) this does not seem to be a ‘lasting effect’, as The Independent reports.

What is hate crime?

Many of the biggest critiques of the data concern what is meant by ‘hate crime’. Hate crimes in general are defined as ‘any criminal offence which is perceived, by the victim or any other person, to be motivated by hostility or prejudice towards someone based on a personal characteristic’. However, the data I use here is focused specifically on racially or religiously aggravated offences (from now, I will just call these hate crimes). This includes crimes such as: assault with or without injury, criminal damage, public fear or distress, and harassment. This also includes graffiti of a racist nature (presumably under the latter two categories), and attacks on asylum seekers or refugees (regardless of their race). 

This does mean that essentially, anyone can report something as a hate crime if they perceive it as such. In addition, it’s true that a majority of these cases go unsolved – about a quarter of offences are taken to court. I don’t want to get into the territory of disagreeing with the very definition of hate crimes (or how they’re reported) – but it’s worth being open about what is behind the analysis.

An increase in hate crime is descriptively clear

At first glance, it is clear that there was a rise in hate crime surrounding the Brexit referendum. The first graph below shows hate crimes by month since 2013. Although there is always a seasonal effect – hate crimes increase over summer – the sharp rise in June and July 2016 is startling, and the drop off in August is not particularly drastic (or at least as drastic as we would hope). From this longer-term perspective, the summer months of 2016 are outliers in the recent history of hate crime. It should be noted, however, that there is a clear upward trend in hate crime since 2013; the low point of 2016 is around the same as the high point of 2013. This upward trend should send a warning to those interested in social cohesion.

[Figure: monthly hate crimes in England and Wales, 2013-2016]

It’s also possible, with the Home Office data, to have a more fine-grained analysis. The graph below presents daily data for the months of May, June, July and August. Once again, the dashed vertical line indicates when the referendum took place. The interesting part of this is the sudden increase the day after the referendum, which persists for several days, peaking approximately a week after (more on this later). There is, as in the monthly data, a slow decline back to pre-referendum levels.

[Figure: daily hate crimes in England and Wales, May-August 2016]

From both of these graphs, it is clear that there was a peak in hate crime surrounding the referendum. But there is also a lot of variability, and some claim that this is not necessarily down to the referendum. In lieu of suitable data to test the competing claims, I wanted to look at this statistically as best I could.

And the differences are statistically significant

To do this, I took two approaches. I make no claim that these are conclusive. They are relatively back-of-the-envelope tests, but I think they are nonetheless strong evidence for the impact of Brexit on hate crimes (for those interested, details are at the end). The first is a regression that tests how much of the variation in hate crimes can be explained by the referendum, and how many hate crimes can be attributed to Brexit. The results indicate that Brexit increased hate crimes by about 31 a day (using the daily data), or by about 1,600 a month (using the monthly data). Because the monthly series contains only a few post-referendum observations, I would say the daily estimate is the more reliable. Importantly, the results also indicate that the referendum explains about 35% of the variation in hate crimes in the days following Brexit – which is a statistically and substantively significant amount.

But regressions are flawed for a range of reasons, especially when done like this. As is clear from above, hate crimes slowly decrease after the peak. In other words, June and July are huge outliers. So, as another check I carried out a regression discontinuity test (again, details at the end). This narrows the focus to the days surrounding the referendum, and essentially treats the referendum as an experiment: the day of the referendum and afterwards are those ‘treated’ with the experiment, whilst those before are the control group. In other words, there should be no real difference between June 21st and June 24th other than the referendum.

The results are the same. It is statistically significant. Moreover, in the ‘RD Plot’ at the end of the post, you can see how this relationship changes dramatically. Put another way: the days either side of the referendum are fundamentally different, and the only plausible explanation is the referendum. Indeed, this is what the regression discontinuity provides extremely strong evidence for.

But was there really an increase in hate crime?

The evidence in the data is extremely strong. However, there can be a few objections which are more theoretical. The first, and most important, is that the difference might be due to an increased awareness and therefore increased reporting (this is what the police claimed at the time). In other words, hate crimes did not increase, but the reporting of them did. This is certainly plausible.

In the days following the referendum, I find this hard to believe. Why would people be more likely to report hate crimes following a referendum? This did not happen after the Charlie Hebdo attacks, or Paris attacks, or other elections, or the start of the Palestine-Israel conflict – events which are more closely tied to the potential for hate crime. It only increased slightly even after the murder of Lee Rigby. The reverse is much more plausible: that hate crimes (remember, this includes damage to property and graffiti) ensued after the referendum. However, the peak of hate crimes occurred a week after the referendum. This is surely likely to be influenced by media coverage of the previous rises. Again, I think it is likely that there was indeed increased reporting of hate crimes, in response to national media coverage and the existence of more hate crime in general. In other words, I think it was a bit of both, with more hate crimes leading to coverage and more reporting (we must also remember that hate crimes are still hugely under-reported). 

Other claims I find less appealing. One might just say it is a coincidence. The statistical weight of evidence is, for me, far too strong for that. It is far less than a 1% chance that this was just a random increase which happened to occur at the exact time of the referendum. Other claims might argue about the definition of hate crime, how they are accounted for, and how few are brought to court. These are not the focus of this post – not because they’re not important, but because they can’t be drawn from the evidence here.

Brexit, hate crime and the future

A lot of coverage has argued that the atmosphere in the UK is increasingly toxic and intolerant. The data released only extends a few months after the referendum, so we cannot be sure of what’s actually happening. But from the existing data, I would conclude that the actual impact of Brexit on hate crimes was a short-lived one, and that the effect will decrease over time.

However, I would also suggest, on a more negative note, that all Brexit did was mobilise latent attitudes into behaviour. In other words, I do not think it changed attitudes that much, but acted as a catalyst to change those attitudes into actual actions – and hate crimes. This is in part evidenced by the general upward trend in hate crimes since 2013. For what it’s worth, going forward, the media and politicians need to be extremely careful not to stoke the flames of these attitudes. The referendum has shown that it does not necessarily take much to spark an increase in hate crimes. Other catalysts are possible. And it’s important that, when the next one comes, it is much harder to translate these beliefs into actions. 

*Probably = almost certainly

 


Statistical/methodological notes:

Summary

Graphics and tests were produced in the software package R, using data from the Home Office. The background design for the graphs was taken from code by Max Woolf.

Summary statistics for the two data sets used (monthly and daily data):

Statistical Tests

Firstly, I ran a basic regression on both the daily and monthly data. This uses the referendum to ‘predict’ the variation in hate crimes after the referendum. The regressions were run using the variable ‘brexit’ as a binary predictor for the dependent variable ‘hate.crime’. Clearly for the monthly data, this is hugely unbalanced, so should be treated with a bucket load of caution. The daily data is more stable.

[Table: regression estimates of the effect of Brexit on daily and monthly hate crimes]

The regression discontinuity used the day after the referendum as the cut off. This is because the referendum really would not have had an effect until the result. Nevertheless, it is centred around 0, the day of the referendum. It is statistically significant as well. Additional analysis by Professor Will Jennings, using a time series intervention model, confirmed the findings here. The debate about whether to use a time series or discontinuity approach continues…

[Figure: regression discontinuity (RD) plot of daily hate crimes around the referendum]

The R code used is as follows. You will need the theme function downloadable from here. If you’d like the data, please contact me (D.J.Devine@Soton.ac.uk).

setwd("") # set your working directory

hate <- read.csv("day.hate.csv") # read in the daily data
hate2 <- read.csv("month.crimes.total.csv") # ... and the monthly data

install.packages("rdrobust") # install packages
library("rdrobust")
install.packages("ggplot2", dependencies = TRUE)
library("ggplot2")
library("stargazer")
library("lubridate")
library("tseries")
library("scales")
library("grid")
library("RColorBrewer")
install.packages("extrafont")
library("extrafont")
loadfonts() # note, loading the fonts package will take considerable time depending on the machine
pdf("plot_garamond.pdf", family="Garamond", width=4, height=4.5)

rdrobust(hate$hate.crime, hate$since.ref, c = 1) # regression discontinuity, cut off at the day after the referendum
rdd_est <- rdrobust(hate$hate.crime, hate$since.ref, c = 1)
rdplot(hate$hate.crime, hate$since.ref, c = 1) # RD plot

stargazer(hate, type="html",
title = "Summary Statistics for Daily Data")
stargazer(hate2, type="html",
title = "Summary Statistics for Monthly Data") # summary statistics, output in HTML

linear.day <- lm(hate$hate.crime ~ hate$brexit) # regular regression on days
linear.month <- lm(hate2$hate.crime ~ hate2$brexit) # ... and months
stargazer(linear.day, linear.month, type="html",
title = "The Effect of Brexit on Hate Crimes",
column.labels = c("Daily Crime", "Monthly Crime"),
covariate.labels="Brexit") # table of the regressions

month.crime.plot <- ggplot(data=hate2, aes(x=id, y=hate.crime)) +
fte_theme() +
geom_line(color="#c0392b", size=1.45, alpha=0.75) +
geom_vline(xintercept=42, linetype = "longdash", color = "gray47", alpha = 0.7) +
geom_text(aes(x=42, label="Referendum", y=2300), colour="gray36", size=8, family="Garamond") +
ggtitle("Hate Crimes in England and Wales, 2013-2016") +
scale_x_continuous(breaks=c(6,12,18,24,30,36,42),
labels=c("June 2013", "Dec 2013", "June 2014", "Dec 2014", "June 2015", "Dec 2015", "June 2016")) +
labs(y= "# Hate Crimes", x="Date") +
theme(plot.title = element_text(family="Garamond", face="bold", hjust=0, size = 25, margin=margin(0,0,20,0))) +
theme(axis.title.x = element_text(family="Garamond", face="bold", size = 20, margin=margin(20,0,0,0))) +
theme(axis.title.y = element_text(family="Garamond", face="bold", size = 20, margin=margin(0,20,0,0))) +
geom_hline(yintercept=2000, size=0.4, color="black") # monthly graph

day.crime.plot <- ggplot(data=hate, aes(x=id, y=hate.crime)) +
fte_theme() +
geom_line(color="#c0392b", size=1.45, alpha=0.75) +
geom_vline(xintercept=54, linetype = "longdash", color = "gray47", alpha = 0.7) +
geom_text(aes(x=54, label="Referendum", y=85), colour="gray36", size=8, family="Garamond") +
ggtitle("Hate Crimes in England and Wales, May-August 2016") +
scale_y_continuous(limits=c(75,220)) +
scale_x_continuous(breaks=seq(14,123, by=14),
labels=c("14 May", "28 May", "11 June", "25 June", "9 July", "23 July", "6 August", "20 August")) +
labs(y= "# Hate Crimes", x="Date") +
theme(plot.title = element_text(family="Garamond", face="bold", hjust=0, size = 25, margin=margin(0,0,20,0))) +
theme(axis.title.x = element_text(family="Garamond", face="bold", size = 20, margin=margin(20,0,0,0))) +
theme(axis.title.y = element_text(family="Garamond", face="bold", size = 20, margin=margin(0,20,0,0))) +
geom_hline(yintercept=75, size=0.4, color="black") # daily graph

print(month.crime.plot) # draw the ggplot objects into the PDF device
print(day.crime.plot)
dev.off() # close the PDF device