Did Brexit increase hate crimes? Probably, yes.*

By Dan Devine. Dan is a PhD student in Politics at Southampton. He specialises in comparative politics, political attitudes and quantitative methods (@DanJDevine, personal website, Academia.edu).


Tl;DR: Brexit probably caused an increase in hate crimes. I provide descriptive and statistical (linear regression and regression discontinuity) evidence for this claim, but the claim that there was a rise in reporting rather than in hate crimes per se is also plausible. It is also encouraging that this does not appear to be a lasting effect (at least in the data), although there is still an upward trend in hate crime since 2013.


In the wake of Brexit – when the UK voted to leave the European Union – there was a flurry of activity in newspapers and across the internet reporting a rise in racial tensions and hate crimes. In the following weeks and months, this was widely reported in the Guardian (a lot), the BBC, Human Rights Watch, Sky News, The Telegraph, and I'm sure some others that I've missed. Nevertheless, some individuals and outlets (such as Spiked and ConservativeHome) remain extremely sceptical of the claim that the vote to leave the EU was behind a rise in hate crime – and indeed call into question the validity of the numbers at all.

As many outlets have picked up in the last week, the government have recently released the full hate crime figures covering the referendum and the days and months that followed. This allows a much closer look at what exactly was going on around that time (and gives me a chance to try out some new ideas for visualising data). Here, I take a look at these numbers, put them through some rough-and-ready statistical tests, and consider some other explanations of the findings. In general, though, the evidence is overwhelming that Brexit did cause a rise in hate crime. Nevertheless, it is encouraging that (at least according to the data) this does not seem to be a 'lasting effect', as The Independent reports.

What is hate crime?

Many of the biggest critiques of the data concern what is meant by 'hate crime'. Hate crimes in general are defined as 'any criminal offence which is perceived, by the victim or any other person, to be motivated by hostility or prejudice towards someone based on a personal characteristic'. However, the data I use here is focused specifically on racially or religiously aggravated offences (from now on, I will just call these hate crimes). This includes crimes such as assault with or without injury, criminal damage, public fear or distress, and harassment. It also includes graffiti of a racist nature (presumably under the latter two categories) and attacks on asylum seekers or refugees (regardless of their race).

This does mean that, essentially, anyone can report something as a hate crime if they perceive it as such. It is also true that the majority of these cases go unsolved – only about a quarter of offences are taken to court. I don't want to get into the territory of disagreeing with the very definition of hate crimes (or how they're reported) – but it's worth being open about what is behind the analysis.

An increase in hate crime is descriptively clear

At first glance, it is clear that there was a rise in hate crime surrounding the Brexit referendum. The first graph below shows hate crimes by month since 2013. Although there is always a seasonal effect – hate crimes increase over summer – the sharp rise in June and July 2016 is startling, and the drop-off in August is not particularly drastic (or at least not as drastic as we would hope). From this longer-term perspective, the summer months of 2016 are outliers in the recent history of hate crime. It should be noted, however, that there has been a clear upward trend in hate crime since 2013; the low point of 2016 is around the same as the high point of 2013. This upward trend should send a warning to those interested in social cohesion.

[Figure: Hate Crimes in England and Wales, 2013-2016 (monthly), with the referendum marked]

It's also possible, with the Home Office data, to have a more fine-grained analysis. The graph below presents daily data for the months of May, June, July and August 2016. The dashed vertical line again indicates when the referendum took place. The interesting part of this is the sudden increase the day after the referendum, which persists for several days, peaking approximately a week after the vote (more on this later). There is, as in the monthly data, a slow decline back to pre-referendum levels.

[Figure: Hate Crimes in England and Wales, May-August 2016 (daily), with the referendum marked]

From both of these graphs, it is clear that there was a peak in hate crime surrounding the referendum. But there is also a lot of variability, and some claim that the rise is not necessarily down to the referendum. In the absence of suitable data to test the competing claims, I wanted to look at this statistically as best I could.

And the differences are statistically significant

To do this, I took two approaches. I make no claim that these are conclusive. They are relatively back-of-the-envelope tests, but I think they nonetheless provide strong evidence for the impact of Brexit on hate crimes (for those interested, details are at the end). The first approach tests how much of the variation can be explained by the referendum, and how many hate crimes can be attributed to Brexit. The results indicate that Brexit increased hate crimes by about 31 a day (using the daily data), or by about 1,600 a month (using the monthly data). Because only a few months of post-referendum data are available, I would say the daily estimate is the more reliable. Importantly, the results indicate that the referendum explains about 35% of the variation in hate crimes in the days following Brexit – a statistically and substantively significant amount.
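As a minimal sketch of this first approach (the full code, including the monthly model and the output tables, is in the methodological notes at the end), the daily regression is simply an ordinary least squares model of the daily hate crime count on a binary post-referendum indicator, using the same variable names as the code in the notes:

hate <- read.csv("day.hate.csv") # daily Home Office data, as in the notes

# OLS of daily hate crimes on the binary post-referendum indicator
linear.day <- lm(hate.crime ~ brexit, data = hate)
summary(linear.day)
# The coefficient on 'brexit' is the estimated difference in daily hate crimes
# between the pre- and post-referendum periods (about 31 here); the R-squared
# is the share of the day-to-day variation the indicator explains.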

But regressions like this are flawed for a range of reasons. As is clear from above, hate crimes slowly decrease after the peak – in other words, June and July are huge outliers. So, as another check, I carried out a regression discontinuity test (again, details at the end). This narrows the focus to the days surrounding the referendum and essentially treats the referendum as an experiment: the days from the result onwards are those 'treated' with the experiment, whilst those before are the control group. In other words, there should be no real difference between June 21st and June 24th other than the referendum.
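A minimal sketch of this second approach, using the rdrobust package as in the notes at the end (the running variable 'since.ref' counts days relative to the referendum, and the cut-off is set at 1, the first full day after the result):

library(rdrobust)

hate <- read.csv("day.hate.csv") # daily data, as in the notes

# Local-polynomial RD estimate of the jump in hate crimes at the cut-off
rdd_est <- rdrobust(hate$hate.crime, hate$since.ref, c = 1)
summary(rdd_est)
# Bin the data and plot the fitted curves either side of the cut-off -
# this is the 'RD Plot' referred to later in the post
rdplot(hate$hate.crime, hate$since.ref, c = 1)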

The results are the same: the jump at the referendum is statistically significant. Moreover, in the 'RD Plot' at the end of the post, you can see how the relationship changes dramatically at the cut-off. Put another way: the days either side of the referendum are fundamentally different, and the only plausible explanation is the referendum. Indeed, this is what the regression discontinuity provides extremely strong evidence for.

But was there really an increase in hate crime?

The evidence in the data is extremely strong. However, there are a few objections which are more theoretical. The first, and most important, is that the difference might be due to increased awareness and therefore increased reporting (this is what the police claimed at the time). In other words, hate crimes did not increase, but the reporting of them did. This is certainly plausible.

For the days immediately following the referendum, I find this hard to believe. Why would people be more likely to report hate crimes following a referendum? This did not happen after the Charlie Hebdo attacks, the Paris attacks, other elections, or the start of the Palestine-Israel conflict – events which are more closely tied to the potential for hate crime. Reporting increased only slightly even after the murder of Lee Rigby. The reverse is much more plausible: that hate crimes (remember, this includes damage to property and graffiti) really did increase after the referendum. However, the peak of hate crimes occurred a week after the referendum, and this is surely likely to have been influenced by media coverage of the earlier rises. So I do think it is likely that there was increased reporting of hate crimes, in response to national media coverage and the existence of more hate crime in general. In other words, it was a bit of both, with more hate crimes leading to coverage and more reporting (we must also remember that hate crimes are still hugely under-reported).

Other claims I find less convincing. One might say it is simply a coincidence. The statistical weight of evidence is, for me, far too strong for that: there is far less than a 1% chance that this was just a random increase which happened to occur at the exact time of the referendum. Other objections concern the definition of hate crime, how offences are recorded, and how few are brought to court. These are not the focus of this post – not because they are unimportant, but because they cannot be addressed with the evidence here.

Brexit, hate crime and the future

A lot of coverage has argued that the atmosphere in the UK is increasingly toxic and intolerant. The data released extends only a few months past the referendum, so we cannot be sure what is happening now. But from the existing data, I would conclude that the actual impact of Brexit on hate crimes was a short-lived one, and that the effect will decrease over time.

However, I would also suggest, on a more negative note, that all Brexit did was mobilise latent attitudes into behaviour. In other words, I do not think it changed attitudes that much, but acted as a catalyst to change those attitudes into actual actions – and hate crimes. This is in part evidenced by the general upward trend in hate crimes since 2013. For what it’s worth, going forward, the media and politicians need to be extremely careful not to stoke the flames of these attitudes. The referendum has shown that it does not necessarily take much to spark an increase in hate crimes. Other catalysts are possible. And it’s important that, when the next one comes, it is much harder to translate these beliefs into actions. 

*Probably = almost certainly

 


Statistical/methodological notes:

Summary

Graphics and tests were produced in the statistical software R, using data from the Home Office. The background design for the graphs was taken from code by Max Woolf.

Summary statistics for the two data sets used (monthly and daily data):

Statistical Tests

First, I ran a basic regression on both the daily and monthly data. This uses the referendum to 'predict' the variation in hate crimes after the referendum. The regressions use the variable 'brexit' as a binary predictor of the dependent variable 'hate.crime'. For the monthly data the pre- and post-referendum groups are hugely unbalanced, so the estimates should be treated with a bucketload of caution. The daily data is more stable.

[Table: regression results for the daily and monthly models]

The regression discontinuity used the day after the referendum as the cut-off, because the referendum would not really have had an effect until the result was known. Nevertheless, the running variable is centred around 0, the day of the referendum. The estimate is statistically significant as well. Additional analysis by Professor Will Jennings, using a time series intervention model, confirmed the findings here. The debate about whether to use a time series or a discontinuity approach continues…

[Figure: RD plot of hate crimes around the referendum cut-off]
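For completeness, here is a minimal sketch of what a time series intervention model of this kind might look like on the daily data. This is my own illustration, not Professor Jennings's actual analysis, and the AR(1) specification is assumed purely for the example:

# Illustrative sketch only: an intervention model fitted to the daily series,
# with the post-referendum dummy ('brexit', assumed to be 0/1 as in the code
# below) entering as an external regressor. The AR(1) order is an assumption
# for illustration, not the model used in the analysis referred to above.
hate <- read.csv("day.hate.csv")
intervention <- arima(hate$hate.crime, order = c(1, 0, 0), xreg = hate$brexit)
intervention # the 'xreg' coefficient is the estimated shift after the referendum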

The R code used is as follows. You will need the theme function downloadable from here. If you'd like the data, please contact me (D.J.Devine@Soton.ac.uk).

setwd("") # set your working directory

hate <- read.csv("day.hate.csv") # read in the daily data
hate2 <- read.csv("month.crimes.total.csv") # ... and the monthly data

install.packages("rdrobust") # install packages (install any others below if needed)
library("rdrobust")
install.packages("ggplot2", dependencies = TRUE)
library("ggplot2")
library("stargazer")
library("lubridate")
library("tseries")
library("scales")
library("grid")
library("RColorBrewer")
install.packages("extrafont")
library("extrafont")
loadfonts() # note: loading the fonts will take considerable time depending on the machine
pdf("plot_garamond.pdf", family="Garamond", width=4, height=4.5)

rdrobust(hate$hate.crime, hate$since.ref, c = 1) # regression discontinuity
rdd_est <- rdrobust(hate$hate.crime, hate$since.ref, c = 1)
rdplot(hate$hate.crime, hate$since.ref, c = 1)

stargazer(hate, type="html",
          title = "Summary Statistics for Daily Data")
stargazer(hate2, type="html",
          title = "Summary Statistics for Monthly Data") # summary statistics, output in HTML

linear.day <- lm(hate$hate.crime ~ hate$brexit) # regular regression on days
linear.month <- lm(hate2$hate.crime ~ hate2$brexit) # ... and months
stargazer(linear.day, linear.month, type="html",
          title = "The Effect of Brexit on Hate Crimes",
          column.labels = c("Daily Crime", "Monthly Crime"),
          covariate.labels = "Brexit") # table of the regressions

month.crime.plot <- ggplot(data=hate2, aes(x=id, y=hate.crime)) +
  fte_theme() + # theme function from the file linked above
  geom_line(color="#c0392b", size=1.45, alpha=0.75) +
  geom_vline(xintercept=42, linetype = "longdash", color = "gray47", alpha = 0.7) +
  geom_text(aes(x=42, label="Referendum", y=2300), colour="gray36", size=8, family="Garamond") +
  ggtitle("Hate Crimes in England and Wales, 2013-2016") +
  scale_x_continuous(breaks=c(6,12,18,24,30,36,42),
                     labels=c("June 2013", "Dec 2013", "June 2014", "Dec 2014", "June 2015", "Dec 2015", "June 2016")) +
  labs(y= "# Hate Crimes", x="Date") +
  theme(plot.title = element_text(family="Garamond", face="bold", hjust=0, size = 25, margin=margin(0,0,20,0))) +
  theme(axis.title.x = element_text(family="Garamond", face="bold", size = 20, margin=margin(20,0,0,0))) +
  theme(axis.title.y = element_text(family="Garamond", face="bold", size = 20, margin=margin(0,20,0,0))) +
  geom_hline(yintercept=2000, size=0.4, color="black") # monthly graph

day.crime.plot <- ggplot(data=hate, aes(x=id, y=hate.crime)) +
  fte_theme() +
  geom_line(color="#c0392b", size=1.45, alpha=0.75) +
  geom_vline(xintercept=54, linetype = "longdash", color = "gray47", alpha = 0.7) +
  geom_text(aes(x=54, label="Referendum", y=85), colour="gray36", size=8, family="Garamond") +
  ggtitle("Hate Crimes in England and Wales, May-August 2016") +
  scale_y_continuous(limits=c(75,220)) +
  scale_x_continuous(breaks=seq(14,123, by=14),
                     labels=c("14 May", "28 May", "11 June", "25 June", "9 July", "23 July", "6 August", "20 August")) +
  labs(y= "# Hate Crimes", x="Date") +
  theme(plot.title = element_text(family="Garamond", face="bold", hjust=0, size = 25, margin=margin(0,0,20,0))) +
  theme(axis.title.x = element_text(family="Garamond", face="bold", size = 20, margin=margin(20,0,0,0))) +
  theme(axis.title.y = element_text(family="Garamond", face="bold", size = 20, margin=margin(0,20,0,0))) +
  geom_hline(yintercept=75, size=0.4, color="black") # daily graph

print(month.crime.plot) # draw the plots into the open PDF device
print(day.crime.plot)
dev.off() # close the PDF device opened above


 

Polling Observatory #1: Estimating support for the parties (with some trepidation…)

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). The homepage of The Polling Observatory can be found here, and you can read more posts by The Polling Observatory here.


This post is part of a long-standing series (dating to before the 2010 election) that reports on the state of the parties as measured by vote intention polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. Further details of the method we use to build our estimates can be found here.

It is now six months since the television headlines rolled at 10pm on May 7th, with the exit poll dropping the bombshell that the polls had got it badly wrong. The election forecasters fared little better, including us: even though our vote model had predicted a Conservative lead of 2-3 points, our seat prediction was nowhere close to the majority achieved by David Cameron. It is with a little trepidation, then, that the Polling Observatory team returns to provide its assessment of the state of public opinion in late 2015.

As regular readers will know, we pool all the information that we have from current polling to estimate the underlying trend in public opinion, controlling for random noise in the polls. Our method controls for systematic differences between pollsters – the propensity for some pollsters to produce estimates that are higher/lower on average for a particular party than other pollsters. While we can estimate how one pollster systematically differs from another, we have no way of assessing which is closer to the truth.

One possibility with this method is to use the result of the last election to ‘anchor’ our estimates of bias in the polls against the last election result. This treats the election result as if it was produced by a pollster with no systematic error. We can then estimate the systematic difference of each pollster with this hypothetical perfect pollster. With this method, for example, if pollster X produces results which are systematically 2 percentage points higher for the Conservatives than what would be produced by this perfect pollster, we would interpret a poll indicating 40% support for the Conservatives from such a pollster as 38% support for the Conservatives. This approach can be useful where there are recurring historical patterns (such as the tendency of the polls to overestimate the Labour vote and underestimate the Conservative vote), and might allow us to control for systematic bias in the polls.
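To make the arithmetic of that worked example concrete, here is a minimal illustrative sketch in R. This is not the Polling Observatory's actual estimation code, and the +2-point house effect for the hypothetical pollster X is assumed purely for illustration:

# Illustrative only: adjust a published poll figure by an estimated house effect,
# as in the worked example above. The house effect (+2 points for the Conservatives
# for hypothetical pollster X) is assumed, not estimated from real data.
adjust_poll <- function(published, house.effect) {
  published - house.effect
}

adjust_poll(40, 2) # a 40% Conservative figure from pollster X is read as 38%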

We have chosen, for now, to anchor our estimates on the average pollster. This means the results presented here are those of a hypothetical pollster that, on average, falls in the middle of the pack.[1] We have chosen to use such a middle pollster rather than anchor on the election result because we believe that the inaccuracies/biases revealed in the polls in May will be different from those which may occur in this election cycle.[2] All of the pollsters have been undertaking reviews of their methods following the big polling miss in May, and it is unlikely that the biases in polling will be unaffected by the changes they are gradually introducing. Because of this, we offer our estimates of party support with an important caveat: while our method accounts for the uncertainty due to random fluctuation in the polls and for differences between polling houses, we cannot be sure that there is no systematic bias in the average polling house (i.e., the industry as a whole could be wrong once again). It may be that the polls are collectively right or wrong. It may also be that a pollster producing figures higher or lower than the average is more accurately reflecting the state of support for the parties than their competitors. Our estimates cannot adjudicate on whether figures on the high or the low side for a party better reflect the underlying preference of the electorate. The only test is on Election Day. Fortunately, none of this prevents us from identifying and reporting on the underlying trends over time.

In terms of the overall story, there has been little apparent change in vote preferences since the election in May. This despite the triumphant budget announced by George Osborne, the surprise ascension of Jeremy Corbyn to leader of the Labour Party (and the onslaught on him and his team from outside and inside the party), and the tax credits row that has quickly taken the shine off the government’s honeymoon period. Unlike the last election, there has been no sudden flight of voters from one party to another, as occurred with the collapse of Liberal Democrat support in the first six months after the Coalition government was formed.

Our estimates suggest that Conservative support has slipped slightly since the heady days of May and June, from around 40% to closer to 37% at the start of November. Despite Labour being divided and in some disarray over its direction, it has made slight gains, from around 30% to 32%. This upward drift in the polls largely occurred before the election of Jeremy Corbyn as Labour leader, so cannot be attributed to a Corbyn effect. Whether these gains will persist as the election nears and PM Corbyn becomes a possibility is, of course, open to debate. At present, though, there is no sign of Mr Corbyn's election having any impact on his party's overall support. UKIP support has remained steady at around 13%, and the party shows no signs of going away – even with its own internal conflicts following Nigel Farage's "unresignation" in the summer. Lagging somewhat behind, the Liberal Democrats continue to flat-line at just under 7%. One of the patterns of the last parliament was the stubborn immovability of Liberal Democrat support. New party leader Tim Farron has much work on his hands to win back voters, and so far there are no green shoots for the party in our estimates. Finally, speaking of the Greens, their support appears to have been squeezed since Labour elected Jeremy Corbyn – perhaps because voters attracted by their distinct left-wing platform now feel more at home in the Labour Party. It has fallen around 1.5 points since the summer. Our estimates for all the parties suggest that the electorate is still to make up its mind on both the new government and the fragmented and much-changed opposition. But there are some big events on the horizon, in particular the EU referendum, which may yet provide a shock to move political support in one direction or the other.

[Figure: Polling Observatory estimates of party support, anchored on the average pollster, 1 November 2015]

One of the reasons why the polling miss back in May came as such a shock was that by election eve there was broad consensus among the pollsters about the level of support for the parties (though of course we noted house effects earlier in the campaign). However, in the period since May the polling has been characterised by much more variation in the standing of the parties. This is revealed in the figure above. The confidence intervals for our estimates in the period since the election (an average of 2.3 points) are more than twice as wide as those for the 2010-15 election cycle or for the month just before the start of the short campaign (each an average of 1.1 points). This indicates a much higher level of uncertainty about the state of public opinion today. Part of this could be due to more variation in polling methodologies, as pollsters take different approaches in response to May's polling miss. The greater uncertainty may also reflect the much lower volume of polling since the election – election watchers used to multiple daily polls now have to accept a more meagre diet of one or two polls a week. It may, however, also reflect something more fundamental: genuine uncertainty, and hence greater volatility, in the minds of the electorate. Voters are faced with an unexpected Conservative majority government and an unfamiliar and polarising opposition leader attracting widely varying reactions in the media and within his own party. In such circumstances many may be genuinely unsure of their preferences. Only time will tell whether this uncertainty lasts until the next general election. For now, it provides an important reminder of the need to take single poll results with a degree of caution.

 

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

 

[1] The average difference between this middle pollster and those pollsters that produce estimates that are systematically higher for a given party is the same as the average difference between this middle pollster and those pollsters that produce estimates that are systematically lower for that same party.

[2] We came to a similar conclusion during the last election cycle when it became apparent that our method of anchoring on the election result was excessively reducing the estimated level of support for the Liberal Democrats.

 

Remembering the 1945 General Election 70 Years Later


By Jonathan Moss, Nick Clarke, Will Jennings and Gerry Stoker. Jonathan Moss is Senior Research Assistant for Geography at the University of Southampton, Nick Clarke is Associate Professor in Human Geography at the University of Southampton, Will Jennings is Professor of Political Science and Public Policy at University of Southampton (Academia.edu, Twitter) and Gerry Stoker is Professor of Governance at University of Southampton (Twitter). Their project ‘Popular Understandings of Politics in Britain, 1937-2014’ is funded by the ESRC.


This Sunday marks the 70th anniversary of the 1945 General Election. The election is widely understood as a significant turning point in modern British history. Labour won their first ever majority government and introduced a wide-ranging programme of social and economic reform, including the inception of the NHS exactly three years later, establishing the foundation of a political consensus that was sustained until the 1970s. Yet the meaning of the election has been contested by historians ever since.

For some, the 1945 election represented the beginning of a golden age for British politics. By comparison to the present period, turnout and support for the two main parties were both high. It was estimated that 45 per cent of the public listened to election broadcasts on the radio and large numbers flocked to outdoor meetings to see politicians in the flesh (see Lawrence). Labour's first parliamentary majority represented the high point of post-war enthusiasm and consensus for social democracy. The 'people's war' produced a sense of national purpose and social reconciliation through events including conscription, evacuation, rationing and communal air-raid shelters. Labour's victory was a consequence of greater public engagement and support for collectivism, planning and egalitarianism (see Field).

For others, the election has been remembered with greater enthusiasm than was present at the time. Politicians such as Hugh Gaitskell, Herbert Morrison and Harold Macmillan all remarked on the public's lack of interest in the election. A 1944 Gallup poll showed 36 per cent of the population believed politicians placed their own interests ahead of their country's. Labour's victory was the result of anti-Conservative feeling. The 'spirit of 1945' was a myth, and few people voted for Labour because they desired socialism or social democracy. Citizens supported the implementation of the 1942 Beveridge report out of individual self-interest and were indifferent to ambitious projects of social transformation. The majority of voters were disengaged from the political process and cynical about the motives of politicians (see Fielding).

Our current research project draws on survey/poll data and volunteer writing in the Mass Observation Archive to offer a new interpretation of this election from the perspective of ordinary people. It is important that we revisit the past to understand political attitudes in the present. Much has been written about the rise of anti-politics in recent years, which presumes a historical narrative in which citizens have become increasingly disenchanted with politics, without understanding how citizens engaged with formal politics in the past. Crucially, we revisit 1945 not to answer questions about why Labour won that election, but to explain how citizens understood, imagined and evaluated politics in their everyday lives, and to identify how this has changed in the last 70 years.

Our early findings illustrate that citizens encountered politics and politicians in 1945 primarily by listening to long, uninterrupted speeches on the radio, and by attending local political meetings. These relatively unmediated forms of political interaction could expose politicians who lacked character or had little to say. They also provided an opportunity for politicians to impress with their oratory, authenticity and ability to deal with rowdy crowds. Citizens judged politicians on their sincerity, charm, policies and programmes.

We also find that citizens commonly understood party politics as unnecessary. Politics involved 'mud-slinging' and 'axe-grinding', and was something to be avoided. Many did not want the election to take place and wished that coalition politics would continue after the war. Many expressed a preference for independent candidates who demonstrated the ability to rise above the 'petty squabbling' of party politics.

So how should we remember the 1945 election today? Maybe this was not a golden period for democratic engagement: negativity towards formal politics was certainly present. Politicians were frequently conceptualised as 'gift-of-the-gabbers' and 'gas-bags'. Yet we should not mistake cynicism for apathy. Remembering the 1945 election, we should think about the everyday rituals of political interaction that permitted citizens to criticise, but also to appreciate, some politicians' character and capacity to make effective collective decisions on their behalf. Returning to the present, we should consider how political interaction has changed over the last 70 years, and examine how this has influenced ordinary people's decisions about participation in formal politics.

 

This research is funded under the ESRC research award ‘Popular Understandings of Politics in Britain, 1937-2014’ (Nick Clarke, Gerry Stoker, Will Jennings and Jonathan Moss). See further details here.

Reshaping the Politics of Contemporary Democracies: Cosmopolitan versus Shrinking Dynamics


By Will Jennings and Gerry Stoker. Will Jennings is Professor of Political Science and Public Policy at University of Southampton (Academia.edu, Twitter) and Gerry Stoker is Professor of Governance at University of Southampton (Twitter). You can read more posts by Will Jennings here and more posts by Gerry Stoker here.


Originally posted at John Denham’s Optimistic Patriot blog.

In a recent pamphlet, Jeremy Cliffe argues that 21st-century politics will be shaped by the emergence of a cosmopolitan shift in demography. This phenomenon is led by the big cities, which, with their university-educated and ethnically diverse populations, are attracting ever more people, jobs and investment. We would argue that the advance of cosmopolitanism tells only half the story and that the dilemma for political parties is acute, as Britain's future lies on two divergent paths: one cosmopolitan and one shrinking. To add further complexity to this predicament, citizens in both types of area share a lack of faith in politics.

There is a growing divide between global cosmopolitan cities and shrinking urban conurbations, with the dynamic of global competition driving both developments. Cosmopolitan centres are the gainers in a new system of global production, manufacturing, distribution and consumption that has led to new urban forms made possible by the revolution in logistics and new technologies. These global urban centres are highly connected, highly innovative and well-networked; they attract skilled populations, often supported by inward migration, and display the qualities of cosmopolitan urbanism. Simultaneously, other towns, cities and entire regions are experiencing the outflow of capital and human resources; they suffer from a lack of entrepreneurship, low levels of innovation, cultural nostalgia and disconnectedness from the values of the metropolitan elite, and are largely ignored by policy-makers. These shrinking urban locations are the other side of the coin: for them the story is one of being left behind as old industries die or old roles become obsolete, and as successive governments have left them to fend for themselves. Populations may be declining, the skilled workers and the young are leaving in search of opportunity, and these places are increasingly disconnected from the dynamic sectors of the economy, as well as from the social liberalism of hyper-modern global cities in which the political, economic and media classes plough their furrow.

These developments are not temporary or transitional. Globally connected urban areas are experiencing sustained and self-reinforcing growth, while shrinking cities are struggling to overcome the challenges of decline as part of a new capitalist order. The shrinking cities, as new urban analysis suggests, cannot easily be dragged into the slipstream of the cosmopolitans by policy interventions. The forces that are driving rampant cosmopolitanism are also driving the gradual withering of shrinking conurbations.

What is also clear is that these trends are reshaping and fracturing politics in a way that creates a major dilemma for all parties in the short and longer term: political attitudes and engagement are heading in opposite directions in the two types of area. A survey by Populus, commissioned by the Universities of Canberra and Southampton, allows us to compare cosmopolitan areas with shrinking areas to explore these different forces. Using Mosaic geodemographic categories, the survey identified the fifty constituencies most closely resembling the profiles of Clacton and Cambridge respectively – places that have previously been characterised as harbingers of Britain's very different futures. This approach allows us to explore differences in political attitudes and participation in cosmopolitan and shrinking settings. To illustrate the distinctive demographics of place: some 45% of respondents in cosmopolitan areas appear to have post-degree education (i.e. left full-time education at 24+), compared to 20% of those in shrinking areas. In shrinking locations, 32% of respondents consume tabloid newspapers or websites, whereas in cosmopolitan areas the figure is only 19%. But more importantly, what are the differences in political outlook and in the forms of politics being practised?

These communities have very different attitudes on issues of Europe and immigration, as well as broader views about social change, as Table 1 shows. Shrinking areas tend to be more negative about recent developments, expressing concern about both immigration and the EU. In this respect, cosmopolitans have a much more outward-looking perspective on the forces and institutions of the global economy, whereas shrinkers are more resistant.

[Table 1: attitudes on Europe, immigration and social change in cosmopolitan and shrinking areas]

The populations of these places exhibit distinct views on important areas of social change, as shown in Table 2. Cosmopolitan areas tend to display much stronger support for more to be done to create equality across a range of social divides – ethnicity, gender and sexuality. This in part reflects the contrasting social contexts of these two sets of places, but also hints at the sorts of politics that they might produce.

[Table 2: support for greater equality across ethnicity, gender and sexuality]

More significantly, citizens in cosmopolitan and shrinking areas engage in politics in distinctive ways, as Table 3 demonstrates. There are strong similarities for participation in a range of traditional off-line methods, but some differences in political activity that takes place on-line. This suggests that the cosmopolitan/shrinking schism may be another venue for the digital divide.

[Table 3: forms of political participation, offline and online]

Citizens in cosmopolitan and shrinking places tend to hold contrasting views about trends of social change and are developing their own repertoires of engagement. Despite this, both sets of citizens are very doubtful about the politics currently on offer. As Table 4 indicates, both share much of the same disaffection towards politics and politicians. Both groups think governments can make a difference but fear that politicians are too self-serving and short-termist. Both have little trust in politicians and feel that politicians don't care about them, although that view is held marginally more strongly in shrinking areas.

[Table 4: disaffection towards politics and politicians]

What does this all mean for the future of politics? Given this diversity, a centralised, nationally oriented party structure – on both left and right – is going to struggle increasingly to cope with this divergent world. The challenges include: recruitment and candidate selection become more complex and need to be locally sensitive; and social media engagement might have more of a grip in cosmopolitan than in shrinking locations, so it is unlikely to become a universal tool in the immediate future. Above all, it is difficult to present the same face to shrinking and cosmopolitan populations, and it is far from clear how any party can bridge a divide of economic change and social outlook that will only increase in intensity as the two sets of places diverge and become locked into self-reinforcing cycles of economic growth or stagnation and of civic culture.

Lessons for FIFA from the Salt Lake City Olympic scandal


By Will Jennings, Professor of Political Science and Public Policy at University of Southampton (Academia.edu, Twitter). You can read more posts by Will Jennings here.


FIFA is in crisis. Nine current or former senior officials have been charged by US prosecutors over bribes totalling more than US$150m over 24 years. The allegations have shocked the football world.

The story so far has some parallels with the scandal that engulfed the Olympics’ governing body, the IOC, in the late 1990s. The way the IOC dealt with that crisis might offer some lessons for how FIFA should respond.

In 1998, revelations concerning the bidding process for the Salt Lake City 2002 Winter Olympics led to investigations and a series of disclosures about bid-related malfeasance at other Olympic games. Officials from the Salt Lake bid committee were indicted on charges of conspiracy to commit bribery, fraud and racketeering.

It turned out that officials from applicant cities had been lavishing IOC members and their families with payments, gifts and luxurious hospitality, as well as scholarships, with the aim of buying their votes. The revelations were highly damaging for the IOC, clashing as they did with the idealistic rhetoric that the Olympic movement had sought to harness.

Reputation salvaging

Looking back, it is arguable that the IOC’s response to the crisis salvaged its reputation and led to important reforms aimed at the long-term sustainability of the event. This is in deep contrast to FIFA’s reaction to its first corruption scandal in 2011 – which simply allowed a serious governance problem to fester.

The IOC’s response to its bribery scandal was an effective approach to managing reputational risk: Apologise. Investigate. Punish. Reflect. Reform. In the immediate aftermath of the revelations, numerous senior figures in the IOC expressed regret and contrition, soon followed by internal investigations into wrongdoing.

As a result of these probes, a substantial number of IOC members resigned or were expelled, while an extensive programme of institutional reflection and reform was quickly instigated through the creation of the IOC 2000 Commission, which included external members. Out of this review came important reforms, including the introduction of a code of ethics and a ban on IOC members who were not serving on its Evaluation Commission from visiting candidate cities.

[Image: Juan Antonio Samaranch, IOC president at the time of the scandal. Credit: EPA]

A big hole

Questions remain, however, over whether FIFA will be able to learn from these lessons and dig itself out of a very big hole. For one thing, while the Olympic bribery scandal was undoubtedly damaging to the image of the event and to the IOC as its governing body, the allegations largely related to members of the Olympic movement who were not on its executive board.

The FIFA allegations have hit much closer to home in relation to the administrative machinery of world football. The charges involve two vice-presidents of the organisation and other senior officials. This is deeply ironic given that, commenting on the Salt Lake affair in 1999, Sepp Blatter observed that the smaller size of FIFA's executive made it less easy to sway: "Twenty-one members is really a group of people that are easier to supervise than a group of 114."

The US Department of Justice charges point to a much more systematic pattern of kickbacks and patronage that, if proven, will be less easy to blame on a few bad apples. Indeed, FIFA’s defiant response to the bribery accusations levelled at it in 2011 will make it difficult to claim it had missed the warning signs.

While FIFA might take the lesson that contrition and meaningful reform are both important steps in starting to salvage the wreckage of the governance of world football, this may not be enough. As it stands, FIFA and its leadership seem irreparably damaged in terms of credibility and legitimacy. And this is before criminal proceedings threaten a lengthy period of organisational fire-fighting and paralysis.

 

This article was originally published on The Conversation. Read the original article.

Polling Observatory Latest #GE2015 Forecast: the Conservatives make slight gains, but the likeliest result is deadlock

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). The homepage of The Polling Observatory can be found here, and you can read more posts by The Polling Observatory here.


As we enter the closing stretch of the campaign, substantial uncertainty remains about the final outcome. Taking out the random noise, the polls are still showing a close race ahead of May 7th. Some have pointed to differences between telephone and internet pollsters, with the former having shown a steady, if slight, Conservative lead all year. Our method allows us to control for systematic differences between polling houses and variation in the ‘poll of polls’ that is due to changes in the mix of pollsters in the field at a given point in time.

The latest Polling Observatory forecast covers all polls completed up until April 30th, and shows support for the two main parties is still in the balance – with Labour on 33.1% and the Conservatives on 34.2% — though the confidence intervals are such that we cannot say for certain that the Conservative lead is greater than zero.

Our vote forecast points to a higher level of support for the Conservatives than two weeks ago, up 1.4 points at 35.0%, with Labour on 32.6%, up 0.1 points. This reflects the squeeze that the “big two” have put on other parties in the final weeks of the campaign. The Conservative lead now stands at 2.4%, but with considerable uncertainty remaining in our forecast.

[Figure: Polling Observatory vote share forecast, 1 May 2015]

This slight shift in the balance of polling is reflected in our latest seat estimates. The Conservatives’ median estimate rises by six seats, Labour falls by six seats, and the Liberal Democrats fall by four.  This puts the median Conservative seat lead at just two. However, as the confidence intervals attached to our estimates reveal, this projected lead is highly uncertain, a veritable coin-flip, with a 53 per cent chance that the Conservatives will have more seats than Labour. A majority for either is at present very unlikely, e.g., the likelihood of a Conservative majority is tiny (less than 0.2%). Our estimates further reflect the gains made by the SNP in recent polling in Scotland, with the nationalists now forecast to win 54 out of 59 seats north of the border.

Table 1: Seat estimates, with confidence intervals and change on April 15th

Party                              March 1st   April 1st   April 15th   April 30th (change)   Confidence interval
Conservative                       265         271         268          274 (+6)              (251, 305)
Labour                             285         276         278          272 (-6)              (244, 295)
Liberal Democrat                   24          27          28           24 (-4)               (18, 29)
UKIP                               3           3           3            2 (-1)                (1, 4)
SNP                                49          49          49           54 (+5)               (46, 58)
Others                             6           6           6            6                     (4, 8)
Northern Ireland (not forecast)    18          18          18           18

The Conservatives’ paths to a governing coalition are even more winding than their slight lead in votes and seats. They cannot reach a majority with the backing of the Liberal Democrats (combined 298 seats, 15 short of a majority) or with both the Liberal Democrats and the Northern Irish DUP (combined 306 seats, assuming the DUP once again win 8 seats), or even by adding UKIP to that two party combination (308 seats total). It would be very hard, with this seat outcome, for the Conservatives to sustain a government without some form of acquiescence from the SNP. Things are rather more promising for Labour.  While they cannot reach a majority with the help of the Liberal Democrats (combined 300 seats), they can with SNP.  Whether that happens remains to be seen, of course.

Table 2: Most plausible governing combinations, based on March and April seat forecasts

Combination                                            March 1st   April 1st   April 15th   April 30th
Conservatives + Lib Dems + DUP                         298         307         305          306
Conservatives + Lib Dems + DUP + UKIP                  301         310         308          308
Labour + SNP                                           334         325         327          326
Labour + Lib Dem                                       309         303         306          300
Labour + SDLP + Plaid Cymru + Green + Lib Dem          316         310         313          307
Labour + Lib Dem + SNP                                 358         352         355          354
Labour + SDLP + Plaid Cymru + Green + Lib Dem + SNP    365         359         362          361

Our projected numbers suggest that while the ballots may all have been counted by May 8th, the shape of the new government may be up in the air for some time after.

Update: we have now updated our forecast with all polls up to the end of Tuesday 5th May, giving the final Polling Observatory forecast for this parliament:

Conservatives 34.5% (32.6, 36.4)

Labour 32.4% (29.7, 35.2)

Liberal Democrats 8.7% (6.9, 10.6)

In terms of seats, this translates into:

Labour 273 (246, 295)

Conservatives 271  (248, 299)

Liberal Democrats 24 (19, 28)

Scottish National Party 55 (49, 59)

Ukip 2 (1, 4)

Other 6 (4, 9)

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Polling Observatory analysis cited in OfCom’s statement on party election broadcasts

By The Polling Observatory (Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien). The homepage of The Polling Observatory can be found here, and you can read more posts by The Polling Observatory here.


Regular readers of the blog might be interested to know that our Polling Observatory analysis of support for the parties (a joint venture between the Universities of Southampton and Manchester, Simon Fraser University and the University of Texas at Austin) featured today in OfCom’s statement on party election broadcasts. You can read the full OfCom report, ‘Review of Ofcom list of major political parties for elections taking place on 7 May 2015’, here.

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien