Student-Designed YouGov Poll on Aspirations of Young Adults

YouGov recently released results from a poll designed by students in PAIR2004: Research Skills in Politics and International Relations. One of the key findings?

“18-24 year olds are more likely to emphasise the importance of their career in the next 10 years – and much more likely to consider creating a bucket list than the older generations.”

Read the written report by Hazel Tetsill.

 

Demos Problems and the European Union: An Exercise In Contextual Democratic Theory

by David Owen, Anali Hrvatskog Politološkog Društva (English), vol. 10, no. 1 (2013), pp. 7-23.

Debates concerning the ‘democratic deficit’ have been a prevalent feature of the normative literature on the European Union, but rather less attention has been paid to ‘demos problems’ constructed by the normative ordering of the EU and what such problems reveal about the nature of democratic citizenship in the EU, the character of the EU as a normative order and the institutional character of the relationship between the constitution of the EU as a normative order and as a structure of political incentives. This article addresses this topic by focusing on one such ‘demos problem’.

Read this article now at Citizenship Observatory.

Explaining Voting Turnout in Latin America

By Nestor Castaneda-Angarita (University of Southampton, @nccastaneda) and Miguel Carreras (University of California, Riverside, @carreras_miguel)

After thirty years of uninterrupted democratic rule in most Latin American countries, we still know very little about the factors that affect individuals’ propensity to vote. Democratic theorists have repeatedly argued that political participation has a positive influence on citizens because it leads to enlightened choices in the political arena and increased civic-mindedness: politically active individuals are likely to be more developed intellectually, practically, and morally than politically passive ones. Previous studies have demonstrated that a series of institutional and contextual factors have a positive impact on turnout (Fornos, Power, and Garand, 2004; Pérez-Liñán, 2001). Those studies argue that electoral participation increases when registration procedures are efficient, when voting is compulsory and sanctions for abstaining are enforced, and when legislative and presidential elections are held concurrently. Conventional wisdom also holds that socioeconomic factors are not related to turnout in the region. Studies of turnout at the subnational level have found inconsistent evidence for the impact of variables such as literacy, wealth, and population age on electoral participation. These null and inconsistent findings may be related to the ecological inference problems that arise from analyzing aggregate levels of turnout.

In a recently published paper (‘Who Votes in Latin America? A Test of Three Theoretical Perspectives’, Comparative Political Studies, vol. 47, no. 8, July 2014, pp. 1079-1104), we re-assess the link between socio-demographic characteristics and turnout at the individual level, using recent survey data from the 2010 AmericasBarometer for 30,075 respondents in 17 Latin American countries. We find that the strongest predictors of voter turnout in all of our models are two individual resources: education and age (a proxy for political experience). Our analysis reveals that these objective characteristics of voters explain much more than their subjective motivations (trust in elections, political efficacy, and interest in politics) or their insertion into mobilization networks.
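To illustrate the kind of individual-level model this involves (a minimal sketch, not the paper’s actual code: the data below are synthetic and the variable names are invented for illustration), a logistic regression of reported turnout on education and age can be fitted in a few lines:

```python
# Illustrative only: synthetic data standing in for survey responses.
# Variable names and coefficients are hypothetical, not the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

education = rng.integers(0, 18, size=n)   # years of schooling
age = rng.integers(18, 85, size=n)        # age in years

# Assume turnout propensity rises with both resources (illustrative values).
logit = -2.0 + 0.10 * education + 0.03 * age
p_vote = 1 / (1 + np.exp(-logit))
voted = rng.binomial(1, p_vote)

# Individual-level model: turnout ~ education + age.
X = sm.add_constant(np.column_stack([education, age]))
model = sm.Logit(voted, X).fit(disp=False)
print(model.summary(xname=["const", "education", "age"]))
```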

The importance of voters’ resources in explaining turnout in Latin America contrasts with the limited influence that variables such as income or education have on electoral participation in developed countries. In particular, education is a very poor predictor of electoral participation in many industrialized countries.

Why are citizens with low socio-economic status (i.e. destitute and poorly educated individuals) less likely to go to the polls in Latin America but not in most industrialized countries? We believe there are three main reasons for this pattern.

First, the gap between those with low levels of education and those with high levels of education is much wider in Latin America than in most industrialized countries. Since most citizens in developed countries have crossed a minimum threshold of schooling (the vast majority have at least completed primary school), it makes sense that the effect of education on electoral participation is weaker there.

Second, the informal sector of the economy is much larger in Latin American countries than in developed countries. Unskilled individuals in Latin America are much more likely to work in the informal economy than their counterparts in industrialized countries, and people working in the informal sector are less likely to be embedded in active social networks. As our own analysis reveals, citizens with low social capital are less likely to participate in elections. Hence, the likelihood that poor and uneducated individuals will turn out is lower in Latin American countries than in developed countries.

Finally, the literature suggests that voters’ resources will matter less when leftist parties or labor movements are able to mobilize lower status individuals. Latin American countries have lacked precisely the type of labor parties that were created in Europe in the twentieth century to mobilize the working-class electorate. Latin American party systems have traditionally been dominated by “parties of a multi-class appeal and ideological pragmatism.” These catch-all parties do not develop programmatic linkages with voters along existing lines of societal cleavages, and are less effective at mobilizing individuals with low socio-economic status. Moreover, the neoliberal turn in the 1990s has considerably weakened labor movements in the region, thereby eroding a potential mobilization arena that could encourage disadvantaged social groups to go to the polls. In sum, a series of structural factors help explain the divergent impact of voters’ resources on electoral participation across different regions.

The conventional wisdom regarding turnout in Latin America is that institutions matter much more than socio-economic factors. We demonstrate that the strongest predictors of turnout in the region (education, age, employment status) are all socio-economic variables. Income also matters, but its impact is not linear: our analysis reveals that individuals in situations of extreme poverty are less likely to vote than the rest of the population.

Puzzling About Political Leadership – Rhodes and ‘t Hart

By R.A.W. Rhodes, Professor of Government at the University of Southampton and Griffith University (Personal website, Academia.edu, Google Scholar). You can read more posts by R.A.W. Rhodes here.


 


Since Machiavelli, political leadership has been seen as the exercise of practical wisdom. We can gain insights through direct personal experience and sustained reflection. The core intangibles of leadership – empathy, intuition, creativity, courage, morality, judgement – are largely beyond the grasp of ‘scientific’ inquiry. Understanding leadership comes from living it: being led, living with and advising leaders, doing one’s own leading.

In sharp contrast, a ‘science of leadership’ has sprung up in the latter half of the twentieth century. Thousands of academics now make a living treating leadership as they would any other topic in the social sciences, and political leadership is no exception. These scholars treat it as an object of study, which can be picked apart and put together. Their papers fill journals, handbooks, conference programs and lecture theatres. Some work in the real world of political leadership as consultants and advisers, often well paid. This buzzing, blooming confusion would not persist if such knowledge did not help in grasping at least some of the puzzles that leaders face and leadership poses. And there are puzzles aplenty.

The first puzzle is whether we are looking at the people we call leaders or at the process we call leadership. Leader-centred analysis has proved hugely popular, but many now prefer to understand political leadership as a two-way street: an interaction between leaders and followers, leaders and media, leaders and mass publics.

The second puzzle is whether we are studying democrats or dictators. Democracy needs good leadership yet the idea of leadership potentially conflicts with democracy’s egalitarian ethos. Political leaders holding office in democratic societies live in a complex moral universe. Other heads of government gained power by undemocratic means. They sometimes govern by fear, intimidation and blackmail. Is that leadership? However, even such ‘leaders’ may aim for widely shared and morally acceptable goals and rule with the tacit consent of most of the population. Understanding leadership requires us to take in all its shades of grey: leading and following, heroes and villains, the capable and the inept, winners and losers.


The third puzzle ponders whether political leadership matters. Leaders use their political platforms to inject words, ideas, ambitions and emotions into the public arena, to shape public policies and transform communities and countries. But when do they make a difference? What stops them from being a force in society? Or are political leaders a product of their societies? Finding out who gets to lead can teach us much, not just about those leaders, but about the societies in which they work. So, we ask who becomes a political leader, how and why? What explains their rise and fall?

The fourth puzzle explores the relative importance of leaders’ personal characteristics and behaviour compared to the context in which they work. Sometimes political leaders are frail humans afloat on a sea of storms, and sometimes they survive at the helm when few thought that possible. They achieve policy reforms and social changes against the odds, and the inherited wisdom perishes. How do political leaders escape the dead hand of history?

The fifth puzzle wonders if the success of leaders stems from their special qualities or traits – the so-called ‘great man’ theory of leadership. However, we have to entertain the possibility that these allegedly ‘great’ leaders might have been just plain lucky; that is, they get what they want without trying. They are ‘systematically lucky’.


The sixth puzzle is about success and failure. How do we know when a political leader has been successful? The temptation is always to credit their success to their special qualities, but no public leader ever worked alone. Behind every ‘great’ leader are indispensable collaborators, advisers, mentors, and coalitions; the building blocks of the leader’s achievements.

Political leadership is both art and profession. Political leaders gain office promising to solve problems but more often than not they are defeated by our puzzles. There is no unified theory of leadership to guide them. There are too many definitions, and too many theories in too many disciplines. We do not agree on what leadership is, or how to study it, or even why we study it. The subject is not just beset by dichotomies; it is also multifaceted, and essentially contested. Leaders are beset by contingency and complexity, which is why so many leaders’ careers end in disappointment.

R. A. W. Rhodes is Professor of Government at both the University of Southampton (UK) and Griffith University (Brisbane, Australia). He is the author or editor of some 35 books, including, most recently, Lessons of Governing: A Profile of Prime Ministers’ Chiefs of Staff (with Anne Tiernan, Melbourne University Press, 2014) and Everyday Life in British Government (Oxford University Press, 2011).

Paul ‘t Hart is Professor of Public Administration at the Utrecht School of Governance, Associate Dean at The Netherlands School of Government in The Hague, and a core faculty member at the Australia and New Zealand School of Government (ANZSOG). He is the author or editor of some 35 books, including, most recently, Understanding Public Leadership (Palgrave Macmillan, 2014) and Prime Ministerial Leadership: Power, Parties and Performance (co-edited with James Walter and Paul Strangio, Oxford University Press, 2013).

The Polling Observatory Forecast #2: Still A Dead Heat, Despite Recent Turbulence…

As explained in our inaugural election forecast last month, between now and May next year the Polling Observatory team will be producing a long-term forecast for the 2015 General Election, using methods we first applied ahead of the 2010 election (and which are also well-established in the United States). Our method uses past polling evidence as a guide to forecast the likeliest support levels for each party in next May’s election (see our previous research here), based on current polling; these support levels can then be used to estimate the parties’ chances of winning each seat in Parliament. We will add this seat-based element to the forecast in later posts.
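To give a flavour of the forecasting logic (a deliberately simplified sketch with invented figures, not the Polling Observatory’s actual model), one basic version regresses parties’ eventual election-day vote shares on their polling roughly a year out, then applies the fitted relationship to current polling. Because parties polling unusually low tend to be pulled back towards the historical pattern, a degree of recovery is ‘built in’:

```python
# Toy poll-based forecast: regress past election-day vote shares on polling a
# year before the election, then project from a current polling figure.
# All numbers below are invented for illustration.
import numpy as np

# Hypothetical historical pairs: (poll share a year out, eventual result), in %.
poll_year_out = np.array([30.0, 36.0, 42.0, 33.0, 39.0])
election_result = np.array([33.0, 37.0, 41.0, 35.0, 40.0])

# Fit result = intercept + slope * poll; np.polyfit returns [slope, intercept].
slope, intercept = np.polyfit(poll_year_out, election_result, 1)

# Apply to a party currently polling 34% a year before the election.
current_poll = 34.0
forecast = intercept + slope * current_poll
print(f"Forecast vote share: {forecast:.1f}%")
```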

[Chart: Polling Observatory forecast, 01-06-14]

In light of the turbulence of the polls over the course of the European election campaign (with a Lord Ashcroft poll showing the Conservatives ahead for the first time since March 2012), inquests into the insipid performance of Labour and Ed Miliband, better-than-expected results for the Conservatives in local and European elections, and a disastrous showing by the Liberal Democrats, some might have expected a turning point or a step change in the predictions for May 2015 – consistent with the historical pattern of governments recovering in the polls during the final year. However, some degree of recovery is already built into our model, and there is, as yet, no evidence that the Conservatives are outperforming the historical trend.

Our forecast puts Labour and the Conservatives in a dead heat, as it did last month. We currently forecast both parties to receive 35.8% of the vote. In part this reflects the very recent uptick in Labour support following a decline over recent months. More significantly, though, it reflects the fact that both parties are polling well below their historical level, and therefore we expect both to make some recovery in the polls. However, the prospect of a recovery to the kind of levels seen by winners in past elections – 40% plus – is tempered by the very low starting point for both main competitors. Both main parties are likely to put in weaker performances than in the past, even with a recovery from the current low ebb, but at present history continues to suggest a very tight race to the finish next spring.

Polling Observatory #37: No Westminster polling aftershock from European Parliament earthquake

This is the thirty-seventh in a series of posts that report on the state of the parties as measured by opinion polls. By pooling together all the available polling evidence we can reduce the impact of the random variation each individual survey inevitably produces. Most of the short term advances and setbacks in party polling fortunes are nothing more than random noise; the underlying trends – in which we are interested and which best assess the parties’ standings – are relatively stable and little influenced by day-to-day events. If there can ever be a definitive assessment of the parties’ standings, this is it. Further details of the method we use to build our estimates of public opinion can be found here.
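For readers curious about what ‘pooling the polls’ can look like in practice, here is a deliberately simplified stand-in (with invented polls) for the idea: a sample-size-weighted average of all polls whose fieldwork falls within a window around a given date. The actual Polling Observatory model is a more sophisticated statistical model that also estimates pollster house effects, discussed below.

```python
# Simplified poll pooling: a sample-size-weighted average of all polls whose
# fieldwork ended within +/- 10 days of a target date. The poll figures below
# are invented for illustration.
from datetime import date

polls = [  # (fieldwork end date, party share in %, sample size)
    (date(2014, 5, 20), 33.0, 1800),
    (date(2014, 5, 24), 35.0, 1000),
    (date(2014, 5, 27), 32.5, 2000),
    (date(2014, 5, 30), 34.0, 1500),
]

def pooled_estimate(target, window_days=10):
    """Weighted average of polls whose fieldwork ended within the window."""
    num = den = 0.0
    for end_date, share, n in polls:
        if abs((end_date - target).days) <= window_days:
            num += share * n
            den += n
    return num / den if den else float("nan")

print(f"Pooled estimate on 1 June 2014: {pooled_estimate(date(2014, 6, 1)):.1f}%")
```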

[Chart: UK polling estimates, 06-01-14, anchored on the average pollster]

This month’s Polling Observatory comes in the aftermath of the European Parliament elections and the so-called UKIP ‘earthquake’ in the electoral landscape. Despite much volatility in the polls ahead of those elections, with a few even putting the Conservatives ahead of Labour for the first time in over two years, underlying support for both main parties remained stable over the course of the month. Labour may have dipped early in the month in the run-up to the European elections, or the Conservative leads may have been the result of random variation. In any event, by the end of the month we had Labour polling at 33.8%, just 0.2 points down on their support a month ago. The Conservatives are also broadly flat at 30.9%, 0.3 points below their standing a month ago. The Lib Dems, on 9.3%, down 0.4 points, have suffered slightly more of a post-election hangover, perhaps set back by infighting over the botched coup by Lord Oakeshott and the widespread ridicule over the Clegg/Cable beer-pulling photo op. UKIP support remained stable at record high levels as they enjoyed a moment in the limelight around the European Parliament elections; we have them rising 0.2 points on last month to 14.9%, their highest support level to date. Note that all these figures are based on our adjusted methodology, which is explained in detail below.

It is noticeable that while Labour’s support has been in decline for the last six to nine months (having plateaued for a period before that), underlying Conservative support has remained remarkably stable at around the 31% level. In fact, setting aside the slight slump around the time of the last UKIP surge at the 2013 local elections, the Conservatives’ standing with the electorate has been flat since its crash in April 2012 around the time of the ‘omnishambles’ budget. The narrowing of Labour’s lead over the past year is entirely the result of Labour losing support, not of the Conservatives gaining it. We have written at length previously about how the fate of the Liberal Democrats was sealed in late 2010, and it is remarkable that there has been so little movement in the polls for the parties in government in this parliament. The prevalent anti-politics mood in the country and continued pessimism about personal and household finances have meant that neither of the Coalition partners has yet been able to convert the economic recovery into a political recovery. Instead, both are gaining relative ground as the main opposition party also leaks support, perhaps likewise succumbing to the anti-Europe, anti-immigration, anti-Westminster politics of UKIP.

As explained in our methodological mission statement, our method estimates current electoral sentiment by pooling all the currently available polling data, while taking into account the estimated biases of the individual pollsters (“house effects”). Our method has therefore treated the 2010 election result as a reference point for judging the accuracy of pollsters, adjusting the poll figures to reflect the estimated biases in the pollsters’ figures based on this reference point. Election results are the best available test of the accuracy of pollsters, so when we started our Polling Observatory estimates, the most recent general election was the obvious choice to “anchor” our statistical model. However, the political environment has changed dramatically since the Polling Observatory began, and over time we have become steadily more concerned that these changes have rendered our method out of date. Yet changing the method of estimation is also costly, as it interrupts the continuity of our estimates and makes it harder to compare our current estimates with the figures we reported in past monthly updates.

There were three concerns about the general-election anchoring method. First, it was harsh on the Liberal Democrats, who were over-estimated by pollsters ahead of 2010 but have been scoring very low in the polls ever since they lost over half their general election support after joining the Coalition. The negative public views of the Liberal Democrats, and their very different political position as a party of government, make it less likely that the current polls are over-estimating their underlying support. Second, a general-election anchor provides little guidance on UKIP, who scored only 3% at the general election but now poll in the mid-teens, with large disagreements in estimated support between pollsters (see the discussion of house effects below). Third, the polling ecosystem itself has changed dramatically since 2010, with several new pollsters starting operations and several established pollsters making such significant changes to their methodology that they are effectively new pollsters as well.

We have decided that these concerns are sufficiently serious to warrant an adjustment to our methodology. Rather than basing our statistical adjustment on the last general election, we now make adjustments relative to the “average pollster”. This assumes that the polling industry as a whole is not biased. That assumption could prove wrong, of course, as it did in 2010 (and, in a different way, in 1992). However, any systematic bias in 2015 is likely to look very different from that in 2010, and as we have no way of knowing what the biases in the next election might be, we apply the “average pollster” method as the best interim guide to underlying public opinion.
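As a rough sketch of what anchoring on the “average pollster” means in practice (a simplification with invented figures, not the actual model), each pollster’s house effect can be estimated as its average deviation from the industry-wide mean, constrained to average zero across pollsters, and then subtracted from that pollster’s raw figures:

```python
# Sketch of "average pollster" anchoring, with invented figures: estimate each
# pollster's house effect as its mean deviation from the industry-wide average,
# then correct raw figures accordingly.
import numpy as np

# Hypothetical recent readings for one party (%), several polls per pollster.
readings = {
    "Pollster A": [34.0, 35.0, 34.5],
    "Pollster B": [31.0, 30.5, 31.5],
    "Pollster C": [33.0, 32.5, 33.5],
}

pollster_means = {p: np.mean(v) for p, v in readings.items()}
industry_mean = np.mean(list(pollster_means.values()))

# House effect: how far each pollster sits above or below the average pollster.
# By construction these deviations sum to zero, which is the anchoring assumption.
house_effects = {p: m - industry_mean for p, m in pollster_means.items()}

# House-effect-corrected figure for each pollster's latest poll.
for pollster, values in readings.items():
    adjusted = values[-1] - house_effects[pollster]
    print(f"{pollster}: raw {values[-1]:.1f}%, "
          f"house effect {house_effects[pollster]:+.1f}, adjusted {adjusted:.1f}%")
```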

This change in our methodology has a slight negative impact on our current estimates for both leading parties. Labour would be on 34.5% if anchored against the 2010 election, rather than the new estimate of 33.8%, while the Conservatives would be on 31.5% rather than 30.9%. Yet as both parties fall back by the same amount, their relative position is unchanged. UKIP gain slightly from the new methodology: our new estimate is 14.9%, whereas under the old method they would score 14.5%. However, the big gainers are the Lib Dems, who were punished under our old method for their strong polling in advance of the 2010 general election. We now estimate their vote share at 9.3%, significantly above the anaemic 6.7% produced under the previous method. This is in line with our expectations in earlier discussions of the method in previous posts. It is worth noting that none of these changes affect the overall trends in public opinion that we have been tracking over the last few years, as is clear from the charts above.

The European Parliament elections prompted the usual inquest into which of the nation’s pollsters had the lowest average error in their final polls compared against the result (see here). We cannot simply extrapolate the accuracy of polling for the European elections to next year’s general election. For one thing, these sorts of ‘final poll league tables’ are subject to sampling error, making it extremely difficult to separate the accuracy of the polls once this is taken into account (as we have shown here). Nevertheless, with debate likely to continue to rage over the extent of the inroads being made by UKIP as May 2015 approaches, some of the differences observed in the figures reported by the polling companies will come increasingly under the spotlight. These ‘house effects’ are interesting in themselves because they provide us with prior information about whether an apparently high or low poll rating for a party, reported by a particular pollster, is likely to reflect an actual change in electoral sentiment or is more likely to be down to the particular patterns of support associated with that pollster.

Our new method makes it possible to estimate the ‘house effect’ for each polling company for each party, relative to the vote intention figures we would expect from the average pollster. That is, it tells us simply whether the reported vote intention for a given pollster is above or below the industry average. This does not indicate ‘accuracy’, since there is no election against which to benchmark the polls. It could be, in fact, that pollsters at one extreme or the other are giving a more accurate picture of voters’ intentions – but an election is the only real test, and even that is imperfect.

In the table below, we report each current polling company’s ‘bias’ for each of the parties. We also report whether the mode of polling is telephone or internet-based, and the adjustments used to calculate the final headline figures (such as weighting by likelihood to vote or voting behaviour at the 2010 election). From this, it is quickly apparent that the largest range of house effects comes in the estimation of UKIP support, and that these effects seem to be associated with the method a pollster employs to field a survey. All the companies that poll by telephone (except Lord Ashcroft’s new weekly poll) tend to give low scores to UKIP. By contrast, three of the five companies that poll using internet panels give higher-than-average estimates for UKIP. ComRes provide a particularly interesting example of this ‘mode effect’, as they conduct polls with overlapping fieldwork periods by telephone and by internet panel. The ComRes telephone-based polls give UKIP support levels well below average, while the web polls give support levels well above it. It is not clear what is driving this methodological difference – something seems to be making people more reluctant to report UKIP support over the telephone, more eager to report it over the internet, or both. The diversity of estimates most likely reflects the inherent difficulty of accurately estimating support for a new party whose overall popularity has risen rapidly, and for which pollsters have little previous information with which to calibrate their estimates.

| House | Mode | Adjustment | Prompt | Con | Lab | Lib Dem | UKIP |
|---|---|---|---|---|---|---|---|
| ICM | Telephone | Past vote, likelihood to vote | UKIP prompted if ‘other’ | 1.3 | -0.9 | 2.8 | -2.4 |
| Ipsos-MORI | Telephone | Likelihood (certain) to vote | Unprompted | 0.5 | 0.4 | 0.5 | -1.6 |
| Lord Ashcroft | Telephone | Likelihood to vote, past vote (2010) | UKIP prompted if ‘other’ | -0.7 | -0.8 | -1.2 | 0.9 |
| ComRes (1) | Telephone | Past vote, squeeze, party identification | UKIP prompted if ‘other’ | 0.6 | 0.0 | 0.2 | -2.5 |
| ComRes (2) | Internet | Past vote, squeeze, party identification | UKIP prompted if ‘other’ | 0.3 | -0.7 | -1.0 | 1.8 |
| YouGov | Internet | Newspaper readership, party identification (2010) | UKIP prompted if ‘other’ | 1.9 | 2.1 | -1.3 | -0.2 |
| Opinium | Internet | Likelihood to vote | UKIP prompted if ‘other’ | -0.8 | -0.9 | -2.3 | 3.0 |
| Survation | Internet | Likelihood to vote, past vote (2010) | UKIP prompted | -1.8 | -1.5 | -0.2 | 4.4 |
| Populus | Internet | Likelihood to vote, party identification (2010) | UKIP prompted if ‘other’ | 2.3 | 1.5 | 0.2 | -2.2 |
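As a quick back-of-the-envelope check of the mode effect described above, averaging the UKIP house effects in the table by survey mode shows telephone pollsters sitting below the industry average and internet pollsters above it:

```python
# Average UKIP house effect by survey mode, using the figures from the table above.
ukip_house_effects = {
    # ICM, Ipsos-MORI, Lord Ashcroft, ComRes (1)
    "Telephone": [-2.4, -1.6, 0.9, -2.5],
    # ComRes (2), YouGov, Opinium, Survation, Populus
    "Internet": [1.8, -0.2, 3.0, 4.4, -2.2],
}

for mode, effects in ukip_house_effects.items():
    mean_effect = sum(effects) / len(effects)
    print(f"{mode}: mean UKIP house effect {mean_effect:+.1f} points")
```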

Robert Ford, Will Jennings, Mark Pickup and Christopher Wlezien

Reflections on the International Conference for E-Democracy and Open Government 2014

By Mark Frank, PhD Student in Politics & International Relations and WebScience.

From the 21st to the 23rd of May I attended the International Conference for E-Democracy and Open Government 2014 (CEDEM 2014) in Krems, Austria. I was there because they accepted a short paper (what they call a reflection) written by Phil Waddell and me. As far as I know, no one from Southampton, much less PAIR, has ever attended CEDEM, although it has been running every year since 2008. So I thought it would be worth saying a bit about it.

Most of all, I highly recommend it. The conference title is a pretty good description, so if you are interested in the interaction between the internet and democracy then this may be a good conference for you. It is organised by Danube University Krems, which is an interesting university: aimed at continuing education for working professionals, it is postgraduate only and specialises in short courses and master’s courses with a lot of part-time and distance learning. The main conference is always in Krems, although they also run an Asian version which moves from place to place.

It was a small conference (120 people and three tracks) but very friendly, well organised and of a high standard. As my interest is Open Data, I concentrated on that track and found the majority of the papers valuable. With hindsight I think that ideally I would have spent more time in other sessions, as this track, while fascinating, was more technology-oriented than I expected: for example, one paper looked at ways of automatically assessing data quality, and another compared different platforms for presenting open data. The e-Democracy and e-Participation track was more concerned with the political implications of technology (this divide may reflect two different views of Open Data in the world at large). However, the keynote speakers and the brief presentations of the reflections offered a wide variety of perspectives on e-government from around the world. Alexander Gerber on Scientific Citizenship was a particular highlight, although I disagreed quite strongly with his thesis of upstream scientific engagement, which seemed to imply that the direction of scientific research should be decided democratically. The small and specialist format seemed to permit keynote speakers who were less bland and more approachable than is often the case at large conferences.

The biggest benefit of any conference is always talking to the other attendees, and here CEDEM 2014 really scored. The theme of the conference was well defined and not too broad, so I found that I had common interests with almost everyone there. And everything about the three days made it easy to meet and talk: it was small and informal, and there were numerous social events. The UK is a leader in Open Data, and it is easy to neglect what is going on in the rest of the world; this was an excellent reminder that it is a truly global movement, albeit with very different perspectives in different countries.