Leading social scientists consider cutting-edge quantitative and qualitative methodologies, analyse the logic underpinning an array of approaches to empirical enquiry, and discuss the practicalities of carrying out research in a variety of different contexts.
Most seminars run fortnightly; day and location to be confirmed. Half are held at the LSE (usually COL 8.13, on the eighth floor of Columbia House).
Follow the DoM Seminar series on Twitter to be informed of upcoming seminars.
Upcoming Seminars
The Abject Academy: The Sociology of Britain's Research Excellence Framework
Derek Sayer, Lancaster University
Thursday 18 June 2015, 16:15-17:45 in Columbia House COL 8.13
The research assessment regime developed in the UK over the last 30 years, culminating in REF2014, is indisputably "world-leading" in its scale, complexity, and costs in public money and academics' time. Yet by any reasonable criteria the REF is seriously deficient as a means of evaluating research quality and grossly inefficient as a vehicle for allocating research funding. Derek Sayer argues that the ever greater role assumed by the REF regime in British HE is best understood sociologically. It has been the key mechanism through which the UK's traditional academic elites have managed to preserve and reproduce their hegemony in a neoliberal era, while simultaneously reducing faculty members from a self-governing profession to a disciplined and docile workforce who are largely complicit in their own abjection. Sayer ends by asking whether metrics (including alt-metrics) or other alternatives to the REF might deliver a fairer, cheaper, and less disruptive basis for allocation of research funding between institutions that offers a basis for breaking from this model of "Old Corruption."
The rise of cost-effectiveness evidence in global health: contingencies of 'context' and the politics of contingency
Dominique Béhague, King's College London
Thursday 21 May 2015, 16:15-17:45 in Columbia House COL 8.13
The development and implementation of health policies in developing countries have become increasingly driven by the practices of research communities, inter- and non-governmental organisations, and donor agencies operating at the global level. Critics of this shifting landscape argue that the concurrent rise in demand for experimental evidence of cost-effectiveness reflects not only the 'superior' epistemic truth-value often attributed to experimentation, but also the permeation of neo-liberal market-principles into global health that took force in the 1990s with the rise of "philanthrocapitalism" and public-private partnerships. Simply put, cost-effectiveness measures facilitate the calculation of returns on investments, thereby enabling donors to hold recipients to account in highly detailed technocratic ways. Concern with growing neo-liberalism in global health has given rise to a series of counter-narratives relating specifically to the reductionistic and universalizing tendencies of cost-effectiveness frameworks. Epidemiologists and social scientists now routinely point to the disjuncture that arises when evidence-based guidelines, which are assumed to be broadly universally applicable, are transposed onto a variety of local "contexts" that are rife with "culture." Many now actively engage in producing alternative epistemologies that they argue are better suited to understanding "context" and "complexity." Drawing on an ethnographic study of the safe motherhood initiative, this paper explores the social and political life of these alternative epistemologies. Focusing on the recent proliferation of two specific concepts -- "context" and "complexity" -- we compare key differences in the ways these concepts are used by various stakeholders, depending on their type of expertise, and their role, rank and geopolitical placement. 
We demonstrate that underpinning debates regarding the epistemic importance of complexity and context are long-standing concerns with the workings of neocolonial power. Our analysis explores how debates relating to contingency are entrenched in geo-political negotiations and the emergence of new "community"-oriented politics of self-governance among experts who identify as representing the Global North and Global South. Focusing less on the polarised fringes of normative epistemologies (e.g. epistemic "resistance") and more on how epistemic genres divide, multiply and inter-relate, we explore how assertions of the importance of "context" and "complexity" are trapped in a finely-tuned dualistic and dialectical relationship, thereby rendering these concepts unlikely vehicles of change.
Bias assessment of a causal inference approach to age-period-cohort analysis
Maarten Bijlsma, University of Groningen
Thursday 7 May 2015, 16:15-17:45 in Columbia House COL 8.13
An age-period-cohort (APC) model is a statistical model that attempts to break down some phenomenon of interest into constituent effects caused by, or associated with, age, calendar time and time of birth. Being able to decompose a topic into age, period and cohort effects is of interest for explanatory reasons, e.g. to explain changes in voting behaviour, or for projection, e.g. to better forecast future mortality. While APC models have been around for decades, they are controversial because they face a linear identification problem: age = period – cohort. In other words, a unique solution cannot be found using a linear model without constraints. Unfortunately, the technical constraints that have been proposed are commonly arbitrary and do not in any way guarantee correct parameter estimates. Instead, various commentators have argued, the way forward is to do more theoretically informed analysis. We believe that methods from causal inference will force researchers to think about substantive theory before modelling APC effects, and to be more open about underlying assumptions. The mechanism-based approach to APC analysis proposed by Winship and Harding in Sociological Methods & Research is based on Pearl's front-door criterion, and thereby relies on correct modelling of the causal pathways between the APC variables, mediators and outcome. While the method shows promise, it is unclear how well it will estimate parameter values in a realistic setting. To assess bias in estimation, we simulate a number of situations that will occur in real-life settings, such as having a partial set of mediators, mediators being a child of more than one APC variable, and confounding between mediators and outcome.
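The identification problem described above can be seen directly in a design matrix: because age = period – cohort holds exactly, the three linear terms are perfectly collinear. A minimal sketch with illustrative simulated data (not taken from the talk):

```python
import numpy as np

# The APC identification problem: for any observation,
# age = period - cohort, so the three linear terms are exactly
# collinear and a linear model cannot separate them without
# additional constraints.
rng = np.random.default_rng(0)
period = rng.integers(1950, 2010, size=200)        # calendar year of observation
cohort = period - rng.integers(20, 80, size=200)   # year of birth
age = period - cohort                              # age at observation

# Design matrix with intercept plus the three linear APC terms.
X = np.column_stack([np.ones(200), age, period, cohort])
rank = np.linalg.matrix_rank(X)
print(rank)  # rank is 3, not 4: one column is a linear combination of the others
```

Any technical constraint (e.g. dropping one term, or equating two adjacent coefficients) restores full rank, which is why the resulting estimates depend on an essentially arbitrary choice.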
Organizational Ethnography: a method and a way of imagining the social
Dr Hugo Gaggiotti, University of the West of England
Thursday 5 February 2015, 16:15-17:45 in COL8.13
Dahles, Höpfl and Koning (2013) are right when saying that conventional thinking views organizational ethnography basically as a method. However, the variety, subjectivity and, at times, contradictory ways of explaining this view are considerable. Some authors refer to it as an “obvious method for understanding work organizations as cultural entities” (Bryman and Bell, 2007, p. 441), others say that ethnographic methods “make it possible to explore little-known phenomena without having to establish a rigid conceptual framework” (Charreire and Durieux, 2001, p. 61), and there are even authors who define organizational ethnography as an “in depth case study analysis” (Royer and Zarlowski, 2012, p. 114) or as a “methodological assumption” (Collis and Hussey, 2003, p. 60). At the same time, there have been several recent points of possible meeting and convergence between ethnography as holistically understood and practiced by anthropologists and organization theorists. This is my main focus of interest in this text, where I propose that this way of engaging with organizational ethnography suggests the potential for a re-foundational spirit in the discipline and the possibility of bridging a long and unproductive period of divergence that reduced organizational ethnography to a mere menu of tools, methods or techniques. I illustrate the case by explaining the ethnographic approach of Lloyd Warner, a figure neglected in the history of ethnographic research but who, by virtue of his contribution to the famous Hawthorne Studies, may be considered one of the founding authors of both organization studies and organizational ethnography.
Qualitative analysis and causal inference in evaluation
Professor Judith Green of the London School of Hygiene and Tropical Medicine
Applied social science is typically undertaken in the context of 'evaluations', where questions are orientated to establishing credible causal inferences (did the intervention have these effects?) and policy questions of transferability (is it likely to have these effects somewhere else?). This seminar explores how analysis of qualitative data can contribute to these issues of internal and external validity in evaluations. The examples are taken from a study which aimed to evaluate the impact of free bus travel on public health in London. This study integrated secondary data analysis (using a 'change on change' analysis to establish change that could be attributed to the intervention) and the analysis of more qualitative data, including interviews with groups of young bus passengers, to elucidate the effects of free bus travel. The two data sets combine easily when utilised in a 'triangulation' model, but less easily when attempting to make credible inferences about cause, and about transferability.
Multimodal methods for researching digital environments
Professor Carey Jewitt of the Institute of Education
This presentation will give an overview of methodological developments from MODE, a large ESRC project. It will outline what is meant by multimodality and its key concepts and principles, and sketch its scope and potential. The process of multimodal research will be described, focusing on multimodal data collection and the different stages of analysis, using a range of examples drawn from MODE. The challenges and constraints of this approach and its innovative features will be discussed.
'Why are the truly disadvantaged American?' Can we answer big questions and remain methodologically (reasonably) consistent?
Professor David Soskice and Professor Nicola Lacey (LSE)
This paper addresses a 'big' question: 'Why is poverty (in the key areas of income, crime and punishment, education and residential segregation) so much worse in the contemporary US than in other advanced nations?' We narrow the question down to a comparison between the US and the other Anglo-Saxon countries, Canada, the UK, Australia and NZ, as well as exploiting American data across local jurisdictions. All the Anglo-Saxon countries are Liberal Market Economies in the varieties of capitalism classification, they are all Competitive as opposed to Negotiated political systems in Lijphart's classification, and they all have Liberal welfare states in Esping-Andersen's classification. A key difference is that many of the relevant policy decisions are made at highly local levels of jurisdiction (district, municipal) in the US, while in the other Anglo-Saxon countries they are made at national or provincial levels. We argue that the incentive structures at the local level favour outcomes which reinforce each other in generating poverty, while that is not to the same degree the case in higher-level jurisdictions.
There are two broad sets of methodological issues. First, obviously, those relating to using what is de facto a large cross-section of data from many different sources, on which are imposed a large number of theory-driven exclusion restrictions to enable us to identify the structural system of simultaneous equations underlying the argument. This all relates essentially to the present or the last two decades. The second issue is the broader one of how one rules out longer-term alternative explanations of American poverty: the legacy of slavery; the massive northern migration of blacks from the 1920s on; did poverty/immigration cause local democracy historically; is local democracy conducted 'under the shadow' of state and federal preferences; suburban incorporation; whether or not this argument generalises across the US; and maybe other possibilities. In relation to the second issue, do we fundamentally have the 'temperate climate' of Acemoglu-Robinson, 'distance from Wittenberg', or even 'legal origins' (though they are being progressively undermined) as long-term identifiers?
'Being there, seeing there, feeling there: a methodological journey into measuring and mapping mobile experiences'
Dr Justin Spinney, Cardiff University
In the context of a project investigating cycling amongst older people in four UK cities, this paper explores some of the methodological and ethical issues inherent in gathering and mapping data on feelings and affects from mobile subjects. Whilst emotion mapping is nothing new (Aspinal 2013, Nold 2009) this project seeks to progress work in this area by attempting to ‘measure’ affective response relative to specific features of the built environment.
In order to do this the research is considering a variety of technologies including GPS, video ethnography, EEG/EMG, proximity, and eye-tracking. This evidently raises epistemological issues around ‘naturalistic experiments’ in terms of controlling variables and knowing what we are measuring, and substantial ethical issues inherent in the use of such invasive/pervasive technology. However this paper focuses attention in three areas: routed through phenomenology and ethnography the first of these engages with debates around what we gain and lose by being mobile with our participants. The second engages with debates around affect and the more than representational in the context of attempting to ‘objectively’ measure qualitative and ephemeral data. Thirdly and related to the previous point, the paper explores issues around the extent to which bodies can ‘speak for themselves’ within such methodologies.
In doing so the paper seeks to progress debate regarding the mobility of method, and extend post-positivist/ social constructivist conversations regarding mixed methods research that sees data as being both objectively and subjectively ‘knowable’.
Cultures of containment and collaboration: On interdisciplinary evaluation
‘Evaluation’ is a term which has been abused in the recent scramble to systematically measure the economic and social value of the arts. In considering the relationship between arts policy, cultural theory and arts evaluation practice, this paper offers a critical perspective on institutional ‘norms’ and ‘forms’ of evaluation. The histories of, and motivations for, evaluation include a governmental impulse to employ culture as a resource that can be put to work as part of a wider global project of managing social change (Yudice 2003, Bennett 1995) and a genuine desire to learn from and improve the effectiveness and possibilities of arts-based social interventions. In the current policy context, evaluation has become a technocratic ‘hoop’ for arts organisations to jump through in an endless mutual narrative driven by cultural policy, instrumentality and accountability. The value of evaluation, however, lies in the opportunity it offers for critical and reflexive learning and intersectoral dialogue on cultural value. This paper offers examples of critical approaches to evaluation, based on a method of participatory action research, with a focus on arts-based mental health interventions.
Evaluating Sensitivity of Parameters of Interest to Measurement Invariance in Latent Variable Models
Dr Daniel Oberski (Tilburg University)
Groups may only be compared when they exhibit measurement equivalence or “invariance”, since otherwise substantive differences may be confounded with measurement differences. Oberski (2014) recently suggested examining directly whether measurement differences present could confound substantive analyses, and introduced the "EPC-interest" to do so. This measure approximates the change in parameters of interest that can be expected when freeing cross-group invariance restrictions. In this talk I will illustrate the use of the EPC-interest for measurement invariance testing in two examples from the literature: a 19-country comparison of the relationship between human values and immigration attitudes, and a comparison between the values of men vs. women. The empirical applications show that the EPC-interest can help avoid two undesirable situations: first, it can prevent unnecessarily concluding that groups are incomparable, and second, it alerts the user when comparisons of interest may still be invalidated even when the invariance model appears to fit the data.
Narrations, speech acts, words: a method for the study of episodes in verbal manifestations
Dr David Maldavsky (UCES, Buenos Aires)
Verbal manifestations (journalistic notes, presidential discourses, interviews, literary texts, or colloquial exchanges) can be considered as narrated or enacted episodes. In each episode it is possible to detect specific characters, their roles, traits, states and transformations, as well as specific values, scenes of action, and so on.
The seminar aims to describe the David Liberman algorithm (DLA), a set of instruments for studying these features on three levels of analysis: narrations, speech acts and words. For the analysis of narrations and speech acts the DLA provides different grids, and for words a computerized dictionary. The description of the instruments will be accompanied by examples of their application to a sample, with the corresponding procedures.
Measuring the Value of Intangible Cultural Heritage
Dr Jian Jin, Hebei University, China (Visitor in Department of Methodology, LSE)
Intangibility, a multifaceted nature and inseparability present major challenges for measuring the value of Intangible Cultural Heritage (ICH). Dr Jian Jin's group has been researching methods for evaluating ICH. For profitable ICH the group used an Income Valuation Method to measure the economic value of Liulingzui Brewing Technology, a provincial ICH with a history dating back over two centuries. The group is evaluating two methods, the Compensation Method and the Willingness-To-Pay Method, in the context of Taihuazhuo, an endangered and non-profit ICH that has been transmitted from generation to generation since the Tang Dynasty. For the social value of ICH, Dr Jian Jin is applying a Weighted Multi-criteria Analysis and a People's Subjective Wellbeing Method.
The Representation of Islam in the UK Press, 1998-2009
Professor Tony McEnery, Lancaster University
In this talk I will present work, funded by the ESRC and undertaken at Lancaster University, looking at how Muslims and Islam are constructed in the UK national press. The study, using the techniques of corpus linguistics, is a computer aided discourse analysis of every article mentioning Muslims and Islam in an eleven year period. The resultant analysis, based on over 140 million words, reveals patterns and trends which provide an unprecedented insight into how Muslims are talked about in the UK press. While some good reporting practices are found, there are also many practices that I view as problematic with key issues, such as the wearing of the veil, leading to divisive debate and the creation of negative representations of Muslim women in particular. Yet, as the talk will show, apparently innocuous choices in spelling by the press can also prove to be contentious and revealing.
As well as presenting the findings of this study, the talk will also prove to be a useful introduction to how and why researchers may wish to use the corpus approach in research with a textual or linguistic dimension.
Policy Representation, Social Representation and Class Voting in Britain
Dr Oliver Heath (Royal Holloway, University of London)
Why does the strength of class voting vary over time? Recent research has emphasized factors to do with the structure of political choice at the party level. This article examines different aspects of this choice, and investigates whether voters are more likely to respond to the social cues or political cues that parties send voters. The results from the British context suggest that the former are more important than the latter. The central implication of this finding is that social representation matters, and that the social background of political representatives influences the ways in which voters relate to political parties.
Community Fictions of Domestic Violence: Participatory Video Drama Research in Cambodia
Dr Katherine Brickell
Participatory action research (PAR) has become an important research approach in human geography and other social science subjects. The talk takes the internationally significant problem of domestic violence, and considers how one PAR method - participatory video drama (PVD) – can be used as a means to research community experiences and understanding of this human rights violation. To do this, the paper draws on four workshops held in rural and urban Cambodia during July and August 2012. These were conducted as part of a 3 year multi-method study funded by the ESRC and UK Department for International Development (DFID) on the hiatus that exists in the country between legal reform and domestic violence alleviation.
Causal Analysis, Instrumental Variables, Structural Mean Models, and the Generalized Method of Moments
Dr Paul Clarke, University of Bristol
Instrumental variable (IV) analysis is a widely used and sometimes controversial technique for obtaining causal inferences from imperfect experiments or observational studies. In this talk, I discuss the bringing together of two techniques from econometrics and biostatistics to form a flexible framework for causal analysis. Starting with the traditional use of IVs with 2-stage least squares (2SLS), I go on to discuss how an alternative approach, structural mean models (SMMs), relaxes some of the assumptions made by classical IV models. I then go on to show how SMMs can be estimated using an extension of 2SLS called the generalized method of moments (GMM), and how this solves a long-standing problem in econometrics of IV estimation for nonlinear regression for binary outcomes. To showcase this technique, I will present an application based on genetic IVs in which we exploit the full potential of GMM to allow multiple IVs and, finally, discuss on-going work in which GMM can be used to estimate complex mediation models using 'pathway' SMMs.
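In the just-identified single-instrument case, 2SLS coincides with GMM based on the moment condition E[z(y − xβ)] = 0, and the estimator reduces to a simple ratio of covariances (the Wald form). A short simulation can illustrate why it recovers the causal effect where OLS does not; the data-generating values below are illustrative assumptions, not taken from the talk:

```python
import numpy as np

# Illustrative sketch of single-instrument IV estimation.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)       # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)       # true causal effect of x is 2

# OLS is biased upward because x is correlated with u ...
b_ols = np.cov(x, y)[0, 1] / np.var(x)
# ... while the IV (Wald) estimator solves the sample analogue of
# the GMM moment condition E[z(y - x*b)] = 0.
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(round(b_ols, 2), round(b_iv, 2))
```

With multiple instruments or nonlinear (e.g. binary-outcome) models this ratio no longer suffices, which is where the GMM machinery discussed in the talk comes in.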
A cross-national measure of electoral competitiveness
Dr Rene Lindstaedt
Electoral competitiveness is a key explanatory construct across a broad swath of phenomena, finding application in diverse areas related to political incentives and behavior. Elections, regulation, governmental responsiveness, international conflict, redistribution and economic governance all feature among a long list of outcomes associated with electoral competitiveness in the literature. Although it is undoubtedly an important determinant of social outcomes, it is also one of the least well measured. Despite its frequent theoretical use, no valid measure of electoral competitiveness exists that applies across different electoral and party systems. The measures that are most frequently proposed - vote or seat share margins between the two largest parties - are largely a function of the number of political parties in a system and hence, the electoral system. We propose a cross-nationally applicable definition of electoral competitiveness that is independent of electoral and party systems - the probability of the executive's party losing its seat plurality in the lower house of the legislature - and develop a method for measuring it. An application to cross-national differences in real price levels illustrates its utility.
Crowd-Sourced Data Coding for the Social Sciences: Massive Non-expert Coding of Political Texts
Professor Kenneth Benoit, LSE
(with Drew Conway, Michael Laver and Slava Mikhaylov)
A large part of empirical social science relies heavily on data that are not observed in the field, but are generated by researchers sitting at their desks, raising obvious issues of both reliability and validity. This paper addresses these issues for a widely used type of coded data, derived from the content analysis of political text. Comparing estimates derived from multiple “expert” and crowd-sourced codings of the same texts, as well as other independent estimates of the same latent quantities, we investigate whether we can analyze political text in a reliable and valid way using the cheap and scalable method of crowd sourcing. Our results show that, contrary to naive preconceptions and reflecting concerns often swept under the carpet, a set of expert coders is also a crowd. We find that deploying a crowd of non-expert coders on the same texts, with careful specification and design to address issues of coder quality, offers the prospect of cheap, scalable and replicable human text coding. Even as computational text analysis becomes more effective, human coding will always be needed, both to validate and interpret computational results and to calibrate supervised methods. While our specific findings here concern text coding, they have implications for all expert coded data in the social sciences.
Bad Pharma (public lecture)
Dr Ben Goldacre
Exploring The Extraordinary Relations Between People and Place: Methods, Measurements and Mobilities.
Dr Jon Anderson, University of Cardiff
As geographers Holloway & Hubbard suggest, "we cannot study people and place independently of each other" (2001:7). This talk examines the ways in which researchers can study the world as if these relations mattered. To this end it will outline the ways in which methods can go mobile to prompt emotions and knowledges about people-in-places. It will then demonstrate how these approaches can be supplemented by new experiments with biosensor technology that can begin to measure the emotions and sensory stresses prompted by the places around us. The paper will put forward a 'polylogic' approach to method which can help us to take seriously the relations between people and place in our research projects.
Multiple Imputation of Covariates Accommodating the Substantive Model
Dr Jonathan Bartlett, London School of Hygiene and Tropical Medicine
Missing covariate data commonly occur in observational and experimental research, and are often dealt with using multiple imputation (MI). Imputation of partially observed covariates is complicated if the substantive model is non-linear (e.g. Cox proportional hazards model), or contains non-linear (e.g. squared) or interaction terms, and standard software implementations of MI may impute covariates from models that are incompatible with such substantive models. We show how imputation by fully conditional specification, a popular approach for performing MI, can be modified so that covariates are imputed from models which are compatible with the substantive model. We investigate through simulation the performance of this proposal, and compare it to existing approaches. Simulation results suggest our proposal gives consistent estimates for a range of common substantive models, including models which contain non-linear covariate effects or interactions, provided data are missing at random and the assumed imputation models are correctly specified and mutually compatible. The methods are illustrated using data from the Alzheimer's Disease Neuroimaging Initiative.
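The core idea of a substantive-model-compatible imputation step can be sketched as rejection sampling: propose a value for the missing covariate from a model for its marginal distribution, then accept it with probability proportional to the likelihood of the observed outcome under the substantive model. The quadratic substantive model and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hedged sketch: imputing a missing covariate x compatibly with a
# substantive model that contains a squared term,
#   y = b1*x + b2*x^2 + N(0, s2).
# A default linear imputation of x given y would be incompatible
# with this model; rejection sampling respects it.
rng = np.random.default_rng(2)
b1, b2, s2 = 1.0, 0.5, 1.0   # assumed substantive-model parameters

def impute_x(y_obs):
    """Rejection-sample one imputed x compatible with the quadratic model."""
    while True:
        x_cand = rng.normal(0.0, 1.0)               # proposal from x's marginal
        mean_y = b1 * x_cand + b2 * x_cand ** 2     # substantive-model mean of y
        # Accept with probability proportional to the outcome likelihood.
        accept_prob = np.exp(-0.5 * (y_obs - mean_y) ** 2 / s2)
        if rng.uniform() < accept_prob:
            return x_cand

x_imp = impute_x(y_obs=2.0)
print(x_imp)
```

In practice this step would be embedded in a fully conditional specification loop, with the model parameters themselves drawn from their posterior at each iteration rather than fixed as here.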
Conceptualising and Measuring Democracy: A New Approach
Professor John Gerring, Boston University
Extant democracy indices are problematic in several respects: (a) they focus primarily on the "liberal" (Madisonian) or "electoral" (Schumpeterian) dimensions of democracy; (b) they are comprised of components that are not always truly independent of each other; (c) they are rarely global and historical in their coverage; (d) they are not always transparent in design and replicable; (e) they do not address potential problems of measurement error; and (f) they cannot make fine distinctions across polities or through time in a reliable fashion.
The Varieties of Democracy (V-Dem) project undertakes a new approach to conceptualizing and measuring democracy which may be summarized as (a) multidimensional, (b) historical, (c) disaggregated, and (d) attentive to sources of error. The combination of these four attributes makes the V-Dem Database unique among democracy indices.
In this talk, I will introduce V-Dem and then lay out a new approach to conceptualization and measurement that we plan to apply to V-Dem indicators once data collection is complete (though it is only one of several approaches the V-Dem project will utilize). This approach is dubbed a "lexical scale" (following Rawls) because it privileges deductive considerations in its construction. It is viable in situations where a concept can be meaningfully operationalized according to binary attributes arrayed in an ordinal scale.
The challenge of analysing web surveys: preliminary findings from the BBC's Great British Class Survey
Speaker: Professor Mike Savage
The social scientific analysis of social class is attracting renewed interest given the accentuation of economic and social inequalities throughout the world. This paper analyses the largest survey of social class ever conducted in the UK, the BBC’s 2011 Great British Class Survey, with 161,000 web respondents as well as a nationally representative sample survey, which includes unusually detailed questions on social, cultural and economic capital. A particular methodological focus is how to deal with the serious sample skew of the web survey, as well as seeking measures of economic, social and cultural capital. Using multiple correspondence analysis and latent class analysis, we demonstrate the existence of an elite, whose wealth separates them from an established middle class, as well as a class of technical experts and a class of ‘new affluent workers'. We also show that at the lower levels of the class structure, alongside an ageing traditional working class, there is a ‘precariat’ characterised by very low levels of capital, and a group of more emergent service workers. We think that this new seven-class model recognises both social polarisation in British society and class fragmentation in its middle layers, and will attract enormous interest from a wide social scientific community in offering an up-to-date multidimensional model of social class.
Electoral Spillovers and Referendum Timing
Speaker: Lukas Schmid
This paper explores the consequences of highly mobilizing referendums on concurrent less mobilizing referendums. We use detailed individual and aggregate voting data on referendums in Switzerland to examine the impact of electoral spillovers on status quo bias and government support. Furthermore, we test whether the government exploits its agenda-setting power to influence turnout. We find that high turnout leads to a considerable drop in support for status quo change and government support and present weak evidence that the government uses this relationship to schedule referendums accordingly.
Lukas Schmid is a doctoral candidate at the University of St Gallen, Switzerland
Good Pain, Bad Pain
Speaker: Jen Tarr
Pain is a difficult phenomenon to characterize and measure (Melzack and Wall, 1982). While pain is formally defined as unpleasant (International Association for the Study of Pain, 1979), amongst athletes and dancers certain types of pain are framed as positive (Roessler, 2005). However, little research has examined how ‘good pain’ is distinguished from ‘bad (injury) pain’ in everyday experience. With dancers in particular, the majority of injuries are caused by overuse rather than acute trauma, meaning that they often suffer from various types of pain prior to full-blown injury. Recognising some pains as 'good' and others as 'bad' has significant consequences for their understandings of injury and whether and how to treat it. The fact that many continue to work through the early warning signs of injury (Laws, 2002) is a key factor in dancers’ relatively short careers, which, like those of athletes, tend to end by their mid-thirties.
Our research (Thomas and Tarr, 2005–7) focused on the cultural contexts of dance injury, based on interviews with 205 dancers, dance students, and related professionals. In this seminar, I use a thematic analysis of the interview data to examine dancers' definitions of good pain, bad pain, and the difference between them. Virtually all participants continued to work through pain and defined certain types as healthy or positive, depending on their context, strength, and qualities. However, there was no single clear distinction between good pain and bad pain, nor was the distinction learned through formal teaching. Rather, the line shifted and was mediated by professional context, years of dance experience, and previous injuries. Understanding how these distinctions are learned and how they are enacted in practice is a key element not only in preventing injuries but also in understanding the range of ways in which pain and injury may be interpreted socioculturally (Aldrich and Eccleston, 2000).
Qualitative Inquiry in Everyday Life
Speaker: Svend Brinkmann
In my talk, I wish to present and advocate one specific way of doing and thinking about qualitative inquiry, which I call "qualitative inquiry in everyday life". The presentation is based on a recent book manuscript of the same title (forthcoming July 2012, Sage Publications), which is framed as a "survival guide" for students and researchers who would like to conduct a qualitative study with limited resources. The way forward is to use materials from everyday life that we are occupied with anyway – such as books, television, the internet, the media, and everyday conversations and interactions – to understand larger social issues. As living human beings in cultural worlds, we are constantly surrounded by "data" that call for analysis, and as we cope with the different situations and episodes of our lives, we are engaged in understanding and interpreting the world as a form of qualitative inquiry. With this way of working, rigour is not to be found in careful research designs or pre-specified methodological steps (since the researcher is always already in "the thick of things"), but in a disciplined and analytic awareness, and in abductive forms of reasoning, informed by theory.
Svend Brinkmann is Professor of Psychology in the Department of Communication and Psychology at the University of Aalborg, Denmark, where he serves as co-director of the Center for Qualitative Studies. His research is particularly concerned with philosophical, moral, and methodological issues in psychology and other human and social sciences. He is the editor of a newly founded journal, Qualitative Studies, and author and co-author of numerous articles and books, including InterViews: Learning the Craft of Qualitative Research Interviewing (now in its second edition).
Item response theory (IRT) models for roll-call voting data provide political scientists with parsimonious descriptions of political actors' relative preferences. However, models using only voting data tend to obscure variation in preferences across different issues, due to identification and labelling problems that arise in multidimensional scaling models. Latent Dirichlet Allocation (LDA) models are an increasingly applied approach that uses relative word frequencies to estimate the degree to which each text in a corpus discusses a set of issues. However, while models based on relative word frequencies are powerful for discovering which issues are being discussed in which texts, they have proven less useful for discovering variation in political positions within corpora that cover a range of issues. We combine these two models into a new model for discovering preference variation within issues, using voting data augmented with texts describing each vote. We demonstrate our approach using data from the US Supreme Court.
Qualitative Validation of Quantitative Text Scaling
Statistical methods for scaling latent traits from political texts have received widespread attention in political science, typically for measuring the left-right policy positions of political actors. Validation and interpretation of these estimates typically involve a combination of a priori identification of the dimensions present in the texts examined, external comparison to independent data, and basic reasonability standards to establish face validity. In this paper, we apply a new benchmark for validating scaling estimates: qualitative human readings of the texts. Our validation compares human-interpreted differences to statistical point estimates, as well as human perceptions of differences to statistically derived confidence intervals. For testing, we draw on texts from a budget debate held in Ireland in late 2009, which implemented a historically unprecedented level of austerity measures, represented by 14 speeches made in the Irish Dáil by key spokespersons from all of the major parties. We compare the human positioning of the texts to that of the "unsupervised" unidimensional Poisson scaling model of Slapin and Proksch (2008). We also compare human perceptions of difference to statistical conclusions reached by different approaches to computing confidence intervals from the text scaling model, including non-parametric bootstrapping of the texts. Our results confirm the basic validity of the statistical estimates, and suggest that the most appropriate way to measure error is non-parametric bootstrapping of the textual data rather than confidence intervals that depend on unrealistic parametric assumptions of the model.
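The non-parametric bootstrap described in the abstract can be sketched in miniature: treat a speech as a bag of tokens, resample the tokens with replacement, and recompute the document's position each time. The position function and word weights below are illustrative stand-ins for a fitted scaling model (they are not the Slapin–Proksch model itself, whose word parameters would come from estimation).

```python
import numpy as np

rng = np.random.default_rng(42)

def position_score(counts, word_weights):
    # Toy position estimate: weighted mean of word weights.
    # 'word_weights' stands in for estimated word discrimination
    # parameters from a real scaling model (illustrative assumption).
    return float(counts @ word_weights / counts.sum())

def bootstrap_ci(counts, word_weights, n_boot=2000, alpha=0.05):
    # Non-parametric bootstrap of a text: resample its tokens with
    # replacement (multinomial over observed word frequencies) and
    # recompute the position for each replicate.
    n_tokens = int(counts.sum())
    probs = counts / n_tokens
    scores = [
        position_score(rng.multinomial(n_tokens, probs), word_weights)
        for _ in range(n_boot)
    ]
    return tuple(np.quantile(scores, [alpha / 2, 1 - alpha / 2]))

# Example: a short 'speech' as counts over a 5-word vocabulary
counts = np.array([40.0, 25.0, 10.0, 15.0, 10.0])
weights = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
point = position_score(counts, weights)
lo, hi = bootstrap_ci(counts, weights)
```

The width of `(lo, hi)` reflects only the sampling variability of the text itself, which is the sense in which the bootstrap avoids the parametric assumptions of model-based intervals.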
Group means as explanatory variables in multilevel models
Jouni Kuha (Joint work with Anders Skrondal and Stephen Fisher)
Research questions for models for clustered data often concern the effects of cluster-level averages of individual-level variables. For example, data from a social survey might characterise neighbourhoods in terms of the average income, ethnic composition, etc. of people within each neighbourhood. Unless the true values of such averages are known from some other source, they are typically estimated by within-cluster sample means, using data on the subjects observed in each cluster. This incurs a measurement error bias when the estimates are used as explanatory variables in subsequent modelling, even if the individual observations are measured without error. The measurement error variance can, however, be estimated from within-cluster variation, using knowledge of the sampling design within each cluster, and relatively standard measurement error methods can then be applied to adjust for the error. This talk considers such estimation for multilevel models (generalised linear mixed models).
The methods are illustrated with models for political attitudes and behaviour, using data from the 2010 British Election Study.
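The attenuation bias described above, and its correction using the within-cluster variance, can be illustrated with a small simulation. All numbers here are hypothetical, and the single-predictor reliability correction is a simplified stand-in for the full multilevel adjustment discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: J clusters, n subjects sampled per cluster, and a
# cluster-level outcome that depends on the TRUE cluster mean of x
# with slope beta = 2.
J, n, beta = 500, 10, 2.0
mu = rng.normal(0.0, 1.0, J)                    # true cluster means
x = mu[:, None] + rng.normal(0.0, 2.0, (J, n))  # individual-level x
y = beta * mu + rng.normal(0.0, 0.5, J)         # cluster-level outcome

# The within-cluster sample mean is an error-prone proxy for mu
xbar = x.mean(axis=1)

# Naive regression of y on xbar: attenuated toward zero
beta_naive = np.cov(y, xbar)[0, 1] / np.var(xbar, ddof=1)

# Estimate the measurement error variance (sigma_w^2 / n, valid here
# because sampling within clusters is simple random sampling) from
# within-cluster variation, then correct by the reliability ratio.
s2_w = x.var(axis=1, ddof=1).mean()
reliability = 1.0 - (s2_w / n) / np.var(xbar, ddof=1)
beta_corrected = beta_naive / reliability
```

With these numbers the naive slope is pulled well below 2, while dividing by the estimated reliability recovers a value close to the true coefficient, mirroring the logic of the adjustment the talk develops for generalised linear mixed models.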
"Does Direct Democracy Hurt Immigrant Minorities? Evidence from Naturalization Decisions in Switzerland"
Do minorities fare worse under direct democracy than under representative democracy? We provide new evidence by studying naturalization requests of immigrants in Switzerland, which were typically decided at the municipal level in citizens' assemblies. Using panel data from 1,400 municipalities for the 1990–2010 period, we exploit recent Federal court rulings that led most municipalities to transfer the naturalization decision to an elected municipal council. We show that naturalization rates surged by 50% once legislatures, rather than citizens in popular referenda, decided on local naturalization applications. While citizens face no constraints against voting their prejudice, rejections are more costly for accountable legislators, who are forced to justify potentially arbitrary rejections. Consistent with this mechanism, we find that the increase in naturalization rates caused by switching from direct to representative democracy was much stronger in areas where voters held stronger anti-immigrant preferences and among more marginalized immigrant groups from Yugoslavia and Turkey. Taken together, our results suggest that direct democracy should no longer be used for naturalization decisions, in order to reduce the risk of discriminatory rejections.
Highlights from previous seminars
Patrick Sturgis, University of Southampton. 'The causal effect of schooling on social mobility: findings from a natural experiment'
Francesco Lapenta, Roskilde Universitet. 'Geomedia-based methods: exploring the theoretical and methodological tenets of the localization and visualisation of mediated social relations with direct visualisation techniques'
Arthur Spirling, Harvard University. 'Partisan Convergence in Executive-Legislative Interactions: Modeling Debates in the House of Commons, 1832–1915'
Clive Seale, Centre for Health Sciences, Queen Mary University of London. 'Comparative Keyword Analysis: A Computer-Assisted Method for the Qualitative Analysis of Text'
Uwe Flick, Alice Salomon University, Berlin. 'Triangulation Revisited – Again: Challenges and Perspectives for Qualitative Research in Times of Mixed Methods'
Murray Lee, Sydney Institute of Criminology, University of Sydney School of Law. 'Police Image Work and the Manufacture of Public Confidence'
Jouni Kuha, LSE. 'Exit polls and seat predictions: Experiences from the 2010 General Election'
Nina Wakeford, Reader in Sociology and ESRC Research Fellow, Goldsmiths College, University of London. 'How far can we go? Experiments with visual methods'
Ahmet K. Süerdem, Professor of Business Administration, Istanbul Bilgi University, and visiting scholar, Department of Social Psychology, LSE
Dr Krista Gile, Postdoctoral Prize Research Fellow, Nuffield College, University of Oxford. 'Network Model-Assisted Prevalence Estimation from Respondent-Driven Sampling Data'
Carlos Barahona, Principal Statistician, Statistical Services Centre, University of Reading. 'Integrating statistical principles and participatory approaches in research: A recipe for getting the best of both worlds or the road to disaster?'
Bernd Beber, Assistant Professor of Politics, New York University
Raymond Duch, Professorial Fellow, Nuffield College, University of Oxford
Andrew Gelman, Professor of Statistics and Political Science, Columbia University. 'Why we (usually) don't have to worry about multiple comparisons'
Dr Rajesh Shukla, National Centre for Applied Economic Research, Delhi. 'Probability Sampling in India – the National Reading Habits Survey'
Dr Ben Goldacre. 'Bad Science'
Stuart Shulman, University of Pittsburgh, and Dr Matthias Trier, Technical University Berlin. 'Bridging methodologies? From collaborative coding to dynamic network analysis'
Dr Rajesh Shukla, National Centre for Applied Economic Research, Delhi. 'Sampling strategies in India'
Torun Dewan, LSE. 'The impact of individual and collective performance on ministerial tenure'
Nikhil Shah, LSE (former MSc student). 'Rethinking genre and mapping musical taste: a multidimensional scaling analysis of web 2.0 data'
Corinne Squire, University of East London. 'Using narrative methods: The case of HIV research'
Peter Lynn, Institute for Social and Economic Research, University of Essex. 'The effect of interviewer continuity on measurement error in panel surveys'
Patrick Lescure, IMAGE ALCESTE, Lyon. 'The ALCESTE content analysis method and its application in qualitative survey analysis'
Colm O'Muircheartaigh, University of Chicago. 'Design decisions and disciplinary perspectives: the case of the US National Children's Survey'
John Goldthorpe, University of Oxford, and Jouni Kuha, LSE. 'Path Analysis for Discrete Variables: The Role of Education in Social Mobility'
Simon Glendinning, LSE. 'Maurice Merleau-Ponty: Phenomenological Method and the Limits of Science'
Conor Gearty, LSE. 'British Perceptions of National Security, Civil Liberties and Human Rights'
Will Jennings, LSE. 'Measuring Performance in a Noisy World: The Potential Uses of Time Series Intervention Models for Evaluation of Policy and Public Sector Performance'
Ed Fieldhouse, University of Manchester. 'The Effectiveness of Local Party Campaigns in 2005: Combining Evidence from Campaign Spending and Agent Survey Data'
Sean Wallis, University College London. 'Can Computers Make Sense of Sentences? Text mining, problems and challenges'
Andreas Dafinger, London School of Economics. 'What Goes Without Saying – Interdisciplinary Approaches to Descriptive and Normative Spatial Order: Studies in sub-Saharan Africa'
Roger Jowell, City University. 'Measuring Attitudes across Nations and over Time'
Hasok Chang, University College London. 'Who Will Judge the Judges? Challenges in the Validation of Measurement Standards'
Saadi Lahlou, EdF & EHESS Paris. 'Fine-Grained Behavioural Observation in the Long Term: Studying Experimental Reality'
Edmund Chattoe, University of Oxford. 'What is Simulation and What is it Used For?'
Guy Cook, Open University. 'Genetically Modified Language: Investigating the Discourse of a "Public" Debate'
Heather Hamill, University of Oxford. 'Streetwise: An Ethnographic Study of Taxi Drivers in Belfast, N. Ireland and New York, USA'
Roberto Franzosi, University of Reading. 'From Words to Numbers: Narrative, Data and Social Science'
Peter Abell, London School of Economics. 'Narrative Explanation: An Alternative to Variable-Centered Explanation?'
For more details, please contact Dr Sally Stares, S.R.Stares@lse.ac.uk