
Department of Statistics
Columbia House
London School of Economics
Houghton Street



Past Events 2012-13

Statistics Seminar Series 2012-13

The Department of Statistics hosts seminars throughout the year. Seminars take place on Friday afternoons at 2pm, unless otherwise stated, in the Leverhulme Library (COL 6.15). All are very welcome to attend and refreshments are provided. Please contact Events for further information about any of these seminars.

22 March 2013

Philip Preuss (Ruhr-Universitaet Bochum)

 Detection of multiple structural breaks in multivariate time series

Abstract: We propose an integrative nonparametric procedure for the detection and estimation of multiple structural breaks in the autocovariance function of a multivariate (second-order) piecewise stationary process, which also identifies the components of the series where the breaks occur. The new method is based on a comparison of the estimated spectral distribution on different segments of the observed time series and consists of three steps: it starts with a consistent bootstrap test, which allows one to establish the existence of structural breaks at a controlled type I error rate. Secondly, it estimates a set of possible break points, and finally this set is reduced to identify the relevant structural breaks and the corresponding components which are responsible for these breaks. In contrast to other methods proposed in the literature, our approach is not designed specifically for detecting a single change point; it addresses the problem of multiple structural breaks in the autocovariance function directly, without using the binary segmentation algorithm.
We prove that the new procedure detects all components and the corresponding locations where structural breaks occur with probability converging to one as the sample size increases and provide a data-driven rule for the selection of regularization parameters. The results are illustrated by analyzing data sets containing financial returns, and in a simulation study it is demonstrated that the new procedure outperforms the currently available methods for detecting break points in the dependency structure.
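The segment-comparison idea at the heart of the procedure can be illustrated with a toy Python sketch (a deliberately simplified, hypothetical version, not the authors' method: one component, a fixed split point, and no bootstrap calibration). It compares the normalised integrated periodograms of the two halves of a series; a break in the dependence structure shows up as a large sup-distance:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1024

def spectral_cdf(seg):
    """Empirical spectral distribution: normalised integrated periodogram."""
    p = np.abs(np.fft.rfft(seg)) ** 2 / len(seg)
    return np.cumsum(p) / p.sum()

def gen_ar1(phi, m, rng):
    """Simulate an AR(1) path started at zero."""
    x = np.zeros(m)
    e = rng.normal(size=m)
    for i in range(1, m):
        x[i] = phi * x[i - 1] + e[i]
    return x

# Series with a break in the dependence structure at n/2: white noise, then AR(1)
x_break = np.concatenate([rng.normal(size=n // 2), gen_ar1(0.7, n // 2, rng)])
# Homogeneous white-noise series for comparison
x_none = rng.normal(size=n)

def sup_distance(x):
    """Sup-distance between the spectral distributions of the two halves."""
    h = len(x) // 2
    return np.max(np.abs(spectral_cdf(x[:h]) - spectral_cdf(x[h:])))

print(sup_distance(x_break), sup_distance(x_none))
```

In the authors' procedure the critical value for such a distance is calibrated by a consistent bootstrap test rather than fixed by hand.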

15 March 2013

Fiona Steele (Bristol University)

Title: Adjusting for Selection Bias in Longitudinal Analyses of the Relationship between Employment Transitions and Mental Health.

Abstract: There is substantial interest in understanding the association between labour force participation and mental health, and in particular the impact of unemployment on wellbeing. While panel data allow detailed examination of the dynamics of the relationship between changes in labour force participation and mental health, selection bias remains a serious concern. We test for two types of selection effect: (i) direct selection (where prior health affects employment status), and (ii) indirect selection (due to unmeasured characteristics influencing both health and employment outcomes). We then examine the impact of adjusting for selection biases on estimates of the effect of employment transitions on mental health. We investigate the relationship between men's employment transitions and mental health using data from the British Household Panel Survey, 1991-2009. We model the effect of a change in employment status between years t-1 and t on mental health at t, adjusting for mental health at t-1. Using a dynamic simultaneous equations model we allow explicitly for an effect of health at t-1 on employment transitions between t-1 and t to allow for direct selection. The health and employment equations include individual-specific random effects which are correlated across equations to allow for indirect selection due to shared unmeasured influences.

8 February 2013

Sanjay Chaudhuri (National University of Singapore)

A Conditional Empirical Likelihood Approach to Combine Sampling Design and Population Level Information

Abstract: Inclusion of available population level information in statistical modelling is known to produce more accurate estimates than those obtained only from random samples. However, a fully parametric model which incorporates both sources of information may be computationally challenging to handle. Empirical likelihood based methods can be used to combine these two kinds of information and estimate the model parameters in a computationally efficient way. In this article we consider methods to include sampling weights in an empirical likelihood based estimation procedure to augment population level information in sample-based statistical modelling. Our estimator uses conditional weights and is able to incorporate covariate information both through the weights and the usual estimating equations. We show that, under usual assumptions, with population size increasing unboundedly, the estimates are strongly consistent, asymptotically unbiased and normally distributed. Moreover, they are more efficient than other probability weighted analogues. Our framework provides additional justification for inverse probability weighted score estimators in terms of conditional empirical likelihood. We give an application to demographic hazard modelling by combining birth registration data with panel survey data to estimate annual first birth probabilities.

This work is joint with Mark Handcock, Department of Statistics, University of California, Los Angeles, USA and Michael Rendall, Department of Sociology, University of Maryland, College Park, USA.

1 February 2013

Vassilis Vasdekis (Athens University of Economics and Business)

Title: Composite likelihood estimation methods for a class of latent variable models.

Abstract: In many applications multivariate longitudinal data are collected and analysed for the purpose of measuring changes to observations or constructs over time such as attitudes, opinions, performance or ability. Latent variable models can provide construct measurements and explain interrelationships between observed variables by focusing analysis on multivariate observed categorical and continuous outcomes. Maximization of the data likelihood is computationally cumbersome for latent variable models since it requires calculation of multiple integrals. Composite likelihood methods are pseudo-likelihood methods based on low dimensional marginal densities. They provide consistent estimates of model parameters and in many situations they reduce estimation complexity. Therefore, they serve as a feasible alternative to full information maximum likelihood methods. A new estimator based on weighting composite likelihood estimators is suggested. The approach is seen to provide greater efficiency than its unweighted counterpart. A data example is also presented for illustrating the technique.
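As a minimal, hypothetical illustration of the composite (pairwise) likelihood idea discussed above, the following sketch estimates a common correlation of an exchangeable multivariate normal by maximising a sum of bivariate marginal log-likelihoods rather than the full joint likelihood; the data-generating model and the grid search are illustrative choices, not the speaker's latent variable model:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
d, n, rho_true = 5, 800, 0.5

# Exchangeable multivariate normal: unit variances, common correlation rho_true
cov = np.full((d, d), rho_true) + (1 - rho_true) * np.eye(d)
y = rng.multivariate_normal(np.zeros(d), cov, size=n)

def pair_loglik(rho, a, b):
    """Bivariate normal log-likelihood (zero mean, unit variances, corr rho),
    up to an additive constant."""
    q = (a**2 - 2 * rho * a * b + b**2) / (1 - rho**2)
    return -0.5 * (np.log(1 - rho**2) + q).sum()

def composite_loglik(rho):
    """Sum of bivariate marginal log-likelihoods over all pairs of components."""
    return sum(pair_loglik(rho, y[:, i], y[:, j])
               for i, j in combinations(range(d), 2))

# Maximise over a grid (a sketch; a real implementation would use an optimizer)
grid = np.linspace(-0.9, 0.9, 361)
rho_hat = grid[np.argmax([composite_loglik(r) for r in grid])]
print(rho_hat)
```

Only two-dimensional integrals (here, closed-form bivariate densities) are ever evaluated, which is the source of the computational savings the abstract mentions.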

25 January 2013

David van Dyk (Imperial College)

Title: Causal Inference in Observational Studies with Non-Binary Treatments

Abstract: Propensity score methods have become part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. Such extensions have widened the applicability of propensity score methods and are becoming increasingly popular themselves. In this talk, I examine the two main generalizations of propensity score methods, namely, the generalized propensity score (GPS) of Hirano and Imbens (2004) and the propensity function of Imai and van Dyk (2004). We compare the assumptions, theoretical properties, and empirical performance of these two alternative methodologies. On a theoretical level, the GPS is advantageous in that it can be used to estimate the full dose response function rather than the simple average treatment effect that is typically estimated with the propensity function. Unfortunately, our analysis shows that in practice the response model used with the GPS is less flexible than those typically used with propensity score methods and is prone to misspecification. We propose methods to improve the robustness of the GPS to potential model misspecification and the flexibility of the propensity function in estimating the dose response function. We illustrate our findings and proposals through simulation studies, including one based on an empirical application.

This is joint work with Shandong Zhao and Kosuke Imai.
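For orientation, a minimal sketch of the standard GPS workflow of Hirano and Imbens (2004) is given below (fit a treatment model, fit an outcome model that includes the GPS, then average to estimate the dose-response function); the simulated data, the normal linear treatment model, and the quadratic response surface are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
t = 0.5 * x + rng.normal(size=n)             # continuous treatment
y = t + 0.3 * t**2 + x + rng.normal(size=n)  # outcome

# Step 1: model the treatment given covariates (here a normal linear model)
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, t, rcond=None)
sigma2 = (t - X @ beta).var()

def gps(t_val, x_val):
    """Estimated conditional density of T at t_val given X = x_val."""
    mu = beta[0] + beta[1] * x_val
    return np.exp(-(t_val - mu) ** 2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)

# Step 2: regress the outcome on treatment and GPS (quadratic response surface)
r = gps(t, x)
D = np.column_stack([np.ones(n), t, t**2, r, r**2, t * r])
gamma, *_ = np.linalg.lstsq(D, y, rcond=None)

# Step 3: dose-response at treatment level t0, averaging over the sample's GPS
def dose_response(t0):
    r0 = gps(t0, x)
    D0 = np.column_stack([np.ones(n), np.full(n, t0), np.full(n, t0**2),
                          r0, r0**2, t0 * r0])
    return (D0 @ gamma).mean()

print(dose_response(0.0), dose_response(1.0))
```

The flexibility concern raised in the abstract attaches to the response surface in step 2, which is easy to misspecify and which the talk's proposals aim to make more robust.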

14th December 2012

Órlaith Burke (Oxford University)

Seasonal behaviour of indoor radon concentrations with an extension to time-series cross-sectional models.

30 November 2012

Bianca De Stavola (London School of Hygiene and Tropical Medicine) 

Mediation by path analysis or causal inference: What is the difference?

16 November 2012 

Michael R. Elliott (University of Michigan)
Causal Assessments of Surrogate Markers when Markers and Outcomes are Multivariate Normal

2 November 2012

Leonardo Bottolo (Imperial College)
Improving AR-GARCH prediction

19 October 2012

Siegfried Hormann (Université libre de Bruxelles)

Dynamic functional principal components

Joint Statistics and Econometrics Seminar Series 2012-13 (Lent term)

The Departments of Statistics and Economics jointly organize these workshops throughout the year. 

During Michaelmas term, they take place on Friday mornings at 12pm in NAB 2.16. In Lent term they are held in the Leverhulme Library (COL 6.15). All are very welcome to attend and refreshments are provided.

For information regarding the Michaelmas term series see here. Please contact Dr. Marcia Schafgans and Dr. Matteo Barigozzi for further information.

15 March 2013

Han Ai (Chinese Academy of Sciences)

Title: Autoregressive Conditional Models for Interval-Valued Time Series Data

Abstract: An interval-valued observation in a time period contains more information than a point-valued observation in the same time period. Examples of interval data include the maximum and minimum temperatures in a day, the maximum and minimum GDP growth rates in a year, the maximum and minimum asset prices in a trading day, the bid and ask prices in a trading period, long term and short term interest rates, and the 90%-tile and 10%-tile incomes of a cohort in a year. Interval forecasts may be of direct interest in practice, as they contain information on the range of variation and the level or trend of economic processes. Moreover, the informational advantage of interval data can be exploited for more efficient econometric estimation and inference. We propose a new class of autoregressive conditional interval (ACI) models for interval-valued time series data. A minimum distance estimation method is proposed to estimate the parameters of an ACI model, and the consistency, asymptotic normality and asymptotic efficiency of the proposed estimator are established. It is shown that a two-stage minimum distance estimator is asymptotically most efficient among a class of minimum distance estimators, and it achieves the Cramér-Rao lower bound when the left and right bounds of the interval innovation process follow a bivariate normal distribution. Simulation studies show that the two-stage minimum distance estimator outperforms conditional least squares estimators based on the ranges and/or midpoints of the interval sample, as well as the conditional quasi-maximum likelihood estimator based on the bivariate left and right bound information of the interval sample. In an empirical study on asset pricing, we document that when return interval data are used, some bond market factors, particularly the default risk factor, are significant in explaining excess stock returns, even after the stock market factors are controlled for in regressions. This differs from previous findings in the literature (e.g., Fama and French (1993)).

8 March 2013

Raffaella Giacomini (University College London)

Title: Forecasting with judgment

Abstract: The paper seeks to answer the following questions: How should judgement be defined? How can it be incorporated into existing model-based forecasts in a rigorous way? How do we know whether and when incorporating judgement gives more accurate forecasts? We broadly define judgement as a set of moment conditions involving a subset of the variables in a benchmark model, but specialize the discussion to two empirically relevant types of judgement: 1) mean and/or variance forecasts based on survey data; 2) (nonlinear) restrictions based on economic theory - such as Euler equations or Taylor rules - imposed on forecasts from atheoretical models. We propose incorporating judgement into existing forecasts from a benchmark model using exponential tilting. We provide theoretical results that help establish whether and when incorporating judgement improves forecast accuracy, and illustrate the usefulness of the method for anchoring yield curve forecasts.
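A minimal numerical sketch of the exponential-tilting step, for the simplest type of judgement (a survey-based mean), could look as follows; the forecast draws, the target moment, and the bisection solver are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
draws = rng.normal(loc=2.0, scale=1.0, size=5000)  # model-based forecast draws
target_mean = 2.5                                  # judgement: survey-based mean

def tilted_mean(lam):
    """Mean of the draws under exponential-tilting weights w_i ∝ exp(lam*y_i)."""
    w = np.exp(lam * (draws - draws.mean()))  # centre for numerical stability
    w /= w.sum()
    return w @ draws

# tilted_mean is increasing in lam, so solve tilted_mean(lam) = target_mean
# by bisection over a bracketing interval
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if tilted_mean(mid) < target_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

weights = np.exp(lam * (draws - draws.mean()))
weights /= weights.sum()
print(weights @ draws)
```

The tilted weights are the minimal (in Kullback-Leibler divergence) distortion of the benchmark predictive distribution that satisfies the judgemental moment condition, which is what makes the tilting step "rigorous" in the sense of the abstract.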

1 March 2013

Christian Brownlees (Universitat Pompeu Fabra, Barcelona)

NETS: Network Estimation for Time Series

22 February 2013

Howell Tong (LSE)

Title: On conditionally heteroscedastic AR models with thresholds.

15 February 2013

Christian Francq (Université Lille 3)

Risk-parameter estimation in volatility models

 18 January 2013

Petyo Bonev (University of Mannheim)

Title: Nonparametric Duration IV Methods.

Abstract: Dynamic selection and endogenous noncompliance hamper the evaluation of treatment effects when the outcome of interest is a duration variable. Existing methods either restrict their analysis to settings where only one of those two problems exists, or adopt a parametric or semi-parametric structure. In this paper we develop two completely nonparametric instrumental variable approaches for duration data which enable us to identify treatment effects in the presence of both dynamic selection and endogenous noncompliance. We suggest corresponding estimators. Our approaches include numerous existing models as special cases. We suggest simple procedures to test for endogeneity. We apply our estimator to a French policy reform to estimate the effect of a change in the unemployment insurance system on the duration of unemployment.

Risk and Stochastics Seminar Series 2012-13

The Risk and Stochastics Seminar aims to promote communication and discussion of research in the mathematics of insurance and finance and their interface, to encourage interaction between practice and theory in these areas, and to support students academically in related programmes at postgraduate level. All are welcome to attend. Sessions run regularly during LSE terms and take place on Thursdays at 5.00pm (unless otherwise stated below) in COL 6.15.

The current up-to-date schedule is given below. Please contact Events for further information about any of these seminars.

30th May 2013

Mikhail Urusov (University of Duisburg-Essen) 16.00-17.00, OLD 3.28

Title:  On the boundary behaviour of diffusions and the martingale property of the associated local martingales

Abstract: Link to PDF

15th May 2013

Michael Schroeder (TBA) 15.00-16.30, OLD 1.29

Title: Mechanisms for no-arbitrage term-structure modelling with applications to interest-rates and realized-variance.

Abstract: Suppose that the sentiment is changing in some financial market, or that conditions have changed recently. Examples include volatility levels which are expected to change, or interest-rates  expected to be adjusted.

How do we quantify the effects of these changes on derivatives positions? We will discuss mechanisms for the construction of `no-arbitrage' term structures which enable this; these retain tractability in valuing derivatives and comply with stylized facts like mean-reversion and positivity of rates. This will be illustrated in a paradigm valuation of typical fixed-income derivatives.

4th October 2012

Mingyu Xu (Chinese Academy of Sciences) 16.00-17.00

Title: BSDE with a ratio constraint and its application

Abstract: Non-linear backward stochastic differential equations (BSDEs for short) were first studied by Pardoux and Peng (1990), who proved the existence and uniqueness of the adapted solution under square integrability assumptions on the coefficient and the terminal condition, and when the coefficient $g(t,\omega ,y,z)$ is Lipschitz in $(y,z)$ uniformly in $(t,\omega )$. Since then, the theory of BSDEs has been widely and rapidly developed, and many problems in mathematical finance can be treated as BSDEs. The natural connection between BSDEs and partial differential equations (PDEs) of parabolic and elliptic types is also an important application. In this talk, we study a new development of BSDEs: BSDEs with a ratio constraint, i.e. where the portfolio process is constrained by a function of the wealth process. Existence and uniqueness results are presented, and we give some applications of this kind of BSDE at the end.

20th September 2012

Elisa Alòs (Universitat Pompeu Fabra)

Title: A decomposition formula for option prices in the Heston model and applications to option pricing approximation

Abstract: By means of classical Itô calculus, we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy for short maturities. Numerical examples are given.
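To fix ideas, the leading term of such a decomposition (the Black-Scholes formula evaluated at the root-mean-square future average volatility) can be sketched as below; the variance path v(t) and all parameter values are hypothetical, and a deterministic path is used so the time average is explicit:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Classical Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical deterministic future variance path v(t) on [0, T]
T = 0.5
t_grid = np.linspace(0.0, T, 1001)
v = 0.04 + 0.02 * np.exp(-2.0 * t_grid)

# Root-mean-square future average volatility: sqrt of the time-averaged variance
sigma_rms = sqrt(v.mean())

# Leading term of the decomposition: Black-Scholes at sigma_rms
price0 = bs_call(100.0, 100.0, 0.01, sigma_rms, T)
print(round(price0, 4))
```

In the Heston framework the future average variance is random, so the leading term involves its conditional expectation, and the correlation and vol-of-vol correction terms in the decomposition account for the remainder.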