|
27 April 2012
|
14.00- Arthur Gretton (UCL)
Title: Hypothesis Testing and Bayesian Inference: New Applications of Kernel Methods
Abstract: In the early days of kernel machines research, the "kernel trick" was considered a useful way of constructing nonlinear learning algorithms from linear ones, by applying the linear algorithms to feature space mappings of the original data. More recently, it has become clear that a potentially more far-reaching use of kernels is as a linear way of dealing with higher order statistics, by mapping probabilities to a suitable reproducing kernel Hilbert space (i.e., the feature space is an RKHS).
I will describe how probabilities can be mapped to kernel feature spaces, and how to compute distances between these mappings. A measure of strength of dependence between two random variables follows naturally from this distance. Applications that make use of kernel probability embeddings include:
* Nonparametric two-sample testing and independence testing in complex (high dimensional) domains. In the latter case, we test whether text in English is translated from the French, as opposed to being random extracts on the same topic.
* Inference on graphical models, in cases where the variable interactions are modeled nonparametrically (i.e., when parametric models are impractical or unknown).
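As a rough numerical illustration of the distance between kernel embeddings described above, the maximum mean discrepancy (MMD), the sketch below computes a biased empirical estimate with a Gaussian kernel; the sample data, bandwidth and sample sizes are illustrative assumptions rather than anything taken from the talk.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2_biased(X, Y, bandwidth=1.0):
    """Biased empirical MMD^2: mean k(x,x') + mean k(y,y') - 2 mean k(x,y)."""
    return (gaussian_kernel(X, X, bandwidth).mean()
            + gaussian_kernel(Y, Y, bandwidth).mean()
            - 2 * gaussian_kernel(X, Y, bandwidth).mean())

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))   # sample from P
Y = rng.normal(0.5, 1.0, size=(200, 2))   # sample from Q, with a shifted mean
print(mmd2_biased(X, Y))                  # larger when P and Q differ
```

Recomputing the statistic under random permutations of the pooled sample gives one way of calibrating a nonparametric two-sample test of the kind mentioned in the first application.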
15.15- Xiao-Li Meng (Harvard University)
Title: Statistical Education and Educating Statisticians:
Producing wine connoisseurs and master winemakers
Abstract: The distinction between statistical education and educating statisticians is of particular importance at the pre-graduate school level. In recent years we have taken a broader view of statistical education for Harvard's undergraduates, by shifting the focus from preparing a few to pursue Ph.D. level quantitative studies to helping many gain a basic appreciation of statistical argument and insight, as a part of their liberal arts critical thinking training and experience. Intriguingly, the journey, guided by the philosophy that one can become a wine connoisseur without ever knowing how to make wine, apparently has led us to produce many more future winemakers than when we focused only on producing a vintage. At the Ph.D. level, our focus has always been to produce the best winemakers, to take the wine analogy further, but true expert winemakers need to master far more than merely the chemical process of fermenting juice into alcohol, especially with ever-increasing competition and demand. We therefore introduced a Professional Development Curriculum (PDC) parallel to the usual course curriculum, starting from "Stat 303: The Art and Practice of Teaching Statistics," a required one-year course for all entering Ph.D.s, aiming to produce both well-trained teaching fellows for undergraduate courses and effective statistical communicators in general. This talk shares a number of stories from our intoxicating journey and experiments, including a Riesling randomized trial conducted for "Stat 105: Real-Life Statistics: Your Chance for Happiness (or Misery)" to assess the single most influential factor in students' ability to judge wine quality (once they are over 21).
|
|
16 March 2012
|
Jae-Kwang Kim (Iowa State University)
Title: An efficient method of estimation for longitudinal surveys with monotone missing data.
Abstract:
Panel attrition is frequently encountered in panel sample surveys. When the panel attrition is related to the observed study variable, the classical approach of nonresponse adjustment using a covariate-dependent dropout mechanism can be biased. We consider an efficient method of estimation with monotone panel attrition when the response probability depends on the previous values of the study variable as well as other covariates. Because of the monotone structure of the missing pattern, the response mechanism is missing at random. The proposed estimator is asymptotically optimal in the sense that it minimizes the asymptotic variance of a class of estimators that can be written as a linear combination of the unbiased estimators of the panel estimates for each wave. The proposed estimator incorporates all available information using the idea of the generalized least squares (GLS) method. Variance estimation is discussed, and results from a limited simulation study are also presented. This is joint work with Dr. Ming Zhou.
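As a schematic of the GLS idea mentioned in the abstract (notation ours, not the authors'): if $\hat\theta_1,\dots,\hat\theta_K$ are unbiased estimators of a common quantity $\theta$ with covariance matrix $V$, the linear combination with minimum variance is

$$
\hat\theta_{\mathrm{GLS}} \;=\; \frac{\mathbf{1}^{\top} V^{-1} \hat{\boldsymbol{\theta}}}{\mathbf{1}^{\top} V^{-1} \mathbf{1}},
\qquad
\operatorname{Var}\!\left(\hat\theta_{\mathrm{GLS}}\right) \;=\; \left(\mathbf{1}^{\top} V^{-1} \mathbf{1}\right)^{-1},
$$

where $\hat{\boldsymbol{\theta}} = (\hat\theta_1,\dots,\hat\theta_K)^{\top}$ and $\mathbf{1}$ is a vector of ones; roughly speaking, the proposed estimator applies this principle to the wave-by-wave unbiased estimators.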
|
|
9 March 2012
|
Subhra Sankar Dhar (Cambridge University)
Title: Comparison of Multivariate Distributions Using Quantile-Quantile Plots and Related Tests
Abstract: The univariate quantile-quantile (Q-Q) plot is a well-known graphical tool for examining whether two data sets are generated from the same distribution or not. It is also used to determine how well a specified probability distribution fits a given sample. In this talk, we will develop and study a multivariate version of the Q-Q plot based on spatial quantiles (see Chaudhuri (1996), JASA). The usefulness of the proposed graphical device will be illustrated on different real and simulated data, some of which have fairly large dimensions. We will also develop certain statistical tests that are related to the proposed multivariate Q-Q plots and study their asymptotic properties. The performance of those tests compared to some other well-known tests for multivariate distributions will also be discussed.
This is a joint work with Biman Chakraborty and Probal Chaudhuri.
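A minimal computational sketch of the spatial quantiles on which the proposed plots are based, following the definition in Chaudhuri (1996); the data, index vectors and optimiser below are our own illustrative choices, not those of the talk.

```python
import numpy as np
from scipy.optimize import minimize

def spatial_quantile(X, u):
    """Empirical spatial quantile: the minimiser over q of
    sum_i ( ||x_i - q|| + <u, x_i - q> ), for u in the open unit ball.
    Taking u = 0 gives the spatial median."""
    def objective(q):
        diffs = X - q
        return np.linalg.norm(diffs, axis=1).sum() + diffs.dot(u).sum()
    return minimize(objective, x0=X.mean(axis=0), method="Nelder-Mead").x

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))             # first sample
Y = rng.standard_t(df=3, size=(300, 2))   # second sample, heavier tails

# A crude two-sample Q-Q display: pair the spatial quantiles of X and Y
# over a small grid of index vectors u inside the unit ball.
for r in (0.3, 0.7):
    for angle in np.linspace(0, 2 * np.pi, 4, endpoint=False):
        u = r * np.array([np.cos(angle), np.sin(angle)])
        print(spatial_quantile(X, u), spatial_quantile(Y, u))
```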
|
|
2 March 2012
|
Idris Eckley (Lancaster University)
Title: Alias detection and spectral correction for locally stationary time series
Abstract: Aliasing occurs when power exists in a signal at frequencies higher than the Nyquist rate (which is determined by the sampling rate). When it occurs, aliasing causes high frequency information to wrap round and mimic power at lower frequencies.
It is all too easy to overlook aliasing when conducting an analysis of a time series. Indeed it is rarely tested for, even though a bispectrum-based test of aliasing for (stationary) time series was proposed by Hinich and Wolinsky in 1988. For locally stationary series the situation is a bit different in that aliasing can be intermittent, depending on whether the spectrum locally contains frequencies higher than the Nyquist rate or not. This talk will introduce a wavelet-based method to separate the spectral components of a locally stationary time series into two classes: (i) aliased or white noise components and (ii) lower frequency uncontaminated components. In particular we will consider the case of Shannon wavelets which can separate components even for signals that are not band-limited. Finally, we show our test working on simulated data and an example provided by an industrial collaborator.
(Joint work with Guy Nason, University of Bristol)
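A toy numerical illustration of the wrap-around effect described above (not from the talk): a 9 Hz sinusoid sampled at 12 Hz, so above the 6 Hz Nyquist rate, shows up as power at 3 Hz.

```python
import numpy as np

fs = 12.0                        # sampling frequency (Hz); Nyquist rate 6 Hz
t = np.arange(0, 8, 1 / fs)      # 8 seconds of samples
x = np.sin(2 * np.pi * 9.0 * t)  # 9 Hz tone, above the Nyquist rate

spec = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(freqs[np.argmax(spec)])    # ~3 Hz: the 9 Hz power has wrapped round
```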
|
|
17 February 2012
|
Kunnummal Muralidharan (University of Baroda)
Title: Theory of inliers: Modeling and Applications
Abstract: An inlier in a set of data is an observation, or subset of observations, not necessarily all zeroes, which appears to be inconsistent with the remaining data. Inliers result from instantaneous or early failures usually encountered in life testing, finance, management, clinical trials and many other studies. Unlike in outlier theory, inliers form a group of observations that are defined by the model itself. With the inclusion of inliers, the model becomes either a non-standard distribution or one with more than two modes, and hence the usual methods of statistical inference may not be appropriate. We discuss some inlier-prone models, under some weak assumptions, and study the estimation of inliers in the exponential distribution. Various inlier-prone models and estimation procedures are discussed. The detection of inliers and the problems associated with detection are presented. An illustration and a real-life example are also discussed.
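For concreteness, a minimal simulation of one simple inlier-prone lifetime model, a point mass of instantaneous failures mixed with an exponential distribution; this is our own illustrative choice of model and parameters, not necessarily the one analysed in the talk.

```python
import numpy as np

rng = np.random.default_rng(2)

# With probability p an item fails instantly (an inlier at zero);
# otherwise its lifetime is exponential with mean theta.
n, p_true, theta_true = 500, 0.10, 4.0
instant = rng.random(n) < p_true
lifetimes = np.where(instant, 0.0, rng.exponential(theta_true, size=n))

# Maximum likelihood estimates for this simple mixture:
p_hat = np.mean(lifetimes == 0.0)            # proportion of instantaneous failures
theta_hat = lifetimes[lifetimes > 0].mean()  # mean of the remaining lifetimes
print(p_hat, theta_hat)
```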
|
|
10 February 2012
|
Patrick J. Wolfe (UCL)
Title: Modelling Network Data
Abstract: Networks are fast becoming a primary object of interest in statistical data analysis, with important applications spanning the social, biological, and information sciences. A common aim across these fields is to test for and explain the presence of structure in network data. In this talk we show how characterizing the structural features of a network corresponds to estimating the parameters of various random network models, allowing us to obtain new results for likelihood-based inference and uncertainty quantification in this context. We discuss asymptotics for stochastic blockmodels with growing numbers of classes, the determination of confidence sets for network structure, and a more general point process model for network data taking the form of repeated interactions between senders and receivers, where we show consistency and asymptotic normality of partial-likelihood-based estimators related to the Cox proportional hazards model (arXiv:1011.1703, 1011.4644).
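As a small illustration of the stochastic blockmodel setting mentioned above, the sketch below simulates a two-class blockmodel and recovers the block edge probabilities when the class labels are treated as known; the network size and probabilities are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

n, K = 200, 2
P = np.array([[0.20, 0.03],
              [0.03, 0.15]])              # edge probability for each class pair
z = rng.integers(K, size=n)               # class labels
probs = P[z[:, None], z[None, :]]
A = (rng.random((n, n)) < probs).astype(int)
A = np.triu(A, 1)
A = A + A.T                               # symmetric adjacency, no self-loops

# With z known, the blockwise MLE of P is the observed edge density per block.
P_hat = np.zeros((K, K))
for a in range(K):
    for b in range(K):
        n_a, n_b = (z == a).sum(), (z == b).sum()
        pairs = n_a * n_b if a != b else n_a * (n_a - 1)
        P_hat[a, b] = A[np.ix_(z == a, z == b)].sum() / pairs
print(P_hat)
```

The harder questions treated in the talk concern what happens when the labels are unknown and the number of classes grows with the number of nodes.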
|
|
3 February 2012
|
Philip Dawid (Cambridge University)
Title: Proper Local Scoring Rules
Abstract: A scoring rule S(x, Q) measures the quality of a quoted distribution Q for an uncertain quantity X in the light of the realised value x of X. It is proper when it encourages honesty, i.e., when, if your uncertainty about X is represented by a distribution P, the choice Q = P minimises your expected loss. Traditionally, a scoring rule has been called local if it depends on Q only through q(x), the density of Q at x. The only proper local scoring rule is then the log-score, -log q(x). For the continuous case, we can weaken the definition of locality to allow dependence on a finite number m of derivatives of q at x. A characterisation is given of such order-m local proper scoring rules, and their behaviour under transformations of the outcome space. In particular, any m-local scoring rule with m > 0 can be computed without knowledge of the normalising constant of the density. Parallel results for discrete sample spaces will be given.
Papers available at arXiv:1101.5011v1, arXiv:1104.2224v1
Joint work with Matthew Parry and Steffen Lauritzen
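A concrete instance of the normalising-constant property stated in the abstract is the Hyvärinen score, a proper local scoring rule of order 2; for a quoted density $q$ on the real line it is

$$
S(x, Q) \;=\; \frac{\partial^{2}}{\partial x^{2}} \log q(x) \;+\; \frac{1}{2}\left(\frac{\partial}{\partial x} \log q(x)\right)^{2},
$$

which depends on $q$ only through derivatives of $\log q$ at $x$, so rescaling $q$ by an unknown normalising constant leaves the score unchanged.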
|
|
20 January 2012
|
Steffen Unkel (The Open University)
Title: Exploratory factor analysis of data matrices with more variables than observations
Abstract: The classical fitting problem in exploratory factor analysis (EFA) is to find estimates for the factor loadings matrix and the matrix of unique factor variances which give the best fit to the sample covariance or correlation matrix with respect to some goodness-of-fit criterion. Predicted factor scores can be obtained as a function of these estimates and the data. In this talk, the EFA model is considered as a specific data matrix decomposition with fixed unknown matrix parameters. Fitting the EFA model directly to the data yields simultaneous solutions for both loadings and factor scores. Recently, new methods were introduced for the simultaneous least squares estimation of all EFA model unknowns. The algorithms are based on the singular value decomposition of data matrices, facilitate the estimation of both common and unique factor scores, and work equally well when the number of variables exceeds the number of observations. The methods are illustrated by means of Thurstone's 26-variable box data and a real high-dimensional data set.
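A rough sketch of the direct, data-matrix flavour of the fitting problem: a rank-k least squares fit of the common part of the decomposition via a truncated SVD. This simplified version ignores the unique factors, and the dimensions and value of k below are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, k = 20, 100, 3                       # more variables than observations
Z = rng.normal(size=(n, p))
Z = (Z - Z.mean(0)) / Z.std(0)             # centre and standardise the columns

U, s, Vt = np.linalg.svd(Z, full_matrices=False)
F = np.sqrt(n) * U[:, :k]                  # common factor scores (n x k), F'F/n = I
L = Vt[:k].T * s[:k] / np.sqrt(n)          # factor loadings (p x k)
print(np.linalg.norm(Z - F @ L.T))         # residual of the rank-k least squares fit
```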
|
|
20 December 2011
|
Rainer Dahlhaus (University of Heidelberg)
Title: Nonlinear Phase Estimation for Oscillatory Processes
Abstract: The estimation of nonlinear phases or instantaneous frequencies of nonstationary signals is currently a major issue in different areas. For example, in physics, phase synchronization has been a major topic for several years, and phase estimation is a prerequisite for studying it. Another area is the Hilbert-Huang transform, where phase estimation is a key step in EMD (empirical mode decomposition). In these areas phase estimation has been carried out by the Hilbert transform, maximum periodogram methods or several ad-hoc methods.
In this talk we present a nonlinear, non-Gaussian state-space model for phase estimation where the phase, amplitude and baseline are treated as latent Markov processes. For the estimation, we suggest a Rao-Blackwellized particle smoother that combines the Kalman smoother and an efficient sequential Monte Carlo smoother. In addition we consider oscillation processes where non-cosine type fluctuation patterns with an unobserved phase are modeled. For the estimation of the nonparametric fluctuation pattern a nonparametric EM algorithm is developed.
We also discuss phase synchronization of several oscillators.
The methods are demonstrated for noisy Rössler attractors and electrocardiogram recordings.
(Based on joint work with Jan Neddermeyer)
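One generic form such a phase state-space model can take (a schematic in our notation, not necessarily the exact specification used in the talk) is

$$
y_t \;=\; a_t \cos(\phi_t) + b_t + \varepsilon_t, \qquad
\phi_t \;=\; \phi_{t-1} + \omega + \eta_t,
$$

with amplitude $a_t$, baseline $b_t$ and phase $\phi_t$ evolving as latent Markov processes and $\varepsilon_t$, $\eta_t$ independent noise terms; the observation equation is nonlinear in the phase, which is why sequential Monte Carlo methods combined with Kalman updates for the conditionally linear components (Rao-Blackwellisation) are natural here.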
|
|
9 December 2011
|
Ingrid Van Keilegom (Université catholique de Louvain)
Title: Boundary estimation in the presence of measurement error with unknown variance
Abstract: Boundary estimation appears naturally in economics in the context of productivity analysis. The performance of a firm is measured by the distance between its achieved output level (quantity of goods produced) and an optimal production frontier, which is the locus of the maximal achievable output given the level of the inputs (labor, energy, capital, etc.). Frontier estimation becomes difficult if the outputs are measured with noise, and most approaches rely on restrictive parametric assumptions. This paper contributes to the development of nonparametric approaches.
We start with a slightly simplified version of the general problem, which can be written as Y = XZ, where Y is the observable output, X is the unobserved variable of interest with support [0, T] and density f, and Z is the noise. Suppose that f(T) > 0, and that Z is independent of X and is log-normally distributed with log Z ~ N(0, s^2) for some unknown variance s^2. The novelty of our approach consists in proposing a method for the simultaneous estimation of T and s^2. In addition to this univariate problem, we also consider a model for the extension to the case with covariates, and propose estimators of the frontier function and the variance function under this model.
The asymptotic consistency and the rate of convergence of our estimators are established, and simulations are carried out to verify the performance of the estimators for small samples. We also apply our method on a dataset concerning the production output of American electricity utility companies.
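On the log scale the model above becomes additive (a standard reformulation, written in the abstract's notation):

$$
\log Y \;=\; \log X + \log Z, \qquad \log Z \sim N(0, s^{2}),
$$

so estimating the frontier T amounts to recovering the right endpoint $\log T$ of the support of $\log X$ from observations contaminated by Gaussian noise with unknown variance $s^2$, a deconvolution-type boundary problem.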
|
|
18 November 2011
|
Dawei Huang (Chinese Academy of Sciences)
Title: Financial Time Series: Trend Classifications Based on Feature Transformation and Selection
Abstract: In this talk, we discuss how to classify a financial time series into up and down trends, as well as how to identify tops and bottoms, from a statistical machine learning point of view. Firstly, we define the up and down trends with a parameter controlling the length of the trend. Secondly, we derive the optimal regressand in the regression for the two-class linear discrimination problem. Thirdly, we introduce a so-called correlation booster to increase the linear relationship between the regressand and features. The possibility of classification is confirmed by Principal Component Analysis. Finally, the LASSO algorithm with cross-validation is used for selecting features and building models. These models can classify: 1. up and down trends; 2. tops from up trends; 3. bottoms from down trends. This method is applied to different financial time series, including gold and silver prices, stock indices and stock prices. Encouraging results are shown for real data.
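A minimal sketch of the final step described in the abstract, a cross-validated LASSO fit followed by thresholding to classify trend direction; the feature matrix, regressand and threshold here are illustrative placeholders rather than the talk's actual construction.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n, d = 400, 30
features = rng.normal(size=(n, d))             # placeholder features (e.g. lagged returns)
trend = np.sign(features[:, :3].sum(axis=1)    # placeholder +/-1 trend regressand
                + 0.2 * rng.normal(size=n))

model = LassoCV(cv=5).fit(features, trend)     # feature selection and model fitting
predicted = np.where(model.predict(features) > 0, 1, -1)
print((predicted == trend).mean())             # in-sample classification rate
print(np.flatnonzero(model.coef_))             # indices of the selected features
```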
|
|
11 November 2011
|
Maria Alejandra Molina (LSE)
Title: What drives the survival and growth of new firms in Brazil? A learning and capability perspective
|
|
28 October 2011
|
Philipp Rode (LSE Cities)
Title: Statistical projects of LSE Cities
Abstract: LSE Cities' research activities are characterised by the wide-ranging collection and processing of global, regional and local data that have helped foster and communicate a better understanding of cities from various perspectives, such as economics, sociology, environment or governance. Data on urbanisation, cities and space is by its very nature more rudimentary, fragmented and heterogeneous than in many other disciplines; it often requires novel and innovative approaches to analysis, while ensuring the validity and significance of the research. LSE Cities is inviting M.Res. students from the Statistics Department to join us in thinking about paths towards robust analysis of urban data on our various research fronts. Options for dissertations, types of data and relevant research projects will be introduced at the seminar on October 28, 2011.
|
|
14 October 2011
|
Alexey Sorokin (MAN Investments)
Title: Non-invertibility in heteroscedastic time series models
Abstract: In order to calculate the unobserved volatility in conditional heteroscedastic time series models such as GARCH, the natural recursive approximation is very often used. A model is called invertible if this recursive approximation converges to the unobserved volatility in probability. It turns out that a stationary GARCH(p, q) model is always invertible, but some other well-known heteroscedastic models, in particular some asymmetric ones, are not. For such models, the pair (true volatility, approximation) has a non-degenerate stationary distribution. As a result, the volatility forecast given by the recursive approximation is inconsistent even if the true parameter vector is known. In the talk, I will present an "almost" criterion of invertibility for two particular models, present numerical examples and discuss challenges in obtaining a general condition for non-invertibility.
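A minimal sketch of the recursive volatility approximation for a GARCH(1,1) model and of what invertibility means in practice: for a stationary GARCH the effect of the starting value dies out, so two different initialisations agree after a burn-in period. The parameter values and placeholder return series below are arbitrary choices for the example.

```python
import numpy as np

def garch11_vol_approx(x, omega, alpha, beta, sigma2_init):
    """Natural recursive approximation of the conditional variance in GARCH(1,1):
    sigma2_t = omega + alpha * x_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty_like(x)
    sigma2[0] = sigma2_init
    for t in range(1, len(x)):
        sigma2[t] = omega + alpha * x[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(6)
omega, alpha, beta = 0.1, 0.1, 0.85
x = 0.5 * rng.normal(size=2000)                  # placeholder return series
a = garch11_vol_approx(x, omega, alpha, beta, sigma2_init=0.01)
b = garch11_vol_approx(x, omega, alpha, beta, sigma2_init=10.0)
print(np.max(np.abs(a - b)[500:]))               # ~0: the initial value is forgotten
```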
|