Details of all annual PhD Presentation Events will be posted here. These events take place in the Summer term each year, usually over two days.
2015/16 PhD Presentation Event
Monday 9 and Tuesday 10 May 2016
Reinhard Fellman
Title: Effects of European Sovereign Debt Crisis on the Long Memory in Credit Default Swaps
Abstract: We study the presence of long memory in sovereign credit default swaps (CDS) of various maturities (1, 5, 10 and 30 years) in the European Monetary Union via the time-varying average generalized Hurst exponent (TVA-GHE) over 2007-2014. We obtain daily TVA-GHE values based on a two-year moving time window and test for the significance of long memory employing (i) a pre-whitening and post-blackening bootstrap approach, confirming the results with (ii) a random permutation shuffling procedure. We reveal that while numerous (peripheral) countries suffered from an unsustainable combination of overly high government structural deficits and accelerating debt levels, sovereign credit default risk increased and long memory decreased considerably during the European sovereign debt crisis. This behaviour lasted until the European Central Bank (ECB), international institutions (e.g. the International Monetary Fund (IMF)) and national governments implemented extraordinary policy interventions (e.g. financial assistance programmes) aiming to restore stability in financial markets in 2011, after which persistence in sovereign CDS increased. Moreover, the degree of long memory decreases with CDS maturity, which contrasts with economic theory. We conclude that changes in long memory may be associated with changes in the predictability of default or non-default, with events in the far future being less predictable.
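For readers unfamiliar with the generalized Hurst exponent, the following minimal Python sketch shows how a time-varying estimate can be computed from the scaling of q-th order structure functions over a moving window; the window length, lags and simulated input are illustrative choices, not the authors' exact TVA-GHE procedure or data.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(2, 20)):
    # H(q) from the scaling K_q(tau) = mean(|x[t+tau]-x[t]|^q) ~ tau^(q*H(q))
    logk = [np.log(np.mean(np.abs(x[tau:] - x[:-tau]) ** q)) for tau in taus]
    slope, _ = np.polyfit(np.log(list(taus)), logk, 1)
    return slope / q

def rolling_hurst(x, window=500, q=2):
    # Time-varying H(q) over a moving window (two years of daily data in
    # the abstract; 500 observations here purely for illustration).
    return np.array([generalized_hurst(x[i:i + window], q)
                     for i in range(len(x) - window + 1)])

rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(1500))   # toy random walk, H ~ 0.5
print(rolling_hurst(series)[:5])
```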
Andy T Y Ho
Title: First hitting time of the super-maximum of a standard Brownian motion
Abstract: We study the first hitting time of a super-maximum of standard Brownian motion. Explicit expressions for the Laplace transform and distribution function of this hitting time are obtained. The problem is solved by setting up the infinitesimal generator of a standard Brownian motion. A suitable martingale is obtained from the solution of the generator, followed by the Fourier transform of the solution. Further, we derive the joint density function of the first hitting time and the corresponding level of the reflected Brownian motion from the double Laplace transform. An acceptance-rejection algorithm is also developed to generate the pair of hitting time and associated reflected Brownian motion. The problem is motivated by contingent convertibles.
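The acceptance-rejection step mentioned above follows the standard pattern sketched below; since the hitting-time density itself is not given in the abstract, the sketch targets a half-normal distribution with an Exp(1) proposal purely as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar_halfnormal(n):
    # Acceptance-rejection: propose from g(x) = exp(-x), accept with
    # probability f(x)/(M*g(x)) = exp(-(x-1)^2/2), where M = sqrt(2e/pi).
    out = []
    while len(out) < n:
        x = rng.exponential()
        if rng.uniform() <= np.exp(-(x - 1.0) ** 2 / 2.0):
            out.append(x)            # accepted draw from the target
    return np.array(out)

draws = ar_halfnormal(10000)
print(draws.mean(), np.sqrt(2 / np.pi))   # half-normal mean is sqrt(2/pi)
```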
Haziq Jamil
Title: Bayesian variable selection for linear models using I-priors
Abstract: In last year's presentation event, I showed that the use of I-priors in various linear models can be considered a solution to the over-fitting problem. In that work, estimation was still done by maximising the marginal likelihood (after integrating out the prior), so in a sense it was a type of empirical-Bayes approach. Switching over to a fully Bayesian framework, we now look at the problem of variable selection, specifically in an ordinary linear regression setting. The appeal of Bayesian methods is that they reduce the selection problem to one of estimation, rather than a true search of the variable space for the model that optimises a certain criterion. I will review several Bayesian variable selection methods currently present in the literature, and show how we can make use of I-priors in such methods. Simulation studies show that the I-prior performs well in the presence of multicollinearity. Research is still ongoing, with hopes that the I-prior copes well under sparse linear regression and can be extended to generalised linear models such as binary response models.
Luting Li
Title: First Passage Time Problem for Ornstein-Uhlenbeck Process
Abstract: In this project we analyze the first passage time (FPT) problem of the Ornstein-Uhlenbeck (OU) process to an arbitrary threshold. By applying perturbation expansions in the mean-reversion parameter, we give an explicit inversion of the perturbed Laplace transform. Numerical examples in comparison with other known methods are provided to show the accuracy and computational efficiency of this new approach. Potentially, this technique could be applied to similar problems for other (jump) diffusion processes.
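As a point of comparison for such semi-analytic methods, a crude Euler-scheme Monte Carlo estimate of the OU first passage time can be written in a few lines; all parameter values below are illustrative, and the discretisation bias is exactly what the transform-based approach avoids.

```python
import numpy as np

def ou_fpt_mc(x0=0.0, b=1.0, theta=0.5, sigma=1.0,
              n_paths=20000, dt=1e-3, t_max=50.0, seed=2):
    # First passage of dX = -theta*X dt + sigma dW to the threshold b,
    # estimated by Euler discretisation; np.inf marks paths that never hit.
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    hit = np.full(n_paths, np.inf)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, int(t_max / dt) + 1):
        z = rng.standard_normal(alive.sum())
        x[alive] += -theta * x[alive] * dt + sigma * np.sqrt(dt) * z
        crossed = alive & (x >= b)
        hit[crossed] = k * dt
        alive &= ~crossed
        if not alive.any():
            break
    return hit

fpt = ou_fpt_mc()
print("mean FPT over paths that hit:", fpt[np.isfinite(fpt)].mean())
```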
Shiju Liu
Title: Survival probability of a risk process with variable premium income
Abstract: Although the study of the ruin problem for the classical collective risk process has been the centre of interest of a number of papers focusing on a constant premium rate, only a few publications consider a premium whose value depends on the current surplus. We begin with a risk process with this generalized non-constant premium rate. It is assumed that the wealth available is invested at some continuously compounded interest rate, so that the premium income is a linear function of the surplus. We also assume that the aggregate loss is an inverse Gaussian process. Our purpose is to study the probability of survival of an insurance company over an infinite time horizon by applying the Laplace transform to an infinitesimal generator. An explicit formula for the survival probability and numerical results for different initial capitals and interest rates are given.
Hyeyoung Maeng
Title: A flexible model for prediction in functional time series
Abstract: Functional data analysis has attracted great attention from scholars and is constantly growing due to recent developments in computer technology which enable the recording of dense datasets on arbitrarily fine grids. In financial markets, enormous amounts of high-frequency data arise every day, and it has become necessary to handle such huge volumes of information simultaneously. In functional time series analysis, little research has been conducted on prediction models besides the functional autoregressive model of order one (ARH(1)), while the prediction problem has been investigated from many angles in classical time series analysis. In this talk, a new model for prediction in the functional framework will be discussed, which allows more smoothing on curves observed far from the point of prediction compared to observations located close to it. It adds flexibility to the ordinary model in the sense that more weight can be placed on intervals considered more important than others.
Cheng Qian
Title: Spatial weight matrix estimation.
Abstract: Spatial econometrics focuses on cross-sectional interaction between physical or economic units. However, most studies assume prior knowledge of the spatial weight matrix in spatial econometric models, and misspecification of the spatial weight matrix can therefore significantly affect the accuracy of model estimation. Lam (2014) provides an error upper bound for the spatial regression parameter estimators in a spatial autoregressive model, showing that misspecification can indeed introduce large bias in the final estimates. Meanwhile, recent research on spatial weight matrix estimation considers only static effects and does not include dynamic effects between spatial units. Our model uses different linear combinations of the same spatial weight matrix specifications for different time-lagged responses in the proposed spatial econometric model. To overcome endogeneity arising from autoregression, instrumental variables are introduced. The model can also capture fixed effects and spillover effects. Finally, we establish asymptotic normality for our estimator under the functional dependence measure framework introduced in Wu (2011). The proposed methods are illustrated using simulated data.
Yan Qu
Title: Exact simulation of point processes with mean-reverting intensity driven by Lévy subordinators
Abstract: Mean-reverting processes driven by Lévy processes are widely applied in modelling the intensity of event arrivals in finance and economics. We aim to develop an efficient Monte Carlo scheme for exactly simulating point processes with mean-reverting stochastic intensities driven by Lévy subordinators. The simulation scheme is based on simulating the inter-arrival times and the associated intensity levels; we use the joint Laplace transform of the intensity and the inter-arrival time of the process to derive these distributional properties. Our main work concentrates on intensities driven by the inverse Gaussian process and the gamma process. The main approach, instead of directly working out the Laplace transform of the joint distributions of the process to derive the transition densities, is based on a distributional decomposition of the process, obtained by cutting the Lévy measure of the driving subordinator at a suitable value. Through this method, all the inter-arrival times and intensity levels at jump arrival times can be decomposed into familiar random variables, which allows us to simulate exactly, without introducing bias or truncation error.
Ragvir Sabharwal
Title: Sequential Changepoint Detection in Factor Models for Time Series
Abstract: We address the problem of detecting changepoints in a Static Approximate Factor Model (SAFM). In particular, we consider three different types of changes: (i) emerging factors, (ii) disappearing factors, and (iii) changes in loadings. We make two key contributions. First, we introduce a changepoint estimator based on eigenvalue ratios and prove consistency of this estimator in the offline setting. Second, we propose methodologies for adapting our estimator to the sequential setting.
Alexandra Tsimbalyuk
Title: The Distribution of a Perpetuity
Abstract: We consider the problem of estimating the joint distribution of a perpetuity, e.g. a cumulative discounted loss process in the ruin problem, and the underlying factors driving the economy in an ergodic Markov model. One can identify the distribution in two ways: first, as the explosion probability of a certain locally elliptic (explosive) diffusion; second, using results on the time reversal of diffusions, as the invariant measure associated to a certain (different) ergodic diffusion. These identifications enable efficient estimation of the distribution through both simulation and numerical solution of partial differential equations. When the process representing the value of an economic factor is one-dimensional, or more generally reversing, the invariant distribution is given in explicit form in terms of the model parameters. In continuous time and a general multi-dimensional setup, the lack of knowledge of the invariant distribution could pose an issue. In my talk I will show how one can remedy the situation in the discrete-time case.
Georgios Vichos
Title: The Implied Risk Aversion in Risk-Sharing Transactions
Abstract: We consider a market of a given vector of securities and finitely many financial agents, who are heterogeneous with respect to their risky endowments and risk aversions. The market is assumed to be thin, meaning that each agent's actions could heavily influence the price and allocation of the securities. In contrast with the majority of related literature, we assume that agents' risk aversion is not public information, which implies that agents may strategically choose the risk aversion that they will implement in the trading. In this environment, equilibrium is modelled as the outcome of a Nash-type game, where the agents' sets of strategic choices are the demand functions on the traded securities.
Under the standard assumptions of exponential utility preferences and normally distributed pay-offs, we first show that agents have an incentive to declare risk aversions different from their true ones. The Nash equilibrium is then characterised as the solution to a system of quadratic equations, which is shown to have a unique solution in a market with two agents, or with multiple agents under the additional assumption that all but one of the endowments have beta less than one. Interestingly enough, it is shown that agents with sufficiently low (true) risk aversion profit more from the Nash equilibrium than from the equilibrium with no strategic behaviour.
Diego Zabaljauregui
Title: Optimal Market Making in FX
Abstract: We consider an FX market maker (MM) and his clients. The MM's role is to give bid-ask quotes in continuous time, and to do this he observes the price of the same currency pair on an external dealer-to-dealer market. This price evolves as a geometric Brownian motion (GBM). The clients also have access to that price, but with a slight delay, and they place buy and sell market orders at random times. We model the accumulated buy and sell orders with Cox processes whose intensities depend on the spreads and quadratic variation of the MM's prices relative to the delayed external price. The MM accumulates orders over a small finite time horizon, at the end of which he hedges the remaining net inventory on the external market. The objective is to maximise the MM's expected terminal P&L by controlling his quotes.
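The order flow described here can be simulated by thinning; the sketch below draws buy-order arrival times for a Cox-type intensity that decays in the quoted spread, with the intensity form, the spread path and all parameters being illustrative assumptions rather than the model of the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

def client_buy_orders(spread_fn, lam0=50.0, kappa=200.0, T=1.0):
    # Thinning: dominate the intensity lam0*exp(-kappa*spread(t)) by lam0,
    # then accept each candidate arrival with probability lam(t)/lam0.
    t, times = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam0)
        if t > T:
            return np.array(times)
        if rng.uniform() <= np.exp(-kappa * spread_fn(t)):
            times.append(t)

spread = lambda t: 0.001 + 0.0005 * np.sin(20 * t)   # toy spread path
print(len(client_buy_orders(spread)), "buy orders on [0, 1]")
```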
Xiaolin Zhu
Title: Hitting Time Problem of Stochastic Process with Non-deterministic Drift
Abstract: Stochastic processes with non-deterministic drift, in particular drift driven by the process itself, are widely used in modelling the evolution of the short rate. The hitting time of such a process to a constant or time-varying level is our main interest. We start with the Ornstein-Uhlenbeck process as an example and obtain explicit expressions for the Laplace transform and distribution function of the first hitting time. From there, we turn our attention to stochastic processes whose drift is driven by their maximum level. Instead of working with a fixed drift term, we set the drift as an undetermined function of the maximum level, subject to change accordingly. We will discuss some findings about the drift function and present related results.
Yajing Zhu
Title: A general three-step method for estimating the effect of multiple latent categorical predictors on a distal outcome
Abstract: Latent class analysis (LCA) is widely used to derive categorical variables from multivariate data, which are then included as predictors of a distal outcome. The traditional 'modal class' approach is to assign subjects to the latent class with the highest posterior probability. However, regression coefficients for the modal class will be biased due to potential misclassification and the unintended influence of the distal outcome on class membership. To address these problems, Asparouhov and Muthén (2014) proposed a 3-step method in which the modal class is treated as an imperfect measurement of the true class in the regression for the distal outcome, with measurement error determined by the misclassification probabilities. Our work extends their proposal to the case of multiple latent categorical variables and assesses the relative performance of the 3-step method against the traditional modal class approach under settings of associated and independent latent class variables at different entropy levels. Current results show that the 3-step method is robust to unclear class separation and outperforms the modal class approach in most scenarios. The results are particularly useful for empirical studies that involve more than one, possibly associated, latent construct with unclear class separation.
2014/15 PhD Presentation Event
Tuesday 19 and Wednesday 20 May 2015
Rafal Baranowski
Title: Multi-zoom autoregressive time series models
Abstract: We consider the problem of modelling financial returns observed at a high or mid frequency, for example one minute. To this end, we adopt a so-called "multi-zoom" approach, in which the returns are assumed to depend on a few past values observed at (unknown) lower frequencies such as one day or one week. When the dependence is additionally assumed to be linear, the returns follow the Multi-Zoom Autoregressive (MZAR) time series model. We introduce an estimation procedure for fitting MZAR models to data and present preliminary theoretical results justifying our methodology. Finally, in an extensive simulation study based on data from the New York Stock Exchange Trade and Quotes Database, we show that MZAR models can offer very good predictive power for forecasting high- and mid-frequency financial returns. (With Piotr Fryzlewicz)
Wenqian Cheng
Title: Models for Chinese micro-blog data
Abstract: Before the arrival of modern information and communication technology, it was not easy to capture people's consumption and company-rating preferences; the prevalence of social-networking websites, however, provides opportunities to capture those trends in order to predict social and economic changes. With the establishment of numerous text mining methods in statistical learning, valuable information can be derived from patterns and trends in textual content. Latent Dirichlet allocation (LDA), which can be regarded as an improvement on PLSI (probabilistic latent semantic indexing), is one of the most common models for discovering topics in large sets of textual data. In LDA, each document in the collection is modelled as a mixture over an underlying set of topics, while explicit representations of documents are provided by topic probabilities. In this presentation, the fundamental concept and structure of LDA will be clarified, and variations of topic models that unveil the evolution of topics over time on Chinese micro-blogs (Weibo) will be proposed. Approaches to reduce the disturbance of randomness are attempted, to generate more stable topic distributions. Methods for topic evolution analysis are employed to measure the trend, strength and variability of topics.
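A minimal illustration of the LDA step on a toy corpus, using scikit-learn; for Weibo posts a Chinese word segmenter (e.g. jieba) would be applied before vectorisation, and the corpus, topic count and library choice here are illustrative rather than those of the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = ["stock market rally earnings", "earnings report stock price",
         "holiday travel flight hotel", "hotel booking travel deals"]

vec = CountVectorizer()
X = vec.fit_transform(posts)                 # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))                      # document-topic mixtures

vocab = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):  # per-topic top words
    print("topic", k, [vocab[i] for i in topic.argsort()[-3:]])
```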
Phoenix Feng
Title: A nonparametric eigenvalue-regularised integrated volatility matrix estimator using high-frequency data for portfolio allocation
Abstract: In portfolio allocation over a large pool of assets, the use of high-frequency data allows the corresponding high-dimensional integrated volatility matrix estimator to be more adaptive to local volatility features, while the sample size is significantly increased. To ameliorate the bias contributed by the extreme eigenvalues of the sample covariance matrix when the dimension $p$ of the matrix is large relative to the sample size $n$, and the contamination by microstructure noise, various researchers have attempted regularization under specific assumptions on the true matrix itself, such as sparsity or a factor structure, which can be restrictive at times. Allowing for non-synchronous trading and contamination by microstructure noise, we propose a nonparametrically eigenvalue-regularized integrated volatility matrix estimator (NERIVE) which does not assume a specific structure for the underlying integrated volatility matrix. We show that NERIVE is almost surely positive definite, with extreme eigenvalues shrunk nonlinearly under the high-dimensional framework $p/n \rightarrow c > 0$. We also prove that, almost surely, the optimal weight vector constructed using NERIVE has maximum weight magnitude of order $p^{-1/2}$. The asymptotic risk of the constructed optimal portfolio is also analyzed theoretically. The practical performance of NERIVE is illustrated by comparison with the usual two-scale realized covariance matrix as well as some other nonparametric alternatives, using different simulation settings and a real data set.
Ali Habibnia
Title: Nonlinear forecasting with many predictors
Abstract: Although there is a rapidly growing literature on forecasting with many predictors, few publications have appeared in recent years concerning possible nonlinear dynamics in high-dimensional time series. The aim of this study is to develop forecasting models capable of capturing nonlinearity and non-normality in high-dimensional time series with complex patterns. The study is organised as follows. First, since it is logical to ask whether the use of nonlinear techniques is justified by the data, we apply different types of nonlinearity tests available in the literature to determine whether complex real-world time series such as financial returns behave in a linear or nonlinear fashion. The experimental results indicate that financial series are rarely purely linear; there is strong evidence of nonlinearity in and between financial series. Hence, we propose a two-stage forecasting procedure based on improved factor models with two neural network extensions. In the first stage, we perform a neural network PCA to estimate common factors, which allows the factors to have a nonlinear relationship to the input variables. In the second stage, we fit a nonlinear factor-augmented forecasting equation, which predicts the variable of interest from the common factors, based on neural network models. Out-of-sample forecast results show that the proposed neural network factor model significantly outperforms linear factor models and the random walk. Finally, we introduce a one-shot procedure to forecast high-dimensional time series with complex patterns, based on a neural network with skip-layer connections optimised by a novel learning algorithm that includes both L1 and L2 penalties simultaneously. Both techniques introduced in this study include linear and nonlinear structures, and if there is no nonlinearity between variables, they converge to a linear model.
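The two-stage procedure can be outlined in code as follows; linear PCA stands in for the neural network PCA of the first stage, and the data, factor count and network size are illustrative assumptions, not the study's specification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 50))                 # 300 periods, 50 predictors
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(300)

# Stage 1: estimate common factors (an autoencoder bottleneck would
# replace PCA to allow a nonlinear factor-input relationship).
factors = PCA(n_components=5).fit_transform(X)

# Stage 2: nonlinear factor-augmented forecast, one step ahead.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(factors[:-1], y[1:])
print("one-step forecast:", net.predict(factors[-1:]))
```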
Andy Ho
Title: On exact simulation algorithms for some distributions related to Brownian motion
Abstract: I survey exact random variate generators for several distributions related to Brownian motion: the extremes and locations of extremes of Brownian motion, the first exit time of Brownian motion, the supremum of reflected Brownian motion and its location, and the supremum of Brownian motion with drift. Exact simulation is important in financial modelling, for example for barrier options, in order to avoid bias.
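One of the simplest examples in this family is the exact joint draw of a Brownian path's endpoint and running maximum via the reflection principle; the sketch below is a standard construction given for illustration, not necessarily one of the algorithms surveyed in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

def brownian_endpoint_and_max(T=1.0, n=100000):
    # Given W_T = w, P(M_T >= m | W_T = w) = exp(-2m(m-w)/T) for
    # m >= max(0, w); inverting the conditional law with U ~ Uniform(0,1)
    # gives the exact draw M = (w + sqrt(w**2 - 2*T*log(U))) / 2.
    w = np.sqrt(T) * rng.standard_normal(n)
    u = rng.uniform(size=n)
    m = 0.5 * (w + np.sqrt(w ** 2 - 2.0 * T * np.log(u)))
    return w, m

w, m = brownian_endpoint_and_max()
print(m.mean(), np.sqrt(2 / np.pi))   # E[M_T] = sqrt(2T/pi) for T = 1
```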
Charlie Hu
Title: NERCOME estimator for integrated covariance matrices
Abstract: Introduced by Lam (2014), the nonparametric eigen-regularized covariance matrix estimator (NERCOME) is a novel method for estimating a covariance matrix through splitting of the data. It enjoys many nice properties; however, one of the key assumptions is that the data must be independent. We consider the estimation of integrated covariance (ICV) matrices of high-dimensional diffusion processes based on high-frequency observations. We extend the NERCOME method to allow a time-varying structure for a particular class $\mathcal{C}$ of diffusion processes which the data follow (Zheng and Li (2011)). We prove some asymptotic results. Finally, we use both simulated and real data examples to compare our estimator with the commonly used realized covariance matrix (RCV) and the time-variation adjusted realized covariance (TVARCV) matrix.
Na Huang
Title: NOVELIST estimator of large correlation and covariance matrices and their inverses
Abstract: We propose a "NOVEL Integration of the Sample and Thresholded covariance estimators" (NOVELIST) to estimate large covariance (correlation) and precision matrices. NOVELIST performs shrinkage of the sample covariance (correlation) towards its thresholded version. The sample covariance (correlation) component is non-sparse and can be low-rank in high dimensions. The thresholded sample covariance (correlation) component is sparse, and its addition ensures the stable invertibility of NOVELIST. The benefits of the NOVELIST estimator include simplicity, ease of implementation, computational efficiency and the fact that its application avoids eigenanalysis. We obtain an explicit convergence rate in the operator norm over a large class of covariance (correlation) matrices when the dimension $p$ and the sample size $n$ satisfy $\log p/n \to 0$. In empirical comparisons with several popular estimators, the NOVELIST estimator, in which the amount of shrinkage and thresholding is chosen by cross-validation, performs well in estimating covariance and precision matrices over a wide range of models and sparsity classes.
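The core of the construction is a convex combination of the sample covariance and its thresholded version, as in the minimal sketch below; hard thresholding with fixed delta and lam is used for illustration, whereas the paper chooses these by cross-validation.

```python
import numpy as np

def novelist(X, delta=0.5, lam=0.1):
    # Shrink the sample covariance S towards its thresholded version T:
    # hard-threshold small off-diagonal entries, keep variances intact.
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return (1.0 - delta) * S + delta * T

rng = np.random.default_rng(6)
X = rng.standard_normal((100, 20))        # n = 100 observations, p = 20
print(novelist(X).shape)                  # (20, 20) regularised estimate
```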
Haziq Jamil
Title: Regression modelling using I-priors
Abstract: The I-prior methodology is a new modelling technique which aims to improve on maximum likelihood estimation of linear models when the dimensionality is large relative to the sample size. By putting a prior which is informed by the dataset (as opposed to a subjective prior), advantages such as model parsimony, fewer model assumptions, simpler estimation and simpler hypothesis testing can be gained. By way of introducing the I-prior methodology, we will give examples of linear models estimated using I-priors, including multiple regression models, smoothing models, random effects models and longitudinal models. Research in this area involves extending the I-prior methodology to generalised linear models (e.g. logistic regression), structural equation models (SEM) and models with structured error covariances.
Cheng Li
Title: Trading in a limit order market with asymmetric information
Abstract: We study a trading problem in a limit order market with asymmetric information. There are two types of agents: noisy traders and an insider. Noisy traders come to the market with liquidation purposes only. The insider knows the fundamental value of a risky asset before the trade and seeks to maximise her expected profit. In the Glosten and Milgrom [1985] model, aggregated demand is a point process and the insider is only allowed to place market orders. We borrow the main structure of the model from Glosten and Milgrom [1985], but allow the insider to trade in a hybrid way, i.e. to maximise her expected profit through a trade-off between limit and market orders. This is formulated as a control problem that we characterise in terms of an HJB system.
Shiju Liu
Title: Joint law of classical Cramér-Lundberg risk model
Abstract: The classical collective risk model, the Cramér-Lundberg risk model, focuses on the probability of ruin of an insurance company. We begin with this risk model with claim sizes following an identical inverse Gaussian distribution, and study the corresponding joint laws in a finite time horizon through Gerber-Shiu expected discounted penalty functions and Laplace transforms. The joint distribution of the first passage time and the overshoot with zero initial capital is derived. Particular attention is given to the asymptotic result for the joint distribution of the first passage time, the overshoot and any non-zero initial capital, which provides the probability of ruin at any finite time for different initial capitals. Numerical results for the probability of ruin are given for different ruin times and initial capitals.
Anna-Louise Schröder
Title: Change-point detection in multichannel EEG data
Abstract: We present a novel method for detecting frequency-specific change points in the spectral features of multi-channel electroencephalogram (EEG) recordings. Our method detects temporal changes in the spectral energy distribution at EEG channels and in the coherence between channel pairs. As opposed to existing methods for multi-channel change-point detection, our proposed method is able to localise change points not only in time and space, but also to attribute them to specific frequency bands (e.g., delta, alpha, beta and gamma). This feature is important and highly relevant in advancing our understanding of specific changes in neuronal activity; one such example is the design of early warning systems for epileptic seizure patients. Our proposed method is computationally fast and its results are easily interpretable. We illustrate this with an application to EEG seizure data that provides insights into spectral energy changes in pre-seizure brain activity.
Ewelina Sienkiewicz
Title: Real-world probabilistic modelling of El Niño
Abstract: In this research I apply non-linear analysis methods, which I have been developing in my PhD thesis, to situations of economic interest such as El Niño forecasting. El Niño is a global climatic phenomenon with widespread climate and economic impacts. Prediction of El Niño behaviour would be of great value in many countries, but existing forecast methods are inadequate to provide useful information on timescales of interest. In part this is due to model error, which is the focus of my thesis and known to be important in climate simulation. In this study I first consider a perfect model scenario based on Columbia University’s model for El Niño, and present the results of an experiment tracking the decay of information due to sensitivity in initial conditions. This illustrates the use of the tools I have developed to interpret, value and apply probabilistic forecasts. I then explore the novel use of the information deficit in model development and forecast evaluation. Findings about predictability of the El Niño model are similar to my previous conclusions about the predictability of toy mathematical systems. Increasing the ensemble size, cutting down the noise level or choice of data assimilation technique can have practical implications for the real-world use of the forecast system.
Tayfun Terzi
Title: Proposing a new measure for detecting (latent variable model aberrant) semi-plausible response patterns
Abstract: New challenges concerning bias from measurement error have arisen due to the increasing use of paid participants: semi-plausible response patterns (SpRPs). SpRPs result when participants only superficially process the information of (online) experiments or questionnaires and attempt only to respond in a plausible way. This is due to the fact that participants who are paid are generally motivated by fast cash, and try to efficiently overcome objective plausibility checks while processing other items only superficially, if at all. The consequences are biased estimates, blurred or even masked true effect sizes, and contamination of otherwise valid models. A new measure developed for the identification of SpRPs in a latent variable framework is evaluated and future research outlined.
2013/14 PhD Presentation Event
Tuesday 20 and Wednesday 21 May 2014
Rafal Baranowski
Title: Ranking-based subset selection for high-dimensional data
Abstract: In this presentation, we consider the high-dimensional variable selection problem, where the number of predictors is much larger than the number of observations. Our goal is to identify those predictors which truly affect the response variable. To achieve this, we propose Ranking-Based Subset Selection (RBSS), which combines subsampling with any variable selection algorithm that can rank the "importance" of the explanatory variables. Unlike existing competitors such as Stability Selection (Meinshausen and Bühlmann, 2010), RBSS can identify subsets of relevant predictors selected by the original procedure with relatively low yet significant probability. We provide a real data example which demonstrates that this issue arises in practice, and show that RBSS offers very good performance in such settings. Moreover, we report results of an extensive simulation study and some of the theoretical results derived, which show that RBSS is a valid and powerful statistical procedure.
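The resampling idea behind such procedures can be sketched as follows; the lasso is used as the ranking procedure and per-variable selection frequencies are recorded, which only illustrates the subsampling mechanism and is not the RBSS algorithm itself, which aggregates over selected subsets.

```python
import numpy as np
from sklearn.linear_model import Lasso

def selection_frequencies(X, y, n_sub=100, frac=0.5, alpha=0.1, seed=7):
    # Refit a selector (here the lasso) on random halves of the data and
    # record how often each predictor enters the selected set.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        freq += Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_ != 0
    return freq / n_sub

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 50))
y = X[:, 0] - 2 * X[:, 1] + rng.standard_normal(200)
print(selection_frequencies(X, y)[:5])    # first two should be near 1
```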
Wenqian Cheng
Title: Text mining and time series analysis on Chinese microblogs
Abstract: This presentation will discuss some text mining and time series analysis results on Chinese micro-blogs (Weibo). First, it will give a brief review of social media and micro-blogs, techniques for micro-blog data acquisition, and some exploratory data analysis. The aim of using text mining is to understand the general public's perspectives on certain keywords (e.g. specific companies). Useful information is typically derived through the discovery of patterns and trends via statistical pattern learning. Text mining methods such as clustering and support vector machines are applied. In addition, to discover the abstract "topics" that occur in a collection of posts, topic modelling was applied in a simulation study. Next, time series analyses of sentiment and of the correlation between post volume and stock price will be presented. Plans and open problems for the next stage will be proposed at the end.
Marco Doretti
Title: Measuring the efficacy of the UK Counterweight Programme via the g-computation algorithm
Abstract: One of the purposes of longitudinal studies is the evaluation of the impact of a sequence of treatments/exposures on an outcome measured at the final stage. When dealing with observational data, particular care is needed in stating the dependencies among the variables in play, in order to avoid a number of drawbacks that could affect the validity of the inference performed. Time-varying confounding is one of the most important of these and arises naturally when the causality framework is adapted to a multi-temporal context, as there may be variables that at each time act as confounders for the treatment/outcome relation but are also influenced by previous treatments, therefore lying on the causal paths under investigation. The g-computation algorithm (Robins 1986, Daniel et al. 2012) is probably the most popular method to overcome this issue. In order to handle informative drop-out, we propose an extension of the Heckman correction to deal with several occasions. The motivating example is a follow-up study implemented within the Counterweight Programme, one of the most relevant protocols enforced to tackle the problem of obesity in the UK in recent decades (Taubman et al. 2009), from which the dataset used for the application has been gathered.
Essential references:
Robins, J. (1986). A new approach to causal inference in mortality studies with a sustained exposure period: application to control of the healthy worker survivor effect. Mathematical Modelling.
Daniel, R. M. et al. (2012). Methods for dealing with time-dependent confounding. Statistics in Medicine.
Taubman, S. L. et al. (2009). Intervening on risk factors for coronary heart disease: an application of the parametric g-formula. International Journal of Epidemiology.
Tomasz Dubiel-Teleszynski
Title: Data augmentation: simulating diffusion bridges using Bayesian filters
Abstract: We propose a new approach to simulating diffusion bridges. We focus on bridges for nonlinear processes; however, our method is applicable to linear diffusion processes as well. The novelty of our data augmentation technique lies in the proposal, which is based on a Bayesian filter, in particular the Kalman filter or the unscented Kalman filter, applied to the Euler approximation of a given diffusion process. We thus follow multivariate normal regression theory, applying the unscented transformation whenever the diffusion process is nonlinear. The bridges we study are for mean-reverting processes, such as the linear Ornstein-Uhlenbeck process, the square root process with nonlinear diffusion coefficient, and the inverse square root process with nonlinear drift and diffusion coefficients. We introduce a correction to the approximation of the drift in the Euler scheme and generalise it to a class of mean-reverting processes with polynomial drift. Setting our method against other techniques found in the literature, in the cases we study we find that the acceptance rates we obtain are comparable for values of the mean-reversion parameter lying in the unit interval; however, unlike the other methods, our method leads to incomparably higher acceptance rates for values of this parameter above unity. We believe this result to be of interest especially when modelling term-structure dynamics or other phenomena with inverse square-root processes. Our next goal is to extend these results to a multidimensional setting and simulate diffusion processes conditional on their integrals, followed by applications in stochastic volatility models.
Ali Habibnia
Title: Financial forecasting with many predictors with neural network factor models
Abstract: Modelling and forecasting financial returns has been an essential question in recent academic studies, as well as in financial markets, for understanding market dynamics. Financial returns present special features which make them hard to forecast. This study proposes a non-linear forecasting technique based on an improved factor model with two neural network extensions. The first extension proposes an auto-associative neural network principal component analysis as an alternative for factor estimation, which allows the factors to have a non-linear relationship to the input variables. After finding the common factors, the next step proposes a non-linear factor-augmented forecasting equation based on a single-hidden-layer feed-forward neural network model. A statistical approach is taken throughout to demonstrate that the modelling procedure is not a black box. The proposed neural network factor model can capture both the non-linearity and the non-Gaussianity of a high-dimensional dataset, and can therefore forecast complex behaviour in financial data more accurately.
Charlie Hu
Title: Nonparametric eigenvalue-regularized precision or covariance matrix estimator
Abstract: Recently there have been numerous works on the estimation of large covariance or precision matrices. The high-dimensional nature of the data means that the sample covariance matrix can be ill-conditioned. Without assuming a particular structure, much effort has been devoted to regularizing the eigenvalues of the sample covariance matrix. Lam (2014) proposes to regularize these eigenvalues through subsampling of the data. The method enjoys asymptotically optimal nonlinear shrinkage of eigenvalues with respect to the Frobenius error norm; coincidentally, this nonlinear shrinkage is asymptotically the same as that introduced in Ledoit and Wolf (2012). One advantage of our estimator is its computational speed when the dimension p is not extremely large. Our estimator also allows p to be larger than the sample size n, and is always positive semi-definite.
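The data-splitting idea can be illustrated in a few lines: eigenvectors are taken from one half of the sample, eigenvalues from projecting the other half's sample covariance onto them, and the result is averaged over random splits. The split fraction and number of splits below are illustrative, and the sketch is in the spirit of Lam (2014) rather than a faithful reproduction of it.

```python
import numpy as np

def split_regularized_cov(X, frac=0.5, n_splits=50, seed=9):
    # P1: eigenvectors of the first half's sample covariance S1;
    # regularised eigenvalues: diag(P1' S2 P1) from the second half.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    n1 = int(frac * n)
    est = np.zeros((p, p))
    for _ in range(n_splits):
        idx = rng.permutation(n)
        S1 = np.cov(X[idx[:n1]], rowvar=False)
        S2 = np.cov(X[idx[n1:]], rowvar=False)
        _, P1 = np.linalg.eigh(S1)
        d = np.diag(P1.T @ S2 @ P1)
        est += (P1 * d) @ P1.T                 # P1 diag(d) P1'
    return est / n_splits

rng = np.random.default_rng(10)
X = rng.standard_normal((100, 30))
print(np.linalg.eigvalsh(split_regularized_cov(X)).min())  # > 0: PSD
```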
Na Huang
Title: NOVELIST estimator for large covariance matrix
Abstract: We propose a NOVEL Integration of the Sample and Thresholded covariance estimators (NOVELIST) to estimate large covariance matrices. It shrinks the sample covariance towards a general thresholding target, especially soft- or hard-thresholding estimators. The benefits of NOVELIST include simplicity, ease of implementation, and the fact that its application avoids eigenanalysis, which is unfamiliar to many practitioners. We obtain an explicit convergence rate in the operator norm over a large class of covariance matrices when the dimension p and the sample size n satisfy log p/n → 0. Further, we show that the rate reflects a trade-off between sparsity, shrinkage intensity, thresholding level, dimension and sample size under different covariance structures. Simulation results will be presented and comparisons with other competing methods will also be given.
Cheng Li
Title: Limit convergence of BSDEs driven by a marked point process
Abstract: We study backward stochastic differential equations (BSDEs) driven by a random measure or, equivalently, by a marked point process. Under suitable assumptions, the BSDE admits a unique supersolution with a unique decomposition. Following the method of Peng (1999) with proper modifications, we prove a limit theorem for BSDEs driven by a marked point process: if a sequence of supersolutions of BSDEs converges increasingly to a supersolution Y, then there is also convergence to Y's unique decomposition. Moreover, we apply this limit convergence theorem to show the existence of the smallest supersolution of a BSDE with a constraint. Finally, we apply our results to the insider trading problem.
Shiju Liu
Title: Excursions of Lévy processes
Abstract: We study the classical collective risk model, the Cramér-Lundberg risk model, driven by a compound Poisson process, which concerns the probability of ultimate ruin of an insurance company in both finite and infinite time horizons. Particular attention is given to Gerber-Shiu expected discounted penalty functions, which provide a method of calculating the probability of ruin. We derive the Laplace transforms of claim sizes following an inverse Gaussian distribution and a mixture of two exponential distributions, and we obtain asymptotic formulas for the probability of ruin in these two scenarios. The infinite divisibility of Lévy processes and the Lévy-Khintchine representation theorem are introduced as preliminaries to the study of excursions of Lévy processes, as well as applications in financial mathematics.
Anna-Louise Schröder
Title: Adaptive trend estimation in financial return data - recent findings and new challenges
Abstract: Financial returns can be modelled as centred around piecewise-constant trend functions which change at certain points in time. We can capture this in a model using a hierarchically-ordered oscillatory basis of simple piecewise-constant functions which is uniquely defined through Binary Segmentation for change-point detection. The resulting interpretable decomposition of nonstationarity into short- and long-term components yields an adaptive moving-average estimator of the current trend, which beats comparable forecast estimators in applications on daily return data. In my presentation I discuss some challenges and interesting questions as well as potential paths to improve the existing framework. I also show some promising results for a multivariate extension of this model.
Ewelina Sienkiewicz
Title: How long in the future can you trust the forecast?
Abstract: In this research I quantify the predictability of a chaotic system, estimate how far into the future it is predictable, and identify the two main limitations. Sensitivity to initial conditions complicates the forecasting of chaotic dynamical systems, even when the model is perfect. Structural model inadequacy is a distinct source of forecast failure, and such failures are sometimes mistakenly interpreted as being due to chaos. These methods are demonstrated using a toy mathematical system (the Hénon map) as an illustration. Model inadequacy is shown to be important in real-world forecasting practice using the example of climate models. The research findings, based on the North American Regional Climate Change Assessment Program (NARCCAP) database, show significant divergence between regional and global climate model estimates of surface radiation, and consider the implications for the reliability of such models.
Tayfun Terzi
Title: Methods for the identification of semi-plausible response patterns (SpRPs)
Abstract: New challenges concerning bias from measurement error have arisen due to the increasing use of paid participants: semi-plausible response patterns (SpRPs). SpRPs result when participants only superficially process the information of (online) experiments or questionnaires and attempt only to respond in a plausible way. This is due to the fact that participants who are paid are generally motivated by fast cash, and try to efficiently overcome objective plausibility checks and process other items only superficially, if at all. Thus, those participants produce not only useless but detrimental data, because they attempt to conceal their malpractice from the researcher. The potential consequences are biased estimation and misleading statistical inference. The inferential objective is to derive identification statistics within latent models that detect these behavioural patterns (detection of error), by drawing knowledge from related fields of research (e.g., outlier analysis, person-fit indices, fraud detection).
Youyou Zhang
Title: The joint distribution of excursion and hitting times of the Brownian motion with application to Parisian option pricing
Abstract: We study the joint law of the excursion time and hitting time of a drifted Brownian motion by using a three-state semi-Markov model obtained through perturbation. We obtain a martingale to which we can apply the optional sampling theorem and derive the double Laplace transform. This general result is applied to problems in option pricing. We introduce a new option related to Parisian options, triggered when the age of an excursion exceeds a certain time and/or a barrier is hit. We obtain an explicit expression for the Laplace transform of its fair price.
2012/13 PhD Presentation Event
Tuesday 21 and Wednesday 22 May 2013
Rafal Baranowski
Title: Subset stability selection
Abstract: In this presentation, we provide a brief introduction to the concepts behind recently developed variable screening procedures in the linear regression model. These techniques aim to remove a great number of unimportant variables from the analysed data set while preserving all relevant ones. In practice, however, it may happen that the obtained set does not include any important variables at all! That is why there is a need for a tool which can assess the reliability and stability of a set of variables and incorporate these assessments in the further analysis. We introduce a new method, termed "subset stability selection", which combines any variable screening procedure with resampling techniques in order to find significant variables only. Our method is fully nonparametric, easily applicable in a much wider context than linear regression alone, and exhibits very promising finite sample performance in the simulation study provided.
Zhanyu Chen
Title: Hedging of barrier options via a general self-duality
Abstract: Classical put-call symmetry relates the prices of puts and calls under a suitable dual market transform. One well-known application is the semi-static hedging of path-dependent barrier options with European options. Nevertheless, one has to relax restrictions on the modelled price processes so as to fit empirical stock price data. In this work, we prove a general self-duality theorem and use it to develop hedging schemes for barrier options in stochastic volatility models with correlation.
Wenqian Cheng
Title: Data analysis and text mining on micro-blogs
Abstract: This presentation will discuss some data analysis and text mining on micro-blogs, especially the Chinese micro-blog Weibo. A brief introduction to social media and micro-blogs, and a comparison between Twitter and Weibo, will be presented. It will cover several techniques for micro-blog data acquisition, including downloading via an Application Programming Interface (API), web crawling tools and web parsing applications. As initial data analysis, some work on posting pattern recognition and on the correlation with share prices has been conducted. Further text mining on Weibo, including Chinese word segmentation, word frequency counting and sentiment analysis, will be introduced. Plans and open problems for the next stage will be proposed at the end.
Baojun Dou
Title: Sparse factor model for multivariate time series
Abstract: In this work, we model multiple time series via common factors. Under stationary settings, we concentrate on the case where the factor loading matrix is sparse. We propose a method to estimate the factor loading matrix and to correctly pick out its zero entries. Two kinds of asymptotic results are investigated when the dimension p of the time series is fixed: (1) parameter consistency, i.e. the convergence rate of the new sparse estimator, and (2) sign consistency. We have obtained a necessary condition for sign consistency of the estimator. Future work will allow p to go to infinity.
Ali Habibnia
Title: Forecasting with many predictors with a neural-based dynamic factor model
Abstract: The contribution of this study is to propose a non-linear forecasting technique based on an improved dynamic factor model with two neural network extensions. The first extension proposes a bottleneck-type neural network principal component analysis as an alternative for factor estimation, which allows the factors to have a nonlinear relationship to the input variables. After finding the common factors, the next step will propose a non-linear factor augmented forecasting equation based on a multilayer feed forward neural network. Neural networks as a function approximation method can capture both non-linearity and non-normality of the data. Therefore, this model can be more accurate to forecast non-linear behaviour in macroeconomic and financial high-dimensional time series data.
Mai Hafez (poster presentation)
Title: Multivariate longitudinal data subject to dropout and item non-response - a latent variable approach
Abstract: Longitudinal data are collected for studying changes across time. Studying many variables simultaneously across time (e.g. items from a questionnaire) is common when the interest is in measuring unobserved constructs such as democracy, happiness, fear of crime, social status, etc. The observed variables are used as indicators for the unobserved constructs, or "latent variables", of interest. Dropout is a common problem in longitudinal studies, where subjects exit the study prematurely. Ignoring the dropout mechanism can lead to biased estimates, especially when the dropout is non-ignorable. Another possible type of missingness is item non-response, where an individual chooses not to respond to a specific question. Our proposed approach uses latent variable models to capture the evolution of the latent phenomenon over time while accounting for dropout (possibly non-random), together with item non-response.
Qilin 'Charlie' Hu
Title: Factor modelling for high dimensional time series
Abstract: Lam et al. (2011) propose an autocorrelation-based estimation method for high-dimensional time series using a factor model. When factors have different strengths, a two-step procedure which estimates strong and weak factors separately performs better than doing the estimation in one go. It is well known that the PCA method (Bai and Ng, 2002) is only valid for high-dimensional data (consistency comes from the dimension going to infinity). In contrast, we derive convergence results which show that the autocorrelation-based method can take advantage of low-dimensional estimation and estimate weaker factors better, while itself being a high-dimensional data analysis procedure. This result can be applied to some macroeconomic data.
Alex Jarman (poster presentation)
Title: Forecasting the probability of tropical cyclone formation - the reliability of NHC forecasts from the 2012 hurricane season
Abstract: see poster
Cheng Li
Title: Asymptotic equilibrium in the Glosten-Milgrom model
Abstract: Kyle (1985) studied a market with asymmetric information and obtained the equilibrium in the market. Back (1992) generalised it to continuous time. In Back's result, the fundamental value of the risky asset can take any continuous distribution. This general result stands in contrast to studies of the Glosten-Milgrom equilibrium, where the fundamental value of the risky asset is assumed to have a Bernoulli distribution, as in Back and Baruch (2004). We have taken on this project to study the existence of the Glosten-Milgrom equilibrium when the fundamental value of the risky asset has a general discrete distribution. We also introduce a notion of asymptotic equilibrium for the Glosten-Milgrom model, which allows a sequence of Glosten-Milgrom equilibria to approximate the Kyle-Back equilibrium when the value of the risky asset has a general discrete distribution.
Anna Louise Schroeder
Title: How to quantify the predictability of a chaotic system
Abstract: I present a new time series model for nonstationary data that is able to cope with a very low signal-to-noise ratio and time-varying volatility, both of which are typical features of financial time series. At the core of our model is a set of data-adaptive basis functions and coefficients which specify the location and size of jumps in the mean of a time series. The set of these change points can be determined with a uniquely identifiable hierarchical structure, allowing for unambiguous reconstruction. By thresholding the estimated wavelet coefficients adequately, our model provides practitioners with a flexible forecasting method: only those change points of higher importance (in terms of jump size) are taken into account in forecasting returns.
Ewelina Sienkiewicz (poster presentation)
Title: How to quantify the predictability of a chaotic system
Abstract: Models are tools that describe reality in the form of mathematical equations. For example, General Circulation Models (GCMs) represent the actual climate system and are used to investigate major climate processes and help us better understand certain dependencies among climate variables. Global forecasts help foresee severe weather anywhere on the planet and save many lives, although meteorology is unreliable in the long run. A model is only an approximate representation of nature, which is reflected by model error. In addition, small uncertainties in the initial conditions usually produce errors in the final forecasts. We can handle initial condition uncertainty, but not model error. This study examines how to quantify the predictability of complex models with an eye towards experimental design.
Majeed Simaan
Title: Estimation risk in asset allocation theory
Abstract: Assuming that asset returns are normally distributed with a known covariance matrix, the paper derives a joint sampling distribution for the estimated efficient portfolio weights as well as for the portfolio's mean return and risk. In addition, it shows that estimation error increases with the investor's risk tolerance and the number of assets within the portfolio, while it decreases with the sample size. Large institutional investors allocate their funds over a number of asset classes; in practice, these allocation decisions are made in a hierarchical manner and involve adding constraints to the process. From a pure ex-ante perspective, such procedures are likely to result in sub-optimal decision making. Nevertheless, from an ex-post view, as my results confirm, the estimation risk incurred increases with the number of assets. Therefore, the loss of ex-ante welfare in the hierarchical approach can be outweighed by the lower estimation risk achieved by optimizing over a smaller number of assets.
Edward Wheatcroft (poster presentation)
Title: Will it rain tomorrow? Improving probabilistic forecasts
Abstract: Chaos is the phenomenon of small differences in the initial conditions of a process causing large differences later in time, often colloquially referred to as the “butterfly effect”. Perhaps the most well-known example though is in meteorology where small differences in the current conditions can have large effects later on. The effect is famously summed up by the notion that “when a butterfly flutters its wings in one part of the world, it can eventually cause a hurricane in another.” Of course this is only a fictional example but let’s suppose that we know this is true but we don’t know whether the butterfly has flapped its wings or not. Do we accept that we can’t predict what’s going to happen? Or can we gain some insight? Now suppose that we know from experience that the probability of the butterfly flapping its wings is 0.05, i.e. 5 percent. With this information we might conclude that the probability of a hurricane occurring is 0.05 also. This is of course an oversimplified and unrealistic example, but it illustrates the concept of ensemble forecasting in that a degree of belief about uncertainty of the initial conditions can give us a better idea of the probability of a future event.
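The toy calculation in the abstract amounts to a tiny ensemble experiment; below is a hedged sketch of the same idea, using the chaotic logistic map as a stand-in dynamical system with illustrative parameters.

```python
import numpy as np

def ensemble_event_prob(x0=0.2, sigma=1e-3, n=1000, steps=30, thresh=0.9):
    # Propagate n perturbed initial conditions through the chaotic
    # logistic map x -> 4x(1-x); the fraction of ensemble members ending
    # above the threshold estimates the probability of the "event".
    rng = np.random.default_rng(11)
    x = np.clip(x0 + sigma * rng.standard_normal(n), 0.0, 1.0)
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return (x > thresh).mean()

print("ensemble probability of the event:", ensemble_event_prob())
```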
Yang Yan (poster presentation)
Title: Efficient estimation of risk measures in a semiparametric GARCH model
Abstract: This paper proposes efficient estimators of risk measures in a semiparametric GARCH model defined through moment constraints. Moment constraints are often used to identify and estimate the mean and variance parameters, but are discarded when estimating error quantiles. In order to prevent this efficiency loss in quantile estimation, we propose a quantile estimator based on inverting an empirical likelihood weighted distribution estimator. It is found that the new quantile estimator is uniformly more efficient than the simple empirical quantile and a quantile estimator based on normalized residuals. At the same time, the efficiency gain in error quantile estimation hinges on the efficiency of estimators of the variance parameters.
You You Zhang
Title: Last passage time processes
Abstract: Last passage times play an important role in financial mathematics. Since they look into the future and are not stopping times, the standard theorems of martingale theory cannot be applied, and they are therefore much harder to handle. Using time inversion, we relate last passage times of drifted Brownian motion to first hitting times, and using this argument we derive the distribution of the increments. We extend this to general transient diffusions. Work has been done by Profeta et al. making use of Tanaka's formula. We introduce the concept of conditioned martingales and connect it to Girsanov's theorem. Our main focus lies in relating the Brownian meander to the BES(3) process; this transformation proves to be useful in deriving the last passage time density of the Brownian meander.
Previous PhD Presentation Events
Thursday 10 May 2012
11:40 - 11:50 | Introduction
11:50 - 12:15 | Sarah Higgins | How skilful are seasonal probability forecasts constructed from multiple models?
12:15 - 12:40 | Mai Hafez | A latent variable model for multivariate longitudinal data subject to dropout
12:40 - 13:40 | Lunch
13:40 - 14:05 | Alex Jarman | Misleading estimates of forecast quality: quantifying skill with sequential forecasts
14:05 - 14:30 | Na Huang | Precision matrix estimation via pairwise tilting
14:30 - 14:55 | Yang Yan | Efficient estimation of conditional risk measures in a semiparametric GARCH model
14:55 - 15:25 | Break
15:25 - 15:50 | Karolos Korkas | Adaptive estimation for locally stationary autoregressions
15:50 - 16:15 | Jia Wei Lim | Parisian option pricing: a recursive solution for the density of the Parisian stopping time
Friday 11 May 2012
12:40 - 13:55 | Lunch
13:55 - 14:20 | Baojun Dou | Sparse factor modelling for high dimensional time series
14:20 - 14:45 | Joseph Dureau | A Bayesian approach to estimate time trends in condom use following a targeted HIV prevention programme
14:45 - 15:10 | Yehuda Dayan | tbc
15:10 - 15:30 | Poster session in the Leverhulme Library
Thursday 23 June 2011
10:45 - 11:00 | Introduction
11:00 - 11:25 | Roy Rosemarin | Projection pursuit conditional density estimation
11:25 - 11:50 | Edward Wheatcroft | Forecasting the meridional overturning circulation
11:50 - 12:15 | Yehuda Dayan | Title and abstract tbc
12:15 - 12:40 | Karolos Korkas | Adaptive estimation for piecewise stationary autoregressions
12:40 - 13:40 | Lunch
13:40 - 14:05 | Alex Jarman | Small-number statistics, common sense and profit: challenges and non-challenges for hurricane forecasting
14:05 - 14:30 | Felix Ren | The methodology flowgraph model
14:30 - 14:55 | Joseph Dureau | Capturing the time-varying drivers of an epidemic
14:55 - 15:25 | Break
15:25 - 15:50 | Daniel Bruynooghe | Differential cumulants, hierarchical models and monomial ideals
15:50 - 16:15 | Jia Wei Lim | Distribution of the Parisian stopping time
16:15 - 16:40 | Yang Yan | Co value-at-risk measure
Friday 24 June 2011
13:25 - 13:50 | Zhanyu Chen | Put-call symmetry in stochastic volatility models
13:50 - 14:15 | Ilaria Vannini | Multivariate regression chain graph models for clustered categorical data (Ilaria is a visiting research student from Università degli Studi di Firenze)
14:15 - 14:40 | Dan Chen | Stochastic volatility of volatility
14:40 - 15:10 | Break
15:10 - 15:35 | Mai Hafez | Modelling dropout in longitudinal studies using latent variable models
15:35 - 16:00 | Hongbiao Zhao | Risk process with dynamic contagion claims
16:00 - 16:25 | Ilya Sheynzon | Multiple equilibria and market crashes
16:25 - 17:45 | Poster session in the Leverhulme Library
Monday 14 June 2010
10:45 - 11:00 | Introduction
11:00 - 11:25 | Haeran Cho | High-dimensional variable selection via tilting
11:25 - 11:50 | Xiaonan Che | Stochastic boundary crossing probability for Brownian motions
11:50 - 12:15 | Sujin Park | Deformation estimation for high-frequency data
12:15 - 12:40 | Sarah Higgins | Seasonal weather forecasting using multiple models
12:40 - 13:40 | Lunch
13:40 - 14:05 | Alex Jarman | Quantitative information on climate change for the insurance industry
14:05 - 14:30 | Felix Ren | Distributions and estimation in stochastic flowgraph models
14:30 - 14:55 | Filippo Riccardi | A model for the limit order book
14:55 - 15:25 | Break
15:25 - 15:50 | Jia Wei Lim | Some distributions related to the number of Brownian excursions above and below the origin
15:50 - 16:15 | Dan Chen | A study of commodity futures prices
Tuesday 15 June 2010
13:00 - 13:25 | Flavia Giammarino | Pricing with uncertainty averse preferences
13:25 - 13:50 | Malvina Marchese | Asymptotic properties of linear panel estimators in large panels with stationary and nonstationary regressors
13:50 - 14:15 | Deniz Akinc | Pairwise likelihood inference for factor analysis type models
14:15 - 14:40 | Roy Rosemarin | Dimension reduction in copula models for estimation of conditional densities
14:40 - 15:10 | Break
15:10 - 15:35 | Ilya Sheynzon | Continuous time modelling of market liquidity, hedging and crashes
15:35 - 16:45 | Poster session in the Leverhulme Library
Wednesday 18 June 2009
10:45 - 11:00 | Introduction
11:00 - 11:30 | Felix Ren | An algebraic approach to moment methods for stochastic flowgraph models
11:30 - 12:00 | Takeshi Yamada | Approximation of swaption prices with moment expansions
12:00 - 12:30 | Flavia Giammarino | A semiparametric model for the systematic factors of portfolio credit risk premia
12:30 - 13:30 | Lunch
13:30 - 14:00 | Deniz Akinc | Pairwise likelihood inference for factor analysis type models
14:00 - 14:30 | Neil Bathia | Methodology and convergence rates for factor modelling of multiple time series
14:30 - 15:00 | Noha Youssef | A 2-stage design procedure for computer experiments: a comparative study between space-filling design and model-based optimal design
15:00 - 15:30 | Break
15:30 - 16:00 | Young Lee | A brief review of the minimal entropy martingale measure
16:00 - 16:30 | Roy Rosemarin | Dimension reduction in estimating conditional densities
Thursday 19 June 2009
13:00 - 13:30 | Xiaonan Che | Markov-type models of the Real Time Gross Settlement payment system in the UK
13:30 - 14:00 | Malvina Marchese | Asymptotic distribution of the pooled OLS estimator in large panels with mixed stationary and non-stationary regressors
14:00 - 14:30 | Sujin Park | Nonparametric prewhitened kernel estimator of ex-post variation
14:30 - 15:00 | Break
15:00 - 15:30 | Daniel Hawellek | How hot are climate models?
15:30 - 16:00 | Hongbiao Zhao | Dynamic contagion process and its application in credit risk
16:00 - 16:30 | James Abdey | Ménage à Trois Inference Style: The Unholy Trinity
16:30 - 18:00 | Poster session in the Leverhulme Library
Thursday 19 June 2008
11:00 - 11:30 | Sarah Higgins | Blending ensembles from multiple models
11:30 - 12:00 | Yehuda Dayan | Finite population inference from online access panels: a model-assisted framework
12:00 - 12:30 | Daniel Hawellek | The shadowing concept
12:30 - 13:30 | Lunch will be available in B212
13:30 - 14:00 | Xiaonan Che | Markov-type model for the Real Time Gross Settlement payment system
14:00 - 14:30 | Hai Liang Du | The roles of ensembles in climate modelling
14:30 - 15:00 | Break
15:00 - 15:30 | Edward Tredger | Can global mean temperatures inform science-based policy?
15:30 - 16:00 | Takeshi Yamada | Pricing derivatives contracts in carbon emissions markets and approximation methods for interest rate derivatives
Friday 20 June 2008
13:00 - 13:30 | Young Lee | The minimal entropy martingale measure for multivariate point processes
13:30 - 14:00 | Neil Bathia | Dimension reduction for functional time series
14:00 - 14:20 | Break
14:20 - 14:50 | Sandrine Tobelem | Portfolio allocation under ambiguity
14:50 - 15:20 | Flavia Giammarino | Econometric modelling of credit risk
15:20 - 17:00 | Poster session in the Leverhulme Library
Monday 4 June 2007
14:10 - 14:40 | Pauline Sculli | Contagion in affine default processes
14:40 - 15:10 | Hai Liang Du | Nowcasting with indistinguishable states
15:10 - 15:30 | Break
15:30 - 16:00 | Limin Wang | MSE in Gaussian processes
16:00 - 16:30 | Noha Youssef | Branch and bound algorithm for maximum entropy sampling
Tuesday 5 June 2007
14:10 - 14:40 | Oksana Savina | Pareto-optimality: beyond the one-period model
14:40 - 15:10 | Shanle Wu | Parisian option pricing with jump processes
15:10 - 15:40 | Edward Tredger | Current issues in the evaluation of climate models
15:40 - 16:00 | Break
16:00 - 16:30 | Young Lee | Pricing and hedging call options
16:30 - 17:00 | Sandrine Tobelem | Optimal portfolios under model ambiguity
Tuesday 13 June 2006
14:10 - 14:40 | James Abdey | Is significance significant? Assessing differential performance of equity fundamental determinants
14:40 - 15:10 | Pauline Sculli | Counterparty default risk in affine processes with jump decay
15:10 - 15:40 | Sarah Higgins | Seasonal forecasting using multiple models
15:40 - 16:00 | Break
16:00 - 16:30 | Hai Liang Du | Nowcasting with shadows
16:30 - 17:00 | Adrian Gfeller | Sensitivity analysis for exotic options in Lévy process driven models
17:00 - 17:30 | Young Lee | The optimal Föllmer-Sondermann hedging strategy for exponential Lévy models
Wednesday 14 June 2006
14:10 - 14:40 | Billy Wu | Time series graphical models
14:40 - 15:10 | Edward Tredger | An introduction to climate modelling
15:10 - 15:30 | Break
15:30 - 16:00 | Sandrine Tobelem | Do factor models perform on European data?
16:00 - 16:30 | Limin Wang | Introduction to K-L expansion and its application
Friday 10 June 2005
14:00 - 14:30 | Billy Wu | An introductory presentation on graphical models
14:30 - 15:00 | Hailiang Du | New approaches to estimation in nonlinear models
15:00 - 15:30 | Break
15:30 - 16:00 | Miltiadis Mavrakakis | Signal extraction for long multivariate temperature series
16:00 - 16:30 | Dario Ciraki | A unifying statistical framework for dynamic simultaneous equation models with latent variables