
Joint Risk & Stochastics and Financial Mathematics Seminar

The following seminars are jointly organised by the Risk and Stochastics Group and the Department of Mathematics. The seminar normally takes place fortnightly on Thursdays from 12:00 to 13:00, unless stated otherwise below. The series aims to promote the communication and discussion of research in the mathematics of insurance and finance and their interface, to encourage interaction between practice and theory in these areas, and to support postgraduate students on related programmes. All are welcome to attend. Please contact Emily Jackson, the Research Manager, at E.Jackson2@lse.ac.uk for further information about any of these seminars.

Upcoming speakers:

Thursday 7 December - Andrew Allan (Durham University)

Efficient Itô rough paths: From stochastic portfolio theory to the Euler scheme for SDEs

It is well known that rough path theory provides a pathwise counterpart to stochastic calculus, yielding a robust solution theory for SDEs together with strong pathwise continuity results. On the other hand, the financial interpretation of a rough integral is typically unclear. In this talk we consider a class of Itô rough paths for which the theories of Itô, Föllmer and rough integration all marry up efficiently, retaining the full power of rough path theory while keeping the financial interpretation of the Itô integral intact. Based on this framework, we then look at applications to model-free stochastic portfolio theory and to establishing pathwise convergence of the Euler scheme for rough and stochastic differential equations. This talk is based on joint work with Christa Cuchiero, Anna Kwossek, Chong Liu and David Prömel.
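For context, the Euler scheme mentioned in the abstract is, in its classical stochastic form, the Euler–Maruyama discretisation of an SDE. A minimal sketch follows; the drift and diffusion coefficients in the example are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T, n, rng):
    """Simulate dX_t = mu(X_t) dt + sigma(X_t) dW_t on [0, T] with n steps."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(0.0, np.sqrt(dt), size=n)  # Brownian increments
    for k in range(n):
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dw[k]
    return x

# Example: geometric Brownian motion, dX = 0.05 X dt + 0.2 X dW (illustrative)
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x: 0.05 * x, lambda x: 0.2 * x, 1.0, 1.0, 1000, rng)
```

The pathwise convergence results of the talk concern the behaviour of such discretised paths as the step size shrinks, for a fixed realisation of the driving noise.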


 

Previous seminars in the series: 

Thursday 23 November - Xiling Zhang (University of Leicester)

Limit points of external DLA models in the plane 

We will explore particle systems closely related to recent work on systemic risk, wherein a default boundary grows as firms reach it. More generally, Diffusion-Limited Aggregations (DLA) are random growth models on the lattice, where a set grows at its boundary sites when triggered by the absorption of a sequence of lattice-valued, diffusion-limited processes (particles); a typical example is the self-avoiding random walk. We are interested in the limiting behaviour of such sets as the grid size decreases, with the empirical law of the underlying particles converging to the Wiener measure. The main discovery of this work is that almost surely the planar Wiener process makes a loop around itself infinitely many times, and based on this geometric observation we show that the limit points of a certain class of external DLA models coincide with a Wiener process stopped upon hitting the limiting set. Furthermore, I will talk about the connection between this problem and the super-cooled Stefan problem. In particular, we will see that the latter in two dimensions cannot be approximated by external DLA models. This is joint work with Sergey Nadtochiy and Mykhaylo Shkolnikov.


Thursday 9 November - Luca Galimberti (King's College London)

Pricing Options on Flow Forwards by Neural Networks in Hilbert Spaces

In commodity markets, options are typically written on forward and futures contracts. In some markets, like electricity and gas, as well as freight and weather markets on temperature and wind, the forwards deliver the underlying commodity or service over a contracted delivery period, and not at a specified delivery time in the future. Such forwards are sometimes referred to as flow forwards. There is a large literature on neural networks and financial derivatives, mostly focusing on approximating option prices, which, as is well known, can be recovered as solutions of PDEs: typically, these equations are high-dimensional, and the main argument for introducing deep neural networks is to overcome the curse of dimensionality. In this talk, we bring this perspective to the "ultimate" high-dimensional case, considering deep neural networks approximating option prices on infinite-dimensional underlyings. Indeed, option prices on flow forwards are in general functions of functions, as the underlying will be a curve (i.e., the term structure) rather than a vector of points (i.e., prices of the underlying assets). We propose to approximate this non-linear option price functional by a neural network in Hilbert space, appealing to the general neural nets in Fréchet spaces and their universal approximation properties studied in an earlier work. We test our methodology in some numerical case studies, and find that it works very well with high-dimensional noise, in line with the general perception that numerical methods based on neural networks can often overcome the curse of dimensionality. This is joint work with Fred Espen Benth (UiO) and Nils Detering (HHU).


Thursday 26 October - Aleksey Kolokolov (University of Manchester)

Cryptocrashes

This paper proposes a new nonparametric test for detecting short-lived locally explosive trends (drift bursts) in pure-jump processes. The new test is designed specifically to detect intraday flash crashes and gradual jumps in cryptocurrency prices recorded at high frequency. Empirical analysis shows that drift bursts in the bitcoin price occur, on average, every other day. Their economic importance is highlighted by showing that hedge funds holding cryptocurrency in their portfolios are exposed to a risk factor associated with the intensity of bitcoin crashes. On average, hedge funds do not profit from intraday bitcoin crashes and do not hedge against the associated risk.


Thursday 12 October 2023 - Rishabh Gvalani (Max Planck Institute)

An SPDE for Stochastic Gradient Descent

We study an SPDE with conservative noise structure which arises naturally as a continuum description of the continuous-time analogue of the stochastic gradient descent (SGD) algorithm for a 2-layer neural network. We study well-posedness for this equation under a range of assumptions on the coefficients and initial data. We also provide sharp quantitative versions of a law of large numbers and central limit theorem for this SPDE in the joint limit of overparameterisation and vanishing learning rate. Combining our results with previous results in the literature, we show that the SPDE provides a higher-order approximation of the original discrete-time SGD algorithm than just the law of large numbers. This is joint work with Benjamin Gess (Bielefeld/MPI-MiS) and Vitalii Konarovskyi (Hamburg).
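For reference, the discrete-time SGD algorithm whose continuum description the SPDE provides can be sketched for a toy 2-layer network with mean-field (1/m) scaling. The target function, learning rate, width and step count below are illustrative assumptions, not the speakers' setup:

```python
import numpy as np

def sgd_two_layer(xs, ys, width=50, lr=0.1, steps=2000, seed=0):
    """Vanilla SGD for a 2-layer net f(x) = (1/m) * sum_i a_i * tanh(w_i * x),
    drawing one training sample per step (the discrete-time algorithm)."""
    rng = np.random.default_rng(seed)
    a = rng.normal(size=width)  # outer weights
    w = rng.normal(size=width)  # inner weights
    for _ in range(steps):
        i = rng.integers(len(xs))
        h = np.tanh(w * xs[i])
        err = a @ h / width - ys[i]                      # error on the sample
        grad_a = err * h / width                         # dL/da
        grad_w = err * a * (1.0 - h ** 2) * xs[i] / width  # dL/dw
        a -= lr * grad_a
        w -= lr * grad_w
    return a, w

# Fit a toy 1-d regression target (illustrative only)
xs = np.linspace(-1.0, 1.0, 20)
ys = np.sin(2.0 * xs)
a, w = sgd_two_layer(xs, ys)
pred = np.array([a @ np.tanh(w * x) / 50 for x in xs])
```

The limits studied in the talk correspond to taking the width large (overparameterisation) and the learning rate small simultaneously.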


Previous Seminars

2022/23, 2021/22, 2019/20, 2018/19, 2017/18, 2016/17, 2015/16

Archive