Seminars on Discrete and Applicable Mathematics in 2012

Below is an overview of past seminars in this series, specifically those held in 2012, listed in reverse chronological order (most recent first).

 

Thursday 13 December - Omer Edhan (Manchester)
Representations of Positive Projections with Applications in Economics

Many problems of interest in economics (e.g., payoffs in economies, cost sharing, "power" in voting and other "political games", taxation, bankruptcy, etc.) can be reduced to the study of certain positive projections on spaces of TU cooperative games. The value, the semi-value, and the quasi-value are examples of solution concepts which are positive projections. The study of such operators usually relies on representing them as expected marginal contributions of agents to "random" coalitions. We present a new representation theory for positive projections on spaces of Lipschitz nonatomic games. We shall discuss several applications of this theory in economics and, if time allows, also some possible generalizations.
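
As a concrete illustration of the "expected marginal contribution" representation (for the classical finite-player value, rather than the nonatomic Lipschitz games of the talk), the Shapley value of player $i$ in a TU game $v$ on a player set $N$ with $|N|=n$ is

$\varphi_i(v) \;=\; \sum_{S\subseteq N\setminus\{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S\cup\{i\})-v(S)\bigr),$

i.e. the expected marginal contribution of $i$ to the coalition of players preceding it in a uniformly random ordering of $N$.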

 

Thursday 29 November – Robert Simon (Mathematics Department, LSE)
A proof of the Vieille result using a kind of discounting

There is an elementary proof of the existence of approximate equilibria for positive recursive stochastic games. It uses taboo probability in connection with a state-dependent discount.

 

Thursday 22 November – Eran Shmaya (Kellogg School of Management)
Compressed Equilibrium in Large Repeated Games with Incomplete Information

Due to their many applications, large Bayesian games have been a subject of growing interest in game theory and related fields. But to a large extent, existing models (1) are restricted to one-shot interaction, (2) assume that player types are independent, and (3) assume that the number of players is known.

The current paper develops a general theory of Bayesian repeated large games that avoids some of these difficulties. For a more robust analysis, it develops a concept of compressed optimization and equilibrium which is applicable to a special class of anonymous games.

This is joint work with E. Kalai.

 

Thursday 15 November – Agelos Georgakopoulos (Warwick)
Random walks on graphs, and the Kirchhoff and Wiener Index

It is well known that the vertices of any graph can be put in a linear preorder so that, for random walk on the graph, vertices appearing earlier in the preorder are “easier to reach but more difficult to get out of”. We exhibit further such preorders corresponding to various functions related to random walk, and study their relationships. These preorders coincide when the graph is a tree, but not necessarily otherwise.

In the case of trees, we prove a simple identity relating hitting times to the Wiener index.
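
For orientation, one standard identity in this spirit (stated here as background; it need not be the identity proved in the talk) follows from the electrical-network formula for commute times: in any graph with m edges the commute time satisfies $C(u,v)=2m\,R_{\mathrm{eff}}(u,v)$, and on a tree T with n vertices the effective resistance equals the graph distance, so

$\sum_{\{u,v\}} C(u,v) \;=\; 2(n-1)\sum_{\{u,v\}} d(u,v) \;=\; 2(n-1)\,W(T),$

where $W(T)$, the Wiener index, is the sum of distances over all unordered pairs of vertices.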

The original motivation for this work was the study of cover time, and a connection will be shown.

This is joint work with S. Wagner (Stellenbosch).

 

Thursday 8 November - Michael Schröder (Mathematics Department, LSE)
A working mathematician's view on Parisian-barrier options, and a bit beyond

We give a survey of the notion of Parisian barrier options and of our own research on these, aimed at a non-specialist audience.

Mathematically, we look at excursions of Brownian motion, and demonstrate the potential of this notion by highlighting the concrete effects of its incorporation into barrier options, following work of Yor. Here, we stress the consequent changes that this brings to hedging, in qualitative as well as quantitative ways.

The second part of the talk focuses on the notion of the `Parisian-style excursion law' found to be at the heart of these applications, and summarises our main findings concerning its explicit structure; here we once more stress the Brownian case.

 

Thursday 1 November – László Végh  (Management Science Department, LSE)
Approximating minimum cost k-node-connected spanning subgraphs

Given an undirected graph with costs on the edges and a positive integer k, consider the problem of finding a minimum cost spanning subgraph that is k-node-connected. We present a 6-approximation algorithm for this NP-complete problem, assuming that the number of nodes is at least k^3(k-1)+k. This gives the first constant factor approximation for the problem.

For edge-connectivity variants, constant factor approximation can be achieved using the iterative rounding of the linear programming relaxation, a powerful technique developed by Jain. Whereas it has been long known that iterative rounding cannot yield a constant factor approximation for node-connectivity on arbitrary input graphs, we show that it does give a 2-approximation for a restricted class of instances. We apply a combinatorial preprocessing, based on the Frank-Tardos algorithm for k-outconnectivity, to transform any input into such an instance. This is joint work with Joseph Cheriyan.

 

Thursday 25 October –  David Ellis (Queen Mary, University of London)
Dictatorships, stability and quasi-stability in the symmetric group

Kalai proved in 2002 that if a neutral social choice function is almost surely rational for a random voter profile, then the social choice function is close to a dictatorship. The main tool was a Fourier stability result of Friedgut, Kalai and Naor, stating that a Boolean function on {0,1}^n whose Fourier transform is highly concentrated on low frequencies must be close in structure to a dictatorship (a function determined by just one coordinate). Fourier stability results have been obtained for other Abelian groups, and have been useful for proving several combinatorial results. After sketching this background, I will discuss some Fourier stability results and `quasi-stability' results for the symmetric group, obtained recently by myself, Yuval Filmus and Ehud Friedgut. To our knowledge, these are the first such results for a non-Abelian group. I will also discuss some applications, including isoperimetric inequalities and combinatorial stability results.

 

Thursday 18 October – Dr Ron Peretz (Mathematics Department, LSE)
Learning Cycle Length through Finite Automata

We consider two learning problems concerning the cycle length of a periodic stream of input bits. One problem is to find the exact cycle length of the stream, provided that the cycle length is bounded by a known parameter n. The other problem is to find a large number k that divides the cycle length. By "large" we mean that there is an unbounded increasing function f(n) such that either k is greater than f(n) or k is the exact cycle length.

The main results show that finding a large divisor of the cycle length can be solved by an automaton in deterministic SPACE o(n) and TIME O(n), whereas finding the exact cycle length cannot be solved in deterministic TIME x SPACE smaller than \Omega(n^2). Results involving probabilistic automata are also derived. Time permitting, we will discuss applications to repeated games played through finite automata.
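
As a point of comparison for these space bounds, the following naive baseline (not the automaton construction of the talk; the stream interface is illustrative only) finds the exact cycle length from a buffer of 2n bits, using Theta(n) space.

import itertools

def exact_cycle_length(stream, n):
    # Return the minimal period of a periodic bit stream whose period is at most n.
    # Naive approach: buffer the first 2n bits and return the smallest p such that
    # the buffer is p-periodic.  Uses Theta(n) space, in contrast with the o(n)-space
    # automata discussed above.
    buf = list(itertools.islice(stream, 2 * n))
    for p in range(1, n + 1):
        if all(buf[i] == buf[i + p] for i in range(len(buf) - p)):
            return p
    raise ValueError("stream is not periodic with period at most n")

def periodic_stream(pattern):
    # Infinite stream repeating the given bit pattern.
    while True:
        for bit in pattern:
            yield bit

print(exact_cycle_length(periodic_stream([1, 0, 1, 1, 0, 0]), 10))   # prints 6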

 

Thursday 11 October – Dr Marie Frentz  (Mathematics Department, LSE)
An introduction to subellipticity

This talk gives a short introduction to subelliptic equations structured on Hörmander vector fields. It will be less technical and more intuitive, and examples from everyday life will be presented to highlight the structural properties.

Kolmogorov equations are a special kind of subelliptic equations.  To solve such equations we may use stochastic representation and Monte Carlo techniques.  One aspect when doing so is that of controlling the error in the approximation.  We will look at a specific example from option pricing as well as how to approximate solutions to the associated equation with a prescribed accuracy (an adaptive weak approximation scheme), using Malliavin calculus.

 

Friday 21 September – Avinatan Hassidim (Google Research)
Finding a job and the two body problem

Every year, over 16,000 American medical students graduate from 130 medical schools and look for a residency position. Since 1952, doctors have been matched to residency programs by a central authority - the National Resident Matching Program (NRMP). With the increased participation of women in the medical labor market, a growing number of doctors join the residency market as couples. In 2010 the number of couples taking part in the Match exceeded 800.

One of the greatest future challenges the Match will face is handling the growing number of couples in which both doctors wish to get a residency in the same geographical area. A stable matching in a market with couples need not exist, and determining whether such a matching exists is NP-complete.

We introduce a new matching algorithm for such markets - the Sorted Deferred Acceptance Algorithm - and show that for a general class of large random markets the algorithm will find a stable matching with high probability. Furthermore, truth-telling is an approximate equilibrium in the game induced by the new matching algorithm.
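
For background, the classical couple-free, doctor-proposing deferred acceptance procedure on which such mechanisms build can be sketched as follows. This is a textbook illustration, not the Sorted Deferred Acceptance Algorithm of the talk, and the data format is hypothetical.

def deferred_acceptance(doctor_prefs, program_prefs, capacities):
    # doctor_prefs[d]  : list of programs, most preferred first
    # program_prefs[p] : list of doctors, most preferred first
    # capacities[p]    : number of positions at program p
    # Returns a dict mapping each program to the set of doctors it admits.
    rank = {p: {d: i for i, d in enumerate(prefs)} for p, prefs in program_prefs.items()}
    next_choice = {d: 0 for d in doctor_prefs}        # next program each doctor will try
    held = {p: set() for p in program_prefs}          # tentatively accepted doctors
    free = list(doctor_prefs)                         # doctors currently unmatched

    while free:
        d = free.pop()
        if next_choice[d] >= len(doctor_prefs[d]):
            continue                                  # d has exhausted its list
        p = doctor_prefs[d][next_choice[d]]
        next_choice[d] += 1
        if d not in rank[p]:
            free.append(d)                            # p finds d unacceptable
            continue
        held[p].add(d)
        if len(held[p]) > capacities[p]:              # over capacity: reject the worst held doctor
            worst = max(held[p], key=lambda x: rank[p][x])
            held[p].remove(worst)
            free.append(worst)
    return held

doctors  = {"ann": ["city", "rural"], "bob": ["city", "rural"]}
programs = {"city": ["bob", "ann"], "rural": ["ann", "bob"]}
print(deferred_acceptance(doctors, programs, {"city": 1, "rural": 1}))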

This is joint work with Itai Ashlagi and Mark Braverman.

 

Friday 22 June – Zibo Xu (Hebrew University of Jerusalem)
Evolutionary stability in multiple-move games

We consider a basic dynamic evolutionary model with rare mutation and a best-reply (or better-reply) selection mechanism. We call a state evolutionarily stable if its probability in the invariant distribution is bounded away from zero as the mutation rate decreases to zero. We prove that, for all finite extensive-form games of perfect information, only Nash equilibria are evolutionarily stable. In the case that each player can only play at one node in the game, the combined results of Hart and Gorodeisky show that the backward induction equilibrium becomes in the limit the only evolutionarily stable outcome as the populations increase to infinity.

We show that, in games where a player may play at more than one node along some path, even when the populations increase to infinity, the backward induction equilibrium or even the whole backward induction equilibrium component is not always the only evolutionarily stable outcome. In particular, some Pareto-efficient equilibrium components may also be evolutionarily stable regardless of population size.

 

Thursday 21 June - David Saxton (University of Cambridge)
Hypergraph Containers

An independent set in a hypergraph is a subset of the vertices containing no edges. Although there are typically many independent sets in a hypergraph, one can in fact find a small collection C of vertex sets, where each member of C is almost independent, and such that every independent set is a subset of some member of C. This has many surprising consequences. In particular it implies lower bounds on the list chromatic number of a hypergraph, and allows one to prove statements in sparse random hypergraphs such as those recently shown by Schacht and by Conlon and Gowers.

 

Thursday 14 June - Bill Jackson (Queen Mary, University of London)
Radically solvable graphs

A 2-dimensional framework is a straight line realisation of a graph in the Euclidean plane. It is radically solvable if the set of vertex coordinates is contained in a radical extension of the field of rationals extended by the squared edge lengths. We show that the radical solvability of a generic framework depends only on its underlying graph and characterise which planar graphs give rise to radically solvable generic frameworks. We conjecture that our characterisation extends to all graphs.

This is joint work with J. C. Owen (Siemens).

 

Thursday 7 June - Eoin Long (University of Cambridge) 
Some new Sperner type results

A family $F$ of subsets of $\{1,...,n\}$ is a Sperner family if it does not contain two sets $A$ and $B$ with $A\subset B$. Sperner's theorem, a classic result in extremal combinatorics, says that any Sperner family satisfies $|F| \leq \binom{n}{\lfloor n/2\rfloor}$. In particular, $|F| = o(2^n)$.
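
For readers unfamiliar with the standard argument, Sperner's bound follows from the LYM inequality: every Sperner family $F$ satisfies

$\sum_{A\in F}\binom{n}{|A|}^{-1}\;\le\;1,$

and since $\binom{n}{|A|}\le\binom{n}{\lfloor n/2\rfloor}$ for every $A$, this gives $|F|\le\binom{n}{\lfloor n/2\rfloor}$.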

Kalai asked whether a similar conclusion holds for 'Sperner-like' conditions. In particular, he asked how large a family $F$ of subsets of $\{1,...,n\}$ can be if it does not contain two sets $A$ and $B$ with $|B\setminus A| = 2|A\setminus B|$. Viewed appropriately, this 'tilts' the usual Sperner condition. In this talk, we answer Kalai's question and, for sufficiently large $n$, determine the unique extremal family. We will also discuss other results about the sizes of families satisfying general 'Sperner-like' conditions.

Some of this work is joint with Imre Leader.

 

Thursday 31 May - Doreen Thomas (University of Melbourne)
Relay augmentation for the lifetime extension of a wireless sensor network

In wireless sensor networks a key objective is to optimise the lifetime of individual nodes. In most wireless sensor networks the bulk of a sensor's energy consumption is attributable to communication and the energy consumption is proportional to some power of the communication distance. Hence those nodes communicating over the longest links are likely to die first and we may consider these links to be the bottlenecks.

This talk considers the problem of adding a fixed number of relays to a wireless sensor network so that the longest link in the associated transmission network is minimised. In addition, the network should also be highly connected in the sense that it can continue to operate if one or more of the nodes or links is compromised. This can be modelled as an optimisation problem known as the bottleneck c-connected Steiner network problem. We show that in the case c = 2 this problem can be exactly and efficiently solved using the combinatorial structure of the well-known block cut-vertex decomposition of graphs, and geometric properties of the 2-relative neighbourhood graphs and farthest colour Voronoi diagrams. We discover new properties of optimal bottleneck Steiner 2-connected networks, and produce polynomial-time exact algorithms when we add no more than two Steiner points. More generally, this talk demonstrates the important role graph theory and geometry can play in the efficient design of wireless sensor networks.

This work is done in collaboration with Marcus Brazil and Charl Ras at The University of Melbourne.

 

Friday 25 May - Jacob Fox (MIT)
Extremal results in sparse pseudorandom graphs

Szemeredi's regularity lemma is a fundamental tool in extremal combinatorics. However, the original version is only helpful in studying dense graphs. In the 1990s, Kohayakawa and Rodl proved an analogue of Szemeredi's regularity lemma for sparse graphs as part of a general program toward extending extremal results to sparse graphs. Many of the key applications of Szemeredi's regularity lemma use an associated counting lemma. In order to prove extensions of these results which also apply to sparse graphs, it remained a well-known open problem to prove a counting lemma in sparse graphs.

The main advance of this talk lies in a new counting lemma, proved following the functional approach of Gowers, which complements the sparse regularity lemma of Kohayakawa and Rodl, allowing us to count small graphs in regular subgraphs of a sufficiently pseudorandom graph. We use this to prove sparse extensions of several well-known combinatorial theorems, including the removal lemmas for graphs and groups, the Erdos-Stone-Simonovits theorem and Ramsey's theorem. These results extend and improve upon a substantial body of previous work.

Joint work with David Conlon and Yufei Zhao. 

 

Friday 25 May - Benjamin Sudakov (UCLA)
Nonnegative k-sums, fractional covers, and probability of small deviations 

More than twenty years ago, Manickam, Miklos, and Singhi conjectured that for any integers $n \geq 4k$, every set of $n$ real numbers with nonnegative sum has at least $\binom{n-1}{k-1}$ $k$-element subsets whose sum is also nonnegative. In this talk we discuss the connection of this problem with matchings and fractional covers of hypergraphs, and with the question of estimating the probability that the sum of nonnegative independent random variables exceeds its expectation by a given amount. Using these connections together with some probabilistic techniques, we verify the conjecture for $n \geq 33k^2$. This substantially improves the best previously known exponential lower bound on $n$.
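
A quick randomised sanity check of the conjectured count for small parameters (an illustration only, not part of the proof; the parameter choices below are arbitrary):

import itertools, random
from math import comb

def nonneg_k_sums(xs, k):
    # Count the k-element subsets of xs whose sum is nonnegative.
    return sum(1 for s in itertools.combinations(xs, k) if sum(s) >= -1e-12)

n, k = 12, 3                                   # satisfies n >= 4k
for _ in range(100):
    xs = [random.uniform(-1, 1) for _ in range(n)]
    xs = [x - sum(xs) / n for x in xs]         # shift so that the total sum is zero
    assert nonneg_k_sums(xs, k) >= comb(n - 1, k - 1)
print("the bound", comb(n - 1, k - 1), "held on all random instances")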

If time permits we will also discuss applications to finding the optimal data allocation for a distributed storage system and how our approach can be used to prove some theorems on minimum degree ensuring the existence of perfect matchings in hypergraphs.

Joint work with Noga Alon and Hao Huang.

 

Thursday 24 May - Penelope Hernandez (University of Valencia)
Strategic Sharing of a Costly Network

We study minimum cost spanning tree problems for a given set of users connected to a source. We propose a rule of sharing such that each user may pay her cost for such a tree plus an additional amount to the other users. A reduction of her cost appears as a compensation from the other users. Our first result states the existence of a sharing rule such that no agent is willing to choose a different tree from the minimum cost spanning tree (MCST) offered by Prim's algorithm. Therefore, the MCST emerges as both a social and individual optimal solution. Given a sharing system, we implement the above solution as a subgame perfect equilibrium of a sequential game where players decide sequentially with whom to connect. Moreover, the proposed solution is in the core of the monotone cooperative game associated with a minimal cost spanning tree problem.
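
For reference, Prim's algorithm itself, which produces the minimum cost spanning tree around which the sharing rule is built, can be sketched as follows; the graph encoding is illustrative only.

import heapq

def prim_mcst(n, cost):
    # Minimum cost spanning tree of a complete graph on nodes 0..n-1.
    # cost[(i, j)] with i < j is the cost of edge {i, j}.
    # Grows the tree from node 0, always adding the cheapest edge leaving the tree.
    def c(i, j):
        return cost[(min(i, j), max(i, j))]

    in_tree = {0}
    edges, total = [], 0
    heap = [(c(0, j), 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    while len(in_tree) < n:
        w, i, j = heapq.heappop(heap)
        if j in in_tree:
            continue
        in_tree.add(j)
        edges.append((i, j))
        total += w
        for k in range(n):
            if k not in in_tree:
                heapq.heappush(heap, (c(j, k), j, k))
    return total, edges

# Node 0 as the source and three users.
costs = {(0, 1): 4, (0, 2): 1, (0, 3): 5, (1, 2): 2, (1, 3): 6, (2, 3): 3}
print(prim_mcst(4, costs))   # -> (6, [(0, 2), (2, 1), (2, 3)])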

 

Friday 18 May - Ryan Martin (Iowa State University)
Forbidden posets, the Boolean lattice and flag algebras

The Boolean lattice of dimension two, also known as the diamond, consists of four distinct elements $A, B, C, D$ with $A\subset B,C\subset D$ (that is, $A$ lies below both $B$ and $C$, which in turn lie below $D$). A diamond-free family in the $n$-dimensional Boolean lattice is a subposet such that no four elements form a diamond. Note that elements $B$ and $C$ may or may not be related.

There is a diamond-free family in the $n$-dimensional Boolean lattice of size $(2-o(1)){n\choose\lfloor n/2\rfloor}$. In this talk, we show that any diamond-free family in the $n$-dimensional Boolean lattice has size at most $(2.25+o(1)){n\choose\lfloor n/2\rfloor}$. Furthermore, we show that the so-called Lubell function of a diamond-free family in the $n$-dimensional Boolean lattice is at most $2.25+o(1)$, which is asymptotically best possible.
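
A brute-force check of the corresponding exact value in a tiny case (illustration only): for $n=3$ the largest diamond-free family has size $6 = 2\binom{3}{1}$, matching the two-middle-layers construction.

from itertools import combinations, permutations

n = 3
subsets = list(range(1 << n))        # subsets of {1,...,n} encoded as bitmasks

def proper_subset(a, b):
    return a != b and (a & b) == a

def diamond_free(family):
    # True if no four distinct members A, B, C, D satisfy: A below both B and C,
    # and both B and C below D (B and C need not be comparable to each other).
    for quad in combinations(family, 4):
        for a, b, c, d in permutations(quad):
            if proper_subset(a, b) and proper_subset(a, c) and \
               proper_subset(b, d) and proper_subset(c, d):
                return False
    return True

best = max(len(f) for r in range(len(subsets) + 1)
           for f in combinations(subsets, r) if diamond_free(f))
print(best)   # -> 6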

This is joint work with Lucas Kramer and Michael Young, both of Iowa State University. 

 

Friday 11 May - Horst Martini (University of Technology, Chemnitz, Germany)
Some problems and results in Minkowski Geometry

The axiomatic foundations of the geometry of finite-dimensional real Banach spaces (Minkowski spaces) are due to Hermann Minkowski, and over a period of more than 100 years many results have been obtained in this direction. The more analytical part of this theory is summarized in the monograph "Minkowski Geometry" by A. C. Thompson (Cambridge University Press, 1996), but there is no analogous collection of results in the spirit of Discrete and Computational Geometry. We will give an overview of recently obtained results of this type, showing that even very elementary questions about the geometric properties of Minkowski spaces are still open and interesting.

 

Thursday 3 May - Andreas Wuerfl (Technische Universität München, Munich)
Planar subgraphs - a threshold for the number of edges

A graph is called planar if it can be drawn in the plane in such a way that its edges do not cross. We define the planarity of G to be the maximum number of edges in a planar subgraph of G. Finding such a maximum planar subgraph is a hard problem. Instead we will present lower bounds on the number of its edges. In particular we will be interested in the connection between minimum degree and planarity. This dependence has been studied before for various minimum degrees. So we were surprised to find a jump in the evolution of the planarity at minimum degree n/2. This is joint work with T. Łuczak and A. Taraz.
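
For context, Euler's formula gives the trivial ceiling against which these lower bounds should be read: any planar graph on $n\ge 3$ vertices has at most $3n-6$ edges, so the planarity of every $n$-vertex graph G satisfies

$\mathrm{pl}(G)\;\le\;3n-6,$

where $\mathrm{pl}(G)$ denotes the planarity defined above (the notation here is ours).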

 

Tuesday 17 April - Massimo Gobbino (University of Pisa)
The Monopolist’s Problem: from an economic model to the calculus of variations
No abstract.

 

Friday 16 March - Garth Dales (Leeds)
Second duals of  algebras of continuous functions and measures

We first look at algebras of continuous functions on a compact space K, and their second duals. This is the algebra of continuous functions on the hyper-Stonean cover of K, which we shall describe. Then we shall consider the second dual of algebras of measures on a locally compact group (usually the circle group), and see the extent to which there is a product on this second dual that throws light on the original algebra. The story involves Stone-Cech compactifications, Borel functions, and ultrafilters.

The talk is related to a joint memoir:
H. G. Dales, A. T.-M. Lau, and D. Strauss, Second duals of measure algebras, Dissertationes Math., to appear.

 

Thursday 8 March - Giacomo Zambelli  (LSE)
Cutting planes, lattice-free sets and sublinear functions

Cutting planes play a central role in integer programming. Gomory Mixed-Integer (GMI) cuts are arguably the most widely exploited general purpose cutting planes. General frameworks encompassing GMI cuts were proposed about forty years ago by Gomory and by Balas, in which GMIs can be interpreted as "intersection cuts" for the so-called "corner polyhedron". Recently, there has been a revived interest in the IP community in this approach, mostly focused on the study of so-called "valid functions" - which give general formulas for generating cutting planes - and on the correspondence between the "best" valid functions and maximal lattice-free convex sets. In this talk we will discuss some of these recent developments.
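
To fix ideas with the simplest member of this family (the Gomory fractional cut, rather than the full mixed-integer formula): if $x\ge 0$ is integer and satisfies a tableau row $\sum_j a_j x_j = b$ with $b\notin\mathbb{Z}$, then

$\sum_j \bigl(a_j-\lfloor a_j\rfloor\bigr)\,x_j \;\ge\; b-\lfloor b\rfloor$

is a valid cutting plane, because $\sum_j \lfloor a_j\rfloor x_j \le b$ and the left-hand side is an integer, hence at most $\lfloor b\rfloor$.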

The talk is based on joint works with Amitabh Basu, Manoel Campelo, Michele Conforti and Gerard Cornuejols.

 

Friday 2 March - Penny Haxell (Waterloo)
On packing and covering in hypergraphs

We discuss some recent developments on the following long-standing problem known as Ryser's conjecture. Let $H$ be an $r$-partite $r$-uniform hypergraph. A matching in $H$ is a set of disjoint edges, and we denote by $\nu(H)$ the maximum size of a matching in $H$. A cover of $H$ is a set of vertices that intersects every edge of $H$. It is clear that there exists a cover of $H$ of size at most $r\nu(H)$, but it is conjectured that there is always a cover of size at most $(r-1)\nu(H)$.  
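
In symbols, writing $\tau(H)$ for the minimum size of a cover, the trivial bound and the conjecture read

$\tau(H)\;\le\;r\,\nu(H)$ (take all vertices of a maximum matching), and conjecturally $\tau(H)\;\le\;(r-1)\,\nu(H)$;

the case $r=2$ is König's theorem, and the case $r=3$ was proved by Aharoni.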

 

Thursday 1 March - Peter Keevash (Queen Mary)
Matching and Packing

Matching theory is a large field with many directions of research, both in practical algorithms and combinatorial theory. In this talk I will aim to show some of the breadth of the subject, and some recent advances on matching theory for hypergraphs (joint work with Richard Mycroft). Informally speaking, we show that the obstructions to perfect matchings are geometric, and are of two distinct types: `space barriers' from convex geometry, and `divisibility barriers' from arithmetic lattice-based constructions. We apply our theory to the solution of two open problems on hypergraph packings: the minimum degree threshold for packing tetrahedra in 3-graphs, and Fischer's conjecture on a multipartite form of the Hajnal-Szemeredi Theorem.

 

Thursday 16 February - Chris Argasinski (University of Sussex)
In which currency are payoffs paid in evolutionary games?

The talk will be devoted to the problem of the mechanistic interpretation of mathematical (game-theoretic) notions used to describe particular phenomena, using the example of population dynamics and natural selection. In the standard approach to evolutionary games and replicator dynamics, differences in fitness can be interpreted as an excess over the mean Malthusian growth rate in the population. In the underlying reasoning, related to an analysis of "costs" and "benefits", there is a silent assumption that fitness can be described in some kind of "units". However, in most cases these units of measure are not explicitly specified. Then the question arises: are these theories testable? How can we measure "benefit" or "cost"? A natural language, useful for describing and justifying comparisons of strategic "costs" versus "benefits", is the terminology of demography, because the basic events that shape the outcome of natural selection are births and deaths. However, when we apply this approach, abstract "fitness" splits into two types of payoffs, describing reproductive success (number of offspring) and individual survival. In effect an individual simultaneously plays two types of games that can affect each other in a few ways. This extension of the mathematical structure used to describe the natural process dramatically changes quantitative predictions and our understanding of this process. It shows how important a clear mechanistic interpretation of mathematical notions is for the proper understanding of natural phenomena and for avoiding artefacts.

 

Thursday 9 February - Bharath K Sriperumbudur (UCL)
Reproducing Kernel Space Embeddings and Metrics on Probability Measures

In this talk, I will present a generalization of the characteristic function of a probability measure, obtained by embedding measures into reproducing kernel Hilbert spaces (RKHSs). The motivation for considering such a generalization is that, for an appropriate choice of RKHS, this provides a powerful and simple method of dealing with higher-order statistics of random variables (not necessarily defined on R^n), which can then be exploited in many statistical applications like homogeneity/independence/goodness-of-fit testing, density estimation, etc. For this generalized notion to be useful in practice, it is critical that the above-mentioned embedding is injective, i.e., the embedding should uniquely characterize all probability measures. I will discuss how this problem is related to the approximation properties of RKHSs and then present novel and simple characterizations for the embedding to be injective. I will then introduce and address an important question on the choice of kernels, and conclude with my very recent results on embedding probability measures into reproducing kernel Banach spaces, discussing the advantages/disadvantages over the RKHS counterpart.
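
A minimal numerical sketch of the induced distance between mean embeddings (the maximum mean discrepancy), assuming a Gaussian kernel; the bandwidth and sample sizes are arbitrary illustrative choices.

import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of x and the rows of y.
    sq = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq / (2 * bandwidth**2))

def mmd2(x, y, bandwidth=1.0):
    # Biased estimate of the squared RKHS distance between the mean embeddings
    # of the distributions generating the samples x and y.
    kxx = gaussian_kernel(x, x, bandwidth)
    kyy = gaussian_kernel(y, y, bandwidth)
    kxy = gaussian_kernel(x, y, bandwidth)
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
diff = mmd2(rng.normal(size=(500, 2)), rng.normal(loc=1.0, size=(500, 2)))
print(same, diff)   # the second value is markedly larger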

Joint work with Kenji Fukumizu (The Institute of Statistical Mathematics), Arthur Gretton (Gatsby Computational Neuroscience Unit, UCL), Gert Lanckriet (UC San Diego) and Bernhard Scholkopf (Max Planck Institute for Intelligent Systems)

 

2 February - Wei Yang (University of Warwick)
Mean field games and nonlinear Markov processes

We present the mean field games methodology in a fully general setting, based on the theory of nonlinear Markov processes. The realization of this methodology involves three steps. First, we prove the well-posedness of general kinetic equations with coupled functional parameters. Second, we prove the well-posedness and sensitivity of the HJB equation. Finally, we justify mean field games as a proper approximation of large population systems as N goes to infinity.

Mean field games appear in the papers by P. L. Lions and co-authors and by P. Caines and co-authors, where the underlying processes are all Brownian motions. We develop this methodology in a fully general setting, with arbitrary underlying Markov processes of interactions, and prove convergence results with a 1/N convergence rate. These estimates are new even for the diffusion case. This is joint work with Vassili Kolokoltsov and Jiajie Li.

 

26 January - Greg Sorkin (LSE)
3-Dimensional Random Assignment Problems

The 2-dimensional assignment problem (minimum cost matching) is solvable in polynomial time, and it is known that a random instance of size n, with entries chosen independently and uniformly at random from [0,1], has expected cost tending to pi^2/6. In dimensions 3 and higher, there are natural Axial and Planar assignment generalizations. Both are NP-complete, but what is the expected cost for a random instance, and how well can a heuristic do? The asymptotic behavior remains an open question. For 3-dimensional Planar assignment, we give a ln n approximation algorithm, and for Axial assignment, an unrelated n^eps approximation algorithm. In higher dimensions, both algorithms fail dismally.
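
The two-dimensional fact quoted above can be checked numerically with an off-the-shelf solver; this is only an illustration of the pi^2/6 limit, not of the 3-dimensional algorithms of the talk.

import numpy as np
from scipy.optimize import linear_sum_assignment

def random_assignment_cost(n, rng):
    # Minimum cost of a perfect matching in an n-by-n matrix of Uniform[0,1] costs.
    costs = rng.uniform(size=(n, n))
    rows, cols = linear_sum_assignment(costs)
    return costs[rows, cols].sum()

rng = np.random.default_rng(0)
trials = [random_assignment_cost(300, rng) for _ in range(20)]
print(np.mean(trials), np.pi**2 / 6)   # the average is close to pi^2/6 ~ 1.645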

Joint work with Alan Frieze.

 

19 January - Julia Böttcher (LSE)
Tight Hamilton cycles in random hypergraphs

I present a method for finding, with high probability, a k-uniform tight Hamilton cycle in a k-uniform random hypergraph with edge probability p=n^{eps-1}, for every positive eps. This gives a polynomial-time randomised algorithm for this problem. The techniques we developed for this also have (more interesting) applications to finding powers of Hamilton cycles (and thus K_r-factors) in pseudorandom graphs, which I will also try to outline.

Joint work with Peter Allen, Hiep Han, Yoshiharu Kohayakawa, Yury Person.

 

12 January 2012 - Peter Allen (LSE)
The chromatic thresholds of graphs

Given a graph H, the chromatic threshold of H is the infimum of reals d>0 such that there exists C=C(d) with the following property. For every n, every H-free graph with n vertices and minimum degree dn has chromatic number at most C.
The problem of determining the chromatic thresholds of graphs was initiated by Erdős and Simonovits in 1973, but no non-trivial case was solved until Thomassen found the chromatic threshold of the triangle in 2002. In 2010, Łuczak and Thomassé, and Lyle, gave solutions for two different families of tripartite graphs. In this talk, we will give the complete solution.
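
For orientation, Thomassen's triangle result in this notation says that the chromatic threshold of $K_3$ equals $1/3$: for every $\epsilon>0$ there is a constant $C(\epsilon)$ such that every triangle-free graph $G$ on $n$ vertices with

$\delta(G)\;\ge\;\bigl(\tfrac{1}{3}+\epsilon\bigr)n$ satisfies $\chi(G)\le C(\epsilon),$

while there are triangle-free graphs of minimum degree $(1/3-\epsilon)n$ with arbitrarily large chromatic number.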

This is joint work with Julia Böttcher, Simon Griffiths, Yoshiharu Kohayakawa and Robert Morris.
