Seminar and PhD Seminar on Combinatorics, Games and Optimisation

Below you'll find the programme for the Seminar and PhD Seminar on Combinatorics, Games and Optimisation.

This seminar series covers many of the research areas in the Department: discrete mathematics, algorithms, game theory and operational research.

Unless stated below, this Seminar normally takes place:

Questions, suggestions, etc., about the seminar can be forwarded to Enfale Farooq, the Research Manager, on

IMPORTANT - Due to the ongoing COVID-19 pandemic, seminars will now take place online.

Upcoming speakers:

Thursday 9 July - Jesús De Loera (University of California, Davis)
5pm on Zoom

The Space of Monotone Paths of a Linear Program: Some geometric/topological meditations about the behavior of the Simplex Method
George Dantzig's Simplex method is a workhorse of modern optimization, but despite its popularity we still do not understand its complexity. To bound the number of iterations of the Simplex method, we take a geometric point of view and investigate the possible lengths of monotone paths inside the oriented graphs of polyhedra (oriented by the objective function). We consider both the shortest and the longest monotone paths possible and estimate the (monotone) diameter and the height of some famous combinatorial polyhedra (such as TSP, fractional matching polytopes, and others).
But how far apart are two monotone paths of an LP? For example, if I use two pivot rules, the Simplex method traces two paths; is there a notion of distance between them? Surprisingly, when we look at all monotone paths together we see a metric space structure, which can be used to count how many there are or to generate them randomly. Our main enumerative results include bounds on the number of monotone paths and on the diameter of the space of monotone paths (how far are two monotone paths from each other?). The picture is fairly complete in dimension three, but plenty of open problems remain for high-dimensional polytopes.
The new theorems presented in my talk come from two papers (on arXiv), the first joint work with Moise Blanchard (MIT) and Quentin Louveaux (U. Liège), and the second joint work with Christos Athanasiadis (U. Athens) and Zhenyang Zhang (UC Davis).
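As a toy illustration of the objects in this abstract (not taken from the papers), one can orient the graph of the 0/1 n-cube by a generic linear objective and count monotone paths by dynamic programming; on the standard cube every monotone path from the minimising vertex to the maximising vertex has length n, and there are n! of them. The function name and setup below are illustrative only, and assume an objective with nonzero, distinct-sum coefficients.

```python
from itertools import product

def monotone_path_stats(n, c):
    """Orient the graph of the 0/1 n-cube by the linear objective c.x and
    return (number of monotone paths, longest monotone path length) from
    the minimising vertex to the maximising vertex."""
    verts = list(product((0, 1), repeat=n))
    val = lambda v: sum(ci * vi for ci, vi in zip(c, v))
    succ = {v: [] for v in verts}          # cube edges, oriented upwards in c.x
    for v in verts:
        for i in range(n):
            w = v[:i] + (1 - v[i],) + v[i + 1:]
            if val(w) > val(v):
                succ[v].append(w)
    sink = max(verts, key=val)
    count, longest = {sink: 1}, {sink: 0}
    # process vertices in decreasing objective value, so successors are done
    for v in sorted(verts, key=val, reverse=True):
        if v != sink:
            count[v] = sum(count[w] for w in succ[v])
            longest[v] = 1 + max(longest[w] for w in succ[v])
    source = min(verts, key=val)
    return count[source], longest[source]

print(monotone_path_stats(3, (1, 2, 4)))  # (6, 3): 3! chains, each of length 3
```

On deformed cubes such as Klee-Minty cubes the orientation differs and monotone paths can be exponentially long, which is the phenomenon the talk's height estimates address.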

Previous seminars in the series:

Thursday 4 June - Ashwin Sah (MIT)

Lecture slides can be found here.
Seminar recording can be found here.
(Password: 5F@8.Qa^)

Diagonal Ramsey via effective quasirandomness
We improve the upper bound for diagonal Ramsey numbers to \[R(k+1,k+1)\le\exp(-c(\log k)^2)\binom{2k}{k}\] for $k\ge 3$. To do so, we build on a quasirandomness and induction framework for Ramsey numbers introduced by Thomason and extended by Conlon, demonstrating optimal ``effective quasirandomness'' results about convergence of graphs. This optimality represents a natural barrier to improvement.

Thursday 7 May - Daniel Korandi (University of Oxford)

Seminar slides can be found here.
Seminar recording can be found here.
(Password: 2B*a0S5k)

Exact stability for Turán's theorem
Turán's theorem says that an extremal K_{r+1}-free graph is r-partite. The Stability Theorem of Erdős and Simonovits shows that if a K_{r+1}-free graph with n vertices has close to the maximal t_r(n) edges, then it is close to being r-partite. In this talk we determine exactly the K_{r+1}-free graphs with at least m edges that are farthest from being r-partite, for any m > t_r(n) - δn^2. This extends work by Erdős, Győri and Simonovits, and proves a conjecture of Balogh, Clemen, Lavrov, Lidický and Pfender. Joint work with Alexander Roberts and Alex Scott.
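For reference, the extremal count t_r(n) in the statement above is the number of edges of the Turán graph T(n, r), the complete r-partite graph on n vertices with part sizes as equal as possible. A minimal sketch (function name is my own):

```python
from math import comb

def turan_edges(n, r):
    """t_r(n): the number of edges of the Turán graph T(n, r)."""
    q, s = divmod(n, r)
    parts = [q + 1] * s + [q] * (r - s)   # s parts of size q+1, the rest of size q
    # complement count: all pairs minus the pairs inside a part
    return comb(n, 2) - sum(comb(p, 2) for p in parts)

print(turan_edges(10, 2))  # 25: the complete bipartite graph K_{5,5}
```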

Friday 13 March - Xinyi Xu (LSE)

Correspondence colouring on multigraphs
In this talk we discuss a variant of vertex colouring of graphs called correspondence colouring, a concept that generalises both ordinary and list vertex colouring. While multi-edges impose no additional restrictions in the ordinary or list colouring setting, as we always forbid the same colour on adjacent vertices, in correspondence colouring additional constraints appear when multiple edges are present. These observations also lead to interesting phenomena when we make small changes to a (multi)graph, such as deleting edges or vertices, identifying vertices, contracting edges, or increasing edge multiplicity.
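A minimal sketch of the setting, with hypothetical names: a correspondence-colouring instance attaches a forbidden matching of colour pairs to each edge, so parallel edges carrying different matchings genuinely constrain the colouring, unlike in ordinary or list colouring.

```python
def is_correspondence_colouring(colouring, edges):
    """colouring: dict vertex -> chosen colour.  edges: list of
    (u, v, matching) triples, where `matching` is a set of colour pairs
    (colour at u, colour at v) forbidden from appearing together."""
    return all((colouring[u], colouring[v]) not in m for u, v, m in edges)

# Two parallel edges between u and v, each carrying a different matching:
edges = [("u", "v", {(1, 1), (2, 2)}),   # identity matching: list-colouring behaviour
         ("u", "v", {(1, 2), (2, 1)})]   # a "twisted" matching
# With colour set {1, 2} at both endpoints, the two edges together forbid
# every combination -- a constraint a simple graph could not express:
print(any(is_correspondence_colouring({"u": cu, "v": cv}, edges)
          for cu in (1, 2) for cv in (1, 2)))  # False
```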

Wednesday 4 March - Vera Traub (University of Bonn)

Reducing Path TSP to TSP
We present a black-box reduction from the path version of the Traveling Salesman Problem (Path TSP) to the classical tour version (TSP). More precisely, we show that given an α-approximation algorithm for TSP, for any ϵ>0 there is an (α+ϵ)-approximation algorithm for the more general Path TSP. This reduction implies that the approximability of Path TSP is the same as that of TSP, up to an arbitrarily small error, ruling out future discrepancies between the best known approximation factors for these two problems, such as those that existed until very recently.
A well-studied special case of TSP, Graph TSP, asks for tours in unit-weight graphs. Our reduction shows that any α-approximation algorithm for Graph TSP implies an (α+ϵ)-approximation algorithm for its path version. By applying our reduction to the 1.4-approximation algorithm for Graph TSP by Sebő and Vygen, we obtain a polynomial-time (1.4+ϵ)-approximation algorithm for Graph Path TSP, improving on a recent 1.497-approximation algorithm of Traub and Vygen.
We obtain our results through a variety of new techniques, including a novel way to set up a recursive dynamic program to guess significant parts of an optimal solution. At the core of our dynamic program we deal with instances of a new generalization of (Path) TSP which combines parity constraints with certain connectivity requirements. This problem, which we call Φ-TSP, has a constant-factor approximation algorithm and can be reduced to TSP in certain cases when the dynamic program would not make sufficient progress.
This is joint work with Jens Vygen and Rico Zenklusen.

Friday 28 February - Coulter Beeson (LSE)

The Empirical Core of the Multicommodity Flow Game Without Side Payments
Policy makers focus on stable strategies as the ones adopted by rational players. If there are many such solutions, an important question is how to select amongst them. We study this question for the Multicommodity Flow Coalition Game, used to model cooperation between autonomous systems (ASes) in the Internet. In short, strategies are flows in a capacitated network, and the payoff to a node is the total flow that it terminates. Markakis and Saberi showed that this game is balanced and hence has a non-empty core by Scarf's theorem. In the transferable utility (TU) version this gives a polynomial-time algorithm to find core elements, but for ASes side payments are not natural. Finding core elements in NTU games tends to be computationally more difficult: for this game, the only previous result gives a procedure to find a core element when the supply graph is a path. We extend that work with an algorithm, Incorporate, which produces many different core elements.

Thursday 27 February - Oliver Roche-Newton (Linz)

Hypergraph containers with applications in discrete geometry and additive combinatorics
In recent years, the newly developed theory of hypergraph containers has resulted in several remarkable results in graph theory and extremal combinatorics. Essentially, this theory gives detailed information about the independent sets of hypergraphs, provided that the edges are distributed reasonably well. I will discuss recent joint works with Audie Warren and Cosmin Pohoata, in which these tools were applied to problems in discrete geometry and additive combinatorics. In particular, an upper bound for the number of subsets of the finite plane with no collinear triples is given.

Wednesday 26 February - Nathanaël Fijalkow (CNRS)

Parity games, the quasipolynomial era
Parity games are zero-sum two-player games played on graphs; they play a central role in logic, automata, and verification, among other areas. Whether there exists a polynomial-time algorithm for solving parity games is a long-standing open question. In 2017 a quasipolynomial-time algorithm was constructed, quickly followed by several more algorithms with the same complexity. In this talk I will present a combinatorial notion called universal trees and explain that all existing quasipolynomial-time algorithms for solving parity games exploit universal trees in some way. I will then discuss how to extend this notion to universal graphs for the study of mean payoff games.

Friday 21 February - Edin Husic (LSE)

Approximating Nash Social Welfare for asymmetric agents with Rado-valuations
The Nash Social Welfare problem asks for an allocation of a set of indivisible goods to the agents that maximises the geometric mean of their utilities, a well-studied objective for fair resource allocation. Finding an optimal allocation is NP-hard already for two agents with additive valuation functions. Constant-factor approximation algorithms have been known for additive valuations and some simple extensions, under the assumption that the agents are symmetric: every agent is considered with the same weight in the objective. We present the first constant-factor approximation algorithm for agents with Rado-valuations. Rado-valuations form a general class of valuation functions that arise from maximum-weight independent matching problems, including as special cases assignment valuations as well as weighted matroid rank functions. The algorithm generalises to a constant-factor approximation algorithm for asymmetric agents (i.e., with different entitlements), provided the ratio between the weights is bounded by a constant. This is joint work with Jugal Garg and László Végh.
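To make the objective concrete, here is a brute-force toy for the symmetric case with additive valuations (this is not the talk's algorithm, which is a polynomial-time approximation; names are my own):

```python
from itertools import product
from math import prod

def max_nash_welfare(valuations):
    """Brute-force Nash social welfare: assign each indivisible good to one
    agent so as to maximise the geometric mean of the agents' (additive)
    utilities.  Exponential in the number of goods; illustration only."""
    n_agents, n_goods = len(valuations), len(valuations[0])
    best, best_alloc = -1.0, None
    for alloc in product(range(n_agents), repeat=n_goods):  # alloc[g] = owner of good g
        utils = [sum(valuations[a][g] for g in range(n_goods) if alloc[g] == a)
                 for a in range(n_agents)]
        nsw = prod(utils) ** (1.0 / n_agents)               # geometric mean
        if nsw > best:
            best, best_alloc = nsw, alloc
    return best, best_alloc

# two agents, three goods: valuations[a][g] is agent a's value for good g
vals = [[3, 1, 1],
        [1, 2, 2]]
best, alloc = max_nash_welfare(vals)
print(best, alloc)
```

The geometric-mean objective rewards balanced allocations: here the optimum gives good 0 to agent 0 and the other two goods to agent 1.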

Thursday 20 February - Heinrich Nax (ETH Zürich)

Dynamic matching in assignment games
We examine two-sided markets where players arrive stochastically over time and are drawn from a continuum of types. The cost of matching a client and provider varies, so a social planner is faced with two contending objectives: a) to reduce players' waiting time before getting matched; and b) to form efficient pairs in order to reduce matching costs. We show that such markets are characterized by a quick or cheap dilemma: Under a large class of distributional assumptions, there is no `free lunch', i.e., there exists no clearing schedule that is simultaneously optimal along both objectives. We further identify a unique breaking point signifying a stark reduction in matching cost contrasted by an increase in waiting time.
Generalizing this model, we identify two regimes: one, where no free lunch exists; the other, where a window of opportunity opens to achieve a free lunch. Remarkably, greedy scheduling is never optimal in this setting.

Wednesday 19 February - Jim Orlin (MIT)

On the Design of Fully Polynomial Time Approximation Schemes
A fully polynomial time approximation scheme (FPTAS) is an algorithm that guarantees the following: for any fixed positive ε, the FPTAS finds an ε-optimal solution while running in time polynomial in the size of the problem and in 1/ε. Many of the FPTASes that have been developed over the past 45 years have relied on dynamic programming.

In this talk, we briefly review some of the classical FPTASes. We then describe two frameworks. The first is the Calculus of K-approximation sets, which helps guide the development of FPTASes. The second is a framework for the development of new FPTASes. The framework uses a relatively small set of subroutines that operate on non-negative monotone functions of a single variable. This framework permits the development of computer codes for FPTASes with the following properties:

- The FPTASes can be expressed simply and concisely.
- The output of each FPTAS includes a proof that the solution is within a factor of 1+ε of optimal.

We illustrate the framework by developing an FPTAS for the stochastic lot-size problem.
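The talk's framework and the stochastic lot-size FPTAS are not reproduced here; as a flavour of the dynamic-programming FPTASes mentioned above, here is the classic value-scaling FPTAS for 0/1 knapsack (a textbook example, with my own function names):

```python
def knapsack_fptas(values, weights, capacity, eps):
    """Classic value-scaling FPTAS for 0/1 knapsack: round values down to
    multiples of K = eps * max(values) / n, solve the rounded instance
    exactly by DP over total scaled value, and return a solution worth at
    least (1 - eps) times the optimum."""
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    # dp[p] = (minimum weight, item tuple) achieving scaled value exactly p
    dp = {0: (0, ())}
    for i in range(n):
        for p, (w, items) in list(dp.items()):   # snapshot: each item used once
            q, nw = p + scaled[i], w + weights[i]
            if nw <= capacity and (q not in dp or nw < dp[q][0]):
                dp[q] = (nw, items + (i,))
    best = max(dp)                               # largest reachable scaled value
    return sum(values[i] for i in dp[best][1]), dp[best][1]

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, eps=0.1))  # (220, (1, 2))
```

The table has at most n * max(scaled) entries, which is polynomial in n and 1/ε, so the scheme is a true FPTAS.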

This research on FPTASes has been joint with many collaborators, including Nir Halman, C.L. Li, Giacomo Nannicini, and David Simchi-Levi.

Thursday 13 February - Hubie Chen (Birkbeck)

One Hierarchy Spawns Another: Graph Deconstructions and the Complexity Classification of Conjunctive Queries
We study the classical problem of conjunctive query evaluation. This problem admits multiple formulations and has been studied in numerous contexts; for example, it is a formulation of the constraint satisfaction problem, as well as the problem of deciding if there is a homomorphism from one relational structure to another (which transparently generalizes the graph homomorphism problem).
We here restrict the problem according to the set of permissible queries; the particular formulation we work with is the relational homomorphism problem over a class of structures A, wherein each instance must be a pair of structures such that the first structure is an element of A. We present a comprehensive complexity classification of these problems, which strongly links graph-theoretic properties of A to the complexity of the corresponding homomorphism problem. In particular, we define a binary relation on graph classes and completely describe the resulting hierarchy given by this relation. This binary relation is defined in terms of a notion which we call graph deconstruction and which is a variant of the well-known notion of tree decomposition. We then use this graph hierarchy to infer a complexity hierarchy of homomorphism problems which is comprehensive up to a computationally very weak notion of reduction, namely, a parameterized form of quantifier-free reductions. We obtain a significantly refined complexity classification of left-hand side restricted homomorphism problems, as well as a unifying, modular, and conceptually clean treatment of existing complexity classifications, such as the classifications by Grohe-Schwentick-Segoufin (STOC 2001) and Grohe (FOCS 2003, JACM 2007).
After presenting these advances, we will compare this line of research with another that aims to classify the complexity of the homomorphism problem where the second (target) structure is fixed, and that is currently being studied using universal-algebraic methods. We will also present and discuss two intriguing variants, injective homomorphism (also called embedding) and surjective homomorphism.

This talk is mostly based on joint work with Moritz Müller.
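As a concrete instance of the homomorphism problems above, here is a brute-force checker for graph homomorphisms (illustrative only; the classification in the talk is about when restricting the left-hand class makes this tractable):

```python
from itertools import product

def homomorphisms(g_verts, g_edges, h_verts, h_edges):
    """All graph homomorphisms G -> H: maps sending every edge of G to an
    edge of H.  Brute force, exponential in |V(G)|."""
    h_adj = set(h_edges) | {(v, u) for u, v in h_edges}   # symmetrise
    result = []
    for img in product(h_verts, repeat=len(g_verts)):
        f = dict(zip(g_verts, img))
        if all((f[u], f[v]) in h_adj for u, v in g_edges):
            result.append(f)
    return result

# A triangle maps into K3 in 3! ways, but has no homomorphism into K2,
# since that would be a proper 2-colouring of an odd cycle:
triangle = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
k2 = ([0, 1], [(0, 1)])
print(len(homomorphisms(*triangle, *k2)))  # 0
```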

Thursday 6 February - Rida Laraki (Paris-Dauphine and Liverpool)

Stable Games of Matching
Gale and Shapley defined in 1962 a matching problem between two finite sets of distinct agents M and W (men/women, workers/firms, students/universities, doctors/hospitals) who are to be paired or matched. They call a matching stable if no unmatched couple can strictly improve its utility by being together, and they constructed a stable matching for every instance (via an algorithm). Our article proposes a new extension by assuming that a matched couple obtains its payoff as the outcome of a strategic interaction. Besides choosing partners, agents now play a non-cooperative game in which each chooses a strategy that maximises her payoff under the constraint that her partner does not want to quit the relationship. We call a matching externally stable if no unmatched couple can find a strategy profile which provides both of them strictly higher payoffs than the ones they have. It is internally stable if no agent can, by deviating and changing her strategy inside her couple, increase her payoff without breaking the external stability of her couple. We provide a sufficient condition on a strategic game (feasibility) under which there exist matching profiles that are externally and internally stable. We prove that the class of feasible games includes zero-sum, potential, infinitely repeated and perfect-information games. We also provide a 3-by-3 game in mixed strategies which is not feasible. Finally, we show how our model extends the Shapley-Shubik model with monetary transfers and the Hatfield-Milgrom model with contracts.

Joint work with my PhD student Felipe Garrido. 
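For background, the 1962 construction mentioned above is the deferred-acceptance algorithm; a minimal sketch (variable names are my own), without the strategic extension studied in the talk:

```python
def gale_shapley(men_prefs, women_prefs):
    """Gale-Shapley deferred acceptance: men propose in preference order,
    women hold the best proposal received so far.  Returns a stable
    matching: no unmatched pair prefers each other to their partners."""
    free = list(men_prefs)                  # men not yet matched
    next_choice = {m: 0 for m in men_prefs} # index of next woman to propose to
    engaged = {}                            # woman -> man currently held
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w trades up, her old partner is freed
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))  # {'a': 'x', 'b': 'y'}
```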

Thursday 30 January - Christina Pawlowitsch (Université Panthéon-Assas)

Evolutionary dynamics of costly signaling
Costly-signaling games have a remarkably wide range of applications (education as a costly signal in the job market, handicaps as a signal for fitness in mate selection, politeness in language). The formal analysis of evolutionary dynamics in costly-signaling games has only recently gained more attention. In this paper, we study evolutionary dynamics in two basic classes of games with two states of nature, two signals, and two possible reactions in response to signals: a discrete version of Spence's (1973) model and a discrete version of Grafen's (1990) formalization of the handicap principle. We first use index theory to give a rough account of the dynamic stability properties of the equilibria in these games. Then, we study in more detail the replicator dynamics and to some extent the best-response dynamics. (Joint work with Josef Hofbauer.)
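A minimal sketch of the replicator dynamics mentioned above, run on a generic 2x2 anti-coordination game rather than the signaling games of the paper (the payoff matrix is illustrative):

```python
def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics x_i' = x_i((Ax)_i - x.Ax)
    for a symmetric game with payoff matrix A and mixed strategy x."""
    fitness = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    avg = sum(xi * fi for xi, fi in zip(x, fitness))        # mean fitness x.Ax
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, fitness)]

# Hawk-Dove-style payoffs: the interior mixed equilibrium attracts the flow
A = [[0, 3],
     [1, 2]]
x = [0.9, 0.1]
for _ in range(5000):
    x = replicator_step(x, A)
print(x)  # approaches the mixed equilibrium (0.5, 0.5)
```

In the signaling games of the paper the state space is higher-dimensional and equilibria may come in components, which is where the index-theoretic analysis becomes useful.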

2019, 2018, 2017, 2016

The Seminar on Combinatorics, Games and Optimisation started in 2016. It is a combination of two previous seminar series:

Seminar on Discrete Mathematics and Game Theory: 

2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, 2004, 2003

Seminar on Operations Research: 

2016, 2015, 2014, 2013, 2012, 2011, 2010

PhD Seminar on Combinatorics, Games and Optimisation:

2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012