**Martin Grohe - A Survey on Graph Isomorphism**

TBC

**Jakub Tetek - Better Sum Estimation via Weighted Sampling**

Given a large set %%U%% where each item %%a\in U%% has weight %%w(a)%%, we want to estimate the total weight %%W=\sum_{a\in U} w(a)%% to within a factor of %%1 \pm \varepsilon%%. Since %%n=|U|%% is large, we want to do this without looking at the entire set %%U%%.

In the traditional setting in which we are allowed to sample elements from %%U%% uniformly, sampling %%\Omega(n)%% items is necessary to provide any non-trivial guarantee on the estimate. Therefore, we investigate this problem in different settings: in the proportional setting we can sample items with probabilities proportional to their weights, and in the hybrid setting we can sample both proportionally and uniformly.

Sum estimation in the proportional and hybrid setting has been considered before by Motwani, Panigrahy, and Xu [ICALP, 2007]. In their paper, they give both upper and lower bounds in terms of %%n%%. Their bounds are near-matching in terms of %%n%%, but not in terms of %%\varepsilon%%. In our paper, we improve both their upper and lower bounds.

Our bounds match up to constant factors in both settings, in terms of both %%n%% and %%\varepsilon%%; no lower bounds with a dependency on %%\varepsilon%% were known previously. In the proportional setting, we improve their %%\tilde{O}(\sqrt{n}/\varepsilon^{7/2})%% algorithm to %%O(\sqrt{n}/\varepsilon)%%. In the hybrid setting, we improve %%\tilde{O}(\sqrt[3]{n}/ \varepsilon^{9/2})%% to %%O(\sqrt[3]{n}/\varepsilon^{4/3})%%.

We also investigate the previously unexplored setting where %%n%% is unknown to the algorithm. Finally, we show how our techniques apply to the problem of edge counting in graphs.
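To see why uniform sampling alone is so weak, here is a small illustrative sketch (not the paper's algorithm): the standard unbiased uniform-sampling estimator %%\hat{W} = n \cdot \overline{w}%% on a skewed instance, where one heavy item carries half the total weight. A small uniform sample almost always misses that item and badly underestimates %%W%%.

```python
import random

def uniform_sum_estimate(weights, k, rng):
    # Standard unbiased estimator from k uniform samples:
    # W_hat = n * (average sampled weight).
    n = len(weights)
    samples = [weights[rng.randrange(n)] for _ in range(k)]
    return n * sum(samples) / k

rng = random.Random(0)
# Skewed instance: a single heavy item carries about half the total weight.
n = 100_000
weights = [1.0] * n
weights[0] = float(n)            # W = 2n - 1
W = sum(weights)

# With k = 1000 uniform samples, the heavy item is hit with probability
# only about 1%, so the estimate typically lands near n -- roughly half W.
est = uniform_sum_estimate(weights, k=1000, rng=rng)
print(f"true W = {W:.0f}, uniform estimate = {est:.0f}")
```

A proportional sample, by contrast, returns the heavy item about half the time, which is exactly the extra power the proportional and hybrid settings provide.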

**Soheil Behnezhad - Time-Optimal Sublinear Algorithms for Matching and Vertex Cover**

We study sublinear time algorithms for estimating the size of a maximum matching and a minimum vertex cover. Denoting the number of vertices by %%n%% and the average degree in the graph by %%d%%, I'll present algorithms that for any constant %%\varepsilon > 0%% obtain:

- A multiplicative %%(2 + \varepsilon)%%-approximation in %%\tilde{O}(n)%% time using adjacency list queries.
- A multiplicative-additive %%(2, \varepsilon n)%%-approximation in %%\tilde{O}(d)%% time using adjacency list queries.
- A multiplicative-additive %%(2, \varepsilon n)%%-approximation in %%\tilde{O}(n)%% time using adjacency matrix queries.

All three results are provably time-optimal up to polylogarithmic factors, culminating a long line of work on these problems. Our main contribution, and the key ingredient leading to the bounds above, is a new and near-tight analysis of the average query complexity of the randomized greedy maximal matching algorithm, which improves upon a seminal result of Yoshida, Yamamoto, and Ito [STOC'09].
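As background, a sketch of the (global) randomized greedy maximal matching process that the analysis concerns; the sublinear algorithms answer local queries about its outcome rather than running it in full. Two classical facts make it useful here: a maximal matching has at least half the size of a maximum matching, and its endpoints form a vertex cover of size at most twice the minimum.

```python
import random

def randomized_greedy_maximal_matching(n, edges, rng):
    # Scan the edges in a uniformly random order; greedily add an edge
    # whenever both of its endpoints are still unmatched.
    order = list(edges)
    rng.shuffle(order)
    matched = [False] * n
    matching = []
    for u, v in order:
        if not matched[u] and not matched[v]:
            matched[u] = matched[v] = True
            matching.append((u, v))
    return matching

rng = random.Random(1)
# Path on 6 vertices: 0-1-2-3-4-5; its maximum matching has size 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
M = randomized_greedy_maximal_matching(6, edges, rng)
cover = {u for e in M for u in e}   # endpoints of M form a vertex cover
print(len(M), sorted(cover))
```

On this path, the greedy matching has size 2 or 3 depending on the random order, always within a factor 2 of optimal.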

**Yelena Yuditsky - Integer Programs with Bounded Subdeterminants and Two Nonzeros Per Row**

We give strongly polynomial-time algorithms for integer linear programs defined by integer coefficient matrices whose subdeterminants are bounded by a constant and that contain at most two nonzero entries in each row or column.

The core of our approach is the first polynomial-time algorithm for the maximum weight stable set problem on graphs that do not contain more than %%k%% vertex disjoint odd cycles, where %%k%% is any constant. Previously, polynomial-time algorithms were only known for %%k=0%% (bipartite graphs) and for %%k=1%%.

This is joint work with Samuel Fiorini, Gwenaël Joret and Stefan Weltge.

**William Kuszmaul - Stochastic and Worst-Case Generalized Sorting Revisited**

The *generalized sorting problem* is a restricted version of standard comparison sorting where we wish to sort %%n%% elements but only a subset of pairs are allowed to be compared. Formally, there is some known graph %%G = (V, E)%% on the %%n%% elements %%v_1, \dots, v_n%%, and the goal is to determine the true order of the elements using as few comparisons as possible, where all comparisons %%(v_i, v_j)%% must be edges in %%E%%. We are promised that if the true ordering is %%x_1 < x_2 < \cdots < x_n%% for %%\{x_i\}%% an unknown permutation of the vertices %%\{v_i\}%%, then %%(x_i, x_{i+1}) \in E%% for all %%i%%: this Hamiltonian path ensures that sorting is actually possible.
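For intuition, a minimal sketch of the trivial %%m%%-comparison baseline (not this work's algorithm): query every edge of %%G%%, orient it by the comparison outcome, and topologically sort. The promised Hamiltonian path forces the topological order to be unique, so this always recovers the true order. The sketch uses Python's standard-library `graphlib` (3.9+).

```python
from graphlib import TopologicalSorter

def sort_via_all_queries(values, edges):
    # Trivial baseline: one comparison query per edge (m queries total),
    # then a topological sort of the resulting orientation.  The promised
    # Hamiltonian path makes the topological order unique.
    ts = TopologicalSorter()
    for i in range(len(values)):
        ts.add(i)                   # register every element
    for i, j in edges:
        if values[i] < values[j]:   # the comparison query
            ts.add(j, i)            # i must precede j
        else:
            ts.add(i, j)
    return list(ts.static_order())

vals = [30, 10, 20, 40]                   # true order: 1, 2, 0, 3
edges = [(1, 2), (2, 0), (0, 3), (1, 3)]  # contains the Hamiltonian path
print(sort_via_all_queries(vals, edges))
```

The interesting regime is when %%m%% is large, so that spending one query per edge is wasteful; that is the gap the sublinear-query algorithms close.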

In this work, we improve the bounds for generalized sorting on both random graphs and worst-case graphs. For Erdős–Rényi random graphs %%G(n, p)%% (with the promised Hamiltonian path added to ensure sorting is possible), we provide an algorithm for generalized sorting with an expected %%O(n \log (np))%% comparisons, which we prove to be optimal for query complexity. This strongly improves over the best known algorithm of Huang, Kannan, and Khanna (FOCS 2011), which uses %%\tilde{O}(\min(n \sqrt{np}, n/p^2))%% comparisons. For arbitrary graphs %%G%% with %%n%% vertices and %%m%% edges (again with the promised Hamiltonian path), we provide an algorithm for generalized sorting with %%\tilde{O}(\sqrt{mn})%% comparisons. This improves over the best known algorithm of Huang et al., which uses %%\min(m, \tilde{O}(n^{3/2}))%% comparisons.

This is joint work with Shyam Narayanan.

**Julia Chuzhoy - New Approximation Algorithms for Graph Crossing Number**

In the classical Graph Crossing Number problem, the input is an %%n%%-vertex graph %%G%%, and the goal is to draw %%G%% in the plane while minimizing the number of crossings between the images of its edges. This is a fundamental and extensively studied problem, with connections to many different areas, whose approximability status remains widely open. In all currently known approximation algorithms, the approximation factor depends polynomially on %%\Delta%% -- the maximum vertex degree in %%G%%. Even for the special case where the maximum vertex degree is bounded by a constant, until recently, the best approximation algorithm achieved a factor-%%O(\sqrt{n})%% approximation, while the best current negative results do not rule out a constant-factor approximation. In this talk we survey several recent results and techniques that led to the first subpolynomial approximation algorithm for the Graph Crossing Number problem in low-degree graphs.

**Rahul Savani - The Complexity of Gradient Descent: CLS = PPAD ∩ PLS**

We consider the problem of computing a Gradient Descent solution of a continuously differentiable function %%f%% on a bounded convex domain, i.e., finding a "point where Gradient Descent terminates". Equivalently, this corresponds to computing a so-called KKT (Karush-Kuhn-Tucker) stationary point of %%f%%. We study this problem in the "white box" model, where the function %%f%% is given in the input, e.g., as an arithmetic circuit. By some standard arguments, it follows that the problem lies in the intersection of the complexity classes PPAD and PLS. PPAD is mostly known for capturing the complexity of computing a Nash equilibrium, while PLS has been successful in characterizing the complexity of various local optimization problems, such as Local-Max-Cut. We prove that the Gradient Descent problem is complete for PPAD ∩ PLS. In particular, this implies that the class CLS ("Continuous Local Search"), which was previously thought to be a strict subset of PPAD ∩ PLS, is in fact equal to it.
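A minimal numerical sketch of the object being classified, assuming for concreteness a simple box domain: projected gradient descent terminates exactly when the projected step no longer moves the iterate, and such a point is an (approximate) KKT point of %%f%% on the domain.

```python
import numpy as np

def projected_gradient_descent(grad_f, lo, hi, x0, eta=0.1, eps=1e-6, max_iter=10_000):
    # Gradient descent on the box [lo, hi]^d.  A "point where gradient
    # descent terminates" is one the projected step no longer moves,
    # i.e. an approximate KKT point of f on the domain.
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        x_new = np.clip(x - eta * grad_f(x), lo, hi)
        if np.linalg.norm(x_new - x) <= eps * eta:
            return x_new
        x = x_new
    return x

# f(x) = (x0 - 2)^2 + (x1 + 3)^2 on the box [-1, 1]^2.
# The unconstrained minimum (2, -3) lies outside the box, so the KKT
# point sits on the boundary at (1, -1).
grad = lambda x: 2 * (x - np.array([2.0, -3.0]))
x_star = projected_gradient_descent(grad, -1.0, 1.0, [0.0, 0.0])
print(x_star)
```

The hardness result concerns the "white box" version of exactly this search problem, where %%f%% is supplied as a circuit rather than evaluated numerically.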

This is joint work with John Fearnley, Paul Goldberg, and Alexandros Hollender.

**Greg Bodwin - Restorable Shortest Path Tiebreaking for Edge-Faulty Graphs**

Suppose we have a huge graph %%G%% and we have computed its shortest paths. But then an edge fails, rendering some shortest paths unusable. How can we efficiently restore these paths, without having to recompute from scratch? The Restoration Lemma [Afek, Bremler-Barr, Kaplan, Cohen, Merritt '01] is a classic structural result that provides a way forward: it promises that every broken shortest path can be fixed by concatenating two original shortest paths.
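A small sanity check of the restoration property on a concrete unweighted graph (illustrative only, not the paper's construction): after deleting an edge on the shortest path, some replacement shortest path splits at a vertex %%z%% into two subpaths that are each shortest paths of the original graph.

```python
from collections import deque

def bfs_dist(adj, s):
    # Unweighted single-source shortest-path distances.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def all_shortest_paths(adj, s, t):
    # Enumerate every shortest s-t path (fine for tiny graphs).
    dist = bfs_dist(adj, t)
    paths, stack = [], [[s]]
    while stack:
        p = stack.pop()
        u = p[-1]
        if u == t:
            paths.append(p)
            continue
        for v in adj[u]:
            if dist.get(v, -1) == dist[u] - 1:
                stack.append(p + [v])
    return paths

# 0-1-2-3 is the shortest 0-3 path; 1-4-2 is a detour around edge (1, 2).
adj = {0: {1}, 1: {0, 2, 4}, 2: {1, 3, 4}, 3: {2}, 4: {1, 2}}
adj_e = {0: {1}, 1: {0, 4}, 2: {3, 4}, 3: {2}, 4: {1, 2}}  # edge (1, 2) removed

d_s = bfs_dist(adj, 0)   # original distances from s = 0
d_t = bfs_dist(adj, 3)   # original distances to t = 3 (undirected graph)

# Restoration check: some replacement shortest 0-3 path in G - e splits at a
# vertex z into a shortest s-z path and a shortest z-t path of the original G.
restorable = any(
    i == d_s[p[i]] and len(p) - 1 - i == d_t[p[i]]
    for p in all_shortest_paths(adj_e, 0, 3)
    for i in range(len(p))
)
print(restorable)   # True: 0-1-4-2-3 splits at z = 4
```

Here the failed edge raises the 0-3 distance from 3 to 4, and the replacement path 0-1-4-2-3 is the concatenation of the original shortest paths 0-1-4 and 4-2-3.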

Unfortunately, the restoration lemma is tiebreaking-sensitive: if we select a single canonical shortest path for each node pair, then it is not generally true that a broken path can be restored by concatenating two selected shortest paths. We provide the first construction of a "restorable tiebreaking scheme," proving that this desirable restoration property can in fact be achieved if we use a particular algorithm to select canonical shortest paths. We then show applications of our tiebreaking scheme in fault-tolerant network design.

Joint work with Merav Parter.
