
Abstracts Day 2

Amir Abboud - A Survey on Fine-Grained Complexity

TBC

Joseph Mitchell - Approximating Maximum Independent Set for Rectangles in the Plane

We give a polynomial-time constant-factor approximation algorithm for maximum independent set for (axis-aligned) rectangles in the plane. The best approximation factor previously achieved by a polynomial-time algorithm was %%O(\log\log n)%%. The results are based on a new form of recursive partitioning in the plane, in which faces that are constant-complexity and orthogonally convex are recursively partitioned into a constant number of such faces.
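
To make the problem concrete, here is a minimal brute-force sketch of maximum independent set for rectangles (my own toy illustration of the problem statement, not the constant-factor algorithm, which relies on the recursive partitioning described above):

```python
# Toy illustration (exponential time, tiny inputs only): find a largest
# set of pairwise-disjoint axis-aligned rectangles by exhaustive search.
# Rectangles are (x1, y1, x2, y2) tuples; touching boundaries count as disjoint.
from itertools import combinations

def disjoint(a, b):
    # Two axis-aligned rectangles are disjoint iff one lies entirely
    # to the left of, right of, below, or above the other.
    return a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1]

def max_independent_set(rects):
    # Try subsets from largest to smallest; return the first independent one.
    for k in range(len(rects), 0, -1):
        for subset in combinations(rects, k):
            if all(disjoint(a, b) for a, b in combinations(subset, 2)):
                return list(subset)
    return []

rects = [(0, 0, 2, 2), (1, 1, 3, 3), (2, 0, 4, 2), (0, 3, 2, 5)]
print(max_independent_set(rects))  # [(0, 0, 2, 2), (2, 0, 4, 2), (0, 3, 2, 5)]
```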

Merav Parter - New Diameter Reducing Shortcuts: Breaking the %%O(\sqrt{n})%% Barrier

For an %%n%%-vertex digraph %%G=(V,E)%%, a "shortcut set" is a (small) set of edges %%H%% taken from the transitive closure of %%G%% that, when added to %%G%%, guarantees that the diameter of %%G \cup H%% is small. Shortcut sets, introduced by Thorup in 1993, have a wide range of applications in algorithm design, especially in the context of parallel, distributed, and dynamic computation on directed graphs. A folklore result in this context shows that every %%n%%-vertex digraph admits a shortcut set of linear size (i.e., of %%O(n)%% edges) that reduces the diameter to %%\widetilde{O}(\sqrt{n})%%. Despite extensive research over the years, the question of whether one can reduce the diameter to %%o(\sqrt{n})%% with %%\widetilde{O}(n)%% shortcut edges has remained open.
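
For intuition, the following is a hedged sketch of the folklore sampling construction as I read it from the statement above (not the new algorithm of this talk): sample roughly %%\sqrt{n}\log n%% vertices and shortcut every reachable sampled pair; with high probability every long path passes near samples often enough that the diameter of %%G \cup H%% drops to %%\widetilde{O}(\sqrt{n})%%.

```python
# Hedged sketch of the folklore shortcut-set construction (my reading of
# the abstract's statement): sample ~sqrt(n) log n vertices, then add a
# shortcut edge for every ordered sampled pair (u, v) with v reachable
# from u. The added edges lie in the transitive closure by construction.
import math, random
from collections import deque

def reachable_from(adj, s):
    # Standard BFS over a digraph given as {vertex: [out-neighbors]}.
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def folklore_shortcuts(adj):
    n = len(adj)
    p = min(1.0, 2 * math.log(n + 1) / math.sqrt(n))  # ~sqrt(n) log n samples
    sample = [v for v in adj if random.random() < p]
    shortcuts = set()
    for u in sample:
        reach = reachable_from(adj, u)
        for v in sample:
            if v != u and v in reach:
                shortcuts.add((u, v))  # an edge of the transitive closure
    return shortcuts  # ~O(n) edges w.h.p.; diameter becomes ~O(sqrt(n))
```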

In this talk, I will present the first improved diameter-sparsity tradeoff for this problem, breaking the %%\sqrt{n}%% diameter barrier. Specifically, we show an %%O(n^{\omega})%%-time randomized algorithm for computing a linear-size shortcut set that reduces the diameter of the digraph to %%\widetilde{O}(n^{1/3})%%. We also extend our algorithms to provide improved %%(\beta,\epsilon)%%-hopsets for %%n%%-vertex weighted directed graphs.

Joint work with Shimon Kogan.

Shay Moran - A Theory of Universal Learning

How quickly can a given class of concepts be learned from examples? It is common to measure the performance of a supervised machine learning algorithm by plotting its "learning curve", that is, the decay of the error rate as a function of the number of training examples. However, the classical theoretical framework for understanding learnability, the PAC model of Vapnik-Chervonenkis and Valiant, does not explain the behavior of learning curves: the distribution-free PAC model of learning can only bound the upper envelope of the learning curves over all possible data distributions. This does not match the practice of machine learning, where the data source is typically fixed in any given scenario, while the learner may choose the number of training examples on the basis of factors such as computational resources and desired accuracy.

In this work, we study an alternative learning model that better captures such practical aspects of machine learning, but still gives rise to a complete theory of the learnable in the spirit of the PAC model. More precisely, we consider the problem of universal learning, which aims to understand the performance of learning algorithms on every data distribution, but without requiring uniformity over the distribution. The main result of this paper is a remarkable trichotomy: there are only three possible rates of universal learning. More precisely, we show that the learning curves of any given concept class decay at either an exponential, a linear, or an arbitrarily slow rate. Moreover, each of these cases is completely characterized by appropriate combinatorial parameters, and we exhibit optimal learning algorithms that achieve the best possible rate in each case.
For concreteness, in this paper we consider only the realizable case, though analogous results are expected to extend to more general learning scenarios.
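
As a toy illustration of a learning curve at the linear rate (my own example, not from the paper): empirical risk minimization for threshold functions on %%[0,1]%% in the realizable case has expected error decaying roughly like %%1/n%%.

```python
# Toy learning-curve experiment: learn a threshold on [0, 1] from labeled
# uniform samples and watch the error decay ~1/n (the "linear" rate).
import random

def true_label(x, theta=0.5):
    return x >= theta

def erm_threshold(sample):
    # Smallest threshold consistent with the sample: the largest
    # negatively labeled point (0.0 if there are none).
    negatives = [x for x, y in sample if not y]
    return max(negatives, default=0.0)

def error(theta_hat, theta=0.5, trials=20000):
    # Monte Carlo estimate of the error under the uniform distribution.
    pts = (random.random() for _ in range(trials))
    return sum((x >= theta_hat) != true_label(x, theta) for x in pts) / trials

for n in [10, 100, 1000]:
    data = [(x, true_label(x)) for x in (random.random() for _ in range(n))]
    print(n, round(error(erm_threshold(data)), 4))  # shrinks roughly like 1/n
```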

Jan Vondrak - A Constant-factor Approximation Algorithm for Nash Social Welfare with Submodular Valuations

I will discuss the problem of allocating indivisible goods to agents in order to optimize a certain welfare objective. Various objectives can be considered, the most natural being the sum of the valuations of the participating agents. The "Nash social welfare" is an alternative objective that goes back to John Nash's work in the 1950s; it is the geometric average, rather than the sum, of the valuations, and it has several desirable properties, such as balancing total welfare with fairness. On the technical side, it presents a significantly different problem, with connections to areas such as matching theory, computation of the permanent, and stable polynomials.
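
A small worked example (with made-up numbers) of why the geometric average favors fairness: two allocations with the same total utility can have very different Nash social welfare.

```python
# Compare the utilitarian objective (sum of valuations) with Nash social
# welfare (geometric mean of valuations) on two hypothetical allocations.
import math

def nsw(valuations):
    # Geometric mean of the agents' valuations.
    return math.prod(valuations) ** (1 / len(valuations))

unbalanced = [18, 1, 1]   # one agent gets almost everything
balanced   = [6, 7, 7]    # same total utility, spread out

print(sum(unbalanced), sum(balanced))  # 20 20   -> utilitarian objective ties
print(nsw(unbalanced), nsw(balanced))  # ~2.62 vs ~6.65 -> NSW prefers balance
```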

Our new result is a constant-factor approximation for Nash social welfare, whenever the valuation functions are submodular.

(Joint work with Wenzheng Li.)

Anna Karlin - A Survey on Metric TSP

We survey some of the recent progress on metric TSP. Most of the talk will focus on explaining the key technical ideas underlying the recent results by Nathan Klein, Shayan Oveis Gharan, and the speaker, which obtain a %%(3/2 - \epsilon)%%-approximation algorithm and an improved upper bound on the integrality gap of the subtour elimination LP.
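
For reference, the subtour elimination LP is the classical Held-Karp relaxation: minimize %%\sum_{e \in E} c_e x_e%% subject to %%\sum_{e \in \delta(v)} x_e = 2%% for every vertex %%v%%, %%\sum_{e \in \delta(S)} x_e \ge 2%% for every nonempty proper subset %%S \subsetneq V%%, and %%0 \le x_e \le 1%% for every edge %%e%%.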

Sayan Bhattacharya - Deterministic Rounding of Dynamic Fractional Matchings

We present a framework for deterministically rounding a dynamic fractional matching. Applying our framework in a black-box manner on top of existing fractional matching algorithms, we derive the following new results: (1) The first deterministic algorithm for maintaining a %%(2-\delta)%%-approximate maximum matching in a fully dynamic bipartite graph, in arbitrarily small polynomial update time. (2) The first deterministic algorithm for maintaining a %%(1+\delta)%%-approximate maximum matching in a decremental bipartite graph, in polylogarithmic update time. (3) The first deterministic algorithm for maintaining a %%(2+\delta)%%-approximate maximum matching in a fully dynamic general graph, in polylogarithmic (specifically, %%O(\log^4 n)%%) update time. These results are respectively obtained by applying our framework on top of the fractional matching algorithms of Bhattacharya et al. [STOC'16], Bernstein et al. [FOCS'20], and Bhattacharya and Kulkarni [SODA'19].

Prior to our work, there were two known general-purpose rounding schemes for dynamic fractional matchings. Both these schemes, by Arar et al. [ICALP'18] and Wajc [STOC'20], were randomized.

Our rounding scheme works by maintaining a good matching sparsifier with bounded arboricity, and then applying the algorithm of Peleg and Solomon [SODA'16] to maintain a near-optimal matching in this low-arboricity graph. To the best of our knowledge, this is the first dynamic matching algorithm that works on general graphs by using an algorithm for low-arboricity graphs as a black-box subroutine. This feature of our rounding scheme might be of independent interest.
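
To convey the shape of this pipeline, here is a static toy version (my simplification; the paper's dynamic sparsifier is more subtle): threshold the fractional values to obtain a bounded-degree, and hence low-arboricity, subgraph, then compute an integral matching inside it.

```python
# Static toy with the same shape as the pipeline above (my simplification,
# not the paper's dynamic scheme): keep edges whose fractional value is at
# least delta -- each vertex then has degree <= 1/delta, so the kept
# subgraph has low arboricity -- and greedily match inside it.
def round_fractional_matching(x, delta=0.25):
    # x: dict mapping edges (u, v) to fractional values in [0, 1], with the
    # values of the edges at each vertex summing to at most 1.
    sparsifier = [e for e, val in x.items() if val >= delta]
    matched, matching = set(), []
    for u, v in sorted(sparsifier, key=lambda e: -x[e]):  # heaviest first
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

x = {("a", "b"): 0.6, ("b", "c"): 0.4, ("c", "d"): 0.6, ("a", "d"): 0.1}
print(round_fractional_matching(x))  # [('a', 'b'), ('c', 'd')]
```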

This is joint work with Peter Kiss.

Aviad Rubinstein - The Randomized Communication Complexity of Randomized Auctions

We study the randomized communication complexity of revenue maximizing auctions. Ironically, even though optimal auctions are randomized, prior work had only considered their deterministic communication complexity. 

On the positive side, we obtain surprisingly large (in fact infinite) improvements over deterministic protocols. 

On the negative side, we show nearly tight communication lower bounds, even against approximately optimal mechanisms. As a corollary, we resolve an open problem in theoretical economics posed by [Fadel and Segal '09].

This is joint work with Junyao Zhao; it was inspired by an open problem of [Babaioff, Gonczarowski, Nisan '17].