Below is an overview of past seminars in this series from 2016, listed in reverse chronological order (most recent first).
Wednesday 6 April - Kent Andersen (Aarhus University)
Using software from algebraic geometry to identify non-linear cuts in integer optimization
In integer optimization, a key issue is how to find good (convex) approximations of the set of feasible points. Such approximations can reduce the amount of enumeration needed to solve a given problem. In general, this requires identifying polynomials (called cuts) that describe the boundary of bodies that contain the set of feasible points. These polynomials can often only be described implicitly as (prime) implications of a (larger) set of polynomials (called an ideal) that involve further (artificial) variables. Software has been developed in algebraic geometry that can compute all prime implications of a set of polynomials, and this software can also compute polynomial implications that involve only a subset of the variables. In this talk we illustrate how to use such software to identify a (general) formula for a (non-linear) cut in integer conic optimization. We believe similar approaches can be used to identify formulas for useful surfaces in other non-linear optimization settings.
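As a rough illustration of the elimination step described above (our own toy example, not the speaker's computation, which likely uses a dedicated algebraic-geometry system), the sketch below uses SymPy's Gröbner basis routine with a lexicographic order that ranks the artificial variable t highest; basis elements free of t are exactly the polynomial implications in the remaining variables.

```python
# Toy illustration: recover a polynomial implication in a subset of the
# variables by eliminating the "artificial" parameter t. With a lex order
# ranking t highest, Groebner basis elements free of t generate the
# elimination ideal, i.e. the implications involving only x and y.
from sympy import symbols, groebner

t, x, y = symbols('t x y')

# Ideal generated by the parametrisation x = t^2, y = t^3.
ideal = [x - t**2, y - t**3]

gb = groebner(ideal, t, x, y, order='lex')
eliminated = [p for p in gb.exprs if t not in p.free_symbols]
print(eliminated)  # expect something like [x**3 - y**2]: the implicit curve
```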
Wednesday 16 March - Thomas Dueholm Hansen (Aarhus)
An improved version of the Random-Facet pivoting rule for the simplex algorithm
The Random-Facet pivoting rule of Kalai and of Matoušek, Sharir, and Welzl is an elegant, randomized pivoting rule for the simplex algorithm, the classical combinatorial algorithm for solving linear programs. The expected running time of the simplex algorithm when using this rule is subexponential in the combinatorial size of the linear program, i.e., in the number of variables and inequalities. This is currently the best known combinatorial bound for solving general linear programs. Other polynomial-time algorithms are known, but their running time also depends on the number of bits in the representation of the linear program.
We present a slightly improved version of the Random-Facet pivoting rule, thus obtaining the fastest known combinatorial algorithm for solving linear programs, the first improvement in over 20 years. Our results apply not only to linear programs, but also to more general, abstract LP-type problems. In particular we also obtain the fastest known algorithm for solving two-player turn-based stochastic games.
Joint work with Uri Zwick.
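For orientation, here is a schematic sketch of the classic (unimproved) Random-Facet recursion for abstract LP-type problems; the hooks violates and basis_of are problem-specific and assumed as given here, and the improved rule from the talk refines this basic scheme.

```python
import random

def random_facet(H, B, violates, basis_of):
    """Sketch of the classic Random-Facet recursion for LP-type problems.

    H: set of constraints; B: a basis (subset of H);
    violates(B, h): does constraint h violate the optimum defined by B?
    basis_of(B, h): a basis for the constraints of B together with h.
    """
    if H == B:
        return B
    h = random.choice(tuple(H - B))        # random constraint outside the basis
    B1 = random_facet(H - {h}, B, violates, basis_of)  # solve without h
    if violates(B1, h):
        # h was binding after all: pivot it in and re-solve.
        return random_facet(H, basis_of(B1, h), violates, basis_of)
    return B1
```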
Wednesday 17 February - Daan Wierstra (Google DeepMind)
How to Build an Artificial General Intelligence
At Google DeepMind, we focus on developing general learning algorithms to push the frontier of artificial general intelligence (AGI) research. As we endeavour to eventually construct a full AGI, agent architecture design and the non-trivial matter of 'how to put it all together' figure prominently in our mission. As such, DeepMind operates at the crossroads of neuroscience, machine learning and engineering. The first part of this talk will provide an overview of some of our recent research, ranging from advances in (deep) reinforcement learning and advanced game play to (deep) variational inference methods and (deep) memory techniques. In the second part of this talk, I will elaborate on some of our general design philosophy, confront some of the profound dilemmas inherited from disparate fields such as machine learning and computational neuroscience, and explain why a great deal of work surely still lies ahead of us.
Biography:
Daan Wierstra leads the 'Frontiers' research team at Google DeepMind. He did his PhD with Juergen Schmidhuber at IDSIA, the Swiss AI lab, where he worked on reinforcement learning methods, recurrent deep networks and global optimisation. After a postdoc in computational neuroscience at the Brain Mind Institute at EPFL in Lausanne, Switzerland, he became the first scientific employee to join DeepMind, which was then a recently founded startup with the explicit aim of solving Artificial General Intelligence (AGI). DeepMind was incorporated into Google last year, and his current work focuses on developing algorithms relevant to strong AI, including generative models, dealing with uncertainty, temporal abstraction, planning, and attentional mechanisms.
Seminar jointly hosted by Discrete Mathematics and Game Theory and Operations Research
Thursday 11 February - Christoph Dürr (Paris)
Online Algorithms for Multi-Level Aggregation
Online algorithms receive their input in the form of request sequences and must make an irrevocable decision at each request. A classical example is the paging problem. We introduce standard techniques for designing and analysing online algorithms, illustrated on the ski rental problem, the cow path problem, and load balancing. Finally, we discuss our own work on the multi-level aggregation problem.
This is joint work with Marcin Bienkowski, Martin Böhm, Jaroslaw Byrka, Marek Chrobak, Lukáš Folwarczný, Łukasz Jeż, Jiří Sgall, Nguyen Kim Thang, Pavel Veselý.
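To give a flavour of the competitive-analysis style mentioned above, here is a minimal sketch (ours, not from the talk) of the classic break-even strategy for the ski rental problem, which pays at most roughly twice the optimal offline cost.

```python
def ski_rental_cost(days_skied, rent_cost, buy_cost):
    """Break-even strategy for ski rental: rent until the accumulated rent
    would reach the purchase price, then buy. The online player pays at
    most roughly twice the optimal offline cost (2-competitive).
    """
    threshold = buy_cost // rent_cost   # days we are willing to keep renting
    total = 0
    for day in range(1, days_skied + 1):
        if day <= threshold:
            total += rent_cost          # still renting
        elif day == threshold + 1:
            total += buy_cost           # buy; later days cost nothing
    return total

# Example: renting costs 1/day, skis cost 10; a 25-day season costs the
# online strategy 10 + 10 = 20, versus 10 for the clairvoyant buyer.
print(ski_rental_cost(25, 1, 10))  # 20
```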
Wednesday 20 January - Ruth Misener (Imperial)
Implementing algorithmic advances in mixed-integer nonlinear optimisation
ANTIGONE (Algorithms for coNTinuous / Integer Global Optimisation of Nonlinear Equations) is a computational framework for the deterministic global optimisation of mixed-integer nonlinear programs (nonconvex MINLP).
This presentation highlights how ANTIGONE integrates algorithmic advances in: reformulating expression graphs, finding high-dimensional vertex polyhedral cuts, and detecting convexity. We also discuss new directions in finding special structure via pattern matching.
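To give a concrete sense of the polyhedral relaxations such frameworks construct over nonconvex terms (a textbook illustration on our part, not ANTIGONE's internals or API), the sketch below generates the McCormick envelope: the four linear cuts that give the convex hull of a bilinear term w = x*y over a box.

```python
def mccormick_cuts(xL, xU, yL, yU):
    """McCormick envelope of the bilinear term w = x*y on [xL, xU] x [yL, yU].

    Each cut is returned as coefficients (a, b, c) of a valid inequality
    w >= a*x + b*y + c (underestimators) or w <= a*x + b*y + c
    (overestimators); together they describe the convex hull of the
    bilinear term over the box.
    """
    under = [
        (yL, xL, -xL * yL),   # w >= yL*x + xL*y - xL*yL
        (yU, xU, -xU * yU),   # w >= yU*x + xU*y - xU*yU
    ]
    over = [
        (yU, xL, -xL * yU),   # w <= yU*x + xL*y - xL*yU
        (yL, xU, -xU * yL),   # w <= yL*x + xU*y - xU*yL
    ]
    return under, over
```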
Wednesday 13 January - Giacomo Zambelli (LSE)
Rescaled coordinate descent methods for Linear Programming
Simple coordinate descent methods such as von Neumann's algorithm or the Perceptron, both developed in the 1950s, can be used to solve linear programming feasibility problems. Their convergence rate depends on the condition measure of the problem at hand and is typically not polynomial. Recent work of Chubanov (2012, 2014), related to prior work of Betke (2004), has sparked renewed interest in applying these methods to obtain polynomial-time algorithms for linear programming. We present two algorithms that fit into this line of research. Both algorithms alternate between coordinate descent steps and rescaling steps: either the descent step substantially improves convergence, or we can infer that the problem is ill conditioned and rescale to improve the condition measure. In particular, both algorithms are based on the analysis of a geometric invariant of the LP, used as a proxy for the condition measure, that appears to be new to the literature.
This is joint work with Daniel Dadush (CWI) and László Végh (LSE).
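As background for the descent steps mentioned above, here is a minimal sketch of the classic (unrescaled) Perceptron method for the strict feasibility problem Ax > 0; the rescaled algorithms from the talk wrap such steps with rescaling when slow progress reveals a poor condition measure. The starting point and stopping rule below are our own choices.

```python
import numpy as np

def perceptron_feasibility(A, max_iters=100000):
    """Classic Perceptron for strict LP feasibility: find x with A @ x > 0.

    Each step adds a currently violated (normalised) row to x. The number
    of iterations scales like 1/rho**2, where rho is the width of the
    feasible cone: exactly the kind of condition measure that the
    rescaling steps discussed in the talk are designed to improve.
    """
    A = A / np.linalg.norm(A, axis=1, keepdims=True)  # normalise rows
    x = np.zeros(A.shape[1])
    for _ in range(max_iters):
        violated = np.flatnonzero(A @ x <= 0)
        if violated.size == 0:
            return x                      # strictly feasible point found
        x = x + A[violated[0]]            # Perceptron update on a violated row
    return None                           # no answer: likely ill conditioned
```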