How to contact us

Department of Mathematics
Columbia House
London School of Economics
Houghton Street
London WC2A 2AE, UK

 

Email: maths.info@lse.ac.uk
Tel: +44(0)207 955 7732/7925
Read our research blog: http://blogs.lse.ac.uk/maths/

 


Past Lunchtime Seminars in 2007

 

14th December 2007

Fares AL-Azemi (LSE)

An investment model with impulse stochastic control

Abstract
We consider a stochastic system whose uncontrolled state dynamics are modelled by a general one-dimensional Itô diffusion. The control effort that can be applied to this system takes the form of monotone impulse stochastic control. The control problem that we address aims at maximising a performance criterion that rewards high values of the utility derived from the system's controlled state. This problem has been motivated by applications such as the so-called goodwill problem, in which the system's state is used to represent the image that a product has in a market, while control expenditure is associated with raising the product's image, e.g., through advertising.

 

 

7th December 2007

Seminar cancelled

30th November 2007

Julian Merschen (LSE)

A Value Improvement Algorithm with Strategy Verification for Simple Stochastic Games

Abstract
In this talk, we introduce the notion of Simple Stochastic Games (SSGs) and discuss two different approaches for computing the value of an SSG. The first algorithm uses value improvement, while the second uses strategy improvement. We then alter the value improvement algorithm in order to check whether an interim strategy is optimal. This is motivated by the fact that the rate of convergence can be exponentially slow in the size of an SSG, but the optimal strategy can in some cases be found after fewer iterations.
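The value-computation idea behind such algorithms can be sketched in a few lines. The toy example below (the game, the vertex names, and the plain fixed-point iteration are all mine, not the talk's verified value-improvement algorithm) iterates SSG values on a small game:

```python
# Value iteration on a toy Simple Stochastic Game (SSG).  Illustrative sketch only.
# Vertex kinds: 'max' (player 1 picks a successor), 'min' (player 2 picks),
# 'avg' (a fair coin picks among successors), 'sink' (fixed payoff, never updated).

def value_iteration(graph, payoffs, iters=100):
    """graph: vertex -> (kind, successors); payoffs: sink vertex -> 0 or 1."""
    v = {u: 0.0 for u in graph}
    for u, p in payoffs.items():
        v[u] = float(p)
    for _ in range(iters):
        for u, (kind, succ) in graph.items():
            if kind == 'max':
                v[u] = max(v[s] for s in succ)
            elif kind == 'min':
                v[u] = min(v[s] for s in succ)
            elif kind == 'avg':
                v[u] = sum(v[s] for s in succ) / len(succ)
    return v

graph = {
    'one':  ('sink', []),                # payoff-1 sink
    'zero': ('sink', []),                # payoff-0 sink
    'a':    ('avg', ['one', 'zero']),    # coin flip between the two sinks
    'mx':   ('max', ['a', 'zero']),      # maximizer prefers 'a' (value 1/2)
    'mn':   ('min', ['mx', 'one']),      # minimizer prefers 'mx' (value 1/2)
}
vals = value_iteration(graph, {'one': 1, 'zero': 0})
print(vals['a'], vals['mx'], vals['mn'])   # → 0.5 0.5 0.5
```

The slow convergence the abstract mentions does not show on a game this small; on large games the number of sweeps needed can grow exponentially, which is what motivates verifying interim strategies.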

 

 

23rd November 2007

Zibo Xu (LSE)

Overview of a conjectured stochastic Ramsey theorem

Abstract
Shmaya and Solan proposed a simple stochastic variation of Ramsey's theorem, based on the definition of an F-consistent NT function. Although this version suffices for their paper on stochastic games, they suspect that a stronger but more natural stochastic generalisation of Ramsey's theorem exists in this stopping-time context. In this talk, we will discuss the difficulties that lie in possible proofs or counterexamples of this conjecture and show some tentative positive results.

 

 

16th November 2007

Stefanie Gerke (Royal Holloway, University of London)

Scheduling in a probabilistic setting

Abstract
We consider the problem of scheduling a set of n jobs on m parallel machines without preemption, so as to maximize the minimum load. This problem is known as the Santa Claus problem as Santa wants to give n presents to m children such that the least happy child is as happy as possible.

Here we study a version of that problem with the additional restriction that Santa Claus has only a guess of how valuable a present is for a child; that is, the value of a present (respectively, the processing time in scheduling terminology) is a random variable. Under this assumption we evaluate the performance of Santa Claus and show the following results. If the expected values of the presents do not deviate too much from each other (in a well-defined sense), then there exists an algorithm with expected competitive ratio arbitrarily close to one; that is, Santa Claus can perform in expectation almost as well as an algorithm that knows the actual values of the presents in advance. On the other hand, if the expected values of the presents deviate greatly from each other, then the expected performance becomes arbitrarily bad for all algorithms.
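For intuition, the underlying deterministic max-min objective can be computed exactly on tiny instances. The function below is a hypothetical brute-force illustration of that objective (exponential time), not the talk's probabilistic algorithm:

```python
from itertools import product

# Exact max-min allocation by brute force: try every assignment of items
# (presents) to the m machines (children) and keep the best minimum load.
def santa_claus_opt(values, m):
    """Maximum over all allocations of `values` to m children of the minimum load."""
    best = 0
    for assign in product(range(m), repeat=len(values)):
        loads = [0] * m
        for item, child in zip(values, assign):
            loads[child] += item
        best = max(best, min(loads))
    return best

print(santa_claus_opt([4, 3, 2, 2, 1], 2))   # → 6  ({4, 2} vs {3, 2, 1})
```

The stochastic version studied in the talk only sees random proxies for these values, which is why uniform guarantees are impossible when the expectations deviate greatly.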

 

 

 

9th November 2007

Anne Balthasar (LSE)

The van den Elzen-Talman algorithm

Abstract
The van den Elzen-Talman algorithm is a pivoting algorithm for the computation of Nash equilibria of bimatrix games. In this talk we discuss relations of this algorithm to the Lemke-Howson algorithm and the global Newton method. This will lead us to the question of whether every equilibrium of index +1 is traceable by the van den Elzen-Talman algorithm.

 

 

 

2nd November 2007

Matthias Reitzner (Institut für Diskrete Mathematik und Geometrie, TU Wien)

Random polytopes

Abstract
Let K be a convex set in R^d. Choose n random points in K, and denote by P_n their convex hull. We call P_n a random polytope. Investigations concerning the expected value of functionals of P_n, such as volume, surface area, and number of vertices, started in 1864 with a problem raised by Sylvester and are now a classical part of stochastic and convex geometry.

Recent years have seen several new developments on distributional aspects of functionals of random polytopes. In this talk we concentrate on recent results such as central limit theorems and tail inequalities, as the number of random points tends to infinity.
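A quick Monte Carlo experiment makes the slow growth of one such functional visible. The sketch below (illustrative only, not material from the talk) estimates the expected number of hull vertices for K the unit square in d = 2, where the expectation is known to grow logarithmically in n:

```python
import random

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def num_hull_vertices(pts):
    """Count convex-hull vertices via Andrew's monotone chain."""
    pts = sorted(pts)
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return len(lower) + len(upper) - 2   # the two endpoints appear in both chains

random.seed(1)
results = []
for n in (100, 1000, 10000):
    trials = 50
    avg = sum(num_hull_vertices([(random.random(), random.random())
                                 for _ in range(n)]) for _ in range(trials)) / trials
    results.append(avg)
    print(n, round(avg, 1))   # grows slowly with n
```

For the square the expectation is of order (8/3) log n (Rényi and Sulanke), so multiplying n by 10 adds only a constant number of expected vertices.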

 

 

 

26th October 2007

Bob Simon (LSE)

Overlapping partitions and incomplete information

Abstract
A partition corresponds to the knowledge of a player. If there are at least two players then one can develop complex statements of interactive knowledge, such as "A knows that B doesn't know whether A knows that ...." We examine the degree to which statements of interactive knowledge can represent the system of overlapping partitions.

 

 

 

5th October 2007

Elchanan Mossel (UC Berkeley)

Sorting Noisy and Partial Orders

 

22nd June 2007

Adam Ostaszewski (LSE)

Weak and Strong No Trumps: Generic regular variation

Abstract
The theory of regular variation (launched by Karamata in 1930) imposes (rather weak) regularity conditions on the functions in play, namely (Lebesgue) measurability or (the property of) Baire. It has been a long-standing open problem to find a common generalization of these. This has now been achieved (joint work with N. H. Bingham) using a new combinatorial principle, called No Trumps or NT, by analogy with the principles Jensen's Diamond and Ostaszewski's Club. I will briefly discuss fundamentals of regular variation and review recent research showing how essentially one (unifying) kind of infinite combinatorics may be used to derive a variety of theorems. Three examples of applications:

  1. Steinhaus type theorems on difference sets;
  2. Theorems on automatic properties (e.g. automatic continuity of additive functions in the class of measurable functions);
  3. Extension of the core theory of regular variation to a wider combinatorial setting -- we call this "generic" regular variation by analogy with usage in analysis (Poincaré Recurrence Theorem, etc.) and mathematical logic (forcing/generic extensions).

 

 

25th May 2007

Prasad Tetali (GaTech)

Information Inequalities and Applications

Abstract
Upper and lower bounds are obtained for the joint entropy of a collection of random variables in terms of an arbitrary collection of subset joint entropies. These inequalities generalize Shannon's chain rule for entropy as well as inequalities of Han, Fujishige and Shearer. The new inequalities are applied to obtain new results in combinatorics, such as bounds on the number of independent sets in an arbitrary graph, matchings in regular graphs, and the number of zero-error source-channel codes, as well as new determinantal inequalities in matrix theory.
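For reference, the classical inequalities the abstract generalises can be stated as follows (standard formulations supplied here for context, not text from the talk):

```latex
% Chain rule for entropy:
H(X_1,\dots,X_n) \;=\; \sum_{i=1}^{n} H\bigl(X_i \,\big|\, X_1,\dots,X_{i-1}\bigr).
% Han's inequality:
H(X_1,\dots,X_n) \;\le\; \frac{1}{n-1} \sum_{i=1}^{n} H\bigl(X_{[n]\setminus\{i\}}\bigr).
% Shearer's lemma: if every index i in [n] belongs to at least t members
% of a family \mathcal{F} of subsets of [n], then
H\bigl(X_{[n]}\bigr) \;\le\; \frac{1}{t} \sum_{S \in \mathcal{F}} H(X_S).
```

The talk's inequalities bound the joint entropy by arbitrary collections of subset entropies, recovering each of the above as a special case.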

18th May 2007

Tugkan Batu (LSE)

A Sublinear-Time Approximation Scheme for Bin Packing

Abstract
The bin-packing problem is defined as follows: given a set of n items with sizes 0 < w_1, w_2, ..., w_n < 1, find a packing of these items into the minimum possible number of unit-size bins. We present a sublinear-time asymptotic approximation scheme for the bin-packing problem; that is, for any ε > 0, we present an algorithm A_ε that has sampling access to the input instance and outputs a value k such that C_opt ≤ k ≤ (1+ε)·C_opt + 2, where C_opt is the cost of an optimal solution. It is clear that uniform sampling by itself will not allow a sublinear-time algorithm in this setting; a small number of items might constitute most of the total weight, and uniform samples will not hit them. In this work we use weighted samples, where item i is sampled with probability proportional to its weight, that is, with probability w_i / Σ_j w_j. In the presence of weighted samples, the approximation algorithm runs in Õ(n^{1/2}·poly(1/ε)) + g(1/ε) time, where g(x) is an exponential function of x. When both weighted and uniform sampling are allowed, Õ(n^{1/3}·poly(1/ε)) + g(1/ε) time suffices.

Joint work with Petra Berenbrink and Christian Sohler.
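The weighted-sampling primitive the abstract relies on is easy to simulate. The sketch below (illustrative only; it is not the paper's approximation scheme) draws items with probability proportional to their size and uses the samples to estimate the weight share of large items:

```python
import random

random.seed(0)

# Weighted sampling: item i is drawn with probability w_i / sum_j w_j.
def weighted_samples(sizes, k):
    return random.choices(range(len(sizes)), weights=sizes, k=k)

sizes = [0.9] * 10 + [0.01] * 1000   # a few heavy items hidden among many light ones
eps = 0.5
k = 2000
hits = sum(1 for i in weighted_samples(sizes, k) if sizes[i] >= eps)
estimate = hits / k                  # true weight share of heavy items: 9/19 ≈ 0.47
print(round(estimate, 2))
```

A uniform sample of the same size would hit a heavy item only about 1% of the time (10 out of 1010 items), which is exactly the failure mode of uniform sampling described in the abstract.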

11th May 2007

Jason D. Hartline (Microsoft Research, Silicon Valley)

Money Burning and Implementation

Abstract
In response to public disapproval of any electronic mail system that includes a charge for sending email, computational spam fighting, originally proposed in [2], has reemerged as an economic approach that could be used to prevent spam in email systems [1]. This approach requires email senders to solve a challenge that expends their computational resources but otherwise has no direct social benefit. This challenge is tantamount to requiring that the sender burn money. This paper explores the extent to which money burning may be useful in broader implementation contexts.

We consider the general problem of designing socially optimal, single-round, sealed-bid mechanisms when transfers made from the agents to the mechanism must be burnt (i.e., are a social loss). In these settings the socially optimal outcome is the one that maximizes the marginal surplus, that is, the sum of the agents' valuations minus the agents' payments (minus any social cost of the outcome).

In the Bayesian setting, where the agents' valuations are drawn independently, but not necessarily identically, from a known distribution, we give a concise characterization of the mechanism that maximizes the expected marginal surplus. From this characterization we observe that the socially optimal way to allocate a single item to agents with i.i.d. valuations is, depending on the distribution, either to assign the item to an arbitrary agent or to run a second-price-like auction. Furthermore, for non-identical distributions that satisfy the monotone hazard rate condition, the socially optimal mechanism (for any given social cost function) is the one that chooses the allocation to maximize the expected social surplus ex ante and requires no payments.

This is a joint work with Tim Roughgarden.

[1] C. Dwork, A. Goldberg, and M. Naor. On memory-bound functions for fighting spam. In 23rd Annual International Cryptology Conference, pages 426-444, 2003.

[2] C. Dwork and M. Naor. Pricing via processing or combatting junk mail. In 12th Annual International Cryptology Conference, pages 139-147, 1992.

 

4th May 2007

Mark Herbster (UCL)

Prediction on a graph

Abstract
We will discuss the problem of robust online learning over a graph. Consider the following game for predicting the labelling of a graph. "Nature" presents a vertex v_1; the "learner" predicts the label of the vertex, û_1; nature presents a label u_1; nature presents a vertex v_2; the learner predicts û_2; and so forth. The learner's goal is to minimize the total number of mistakes. If nature is adversarial, the learner will always mispredict; but if nature is regular or simple, there is hope that the learner may make only a few mispredictions. Thus, a methodological goal is to give learners whose total mispredictions can be bounded relative to the "complexity" of nature's labelling. In this talk, we consider the "label cut size" as a measure of the complexity of a graph's labelling, where the size of the cut is the number of edges between disagreeing labels. We will give bounds which depend on the cut size and the (resistance) diameter of the graph.
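The "label cut size" complexity measure is simple to compute. The snippet below illustrates the measure only (not the prediction algorithm from the talk):

```python
# Label cut size: the number of edges whose endpoints carry different labels.
def cut_size(edges, labels):
    return sum(1 for u, v in edges if labels[u] != labels[v])

# Path 0-1-2-3 labelled + + - -: a single disagreeing edge, so cut size 1.
edges = [(0, 1), (1, 2), (2, 3)]
print(cut_size(edges, {0: 1, 1: 1, 2: -1, 3: -1}))   # → 1
```

A labelling that alternates along the path would have cut size 3, and the mistake bounds of the talk degrade accordingly.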

16th March 2007

Klas Markström (Umeå universitet)

The guessing number of a graph

Abstract
Consider the following problem. We are given a graph, and each vertex is assigned a random value from a finite set. Every vertex then tries to guess its own value based on the values of its neighbours. What kind of strategy can the vertices use in order to maximise the probability that everyone guesses correctly, and how large can this probability be?
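On the complete graph there is a classical strategy, which the small exhaustive check below verifies (an illustrative sketch; the talk may cover different material): all vertices jointly pretend the total of the values is 0 modulo q, and each vertex guesses minus the sum of its neighbours' values. Every guess is then right exactly when the pretence holds, i.e. with probability 1/q rather than the q^(-n) of independent guessing.

```python
from itertools import product

def all_correct(values, q):
    """On K_n, under the strategy above, every vertex guesses correctly
    exactly when the total of the values is 0 mod q."""
    return sum(values) % q == 0

q, n = 3, 4
wins = sum(all_correct(vals, q) for vals in product(range(q), repeat=n))
print(wins / q ** n)   # 1/q = 1/3, versus q**(-n) = 1/81 for independent guessing
```

Counting over all q^n value assignments confirms the success probability 1/q, independent of n.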

9th March 2007

Luis Cereceda (LSE)

The Connectivity of Boolean Satisfiability

Abstract
We examine some recent results of Gopalan, Kolaitis, Maneva and Papadimitriou concerning structural and connectivity-related properties of the space of solutions of Boolean satisfiability problems. For a Boolean formula f over n variables, the space of solutions of f can be viewed as the subgraph G(f) of the n-dimensional hypercube induced by the satisfying assignments of f.

GKMP obtain dichotomy theorems for the computational complexity of the following two decision problems:

  • given a Boolean formula f, is G(f) connected?
  • given a Boolean formula f together with two satisfying assignments s and t, is there a path between s and t in G(f)?

They find the intractable side of both these questions to be PSPACE-complete.

They also establish a dichotomy for the possible diameter of the connected components of the graphs G(f), finding a remarkable correspondence between this and the tractability of the above decision problems: for the intractable cases the diameter of G(f) can be exponential, while in all other cases it is linear.
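For small n, the graph G(f) can be built and tested by brute force; the helper names below are mine (hypothetical), and the exhaustive approach of course says nothing about the complexity dichotomies above:

```python
from itertools import product

def solution_graph(f, n):
    """Satisfying assignments of f over n variables, plus hypercube edges
    between assignments at Hamming distance 1."""
    sols = [a for a in product((0, 1), repeat=n) if f(a)]
    edges = [(s, t) for i, s in enumerate(sols) for t in sols[i + 1:]
             if sum(x != y for x, y in zip(s, t)) == 1]
    return sols, edges

def is_connected(sols, edges):
    if not sols:
        return True
    adj = {s: set() for s in sols}
    for s, t in edges:
        adj[s].add(t)
        adj[t].add(s)
    seen, stack = {sols[0]}, [sols[0]]
    while stack:                          # depth-first search from one solution
        for t in adj[stack.pop()]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return len(seen) == len(sols)

# f = (x1 AND x2) OR (NOT x1 AND NOT x2): solutions 00 and 11 sit at Hamming
# distance 2, so G(f) is disconnected.
sols, edges = solution_graph(lambda a: a[0] == a[1], 2)
print(is_connected(sols, edges))   # → False
```

Even this two-variable formula shows that G(f) need not be connected, which is what makes the two decision problems non-trivial.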

 

2nd March 2007

Viresh Patel (LSE)

Partitioning Posets

Abstract
Let P = (X, <) be a poset. A down-set of P is a subset A of X such that if a is in A and b < a, then b is also in A. For A a down-set of P and B its complement, define

E(A,B) = {(a,b): a is in A, b is in B, and a<b}.

This defines an analogue of cuts for posets. In this talk we address the following natural questions: how large can |E(A,B)| be and how difficult is it to maximise |E(A,B)|?
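For small posets the maximum of |E(A,B)| over down-sets can be found exhaustively. The following brute-force sketch (hypothetical, exponential in |X|, and unrelated to the complexity question the talk addresses) makes the definitions concrete:

```python
from itertools import product

def max_poset_cut(X, less):
    """less: pairs (a, b) with a < b.  Returns max |E(A, B)| over down-sets A,
    with B the complement of A."""
    best = 0
    for bits in product((0, 1), repeat=len(X)):
        A = {x for x, b in zip(X, bits) if b}
        # A is a down-set iff whenever a < b and b is in A, a is in A too
        if any(b in A and a not in A for a, b in less):
            continue
        cut = sum(1 for a, b in less if a in A and b not in A)
        best = max(best, cut)
    return best

# Poset a < c, b < c: the down-set A = {a, b} cuts both relations.
print(max_poset_cut(['a', 'b', 'c'], {('a', 'c'), ('b', 'c')}))   # → 2
```

Enumerating down-sets instead of all subsets would be faster, but for illustration the subset filter above suffices.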

 

23rd February 2007

Polly Lon (LSE)

A Singular Stochastic Control Model with Application to the Goodwill Problem

Abstract
We aim at maximising a performance criterion that rewards high values of the utility derived from the system's controlled state, but penalises any expenditure of control effort.

 

16th February 2007

Anne Balthasar (LSE)

The structure theorem of Kohlberg and Mertens

Abstract
The structure theorem of Kohlberg and Mertens (1986) analyses the topology of Nash equilibria. It says that for a finite normal form game, the graph of the Nash equilibrium correspondence is homeomorphic to the space of games. We present the proof of this theorem and show its usefulness by giving an easy corollary.

 

9th February 2007

David Ramsey (University of Limerick)

On a large population mate choice problem with continuous time

Abstract
Johnstone (1997) considered a game in which each member of a large population of animals searches for a breeding partner. Once formed, a pair remains together for the rest of the breeding season. Each individual has some value, say x, to any member of the opposite sex (i.e. preferences are uniform). The ratio of the number of males to females is assumed to be one, and at the beginning of the mating season the distribution of the values of prospective mates is assumed to be uniform on the interval [0,1] (i.e. the problem is symmetric with respect to sex). Each individual can observe at most n prospective mates (i.e. time is discrete). The aim of each individual is to maximise his/her expected reward from search, which is taken to be the value of the mate obtained minus the costs of searching (these costs are taken to be c per prospective mate observed). On obtaining a mate, an individual stops searching and is removed from the population of prospective mates. Hence, the distribution of the values of prospective mates changes as time progresses according to the decisions made by each member of the population. Numerical results are presented for this problem, and it is shown that individuals of low value may become choosier over some period of time. Johnstone's approach was purely numerical, without analytic results for similar problems (e.g. regarding the existence and uniqueness of equilibria). Alpern and Reyniers (1999, 2005) derive numerical results for similar problems under the assumption of homotypic preferences (individuals prefer partners who are similar to them) and uniform preferences.

This talk considers a similar model in which time is continuous and preferences are uniform. Special attention is paid to the simplest form of such a problem, in which individuals have one of two values. The general form of an equilibrium for such problems is derived. As in Johnstone's model low value individuals may become choosier over some interval of time. It is shown that multiple equilibria can occur for given realisations of the problem.

 

2nd February 2007

Peter Allen (LSE)
Talk Slides (70Kb)

Partitioning permutations into increasing and decreasing subsequences

Abstract
Some permutations can be partitioned into r increasing and s decreasing subsequences. It is clear that this is a hereditary property; therefore a permutation can be so partitioned if and only if it does not contain one of a set of forbidden sub-permutations. It would seem obvious that this set should be finite, yet actually proving it turns out to be non-trivial. This talk will discuss the problem and its solution by Kezdy, Snevily and Wang.
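Whether a given permutation splits into r increasing and s decreasing subsequences can be tested by brute force over colourings. The sketch below (hypothetical and exponential-time, not the Kezdy-Snevily-Wang argument) checks two small cases:

```python
from itertools import product

def partitions_into(perm, r, s):
    """True iff perm splits into r increasing and s decreasing subsequences.
    Colours 0..r-1 must form increasing subsequences, r..r+s-1 decreasing."""
    n = len(perm)
    for colours in product(range(r + s), repeat=n):
        ok = True
        for c in range(r + s):
            seq = [perm[i] for i in range(n) if colours[i] == c]
            inc = all(a < b for a, b in zip(seq, seq[1:]))
            dec = all(a > b for a, b in zip(seq, seq[1:]))
            if not (inc if c < r else dec):
                ok = False
                break
        if ok:
            return True
    return False

# 3 1 4 2 is the increasing 1 4 merged with the decreasing 3 2, whereas
# 2 1 4 3 admits no split into one increasing and one decreasing subsequence.
print(partitions_into([3, 1, 4, 2], 1, 1), partitions_into([2, 1, 4, 3], 1, 1))
# → True False
```

The second example shows that forbidden sub-permutations of length 4 already appear in the case r = s = 1.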

 

26th January 2007

Ben Veal (LSE)

Similarity measures for binary classification

Abstract
We apply similarity measures to the problem of binary classification in the hope that we can get some measure of confidence in our classifications. I will describe a few similarity measures that I have been exploring and some experimental and theoretical results.

 

19th January 2007

Raju Chinthalapati (LSE)

Optimal Thermal Causal Path Method for Directional Trading

Abstract
We consider the problem of identifying similarities in a given set of financial time-series data streams. We use the "Optimal Thermal Causal Path" method in order to identify the similarities and lead-lag structure between two time series. Stock market traders could consider this a useful tool for directional trading to spot arbitrage opportunities. We add a curvature-energy term to the method to improve accuracy, and propose approximation techniques to reduce the computational time. We apply the method and approximation techniques to FTSE 100 stocks. We extract the highly correlated stocks and perform extensive experiments. We show how traders could exploit arbitrage opportunities by using the method.

 

 
