2009-2010

Summer Term 2010

Wednesday, 28 April, 5.30pm – 7pm

Kasper Lippert-Rasmussen (Aarhus)

Being in a Position to Complain

It is sometimes assumed that if no one has a complaint, there is no injustice. However, being in a position to complain depends on factors that do not bear on the justice of the situation (or factors that do so bear, but also bear independently thereof on one's being in a position to complain), and sometimes one might not be in a position to complain even if one is being treated unjustly. In the talk I will explore some of the factors that determine whether one is in a position to complain, drawing on recent work on being in a position to blame.

Download draft paper

Wednesday, 5 May, 5.30pm – 7pm

Roberto Veneziani (Queen Mary)

The Paradoxes of the Liberal Ethics of Noninterference

(joint work with Marco Mariotti)

We analyse the liberal ethics of non-interference applied to social choice. Two liberal principles embodying non-interfering views of society, inspired by J.S. Mill's conception of liberty, are examined; they capture the idea that society should not penalise agents after changes in their situation that do not affect others. Two paradoxes of liberal approaches are highlighted. First, it is shown that a restricted view of non-interference, as reflected in the Individual Damage Principle, together with some standard axioms in social choice, leads straight to welfare egalitarianism. Second, it is proved that every weakly Paretian social welfare ordering that satisfies a general principle of non-interference must be dictatorial. Both paradoxes raise important issues for liberal approaches in social choice and political philosophy.

Friday, 7 May and Saturday, 8 May

Conference in Honour of Brian Barry (Government Department)

Wednesday, 12 May, 2pm

Samir Okasha (Bristol)

Lakatos Award Seminar

Evolution, Risk and Rational Decision

Wednesday, 19 May, 5.30pm – 7pm

Shepley Orr (UCL) and Robert Sugden (East Anglia)

Taste Uncertainty and Status Quo Effects in Consumer Choice

We use reference-dependent expected utility theory to develop a model of status quo effects in consumer choice. We hypothesise that, when making their decisions, individuals are uncertain about the utility their consumption experiences will yield in different 'taste states' of the world. If individuals have asymmetric attitudes to gains and losses of utility, the model entails acyclic reference-dependent preferences over consumption bundles. The model explains why status quo effects may vary substantially from one decision context to another and why some such effects may decay as individuals gain market experience.

Paper Download
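
The class of models the abstract describes can be illustrated with a minimal sketch, which is not the authors' own specification: state-dependent utilities are compared with a reference bundle, and losses are weighted more heavily than gains. The functional form and the loss-aversion parameter lam below are illustrative assumptions.

```python
import numpy as np

def reference_dependent_value(u_bundle, u_reference, lam=2.25):
    """Expected reference-dependent value of a bundle across taste states.

    u_bundle, u_reference: arrays of state-dependent utilities.
    lam: loss-aversion weight (lam > 1 means losses loom larger than gains).
    The functional form and parameter are illustrative assumptions, not the
    specification in Orr and Sugden's paper.
    """
    diff = np.asarray(u_bundle) - np.asarray(u_reference)
    gains = np.maximum(diff, 0.0)
    losses = np.minimum(diff, 0.0)
    return np.mean(gains + lam * losses)  # expectation over equiprobable taste states

# A bundle that beats the status quo in one taste state and loses in the other
# is rejected when lam > 1, even though its average utility is higher:
status_quo = np.array([1.0, 1.0])
alternative = np.array([1.5, 0.6])
print(reference_dependent_value(alternative, status_quo))  # -0.2: keep the status quo
```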

Wednesday, 26 May

Nick Baigent (Graz and LSE)

AHRC-sponsored Master Class: Topological Social Choice

13.30-14.45 Topological methods in social choice I

15.00-16.15 Topological methods in social choice II

17.30-19.00 Choice Group Talk: Topological Social Choice Theory

More details here

Wednesday, 2 June, 5.30pm – 7pm

Andreas Jarvstad (Cardiff)

Source Reliability and the Conjunction Fallacy

Bovens and Hartmann (2003) have shown that when statements in conjunction fallacy problems (Kahneman & Tversky, 1982) are considered as coming from partially reliable sources, the "fallacy" is sometimes the normative response. Here, the descriptive validity of their Bayesian source reliability model was assessed. The model predicts that component statements added to standard conjunction problems will change the incidence of the fallacy. It also predicts that statements from reliable sources yield an increase (relative to unreliable sources) in fallacy rates. Neither the former prediction (Experiment 1), nor the latter prediction (Experiment 3), was confirmed. Experiment 2 showed that people derive source reliability estimates from the likelihood of statements in a manner consistent with the tested model. This finding rules out an alternative explanation for the results of Experiment 1. Furthermore, the model fit the empirical data worse than two competing models with no free parameters (Wyer, 1976; a simple averaging model). A sensitivity analysis showed that the model, given plausible model parameters, is incapable of generating differences between ratings of the unlikely statements and the conjunctions that match those produced by participants committing the fallacy. Overall, little evidence in favour of the source reliability model, as an explanation of the conjunction fallacy, was found.
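
One ingredient of the Bovens and Hartmann approach, the update on a report from a partially reliable source, can be sketched as follows. This is a minimal version with parameter names of our choosing, not the full conjunction-fallacy model of their 2003 book.

```python
def posterior_given_report(p, rho, a):
    """Posterior probability of statement S after a source asserts S.

    p:   prior probability of S
    rho: probability that the source is fully reliable (asserts only truths)
    a:   probability that an unreliable, randomising source asserts S anyway
    Parameter names are ours; this is a simple version of the partially
    reliable witness model, not the full conjunction-fallacy analysis.
    """
    likelihood_true = rho + (1 - rho) * a   # a reliable or lucky source asserts S
    likelihood_false = (1 - rho) * a        # only a randomiser asserts a falsehood
    return (likelihood_true * p /
            (likelihood_true * p + likelihood_false * (1 - p)))

# The same report raises an improbable statement's credibility more when the
# source is likely reliable, which is why fallacy rates were predicted to
# depend on source reliability:
print(posterior_given_report(p=0.05, rho=0.8, a=0.3))  # ~0.43
print(posterior_given_report(p=0.05, rho=0.2, a=0.3))  # ~0.09
```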

Wednesday, 9 June

No seminar due to the Decisions, Games & Logic (DGL10) conference at ENS Paris.

Wednesday, 16 June, 5.30pm – 7pm

Katie Steele (LSE)

Testimony as Evidence

This paper concerns how beliefs should be updated in response to a special kind of evidence—the testimony of others. That is, when an agent learns the beliefs of others on some issue, how should this affect their own beliefs on that issue? Averaging models (linear/geometric) for updating on testimony are popular in both mainstream and formal epistemology circles, but a large question mark remains vis-à-vis the normative acceptability of these models. Here we investigate the relationship between averaging models and the Bayesian model for updating on testimony, the latter being taken as the normative standard. Some criticisms of averaging can be avoided by positioning it as an extra-Bayesian process: a method for deciding new probabilities across some partition, to be followed by Jeffrey conditioning. Ultimately, however, averaging methods for updating on testimony are seriously undermined by Wagner's (2002) general characterization of evidence, and how it should impact on belief. 
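
For readers unfamiliar with the two averaging rules, here is a minimal sketch (our own, with illustrative numbers) of linear and geometric pooling over a partition, followed by the Jeffrey conditioning step that the extra-Bayesian reading of averaging appeals to:

```python
import numpy as np

def linear_pool(probabilities, weights):
    """Weighted arithmetic average of the agents' probability vectors."""
    return np.average(probabilities, axis=0, weights=weights)

def geometric_pool(probabilities, weights):
    """Weighted geometric average, renormalised so it sums to one."""
    probabilities = np.asarray(probabilities)
    pooled = np.prod(probabilities ** np.asarray(weights)[:, None], axis=0)
    return pooled / pooled.sum()

def jeffrey_condition(joint, new_marginal):
    """Jeffrey conditioning: rescale a joint distribution (rows = cells of the
    partition on which testimony bears) so that the marginal over the
    partition matches the pooled probabilities."""
    joint = np.asarray(joint)
    old_marginal = joint.sum(axis=1, keepdims=True)
    return joint * (np.asarray(new_marginal)[:, None] / old_marginal)

# Two agents' probabilities over a three-cell partition, weighted equally:
p = [[0.7, 0.2, 0.1],
     [0.4, 0.4, 0.2]]
print(linear_pool(p, [0.5, 0.5]))      # [0.55 0.30 0.15]
print(geometric_pool(p, [0.5, 0.5]))   # renormalised geometric mean

# Jeffrey conditioning then redistributes the pooled partition probabilities
# over a finer algebra (here a 3x2 joint whose rows are the partition cells):
joint = [[0.35, 0.35], [0.10, 0.10], [0.05, 0.05]]
print(jeffrey_condition(joint, linear_pool(p, [0.5, 0.5])))
```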

Wednesday, 23 June, 5.30pm – 7pm

Brian Hill (HEC Paris)

Beyond Probabilities: Belief, Confidence and Decision-Making

The standard representation of beliefs in decision theory and much of formal epistemology, by probability measures, is incapable of representing an agent's confidence in his beliefs. However, as shall be argued in this talk, the agent's confidence in his beliefs plays, and should play, a central role in many of the most difficult decisions we find ourselves faced with, and indeed in several sorts of decisions which have been largely ignored in the Bayesian literature. The aim of this talk is to formulate a representation of agents' doxastic states and an (axiomatically grounded) theory of decision which recognises and incorporates confidence in belief. Time permitting, attitudes to choosing in the absence of confidence, applications and further directions will be discussed.

Friday, 25 June and Saturday, 26 June

CPNSS Graduate Conference: Philosophy of Probability III

Wednesday, 30 June

John Dryzek (ANU)

AHRC-sponsored Master Class: Deliberative Democracy

more details here

Thursday, 1 July and Friday, 2 July

Workshop on Deliberative Democracy

more details here

Tuesday, 6 July, 5.30pm – 7pm

Luc Bovens (LSE) and Laura Smead (LSE)

Fairness and Equal Burden Sharing in EU Asylum Policies

Tuesday, 13 July

Kevin Zollman (CMU) and Simon Huttegger (UC Irvine)

AHRC-sponsored Master Class:

Game Theory, Evolution and Communication

more details here

Wednesday, 14 July

Workshop on Networks, Signalling, Social Epistemology

more details here

Lent Term 2010

Wednesday, 13 January, 5.30-7pm

The Bounded Strength of Weak Expectations

Jan Sprenger (Tilburg) and Remco Heesen (LSE and Tilburg)

The rational price of the Pasadena Game, a game introduced by Nover and Hájek (2004), has been the subject of considerable discussion. Easwaran (2008) has suggested that weak expectations (the value to which the average payoffs converge in probability) can give the rational price of the game. We argue against the normative force of weak expectations in the standard framework. Furthermore, we propose to replace this framework by a bounded utility perspective: this shift renders the problem more realistic and accounts for the role of weak expectations. In particular, we demonstrate that in a bounded utility framework, all agents, even if they have different utility functions and disagree on the price of an individual Pasadena Game, will finally agree on the rational price of a repeated, averaged game. Thus, we provide a realistic and comprehensive account of the Pasadena Game that explains the intuitive appeal of weak expectations, while avoiding both trivialization of the game and the drawbacks of previous approaches. (Download draft paper)
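
A small simulation makes the weak-expectation claim concrete. The sketch below is our own, using the standard Pasadena payoff schedule from Nover and Hájek: if the first heads comes on toss n, the payoff is (-1)^(n-1) * 2^n / n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Average payoff of repeated Pasadena games: the ordinary expectation is
# undefined (the defining series is only conditionally convergent), but the
# sample averages converge in probability to the weak expectation log 2 ~ 0.693.
# Because the variance is infinite, the averages below remain noticeably noisy.
for trials in (10**3, 10**5, 10**6):
    n = rng.geometric(0.5, size=trials)          # toss number of the first heads
    payoffs = (-1.0) ** (n - 1) * 2.0 ** n / n   # Pasadena payoff schedule
    print(trials, payoffs.mean())
```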

Wednesday, 20 January, 5.30-7pm

Is the Brain a Bayesian?

Nick Chater (UCL)

The brain and cognitive sciences have been swept by a Bayesian revolution. But people are notoriously poor at reasoning about probability. So is the brain really a Bayesian, or not? This talk considers recent experimental and theoretical work attempting to provide an answer.

Wednesday, 27 January, 5.30-7pm

Priors and Desires: A Model of Payoff-Dependent Beliefs

Guy Mayraz (Oxford)

In this paper I explore the possibility that what people believe to be true is affected by what they want to be true. Formalising this notion, I show that beliefs have a one-parameter likelihood representation, in which payoffs play the role of evidence. Depending on this parameter, a high payoff functions as evidence for or against an event, with optimists (pessimists) more (less) likely to believe A relative to B if the payoff consequences of A are better. Changes in payoffs are equivalent to new evidence, providing a possible model for cognitive dissonance. Dynamic choice leads to path dependence, as early choices affect later beliefs, and hence later choices. Belief distortion is greatest when events are subjectively important and normative evidence is weak.
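
A toy formalisation, ours rather than the paper's exact axiomatisation, illustrates the idea of payoffs playing the role of evidence: objective probabilities are reweighted by an exponential function of payoffs and renormalised, with a single parameter phi controlling optimism.

```python
import numpy as np

def payoff_biased_beliefs(objective_probs, payoffs, phi):
    """Toy payoff-dependent beliefs: objective probabilities are reweighted by
    exp(phi * payoff) and renormalised. phi > 0 gives an optimist (desirable
    events are over-believed), phi < 0 a pessimist, phi = 0 an unbiased agent.
    This functional form is our illustrative assumption, not the paper's."""
    weights = np.asarray(objective_probs) * np.exp(phi * np.asarray(payoffs))
    return weights / weights.sum()

probs = [0.5, 0.5]      # two events, objectively equally likely
payoffs = [10.0, 0.0]   # the first event is better for the agent
print(payoff_biased_beliefs(probs, payoffs, phi=0.1))   # optimist:  ~[0.73, 0.27]
print(payoff_biased_beliefs(probs, payoffs, phi=-0.1))  # pessimist: ~[0.27, 0.73]
```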

Wednesday, 3 February, 5.30-7pm 

Decision-Making with Climate Models

Lenny Smith (LSE), Roman Frigg (LSE), Seamus Bradley (LSE)

Climate models are widely used to make forecasts, which provide the basis for far-reaching policy decisions. However, upon closer examination it turns out that climate models do not actually warrant the probabilistic forecasts that are commonly derived from them: due to their intrinsic imperfection and nonlinearity, they cannot be used to calculate decision-relevant probabilities. Although the IPCC has recognised this fact, no research into other methods of prediction has been carried out. It is the aim of an ongoing project to address this issue by first investigating how and why exactly probabilistic predictions break down in climate models, and then developing alternative methods to get around the problem. The proposal is that probabilistic reasoning should be given up altogether. Models should be used to calculate non-probabilistic odds for certain events, and these should be used to guide decision making. We introduce both the problem and the proposal and illustrate them with a simple example.

Wednesday, 10 February, 5.30-7pm

The Information Limit of Interpersonal Communication

Bahador Bahrami (UCL)

Many of us believe that 'two heads are better than one'. Indeed, our ability to work together towards common goals seems fundamental to the current dominance of the human species. But 'how much better' are two heads compared to one? I will propose a quantitative framework inspired by research in cognitive/perceptual psychology to address this question. I will show that collective decision-making between two individuals significantly improves sensitivity, even in the most elementary threshold-level visual detection task, compared to that of isolated individuals. Moreover, I will try to convince you that two heads are more than just better than one: interpersonal communication is sufficiently rich that individuals can share subjective estimates of confidence and use them to achieve Bayes-optimal integration.
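
A standard benchmark for 'how much better', under the assumption of independent equal-variance Gaussian noise and optimal sharing of confidence, is inverse-variance weighting of the two estimates. The sketch below is our own illustration of that benchmark, not the talk's full framework.

```python
import numpy as np

def optimal_combined_sensitivity(d1, d2):
    """Bayes-optimal benchmark for combining two independent observers with
    signal-detection sensitivities d1 and d2 (equal-variance Gaussian noise):
    the dyad's sensitivity is sqrt(d1**2 + d2**2)."""
    return np.sqrt(d1 ** 2 + d2 ** 2)

def inverse_variance_combine(estimates, variances):
    """Combine noisy estimates of the same quantity, weighting each by the
    inverse of its variance: the Bayes-optimal linear combination."""
    weights = 1.0 / np.asarray(variances)
    return np.sum(weights * np.asarray(estimates)) / weights.sum()

print(optimal_combined_sensitivity(1.0, 1.0))            # ~1.41: better than either alone
print(inverse_variance_combine([0.8, 1.3], [1.0, 4.0]))  # leans toward the more reliable observer
```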

Wednesday, 24 February, 5.30-7pm

Double session: James Wong (LSE) and Ittay Nissan (LSE)

The Concept of 'Demos' in Environmental Democracy

James Wong (LSE)

This paper sets the stage for my PhD project, which looks into the design of democratic institutions for environmental decision-making from a social-choice-theoretic perspective. Environmental democracy, a widely discussed idea in the green political theory literature, serves as a relevant and workable starting point. While the past decade has seen fruitful discussion of how environmental democracy should be justified, less attention has been paid to a more fundamental question: how can the demos of such a democracy be constituted? Drawing on the theory of group agency, I consider this question through two approaches, which concern the composition and the performance of the demos respectively. The upshot is that each of these approaches gives rise to a different conception of the demos in environmental democracy, but the tension between the two conceptions can be reconciled such that the demos so defined is both practicable and normatively appealing. I conclude by outlining the implications for institutionalizing environmental democracy under such a definition of the demos.

Can an Irrational Agent Reason Himself to Rationality? A Triviality Result

Ittay Nissan (LSE)

When an agent who accepts transitivity of preferences as a principle of rationality finds himself expressing intransitive preferences, he has to change some of his expressed preferences so that transitivity is restored. When such an agent also believes in the existence of some independent betterness relation among the alternatives over which he forms his preferences, it is reasonable to demand that the way he changes his intransitive expressed preferences be sensitive to his beliefs regarding this betterness relation. It is shown that, under two natural conditions for such sensitivity, when there are infinitely many alternatives the agent must end up being indifferent between all alternatives except two. Some implications of this result for ethics are discussed.

Wednesday, 3 March, 5.30-7pm

Detecting Outliers in Categorical Data Using Latent Variable Models and Covariates

Irini Moustaki (LSE)

I will discuss different approaches for detecting outliers in categorical responses using a latent variable model. Outliers are considered to be those response patterns that are not fitted by the hypothesized model; they are expected to be generated by secondary response strategies such as guessing. In the first part of my talk I will discuss the forward search algorithm for detecting outliers, and in the second part I will present a model that accounts for outliers or over-represented response patterns. The proposed model is an extended latent trait model that models the guessing mechanism through an unobserved pseudo-item. Both methods will be illustrated with simulated and real examples.

Wednesday, 10 March

No seminar due to the Comte lectures on March 9 and March 10 by Prof Allen Buchanan

Wednesday, 17 March, 5.30-7pm

Double session with Esha Senchaudhuri (LSE) and Chris Thompson (LSE)

Liberal Procedural Legitimacy: Two Criticisms

Esha Senchaudhuri (LSE)

A standard account of liberal procedural legitimacy aims to reconcile the ideal of reasonable consensus with the fact of reasonable pluralism, by assuming that consensus on the procedure by which a collective decision is made is sufficient to justify to all members of the collective the contents of the procedural outcome. In this justificatory formulation, consensus on procedure may be equated with consensus on the procedural outcome, although without the procedure in place reasonable disagreement would ensue. I criticize this account of procedural legitimacy on two grounds: first, it relies on a form of weighing reasons that is at odds with the justificatory requirement it sets for itself; secondly, it assumes a degree of connectedness between procedure and procedural outcome that is ultimately unwarranted.

Search as a Social Epistemic Mechanism

Chris Thompson (LSE)

One of the core tasks of social epistemology is to identify the mechanisms by which groups of agents can track the truth. To do this we need to have a clear understanding of the informational environment implied by our models. The sheer number of signals or possible alternatives in an informational environment can make it difficult for an individual agent to identify the uniquely best alternative or most truth-conducive signal. I argue that one of the reasons that groups can track the truth is that they can operate as a coordinated search: by increasing group size we increase the probability that the group will identify the best alternatives and most truth-conducive signals. I conclude by discussing four problems which must be overcome if a search mechanism is to operate successfully.
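
The size effect the abstract appeals to has a simple toy form (ours, not the talk's model): if each of n independent searchers finds the best alternative with probability p, the group succeeds with probability 1 - (1 - p)^n, which grows quickly in n.

```python
def group_success_probability(p, n):
    """Probability that at least one of n independent searchers identifies the
    best alternative, each succeeding with probability p. A toy illustration
    of why larger groups can track the truth better, not the talk's model."""
    return 1 - (1 - p) ** n

for n in (1, 5, 20):
    print(n, group_success_probability(0.1, n))  # 0.10, ~0.41, ~0.88
```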

Michaelmas Term 2009

19th-20th September

LSE-Groningen Workshop II at the University of Groningen

Wednesday, 23 September, 5.30-7pm

Brian Skyrms (UC Irvine)

Inventing New Signals

A model of inventing new signals is introduced in the context of sender-receiver games with reinforcement learning. If the invention parameter is set to zero, the model reduces to basic Roth-Erev learning applied to acts rather than strategies, as in Argiento et al. (2009). If every act is uniformly reinforced in every state, it reduces to the Chinese Restaurant Process, also known as the Hoppe-Pólya urn, applied to each act. The dynamics can move players from one signaling game to another during the learning process. Invention helps agents avoid pooling and partial-pooling equilibria.
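
The invention mechanism can be sketched with a Hoppe-Pólya-style urn. The code below is our single-state, sender-side simplification, with a placeholder for the receiver's response; it is not the full model from the talk.

```python
import random

def hoppe_urn_draw(counts, alpha):
    """Draw from a Hoppe-Polya-style urn: an existing signal k is chosen with
    probability counts[k] / (total + alpha); with probability
    alpha / (total + alpha) a brand-new signal is invented."""
    total = sum(counts.values())
    r = random.uniform(0, total + alpha)
    for signal, weight in counts.items():
        r -= weight
        if r < 0:
            return signal, False
    return max(counts) + 1, True  # a signal never used before

# Reinforce the signal used whenever communication succeeds; here a coin flip
# stands in for the receiver's act. Setting alpha = 0 recovers basic
# Roth-Erev reinforcement with a fixed signal set.
random.seed(0)
alpha = 1.0        # invention parameter
counts = {0: 1.0}  # one initial signal with unit weight
for _ in range(200):
    signal, invented = hoppe_urn_draw(counts, alpha)
    success = random.random() < 0.5
    counts[signal] = counts.get(signal, 0.0) + (1.0 if success else 0.0)
print(counts)
```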

Wednesday, 7 October, 5.30-7pm

Hannes Leitgeb (Bristol)

A Probabilistic Semantics for Counterfactuals

We suggest a semantics for (a class of) counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to some probability measure. The semantics is made precise and studied in different versions which are related to each other by means of representation theorems. Despite its probabilistic nature, we are going to show that the semantics, and the resulting system of logic, may be regarded as a naturalistically defensible version of David Lewis' truth-conditional semantics and logic of counterfactuals. At the same time, the semantics may be seen as extending Ernest Adams' non-truth-conditional semantics and logic for conditionals. The results of our investigation are used to assess a claim considered recently by Hawthorne and Hájek, namely the thesis that most ordinary counterfactuals are false.
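
In rough outline, and with notation that is ours rather than the paper's, one simple version of such a probabilistic truth condition reads:

```latex
% A simple probabilistic truth condition for counterfactuals (our gloss):
% "if A were the case, C would be the case" is true at world w iff the
% conditional probability of C given A, by w's measure, is close enough to 1.
\[
  w \models A \,\Box\!\!\rightarrow C
  \quad \text{iff} \quad
  P_w(C \mid A) \ge 1 - \varepsilon
\]
```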

Wednesday, 14 October, 5.30-7pm

Ken Binmore (LSE), Lisa Stewart (Harvard) and Alex Voorhoeve (LSE)

An Experimental Test of the Hurwicz Criterion for Decision-Making Under Uncertainty Based on the Ellsberg Paradox

Knight (1921) distinguished between risk and uncertainty. You are in a risky situation if sound reasons are available for attaching probabilities to events. Otherwise you are in an uncertain situation. Bayesian decision theory applies in the case of risk, but the theory of rational decision under uncertainty remains largely undeveloped. An obvious approach to the problem of uncertainty replaces the single probabilities assigned to events in Bayesian decision theory by upper and lower probabilities. But how are such upper and lower probabilities to be manipulated? The axiom system of Milnor (1954) suggests that the first focus of attention should be a criterion proposed by Hurwicz (1951). In the case in which the outcome of a decision can be reduced to either winning or losing, the Hurwicz criterion says that you should maximize (1 - h)·p_lower + h·p_upper, where p_lower and p_upper are the lower and upper probabilities of winning, and h (0 ≤ h ≤ 1) is the Hurwicz "optimism-pessimism coefficient". We have carried out an experimental test of the Hurwicz criterion based on the Ellsberg paradox. We will present a first, rough analysis of the results of the first round of this experiment. These results seem to bear out Huxley's dictum that 'science is organized common sense where many a beautiful theory was killed by an ugly fact.' We will also discuss the changes we plan to make in further rounds to see if we can (in a legitimate manner, of course) find prettier and less murderous facts.
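
A small worked example (our own numbers) shows how the criterion separates ambiguity attitudes in an Ellsberg-style urn containing 30 red balls and 60 black-or-yellow balls in unknown proportion:

```python
def hurwicz_value(p_lower, p_upper, h):
    """Hurwicz criterion for win/lose decisions: (1 - h) * p_lower + h * p_upper,
    with h in [0, 1] the optimism-pessimism coefficient."""
    return (1 - h) * p_lower + h * p_upper

# Betting on red wins with known probability 1/3; betting on black has lower
# probability 0 and upper probability 2/3.
for h in (0.0, 0.5, 1.0):
    print(h, hurwicz_value(1/3, 1/3, h), hurwicz_value(0.0, 2/3, h))
# A pessimist (h < 1/2) bets on red, an optimist (h > 1/2) on black; the
# ambiguity-averse pattern in the Ellsberg paradox corresponds to h < 1/2.
```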

Wednesday, 20 October, 5.30-7pm

Carl Wagner (Tennessee)

Masterclass on Rational Consensus in Science and Society

(more details)

11.30 - 13.00 The French-DeGroot-Lehrer Model of Consensus

14.30 - 16.00 The Axiomatics of Aggregation

17.30 - 19.00 Choice Group Seminar Talk: "Independence Preservation in Expert Judgment Synthesis"

Wednesday, 4 November, 5.30-7pm

Luc Bovens (LSE)

Error Statistics versus Bayesian Statistics: a Simple Case

Let there be two medicines, M1 and M2. We randomly assign patients to equal-sized groups treated with M1 and M2 respectively and conduct a double-blind study. The outcome of the experiment is either recovery or non-recovery for each patient. The results of our experiment are expressed in 2-by-2 contingency tables. I construct all the possible pairs of evidence on which there are more recoveries in the M1 group than in the M2 group, i.e. all the pairs (i, j) with i being the number of recoveries on M1, j being the number of recoveries on M2 and i > j. I first analyse these pairs of evidence by means of Fisher's Exact Test, determine the p-value for each pair of evidence, and construct an ordering over the pairs of evidence based on these p-values. Subsequently, I analyse these pairs of evidence by determining the posterior probability that M1 is more effective than M2 by means of Bayesian updating, starting from uniform priors, and construct an ordering over these pairs based on these posterior probabilities. My question is: how do these orderings compare?
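
The comparison can be prototyped directly. The sketch below is our construction, with an illustrative group size of n = 10: it computes one ordering from one-sided Fisher exact p-values and the other from Monte Carlo draws from the Beta posteriors implied by uniform priors.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

def p_value(i, j, n):
    """One-sided Fisher exact test p-value for i recoveries out of n on M1
    versus j out of n on M2 (a 2-by-2 contingency table)."""
    table = [[i, n - i], [j, n - j]]
    return fisher_exact(table, alternative="greater")[1]

def posterior_m1_better(i, j, n, draws=100_000):
    """Posterior probability that M1's recovery rate exceeds M2's, starting
    from uniform Beta(1, 1) priors, estimated by Monte Carlo."""
    theta1 = rng.beta(i + 1, n - i + 1, draws)
    theta2 = rng.beta(j + 1, n - j + 1, draws)
    return np.mean(theta1 > theta2)

# All evidence pairs with i > j for n = 10 patients per arm, ordered both ways:
n = 10
pairs = [(i, j) for i in range(n + 1) for j in range(i)]
by_p = sorted(pairs, key=lambda ij: p_value(*ij, n))
by_posterior = sorted(pairs, key=lambda ij: -posterior_m1_better(*ij, n))
print(by_p[:5])          # strongest evidence by Fisher's test
print(by_posterior[:5])  # strongest evidence by Bayesian posterior
```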

Wednesday, 11 November, 5.30-7pm

Chris Starmer (Nottingham) 

Market Experience Eliminates Some Anomalies - And Creates New Ones

There is a large literature demonstrating the existence of 'anomalies' in individual choice behaviour. One example is the preference reversal phenomenon; another is the disparity between willingness to pay and willingness to accept valuations. Taken at face value, such anomalies collectively pose a major challenge to choice theorists, applied economists and policy analysts. But much of the anomaly evidence comes from experiments involving one-off decisions in non-market settings, and several economists have questioned their economic significance by raising doubts about whether they will arise, or at least persist, in real market environments. Indeed, there is some evidence that specific anomalies may decay in repeated experimental markets. One interpretation of this evidence is that people have underlying preferences that are 'better-behaved' than many choice experiments suggest, and that repeated market environments generate more accurate data on these underlying preferences. Another possibility is that preferences are to some extent 'shaped' or formed through market experience. The latter interpretation, if true, has some potentially far-reaching consequences for the interpretation of market behaviour. In this talk I discuss recent experiments which explore these issues.

Wednesday, 18 November, 5.30-7pm

Krister Bykvist (Oxford)

Objective Oughts versus Subjective Oughts

It is common in normative ethics to abstract away from any epistemic shortcomings of the agent. In this highly idealized debate, virtue ethics will simply tell you to do what the virtuous person would do (or what would display the most virtuous motive), whereas Kantian ethics will tell you to do what is based on a universalizable maxim, and utilitarianism, what would maximize general happiness. But is it right to ignore the epistemic situation of the agent? The obvious option is to reformulate moral theories so that they take into account the epistemic limitations of the agent. Virtue ethics will now tell you to do what you have good reason to believe a virtuous person would do. Similarly, Kantianism will now tell you to act on what you have good reason to think is a maxim that could be universalized. Utilitarianism will tell you to do what you have good reason to think would maximize happiness (or, more plausibly, what would maximize expected happiness). The aim of this paper is to critically examine this epistemic reformulation of standard moral theories. I will pay especially close attention to Michael Zimmerman's epistemic account, since it is by far the best-developed account in the literature. His account is presented in his recent book Living with Uncertainty (CUP 2008). Zimmerman's main reason for moving to an epistemic account has to do with the famous 'Jackson cases' that are often seen as a serious challenge to the objective view of rightness. In a Jackson case, the intuitively reasonable option is something the agent knows to be objectively wrong, no matter what happens. Zimmerman thinks this shows that the primary notion of moral rightness should be epistemically constrained. I will argue that this is a mistake. We need to distinguish between what is rational to do, given the agent's beliefs and preferences, and what is morally right. Objective moral rightness should be retained as the primary concept of moral rightness.

Wednesday, 25 November, 5.30-7pm

Franz Dietrich (LSE and Maastricht)

Bayesian Group Belief (Paper)

Suppose a group is interested in whether a given hypothesis H is true. If every individual assigns a probability of 70% to H, what probability should the group as a whole assign to H? Is it exactly 70%, or perhaps more, since different persons have independently confirmed H? The answer, I will show, crucially depends on the informational states of the individuals. If they have identical information, the collective has good reasons to adopt people's unanimous 70% belief, following the popular (probabilistic) Pareto principle. Under informational asymmetry, by contrast, a possibly much higher or lower collective probability may be appropriate, and the Pareto principle becomes problematic, or so I argue. The above question is an instance of the classic opinion pooling/aggregation problem, with applications, for instance, in expert panels. In general, individual probabilities need of course not coincide, and more than one hypothesis may be of interest. The goal is to merge a profile of n individual probability measures (on an algebra of events) into a single collective probability measure. I propose an axiomatic model that connects group beliefs to beliefs of group members, and that models each individual, and also the collective as a whole, as Bayesian agents. Individuals may have different information. They may also have different prior beliefs and different domains (algebras) on which they hold beliefs, to account for differences in awareness and conceptualisation. As is shown, group beliefs can incorporate all information spread across individuals without individuals having to communicate their information (which may be complex, hard to describe, or not describable in principle due to language restrictions); individuals should instead communicate their prior and posterior beliefs. The group beliefs derived here take a simple multiplicative form if people's information is independent (and a more complex form if information overlaps arbitrarily), which contrasts with familiar linear or geometric opinion pooling and the (Pareto) requirement of respecting unanimous beliefs.
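
The multiplicative form mentioned at the end can be sketched as follows. This is our rendering of the standard supra-Bayesian formula for a common prior and independent private information; whether it matches the paper's axiomatisation exactly is not guaranteed.

```python
import numpy as np

def multiplicative_pool(prior, posteriors):
    """Merge individual posteriors that share a common prior and rest on
    independent private information: the collective posterior is proportional
    to prior * prod_i (posterior_i / prior), renormalised."""
    prior = np.asarray(prior, dtype=float)
    pooled = prior.copy()
    for post in posteriors:
        pooled *= np.asarray(post) / prior
    return pooled / pooled.sum()

# Two agents with a common 50/50 prior on H who independently come to believe
# H to degree 0.7: the group ends up more confident than 0.7, as the
# abstract's opening question suggests it should.
print(multiplicative_pool([0.5, 0.5], [[0.7, 0.3], [0.7, 0.3]]))  # ~[0.845, 0.155]
```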

Wednesday, 3 December, 5.30-7pm

Masterclass on Harsanyi's Theorem (Gajdos, Fleurbaey, Bradley)

4th-5th December

Workshop on Risk and Social Decisions

(organised by Marc Fleurbaey and the LSE Choice Group)
