2014-2015

Summer 2015

Wednesday, 29 April, 5.30-7pm

Jennifer Carr (Leeds)

Accuracy without Consequences

Veritism is the claim that accuracy is the fundamental source of the epistemic value of doxastic states. I present some puzzles showing that, in order for epistemic utility theory to vindicate veritism, its decision rules must be revised. But the revisionary form of epistemic utility theory conflicts with evidentialism. So epistemic utility theorists face a dilemma: they must give up either evidentialism or veritism. I argue that we should reject both traditional and revisionary epistemic utility theory as decision theories, and I provide a non-normative interpretation of epistemic utility theory's mathematical machinery.
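For readers unfamiliar with the mathematical machinery the abstract refers to: epistemic utility theory standardly measures accuracy with a scoring rule such as the (negative) Brier score. The sketch below illustrates that measure; the propositions and credence values are invented for illustration and are not from the talk.

```python
# Accuracy of a credence function measured by the negative Brier score,
# a standard veritist measure of epistemic utility (higher = more accurate).
# Illustrative propositions and numbers only.

def brier_accuracy(credences, truth):
    """Negative sum of squared distances between credences and truth-values.

    credences: dict mapping proposition -> credence in [0, 1]
    truth:     dict mapping proposition -> 1.0 if true, 0.0 if false
    """
    return -sum((credences[p] - truth[p]) ** 2 for p in credences)

# A world in which 'rain' is true and 'wind' is false.
world = {"rain": 1.0, "wind": 0.0}

confident = {"rain": 0.9, "wind": 0.1}  # close to the truth
hedged = {"rain": 0.5, "wind": 0.5}     # maximally non-committal

print(brier_accuracy(confident, world))  # ≈ -0.02
print(brier_accuracy(hedged, world))     # ≈ -0.5
```

The confident credences score strictly better than the hedged ones in this world; veritism takes this kind of closeness-to-truth to exhaust the epistemic value of the doxastic state.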

THURSDAY, 7 May, 2-3.30 pm

Joint seminar with the Managing Severe Uncertainty Project

Magda Osman (Queen Mary University)

What processes enable us to control uncertainty?

The presentation will start with a round trip of psychological research in the domain of dynamic control. For the most part the literature has focused heavily, and unnecessarily, on weak empirical demonstrations of unconscious processes that are thought to underpin the control behaviours recruited to manage uncertain situations. The presentation will then go on to discuss the latest advances in the field. This work suggests that we are able to exert control in situations in which information is highly impoverished when we (1) can assert a sense of agency over our circumstances, and (2) identify signals in the environment that we judge to be reliable indicators of past performance (e.g., reward information), which can, in turn, be used to determine desirable future outcomes.

Wednesday, 13 May, 5.30-7pm

Campbell Brown (Glasgow)

Is Close Enough Good Enough?

Suppose that we can spare someone a great harm, but only at the cost of allowing a lesser harm to befall a larger group of people. Does what we should do in this situation depend on the number of people in this group? According to the Close Enough View, the numbers matter only if the harm to the group is not much less (i.e., is 'close enough'). This paper evaluates the Close Enough View by, first, reviewing some problems faced by various alternatives, and, second, raising some problems for this view.

Wednesday, 20 May, 5.30-7pm

No seminar due to Alan Hajek's talk on "A Puzzle about Partial Belief" at the Institute of Philosophy.

Wednesday, 27 May, 5.30-7pm

No seminar this week.

Wednesday, 3 June, 5.30-7pm

TBD

Wednesday, 10 June, 5.30-7pm

Christopher Hitchcock (Caltech)

Updating on the Credences of Others

How should you update your credences upon learning the credences of others? Because of the complexity of Bayesian conditionalization in this context, there has been considerable interest in developing simple heuristics, the most popular being linear averaging. However, linear averaging has a number of drawbacks: it does not commute with itself, nor with conditionalization; it does not preserve independence; and it is not always compatible with conditionalization. In addition, we argue that a further drawback of linear averaging is that it lacks a property we call 'synergy'. We propose a new heuristic that is just as simple as linear averaging but doesn't have these drawbacks. (Joint work with Kenny Easwaran, Luke Fenton-Glynn, and Joel Velasco.)
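The failure of linear averaging to commute with conditionalization is easy to exhibit with a small numerical example. The following sketch is illustrative only (the outcome space, credences, and equal weights are invented, not taken from the talk): pooling two credence functions and then conditionalizing generally gives a different result from conditionalizing each and then pooling.

```python
# Linear averaging of two credence functions over outcomes {a, b, c},
# with evidence E = {a, b}. Illustrative numbers; equal weights assumed.

def pool(p, q, w=0.5):
    """Linear average of two credence functions (dicts: outcome -> prob)."""
    return {x: w * p[x] + (1 - w) * q[x] for x in p}

def condition(p, event):
    """Bayesian conditionalization of p on an event (a set of outcomes)."""
    total = sum(p[x] for x in event)
    return {x: (p[x] / total if x in event else 0.0) for x in p}

p1 = {"a": 0.1, "b": 0.3, "c": 0.6}
p2 = {"a": 0.4, "b": 0.4, "c": 0.2}
E = {"a", "b"}

pool_then_condition = condition(pool(p1, p2), E)
condition_then_pool = pool(condition(p1, E), condition(p2, E))

print(pool_then_condition["a"])  # 0.25 / 0.6 ≈ 0.4167
print(condition_then_pool["a"])  # (0.25 + 0.5) / 2 = 0.375
```

The two orders of operation disagree because conditionalization weights each agent's opinion by how much probability she assigned to the evidence, whereas pooling first throws that information away.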

Wednesday, 17 June, 5.30-7pm

No seminar due to the Eighth Workshop in Decisions, Games and Logic  (17-19 June) and the Workshop on Decision Making under Severe Uncertainty (19-20 June).

Wednesday, 24 June, 5.30-7pm

Brian Skyrms (UC Irvine)

Wednesday, 1 July, 5.30-7pm

Matthew Adler (Duke)

Extended preferences and interpersonal comparisons

In this talk, I present an account of interpersonal comparisons (the so-called "extended preferences" account) for the case where individuals have heterogeneous preferences. The term "extended preferences" was introduced by John Harsanyi. My account builds on Harsanyi's pioneering work concerning interpersonal comparisons, and in recognition of his contribution I use that term. The key insights are, first, that we should model individual lives as "histories," i.e., hybrid bundles of attributes and preferences; and, second, that a measure of well-being levels and differences (if it incorporates a respect for individuals' preferences) must be such that the number assigned to a given history is an increasing function of the utility assigned to the history's attributes by a vNM utility function representing the history's preferences.
The talk will explain the extended-preference account; discuss important open questions; and contrast it with a competing approach to interpersonal comparisons with heterogeneous preferences, the "equivalent income" framework.


Lent 2015

Wednesday, 14 January, 5.30-7pm

Andrew Ellis (Economics, LSE) 

Complexity, Correlation, and Choice 

Often a profile of actions rather than a single action determines payoffs. The former case is more complex, as the correlations among outcomes across actions are payoff-relevant. Existing choice-theoretic models do not allow this complexity to affect behavior. To construct such a model, we introduce a framework that explicitly models choice of an action profile. We show that the axiom “Monotonicity” precludes a role for complexity and propose a weakening thereof that accounts for the possibility that the agent perceives correlations incorrectly. When the agent satisfies the other subjective expected utility axioms, the axiom ensures she acts as if she assigns probabilities to all potential correlations between actions and then maximizes expected utility. The model accommodates, and provides insight into the behavioral connection between, phenomena such as correlation neglect, cursed behavior, and home bias. We also identify the behavior that characterizes the events and actions the agent understands and relate it to our representation. (joint with Michele Piccione)

Wednesday, 21 January, 5.30-7pm

No seminar.

Wednesday, 28 January, 5.30-7pm

No seminar.

Wednesday, 4 February, 5.30-7pm

Kai Spiekermann (LSE)

Identity Choice in the Lab: How Social Identification Interacts with Norms

Akerlof and Kranton (2000) famously suggest that “choice of identity may be the most important ‘economic’ decision people make” because one’s identity determines how one sees oneself, but also which behavior others expect. In an innovative design, we put individuals in a situation in which they can identify with one of two truly existing ‘small-scale societies’ with different sharing norms. By announcing the different giving behavior in the two reference groups, we emphasize two different norms of appropriate behavior and put our subjects into a situation in which the norm of their group exerts compliance pressure. This allows us to test the impact of identity on norm-compliance. We also elicit subjects’ willingness to pay to be part of the preferred group. Our project is possibly the first attempt to investigate the link between identity and norms using experimental methods. Interestingly, our preliminary results do not lend support to the thesis that identity is chosen strategically to avoid high compliance costs caused by a more demanding norm. Rather, we find that the preference to be in the group with the more egalitarian (or selfish) norm depends on one’s pro- (or anti-)social value orientation. This suggests that identities are chosen to fit with one’s values, even if that choice is costly in monetary terms. (Joint with Arne Weiss)

THURSDAY, 12 February

Saamdu Chetri (Bhutan Gross National Happiness Centre), Paul Dolan (LSE) and Paul Anand (Open University)

Wednesday, 18 February, 5.30-7pm

Nick Baigent (LSE)

Revealed Preference – or should it be Revealed Choice?

This paper reconsiders the rationalisability issue in the theory of rational choice by insisting on “observability”, as revealed preference theorists do. The two main roles played by rationalisability results in the theory of rational choice are providing “completeness” and a basis for welfare judgements. Both roles are disturbed by the results in this paper. The argument begins by drawing attention to the requirement that the alternatives ranked and chosen are mutually exclusive, so that if several alternatives are top ranked, not all can possibly be revealed (observed) in any straightforward sense. Thus, observed or revealed choices can only reveal one top ranked alternative. This implies that a revealed or observed choice function will be a “refinement” of the usual choice function. So the usual choice function in fact reveals a maximally refined choice function – indeed a multiplicity of them! The results characterise (1) those choice functions that reveal at least one rationalisable choice function and (2) those choice functions that only reveal rationalisable choice functions. Rigor is available in the circulated notes but is completely shunned in the presentation, which only uses intuition from simple examples.


Wednesday, 25 February, 5.30-7pm

No seminar this week.

Wednesday, 4 March, 5.30-7pm

Shaun Hargreaves-Heap (KCL)

An experiment on the stability of social preferences

This paper examines the robustness of individual pro-sociality and individual discriminatory behaviour, as measured by the in-group bias in pro-sociality. Discriminatory behaviour is not robust in two respects. First, there is discrimination in aggregate in the Trust and Public Goods games, but it disappears in the market-framed decision over whether to compete. Second, individual discrimination in one decision problem does not help predict individual discrimination in another. This contrasts with the correlation in individual pro-sociality across the Trust and Public Goods games. These insights are reinforced by an examination of the personality predictors of pro-sociality and the in-group bias.

Wednesday, 11 March, 5.30-7pm

No seminar due to the Auguste Comte Memorial Lectures.

Wednesday, 18 March, 5.30-7pm

Antony Millner (Grantham, LSE)  

Resolving intertemporal conflicts: Economics vs. Politics 

Intertemporal conflicts occur when a group of agents with heterogeneous time preferences must make a collective decision. How should this be done? We examine two methods: an ‘Economics’ approach that emphasizes efficiency, and a ‘Politics’ approach in which agents vote over plans. If the group can commit to intertemporal plans, Economics Pareto dominates Politics, regardless of whether agents’ preferences are public or private information. Without commitment, however, Politics often yields higher group welfare, as it is more robust to outlying preferences. Our results have implications for social discounting, and for decision-making by families, firms, and countries. (Joint with Geoffrey Heal)

Wednesday, 8 April, 5.30-7pm

Elliot Sober (Wisconsin)

The Philosophical Significance of Stein's Paradox

Charles Stein discovered a paradox in 1955 that many statisticians think is of fundamental importance.  Here we explore its philosophical implications.  We outline the nature of Stein’s result and of subsequent work on shrinkage estimators; then we describe how these results are related to Bayesianism and to model selection criteria like AIC.  We also discuss their bearing on scientific realism and instrumentalism.  We argue that results concerning shrinkage estimators underwrite a surprising form of holistic pragmatism. 


Michaelmas 2014

Wednesday, 8 October, 5.30-7pm 

No seminar due to cancellation.

Wednesday, 15 October, 5.30-7pm

Jason Alexander (LSE)

Game Theory and the Evolution of Language

(LSE Inaugural Lecture)

Wednesday, 22 October, 5.30-7pm

No seminar due to the Lakatos Award lectures.

Wednesday, 29 October, 5.30-7pm

No seminar due to Philip Pettit's lecture on "The Infrastructure of Democracy".

Wednesday, 5 November, 5.30-7pm 

No seminar due to Philosophy of Science Association Meeting.

Wednesday, 12 November, 5.30-7pm

Mike Otsuka (LSE)

How to guard against the risk of living too long: A Hobbesian voluntarist case for socialized pensions

I defend the view that a defined benefit pension plan can be justified as a social union of social unions, where each social union is a Hobbesian Leviathan of our cohorts that it is to the mutual benefit of each to contract into, in order to pool and tame the longevity risks that we face as individuals by taking advantage of the law of large numbers. The different cohorts in turn will find it rational to enter into covenants with one another in order to pool and tame the investment risks that remain. The immortal corporate body that arises can, given realistic assumptions, remain forever invested in high-risk, high-expected-yield assets, in order to provide each of the individual cells (i.e., workers) that constitute it with a better pension than she could hope to generate through her own private defined contribution pension pot. Although the defined benefit pension is derided as an obsolete, collectivist, socialist relic, it is in fact a voluntary mutual association that it would be rational for each to contract into, were it not for the fact that it is now being regulated out of existence, to the mutual disadvantage of employers and employees, in a manner contrary to the free market ideals of those who champion the replacement of defined benefit by defined contribution. The assumptions of prudence and risk aversion that underpin such regulation actually imply that the defined contribution pensions these regulations are forcing us to adopt are very bad for us to have, so long as we assume that individual workers should be at least as risk averse and prudent as the large institutions that form our employers and our pension funds.

Wednesday, 19 November, 5.30-7pm

Shlomi Segall  (Hebrew University of Jerusalem)

Bad for Whom? On the Disvalue of Inequality

Suppose inequality is bad as such: what kind of bad is it? Is inequality bad in a general (or impersonal) way or in a personal way? Is inequality bad for someone in particular, or just bad in general? Some (e.g. Larry Temkin) believe that in so far as inequality is non-instrumentally bad, its badness must be impersonal, while others (e.g. John Broome) hold that it must be bad for someone (predictably, the worse off). In this paper I want to show that both accounts are inadequate and to offer a third, hybrid position. With Temkin and the ‘impersonal’ camp, I want to say that the badness of inequality is impersonal in that it denies the person-affecting view. That is, inequality is bad even when it does not harm anyone. But unlike the impersonal account I want to claim that this impersonal badness can and should be parcelled out, as it were, and identified with specific individuals. In that respect my position is obviously closer to Broome’s personal account. Like Broome I want to say that the badness of inequality resides with particular individuals. But unlike him I want to say that the overall badness of outcomes is larger than the sum of personal bads. This difference (between Broome’s position and mine), we shall see, has concrete implications for the way in which egalitarians should rank different scenarios, and particularly those entailing uncertainty.

My argument proceeds as follows. In the first section I examine and dismiss some initial arguments one may find in support of the more dominant of the two views, namely the impersonal account. An upshot of that initial discussion is to make explicit a distinction between two dimensions of badness that is implicit in the literature but often not paid sufficient attention. In section II I discuss how things can be bad in a way that affects welfare (or not), and how they can be bad in a way that resides either with particular individuals or with no one in particular. The stage is then set to examine the two most worked-out accounts of the respective views. Section III examines Temkin’s impersonal view, arguing both that it is unconvincing and that it is in tension with other tenets of his egalitarianism. In section IV I look at Broome’s personal account and expose its shortcomings for telic egalitarians. I should stress that I offer there no argument against Broome’s personal view, but rather show why it is unappealing to a certain kind of telic egalitarianism (namely, that which is unmoved by the person-affecting view). The final section defends the hybrid view (between the personal and impersonal accounts), and I close by looking at some potential objections to that account.

Wednesday, 26 November, 5.30-7pm 

Brian Hill  (HEC, Paris)

Dynamic Choice: A Problem for Imprecise Probabilities or Imprecise Probabilists?

One common dynamic-choice-based argument against decision rules that diverge from expected utility purports to show that any such rule is incompatible with the conjunction of two prima facie plausible principles: dynamic consistency and consequentialism. Dynamic consistency demands that a decision maker's preferences over contingent plans agree with his preferences in the planned-for contingency. However, what counts are the contingencies the decision maker envisages and plans for, rather than contingencies selected by a theorist, as is standard in discussions of the principle. We show how this simple point resolves the purported incompatibility. Moreover, it provides a reconceptualisation of dynamic choice under non-expected utility that neutralises many other dynamic-choice-based arguments against imprecise probabilities proposed in philosophy and economics. On the one hand, the perspective provides a principled justification for the restriction to certain families of beliefs in the analysis of dynamic choice problems, which blocks several standard dynamic-choice-based arguments. On the other hand, the issue of the value of information under imprecise probability is revealed to have been mis-analysed in standard treatments; proper analysis shows that it is non-negative as long as the information offered does not compromise information that the decision maker had otherwise expected to receive.

Wednesday, 3 December, 5.30-7pm 

Orri Stefansson  (Collège d'études mondiales, Paris)

Desiring what one believes to be good

The Desire-as-Belief thesis (DAB) states that a rational person desires a proposition exactly to the degree that she believes or expects the proposition to be good. Many people take David Lewis, the originator of the thesis, to have shown it to be inconsistent with Bayesian decision theory. However, as we show, Lewis's argument was based on an Invariance assumption that is itself inconsistent with the Bayesian decision theory he assumed in his arguments against DAB. The aim of this paper is to explore whether arguments can be made against DAB without assuming Invariance. We first refute the standard version of DAB, which entails that there are only two levels of goodness. We next consider two theses according to which rational desires are intimately connected to expectations of multi-levelled goodness, and show that these are consistent with Bayesian decision theory as long as we assume that the contents of 'value propositions' are not fixed. We explain why this conclusion is independently plausible, and show how to construct such propositions.

Wednesday, 10 December, 5.30-7pm

Harold Nax (ETH, Zurich)

Meritocracy and the efficiency-equality tradeoff: the case of public goods

One of the fundamental tradeoffs underlying society is that between equality and efficiency. The challenge for institutional design is to strike the right balance. Game-theoretic models of public goods provision under assortativity succinctly capture this tradeoff: under complete non-assortativity (society is randomly formed), theory predicts maximal inefficiency but perfect equality; higher levels of assortativity (society matches contributors with contributors) are predicted to improve efficiency but come at the cost of growing inequality. In this talk, I will discuss theoretical analysis of such situations and present findings from an experiment we conducted to test this tradeoff. An important element driving behavior will be fairness considerations, with the meaning of fairness shown to depend on the regime context.
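The efficiency-equality tradeoff in the abstract can be seen in the standard linear public goods game. The sketch below uses illustrative parameters (endowment, multiplier, and group size are assumptions, not taken from the talk): in a mixed, non-assortatively formed group a free-rider out-earns a contributor, but a fully assortative group of contributors is more efficient than a group of free-riders.

```python
# Linear public goods game payoff with illustrative parameters:
# each of n players has an endowment, contributes c_i, and the pooled
# contributions are multiplied by r and shared equally.

def payoff(contributions, i, endowment=10.0, r=1.6):
    """Payoff to player i given everyone's contributions."""
    n = len(contributions)
    return endowment - contributions[i] + r * sum(contributions) / n

# Non-assortative (mixed) group: the lone contributor earns less
# than the free-riders, so free-riding is individually tempting.
mixed = [10.0, 0.0, 0.0, 0.0]
print(payoff(mixed, 0))  # contributor: 10 - 10 + 1.6*10/4 = 4.0
print(payoff(mixed, 1))  # free-rider:  10 -  0 + 1.6*10/4 = 14.0

# Assortative matching: contributors grouped together out-earn a
# group of free-riders, illustrating the efficiency gain.
contributors = [10.0] * 4
free_riders = [0.0] * 4
print(payoff(contributors, 0))  # 10 - 10 + 1.6*40/4 = 16.0
print(payoff(free_riders, 0))   # 10.0
```

Random group formation equalizes expected earnings across types but rewards free-riding; assortative formation raises aggregate efficiency while opening a payoff gap between contributor and non-contributor groups, which is the tradeoff the talk examines.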
