2010-2011

  

Summer Term 2011

Wednesday, 4 May, 5.30pm – 7pm

Ken Binmore (Bristol, UCL)

Wednesday, 11 May, 5.30pm – 7pm

No Seminar

Wednesday, 18 May, 5.30pm – 7pm

Alan Hájek (ANU)

A Poisoned Dart for Conditionals

Suppose I throw at random an infinitely thin dart at a representation of the [0, 1] interval of the real line. Here are two propositions concerning the landing point:

L (for "left"): [0, ½]

(In words: the dart lands on a point in the left half of the interval, endpoints included.)

C (for "conditional"): [½, 1] → ½

(In words: if the dart lands on a point in the right half of the interval, endpoints included, then it lands exactly on ½.)

I will present two paradoxes concerning how L, C, and their probabilities relate to each other. They will add different claims about how L and C are inferentially related. I hope that my discussion of various ways of solving my paradoxes will shed some light on the semantics of the indicative conditional. I will target the material conditional analysis, the 'Or-to-If' inference, two 'Export' principles for iterated conditionals, and McGee's 'counterexample to modus ponens'. I will trace their downfall to a common source. So one of my goals is to unify a number of seemingly disparate phenomena.
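
To fix ideas, here is how the relevant probabilities come out under the natural assumptions that the landing point is uniformly (Lebesgue) distributed on [0, 1] and that C is read as a material conditional (one of the readings the talk targets); this is offered only as orientation, not as the talk's own argument:

\[
P(L) = P\big([0,\tfrac{1}{2}]\big) = \tfrac{1}{2}, \qquad P\big(\text{dart}=\tfrac{1}{2}\big) = 0,
\]
\[
P(C_{\text{material}}) = P\big(\text{dart}\in[0,\tfrac{1}{2})\ \text{or}\ \text{dart}=\tfrac{1}{2}\big) = P\big([0,\tfrac{1}{2}]\big) = \tfrac{1}{2},
\]
\[
P\big(\text{dart}=\tfrac{1}{2}\ \big|\ \text{dart}\in[\tfrac{1}{2},1]\big) = \frac{0}{1/2} = 0.
\]

On the material reading, then, C is exactly as probable as L, even though the probability of its consequent conditional on its antecedent is zero; the talk's paradoxes add further claims about how L and C are inferentially related.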

Wednesday, 25 May, 5.30pm – 7pm

Ned McClennen (LSE, Syracuse)

Rational Cooperation

Friday, 3 June – Saturday, 4 June

Third Meeting of the Rationality and Decision Network

Friday, 3 June, 5.45pm – 7.45pm

Brian Skyrms (UC Irvine/Stanford)

AHRC-sponsored Masterclass: From Dynamics of Rational Deliberation to Signalling

Wednesday, 15 June, 5.30pm – 7pm

Gabriella Pigozzi (Université Paris Dauphine)

On Judgment Aggregation in Abstract Argumentation

(Based on joint work with Martin Caminada & Mikolaj Podlaszewski)

Given an argumentation framework and a group of agents, the individuals may have divergent opinions on the status of the arguments. If the group needs to reach a common position on the argumentation framework, the question is how the individual evaluations can be mapped into a collective one. In particular, we are interested in defining a social outcome 'compatible' with the individuals' judgments. The key notion that we want to capture is that any individual member has to be able to defend the collective decision. We introduce three aggregation operators that satisfy the condition above, and we offer two definitions of compatibility. Not only does our proposal satisfy a good number of standard judgment aggregation postulates, but it also avoids the problem of individual members of a group having to become committed to a group judgment that is in conflict with their own individual positions. We also investigate the behaviour of two such operators from a social choice-theoretic point of view. In particular, we study under which conditions these operators are Pareto optimal and whether they are manipulable.
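
To give a feel for the aggregation problem, here is a rough sketch in code: each agent submits a labelling of the arguments as in (accepted), out (rejected) or undec(ided), and a unanimity-style rule commits the group only where all members agree, so no member has to defend a verdict she rejects. This is an illustrative stand-in, not one of the paper's three operators, which involve further steps (for instance, ensuring the collective position is itself defensible in the argumentation framework).

```python
# Illustrative sketch: unanimity-style aggregation of argument labellings.
# Labels: 'in' (accepted), 'out' (rejected), 'undec' (undecided).
# NOTE: a simplified stand-in for the paper's operators, which add further
# steps to guarantee that the collective labelling is itself defensible
# in the underlying argumentation framework.

def aggregate_unanimity(labellings):
    """Collectively accept/reject an argument only if all agents agree;
    otherwise leave it undecided."""
    arguments = labellings[0].keys()
    collective = {}
    for a in arguments:
        labels = {lab[a] for lab in labellings}
        collective[a] = labels.pop() if len(labels) == 1 else 'undec'
    return collective

# Three agents evaluating arguments A, B, C of some framework:
agent1 = {'A': 'in',    'B': 'out', 'C': 'out'}
agent2 = {'A': 'in',    'B': 'out', 'C': 'in'}
agent3 = {'A': 'undec', 'B': 'out', 'C': 'out'}

print(aggregate_unanimity([agent1, agent2, agent3]))
# {'A': 'undec', 'B': 'out', 'C': 'undec'}
```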

Wednesday, 22 June, 5.30pm – 7pm

Double Session with Irem Bozbay (Maastricht) and James Wong (LSE)

Irem Bozbay (Maastricht)

Judgment Aggregation in Search for the Truth

(Based on joint work with Franz Dietrich)

We analyse the problem of aggregating judgments over multiple issues from the perspective of efficient aggregation of voters' private information. While new in judgment aggregation theory, this perspective is familiar in a different body of literature about voting between two alternatives when voters' disagreements stem (fully or partly) from conflicts of information rather than interests. Combining the two literatures, we consider a simple judgment aggregation problem and model the private information underlying voters' judgments. We analyse the resulting strategic incentives and determine which voting rules lead to collective judgments that efficiently use all private information, assuming that voters share a preference for true collective judgments. We find that in many, but not all, cases a quota rule should be used, which decides on each issue according to whether the proportion of 'yes' votes exceeds a particular quota.
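
The quota rules mentioned in the last sentence are straightforward to state in code. The sketch below is illustrative only, with arbitrary placeholder quotas; the paper's contribution is to determine which quotas (if any) make sincere voting informationally efficient.

```python
# Minimal sketch of a quota rule for judgment aggregation on multiple issues:
# issue j is collectively accepted iff the share of 'yes' votes exceeds q_j.

def quota_rule(votes, quotas):
    """votes: list of dicts mapping issue -> True/False (one dict per voter).
    quotas: dict mapping issue -> acceptance threshold in (0, 1)."""
    n = len(votes)
    collective = {}
    for issue, q in quotas.items():
        yes_share = sum(v[issue] for v in votes) / n
        collective[issue] = yes_share > q
    return collective

votes = [
    {'p': True,  'q': True},
    {'p': True,  'q': False},
    {'p': False, 'q': False},
]
# Placeholder quotas: simple majority on p, a stricter 2/3 threshold on q.
print(quota_rule(votes, {'p': 0.5, 'q': 2/3}))
# {'p': True, 'q': False}
```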

James Wong (LSE)

Normative Ends of Deliberation and the Discursive Dilemma: Lessons for Institutionalising Deliberative Democracy

This paper discusses the normative implications of the problem of the 'discursive dilemma' in social choice theory for institutionalizing deliberative democracy. In the past two decades, the emphasis of deliberative democracy has been on spelling out the ideal properties of a deliberative process, while the matter of collective choice and decision rules, an essential aspect of democracy, has been generally ignored. Yet the latter issue is particularly important when: (1) deliberative democracy is operationalized as part of the process of collective decision-making; and (2) deliberation alone cannot guarantee the availability of substantive consensus for decision-making across different agendas. In response to such a practical constraint, John Dryzek and Simon Niemeyer (2006/2007) propose two deliberative ends which we should aim at, namely meta-consensus and inter-subjective rationality. They argue that these deliberative ends are not only consistent with an ideal deliberative procedure but also produce stable collective decisions. I examine both ends in the context of a 'deliberation-then-aggregation' (DTA) institution of deliberative democracy. I argue that meta-consensus and inter-subjective rationality – as specified by Dryzek and Niemeyer – pave the way for the discursive dilemma, and hence may, contrary to their claim, generate unstable collective decisions. This problem may be avoided by redefining the notion of meta-consensus in a less general and more precise form, as in Christian List's (2002) meta-agreement, which consists of conditions such as single-peakedness and unidimensional alignment. Taking this theoretical issue seriously can avert possible loopholes in institutionalizing deliberative democracy in practice.
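
For readers unfamiliar with the discursive dilemma the argument turns on, here is the textbook instance (not an example from the paper itself): three individuals judge two premises p and q and the conclusion p ∧ q, and proposition-wise majority voting delivers a collectively inconsistent set of judgments even though every individual is consistent.

```python
# Textbook discursive dilemma: majority judgments on p, q and (p and q)
# can be collectively inconsistent even though every individual is consistent.

voters = [
    {'p': True,  'q': True,  'p_and_q': True},
    {'p': True,  'q': False, 'p_and_q': False},
    {'p': False, 'q': True,  'p_and_q': False},
]

def majority(issue):
    return sum(v[issue] for v in voters) > len(voters) / 2

collective = {issue: majority(issue) for issue in ['p', 'q', 'p_and_q']}
print(collective)
# {'p': True, 'q': True, 'p_and_q': False}  -- inconsistent with p_and_q == (p and q)
assert collective['p_and_q'] != (collective['p'] and collective['q'])
```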

Wednesday, 29 June, 4.30pm – 7pm (followed by reception)

Book Launch Event: Christian List & Philip Pettit, Group Agency: The Possibility, Design and Status of Corporate Agents

Venue: The Alumni Theatre, New Academic Building, NAB.LG09 (Please note change in venue)

Speakers:

Natalie Gold (Edinburgh)

Fabienne Peter (Warwick)

Kai Spiekermann (LSE)

Reply from Christian List (LSE)

Wednesday, 27 July, 5.30pm – 7pm

Kotaro Suzumura (Hitotsubashi)

AHRC-sponsored Masterclass

Rationality as Rationalizability and the Concept of Suzumura Consistency

Based on joint work with Walter Bossert (Montreal)

Workshop on Choice and Rationalizability

Under what conditions can an agent's choice behaviour be rationalized, and what counts as a "rationalization"? This topic is of great relevance to our understanding and modelling of agents. Indeed, it concerns the very notion of a "rational agent". Given its inherently interdisciplinary character, the topic matters to economists as well as to philosophers, psychologists, and social scientists more generally. The topic is central to classical rational choice theory, where several natural accounts of rationalization have been developed, most of which are based on explaining behaviour as the pursuit (or maximization) of certain stable preferences. But the topic also meets new challenges in the light of developments in behavioural economics and the theory of bounded rationality.

This workshop aims to explore classical and new approaches to rationalization. It brings together some of the most active researchers in the field.
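
As a concrete illustration of the classical notion of rationalization described above (a sketch under textbook assumptions, not tied to any particular workshop paper): choice data are rationalizable by a binary preference relation just in case, on every observed menu, the chosen options are exactly those that the revealed-preference relation ranks at least as high as everything else on the menu. The function and variable names below are illustrative.

```python
# Sketch: test whether observed choice data can be rationalized as maximisation
# of a binary relation.  Builds the revealed-preference relation ("x is revealed
# at least as good as y if x is chosen from some menu containing y") and checks
# that, on every observed menu, the chosen elements are exactly those revealed
# at least as good as everything in the menu.  (By a standard result, if the
# data are rationalizable by any relation, they are rationalizable by this one.)

def rationalizable(choices):
    """choices: dict mapping each menu (frozenset) to the chosen subset (set)."""
    revealed = {(x, y)
                for menu, chosen in choices.items()
                for x in chosen for y in menu}
    for menu, chosen in choices.items():
        greatest = {x for x in menu if all((x, y) in revealed for y in menu)}
        if greatest != set(chosen):
            return False
    return True

# Consistent data: always pick the alphabetically earliest option.
ok_data = {frozenset('ab'): {'a'}, frozenset('bc'): {'b'}, frozenset('abc'): {'a'}}
# Menu-dependent data: 'a' over 'b' pairwise, but 'b' from the full menu.
bad_data = {frozenset('ab'): {'a'}, frozenset('bc'): {'b'}, frozenset('abc'): {'b'}}

print(rationalizable(ok_data))   # True
print(rationalizable(bad_data))  # False
```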

Lent Term 2011

Wednesday, 12 January, 5.30pm – 7pm

Joe Halpern (Cornell)

Constructive Decision Theory: Decision Theory with Subjective States and Outcomes

(based on joint work with Larry Blume and David Easley)

The standard approach in decision theory (going back to Savage) is to place a preference order on acts, where an act is a function from states to outcomes. If the preference order satisfies appropriate postulates, then the decision maker can be viewed as acting as if he has a probability on states and a utility function on outcomes, and is maximizing expected utility. This framework implicitly assumes that the decision maker knows what the states and outcomes are. That isn't reasonable in a complex situation. For example, in trying to decide whether or not to attack Iraq, what are the states and what are the outcomes? We redo Savage, viewing acts essentially as syntactic programs. We don't need to assume either states or outcomes. Nevertheless, we can still get representation theorems in the spirit of Savage's theorems: for Savage the agent's probabilities and utility are subjective; for us, in addition to the probabilities and utility being subjective, so are the state space and the outcome space. I discuss the benefits, both conceptual and pragmatic, of this approach. As I show, among other things, it provides an elegant solution to framing problems.
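
For orientation, the classical Savage-style machinery the abstract starts from is easy to sketch. The toy example below illustrates only that standard framework (exogenously given states and outcomes, acts as functions, subjective expected utility), not the constructive framework of the talk, which dispenses with given states and outcomes; all names and numbers are made up.

```python
# Toy Savage setup: acts are functions from states to outcomes, and a decision
# maker with subjective probabilities over states and utilities over outcomes
# ranks acts by expected utility.  (The talk's constructive framework drops the
# assumption that these states and outcomes are given in advance.)

states = {'rain': 0.3, 'sun': 0.7}          # subjective probabilities
utility = {'wet': 0, 'dry': 10, 'dry_and_unburdened': 12}

acts = {
    'take umbrella':  {'rain': 'dry', 'sun': 'dry'},
    'leave umbrella': {'rain': 'wet', 'sun': 'dry_and_unburdened'},
}

def expected_utility(act):
    return sum(p * utility[act[s]] for s, p in states.items())

for name, act in acts.items():
    print(name, expected_utility(act))
# take umbrella 10.0
# leave umbrella 8.4
```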

Wednesday, 19 January, 5.30pm – 7pm

Double session with Chris Thompson (LSE) and Esha Senchaudhuri (LSE)

Chris Thompson (LSE)

A General Model of a Group Search Procedure, Applied to Epistemic Democracy

I provide a general model of a search procedure involving groups of agents. A single agent searching for an object of interest may only have a small probability of finding it. But if we employ a group to search for the object, the probability that at least one of the group members will find it can be significantly higher. Under certain conditions the probability that a group will find a particular object is increasing in group size and approaches certainty in the limit. I present the results of computer simulations that confirm this assertion. In political contexts, the model of a search procedure provides an epistemic justification for inclusiveness. More particularly, the search procedure can fill two gaps in current epistemic accounts of democracy: it can provide an account of how agents set an agenda before a vote is taken; and it can provide an account of how agents search for evidential information such that when they cast their votes the competence and independence assumptions of the Condorcet Jury Theorem hold.
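
The core claim has a simple closed form under the simplest assumptions (independent searchers, each finding the object with the same probability p; these are stand-ins for the paper's "certain conditions"): the probability that at least one of n agents finds the object is 1 − (1 − p)^n, which is increasing in n and tends to 1. The short simulation below is illustrative rather than the paper's own, and matches the closed form.

```python
import random

# Probability that at least one of n independent searchers, each succeeding
# with probability p, finds the object: 1 - (1 - p)**n -> 1 as n grows.

def simulate(n_agents, p, trials=100_000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < p for _ in range(n_agents))
        for _ in range(trials)
    )
    return hits / trials

p = 0.1
for n in (1, 5, 20, 50):
    print(n, round(1 - (1 - p) ** n, 3), round(simulate(n, p), 3))
# analytic values: 0.1, 0.41, 0.878, 0.995 (simulated values agree closely)
```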

Esha Senchaudhuri (LSE)

A Problem of Many Hands for Political Liberalism

Political liberals often define political legitimacy as the collective exercise of political power by all reasonable members of a polity. Assuming reasonable pluralism, the view that reasonable people can disagree on judgments formed after evaluating vague concepts or weighing difficult evidence, political liberals must explain how an exercise of power can be collective when individual members of the collective can disagree with each other on any particular use of political power. A variety of accounts have been offered to this end, ranging from the Rawlsian attempt to embody reasonable consensus in public institutions supported by public reason to more recent accounts in which formal and decentralized deliberative practices are seen as pivotal to legitimacy. I argue that a suitable synthesis of political legitimacy with reasonable pluralism must view each citizen as contributing to the authorship of a collective decision, such that a particular use of political power is only legitimate if a threshold level of personal responsibility may be attributed to each reasonable citizen for its construction. Given the different ways in which citizens contribute to political decision-making, this leads to a variation of Dennis Thompson's famous Problem of Many Hands. The original Problem of Many Hands provides a framework with which to place blame on public officials for political decisions made by an entire bureau or administration, through formal procedures and on behalf of democratic citizens. In this paper I give an account of how to extend this framework to the political participation of every democratic citizen, involved not only in formal political processes but also in formal and informal socio-political practices.

Wednesday, 26 January, 5.30pm – 7pm

Alex Voorhoeve (LSE)

Egalitarianism and the Separateness of Persons

The difference between the unity of the individual and the separateness of persons requires that there is a shift in the moral weight that we accord to increases in utility when we move from making intrapersonal tradeoffs to making interpersonal tradeoffs. Michael Otsuka and Alex Voorhoeve have recently argued that the "pure" version of the Priority View must be rejected because it cannot account for this shift. Here, we argue that the same goes for both "pure" and "pluralist" versions of brute luck egalitarianism. More precisely, we argue for the following claims: (i) the aforementioned shift cannot always be explained by the intrinsic badness of brute luck inequality; (ii) it follows that an appeal to the intrinsic badness of such inequality does not ensure adequate respect for the difference between the unity of the individual and the separateness of persons; and (iii) familiar forms of pluralist egalitarianism, which give some weight to both the badness of brute luck inequality and to either total or priority-weighted utility, also violate the requirements imposed by this difference.

Wednesday, 2 February, 5.30pm – 7pm

Niko Kolodny (UC Berkeley)

AHRC-sponsored Masterclass: Reasons & Rational Choice

Thursday, 3 February – Saturday, 5 February

Workshop on Reasons & Rational Choice

Wednesday, 9 February, 5.30pm – 7pm

Ashley Piggins (NUI Galway)

A Model of Deliberative and Aggregative Democracy

Tuesday, 15 February, 5.30pm – 7pm

Marcus Pivato (Trent University, Ontario)

Social Choice with Approximate Interpersonal Comparisons of Utility

Some social choice models assume that precise interpersonal comparisons of utility (either ordinal or cardinal) are possible, allowing a rich theory of distributive justice. Other models assume that absolutely no interpersonal comparisons are possible, or even meaningful; hence all Pareto-efficient outcomes are equally socially desirable. We compromise between these two extremes, by developing a model of 'approximate' interpersonal comparisons of well-being, in terms of an incomplete preorder on the space of psychophysical states. We then define and characterize 'approximate' versions of the classical egalitarian and utilitarian social welfare orderings. We show that even very weak assumptions about interpersonal comparability can yield preorders on the space of social alternatives which, while incomplete, are far more complete than the Pareto preorder.

The talk is based on three papers.

Wednesday, 16 February, 5.30pm – 7pm

No seminar due to Annual Royal Institute of Philosophy Lecture

Wednesday, 23 February, 5.30pm – 7pm

Double Session with Roberto Fumagalli (LSE) and Mareile Drechsler (LSE)

Roberto Fumagalli (LSE)

The Futile Search for True Utility

In traditional decision theory, utility is regarded as a mathematical construct to be inferred from agents' observed choices. In the recent literature at the interface between economics, psychology and neuroscience, several authors argue that by investigating the neuro-psychological underpinnings of agents' hedonic experiences, economists could develop more predictive and explanatorily insightful models of choice. In particular, some go as far as to contend that agents' utility is literally computed by specific neural areas and urge economists to substitute their notion of utility with some neuro-psychological constructs. In this paper, I distinguish three notions of utility which are frequently mentioned in debates over decision theory and examine some critical issues regarding their definition and measurability. Moreover, I provide various reasons to doubt that economists should replace the notion of utility that lies at the core of decision theory with such neuro-psychological constructs.

Mareile Drechsler (LSE)

Axiomatizing Bounded Rationality: The Priority Heuristic

(based on joint work with Konstantinos Katsikopoulos and Gerd Gigerenzer)

Violations of expected utility theory have typically been accounted for by adding adjustable parameters, such as non-linear transformations of probabilities and different value functions for gains and losses. This paper deals with an alternative approach to bounded rationality based on empirical evidence on heuristics for risky choice. The priority heuristic logically implies major violations of expected utility theory, including common-consequence effects, common-ratio effects, reflection effects, and the four-fold pattern of risk taking. The heuristic does not use adjustable parameters and explains these violations without assuming any non-linear transformations of values and probabilities. Its logic is based on rules for search, stopping, and decision making that reflect the psychological processes of sequential limited search and aspiration levels for terminating search. We provide an axiomatization of a class of heuristics that has the priority heuristic as a special case, and representation theorems for two and three attributes (outcomes and probabilities). In this approach, heuristic rules that generate preferences, rather than preferences per se, are revealed from choice. We see this axiomatization as a contribution to developing a theory of bounded rationality in the sense of Selten (2001), which focuses on actual decision processes and how these lead to the observed choice patterns.
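
Since the object being axiomatized is itself a simple algorithm, a sketch may help. The version below follows the usual statement of the priority heuristic for pairs of two-outcome gambles over gains (Brandstätter, Gigerenzer & Hertwig 2006), with the rounding of aspiration levels to prominent numbers omitted; it is a simplified illustration, not the paper's formal framework.

```python
# Simplified sketch of the priority heuristic for pairs of two-outcome gambles
# over gains.  A gamble is a list of (outcome, probability) pairs.  Reasons are
# examined in a fixed order, with aspiration levels acting as stopping rules;
# no trade-offs between reasons are computed.

def priority_heuristic(gamble_a, gamble_b):
    def min_gain(g):  return min(x for x, _ in g)
    def max_gain(g):  return max(x for x, _ in g)
    def p_min(g):     return [p for x, p in g if x == min_gain(g)][0]

    max_overall = max(max_gain(gamble_a), max_gain(gamble_b))

    # 1. Compare minimum gains; stop if they differ by at least 1/10 of the
    #    maximum gain in the problem.
    if abs(min_gain(gamble_a) - min_gain(gamble_b)) >= max_overall / 10:
        return 'A' if min_gain(gamble_a) > min_gain(gamble_b) else 'B'
    # 2. Compare probabilities of the minimum gains; stop if they differ by
    #    at least 0.1.
    if abs(p_min(gamble_a) - p_min(gamble_b)) >= 0.1:
        return 'A' if p_min(gamble_a) < p_min(gamble_b) else 'B'
    # 3. Otherwise decide by the maximum gain.
    return 'A' if max_gain(gamble_a) > max_gain(gamble_b) else 'B'

# (4000 with prob. 0.8, else 0) vs. 3000 for sure: a classic certainty-effect pair.
print(priority_heuristic([(4000, 0.8), (0, 0.2)], [(3000, 1.0)]))  # 'B'
```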

Wednesday, 2 March, 5.30pm – 7pm

Menahem Yaari

Markets and Justice

Tuesday, 8 March, 6.30pm – 8pm (Hong Kong Theatre)

Auguste Comte Memorial Lectures given by Frances Kamm (Part 1)

Wednesday, 9 March, 6.30pm – 8pm (Hong Kong Theatre)

Auguste Comte Memorial Lectures given by Frances Kamm (Part 2)

Wednesday, 16 March, 5.30pm – 7pm

Ralph Wedgwood (Oxford)

Utility and Choice-worthiness

Michaelmas Term 2010

Wednesday, 29 September, 9.45am – 7pm

Workshop on Decision Theory

with David Etlin (Leuven), James Joyce (Michigan, Ann Arbor), Jason Alexander (LSE), Richard Bradley (LSE), Franz Dietrich (LSE), Christian List (LSE) 

Wednesday, 6 October, 5.30pm – 7pm

Alec Walen (Maryland)

A Moral Ground for the Means Principle

Monday, 11 October, 4.30pm

Celebration of the 40th Anniversary of the Publication of Collective Choice and Social Welfare by Amartya Sen

Wednesday, 13 October, 5.30pm – 7pm

Richard Bradley (LSE) and Chris Thompson (LSE)

Multiple Vote Majority Rule

Multiple-vote majority rule is a procedure for making group decisions in which individuals weight their votes on issues in accordance with how competent they are on them. When individuals are motivated by the truth and know their relative competence on different issues, multiple-vote majority rule performs nearly as well, epistemically speaking, as rule by an expert oligarchy, but is still acceptable from the point of view of equal participation in the political process.
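
To convey the flavour of the epistemic comparison, here is a sketch under assumptions that go beyond the abstract: voters are independent, there is a single binary issue with one correct answer, and weights are set to the log-odds of each voter's competence (the classically optimal weighting under independence; the paper's own weighting scheme may differ). Names and numbers are illustrative.

```python
import math
import random

# Sketch: competence-weighted ("multiple-vote") majority voting on a binary
# issue, compared with one-person-one-vote majority.  Assumptions not from the
# abstract: independent voters, each voting correctly with her competence
# probability, and weights equal to the log-odds of competence.

competences = [0.8, 0.7, 0.65, 0.6, 0.55]

def accuracy(weights, trials=200_000, seed=2):
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        score = sum(w if rng.random() < c else -w
                    for c, w in zip(competences, weights))
        correct += score > 0
    return correct / trials

equal = [1.0] * len(competences)
log_odds = [math.log(c / (1 - c)) for c in competences]

print('simple majority:  ', round(accuracy(equal), 3))
print('weighted majority:', round(accuracy(log_odds), 3))
# The weighted rule is at least as accurate as simple majority here.
```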

Wednesday, 20 October, 5.30pm – 7pm

Lieven Decock (Amsterdam)

Vagueness: A Conceptual Spaces Approach

(based on joint work with Igor Douven, Richard Dietz and Paul Egré)  

A central question in the debate about vagueness is the question of what a borderline case is. Answers to this question are commonly stated in terms of how people respond to particular cases. In this paper, we aim at explaining why people respond to borderline cases the way they do. Our proposal draws on recent work in cognitive psychology, in particular on work on conceptual spaces. We argue for two plausible extensions of this work and show how these help to accommodate the phenomena related to vagueness in the conceptual spaces approach. The result will be an explanation of people's responses to borderline cases in terms of how the human mind conceptualizes the world.
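
A very rough sketch of the conceptual-spaces picture in the background (illustrative only; the paper's own extensions of the framework are more refined than this toy model): items are points in a similarity space, concepts are organized around prototypes, clear cases lie close to a single prototype, and borderline cases are items roughly equidistant between competing prototypes. The dimension, prototypes and margin below are made up.

```python
# Toy conceptual space for colour words on a single hue dimension.
# Items near one prototype are clear cases; items almost equally close to two
# prototypes count as borderline.  (Illustrative only; the talk's proposal
# uses a more refined construction.)

prototypes = {'red': 0.0, 'orange': 30.0, 'yellow': 60.0}   # hue in degrees
BORDER_MARGIN = 5.0   # how close the two best distances must be

def classify(hue):
    ranked = sorted(prototypes, key=lambda c: abs(hue - prototypes[c]))
    best, second = ranked[0], ranked[1]
    if abs(hue - prototypes[second]) - abs(hue - prototypes[best]) < BORDER_MARGIN:
        return f'borderline {best}/{second}'
    return best

for hue in (3, 14, 28, 47, 59):
    print(hue, classify(hue))
# 3 red, 14 borderline red/orange, 28 orange, 47 borderline yellow/orange, 59 yellow
```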

Wednesday, 27 October, 5.30pm – 7pm, H102

Franz Dietrich (LSE) and Kai Spiekermann (LSE)

Epistemic Democracy with Defensible Premises

Wednesday, 3 November, 5.30pm – 7pm

Prasanta Pattanaik (UC Riverside)

Choice, Internal Consistency, and Rationality

The classical theory of rational choice is built on several important internal consistency conditions. In recent years, the reasonableness of those internal consistency conditions has been questioned and criticized, and several responses to accommodate such criticisms have been proposed in the literature. This paper develops a general framework to accommodate the issues raised by the criticisms of classical rational choice theory, and examines the broad impact of these criticisms from both normative and positive points of view.

Wednesday, 10 November, 5.30pm – 7pm

John Howard (LSE)

Significance Testing with No Alternative Hypothesis: A Measure of Surprise

A pure significance test would check the agreement of a statistical model with the observed data even when no alternative model was available. The paper proposes the use of a modified p-value to make such a test. The model will be rejected if something surprising is observed (relative to what else might have been observed). It is shown that the relation between this measure of surprise (the s-value) and the surprise indices of Weaver and Good is similar to the relationship between a p-value, a corresponding odds-ratio, and a logit or log-odds statistic. The s-value is always larger than the corresponding p-value, and is not uniformly distributed. Difficulties with the whole approach are discussed.

Monday, 15 November, 6.00pm – 7.30pm, NAB.1.15

Brian Skyrms (UC Irvine)

Naturalizing the Social Contract

All social contracts that exist, or that could come to exist, must arise by some kind of natural process. This talk is about using game theory and evolutionary dynamics as tools for a naturalistic investigation of the social contract.
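
As a taste of the tools mentioned (illustrative only, not material from the talk): a simple discrete-time version of the replicator dynamics for the two-strategy Stag Hunt, a standard toy model of whether a cooperative convention can evolve. The payoffs and time step are arbitrary choices.

```python
# Discrete-time replicator dynamics for the two-strategy Stag Hunt.
# payoff[i][j] = payoff to strategy i against strategy j; 0 = Stag, 1 = Hare.

payoff = [[4, 0],   # Stag vs (Stag, Hare)
          [3, 3]]   # Hare vs (Stag, Hare)

def step(x, dt=0.1):
    """x = current share of Stag players; returns the share after one step."""
    f_stag = payoff[0][0] * x + payoff[0][1] * (1 - x)
    f_hare = payoff[1][0] * x + payoff[1][1] * (1 - x)
    f_avg = x * f_stag + (1 - x) * f_hare
    return x + dt * x * (f_stag - f_avg)

for x0 in (0.6, 0.9):
    x = x0
    for _ in range(200):
        x = step(x)
    print(f'initial Stag share {x0} -> long-run share {x:.2f}')
# Below the 3/4 threshold the population slides to all-Hare; above it, to all-Stag.
```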

Wednesday, 17 November, 5.30pm – 7pm

Zachary Ernst (Missouri)

Beating Two Dead Horses: Newcomb's Problem and Frankfurt-style Counterexamples

Newcomb's Problem and Frankfurt-style counterexamples are both intended to show that a conflict exists among a set of highly plausible principles. A common response is to defend the principles by arguing that both thought-experiments are incoherent. The responses to both problems are flawed, however, and for the same reasons -- in fact, both examples are perfectly coherent. Worse yet, contradictions among rational principles are far more common than is generally appreciated. I argue for the pessimistic conclusion that these conflicts cannot be resolved.
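
For the Newcomb half, the standard numbers make the clash vivid (a textbook presentation with illustrative figures, not the speaker's own): suppose the predictor is accurate with probability 0.99, the opaque box contains $1,000,000 just in case one-boxing was predicted, and the transparent box contains $1,000. Conditional expected utility then favours one-boxing,

\[
EU(\text{one-box}) = 0.99 \times 1{,}000{,}000 + 0.01 \times 0 = 990{,}000,
\qquad
EU(\text{two-box}) = 0.99 \times 1{,}000 + 0.01 \times 1{,}001{,}000 = 11{,}000,
\]

while dominance favours two-boxing, since whatever was predicted, two-boxing yields exactly $1,000 more than one-boxing.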

Wednesday, 1 December, 5.30pm – 7pm

Oliver Gossner (LSE)

A Reasoning Approach to Introspection and Unawareness

We study the knowledge of a reasoning agent who assumes consciousness of all primitives: for each primitive proposition, the agent believes that he knows whether he knows if this proposition is true. If the agent is really conscious of all primitive propositions, we show that the agent is actually conscious of all propositions, in which case positive and negative introspection hold for every proposition. This result provides a foundation for introspection based on the assumptions that 1) the agent can derive knowledge using a reasoning process, 2) in this reasoning process, the agent assumes that he is conscious of primitive propositions, and 3) the agent is indeed conscious of these primitive propositions. If the agent is not conscious of all primitive propositions, but thinks he is, we show that the agent is necessarily either unaware of some primitive proposition, or unaware of his knowledge of a primitive proposition, or exhibits delusion about his own knowledge. In this case, bounded rationality arises as the outcome of the agent making an unfounded assumption about the structure of his own knowledge, assuming consciousness of primitive propositions when this property doesn't hold. What distinguishes the rational agent's knowledge from the boundedly rational agent's isn't their mental processes, but rather the level of familiarity that these agents have with their environments. Finally, we show that the complexity of the state space we study is low, in the sense that each state can be described through the value of primitive propositions and the knowledge of the agent on a limited number of propositions at that state. This shows that our model, while encompassing both the rational agent and the unaware one, remains tractable.
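
A small semantic illustration of the introspection properties at stake (the standard possibility-correspondence benchmark; the paper itself works with a syntactic, reasoning-based model, so this is background rather than the paper's construction): knowledge of an event E holds at a state when every state the agent considers possible lies in E. Partitional possibility correspondences validate positive and negative introspection, while the non-partitional example below violates negative introspection.

```python
# Possibility-correspondence model of knowledge.  K(E) = set of states at which
# every state the agent considers possible lies in E.  With a partitional
# correspondence, positive (K(E) <= K(K(E))) and negative (~K(E) <= K(~K(E)))
# introspection hold; the non-partitional example breaks negative introspection.
# (Illustrative benchmark only; the talk's model is syntactic, not semantic.)

STATES = {1, 2, 3}

def K(E, poss):
    return {w for w in STATES if poss[w] <= E}

def check(poss, E):
    kE = K(E, poss)
    not_kE = STATES - kE
    positive = kE <= K(kE, poss)
    negative = not_kE <= K(not_kE, poss)
    return positive, negative

partitional     = {1: {1}, 2: {2, 3}, 3: {2, 3}}
non_partitional = {1: {1}, 2: {1, 2}, 3: {3}}   # at state 2 the agent thinks 1 possible

E = {1}
print(check(partitional, E))      # (True, True)
print(check(non_partitional, E))  # (True, False)
```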

Wednesday, 8 December, 5.30pm – 7pm

Michael Morreau (Maryland)

Comparison, Aggregation and Measurement

Comparisons of overall similarity lie at the basis of a lot of recent metaphysics and epistemology. Other aggregate comparisons figure in value pluralism, in best-system accounts of laws and probabilities, and in interpretationist treatments of intentionality. There is reason to be skeptical about all of them. At root, the trouble is that very often there is no saying how much better a thing would have to be in the one respect in order to make up for being worse in another respect. An adaptation and slight generalization of Arrow's theorem of social choice will show that when this is so, there can be no combining the various dimensions of comparison into useful aggregate comparisons. 
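
One standard way to get a feel for the difficulty (a Condorcet-style illustration, not the talk's own construction): aggregate three respects of comparison by majority, and the resulting relation of overall betterness can cycle, so it cannot be captured by any useful aggregate ordering. The items and rankings below are made up.

```python
from itertools import combinations

# Majority aggregation of three respects of comparison over items A, B, C.
# Each respect ranks the items (best to worst); "x is overall better than y"
# when most respects rank x above y.  The result cycles: A > B > C > A.

respects = [
    ['A', 'B', 'C'],   # respect 1
    ['B', 'C', 'A'],   # respect 2
    ['C', 'A', 'B'],   # respect 3
]

def overall_better(x, y):
    wins = sum(r.index(x) < r.index(y) for r in respects)
    return wins > len(respects) / 2

for x, y in combinations('ABC', 2):
    print(f'{x} better than {y}:', overall_better(x, y),
          f'| {y} better than {x}:', overall_better(y, x))
# A beats B, B beats C, but C beats A -- a cycle, so no overall ordering exists.
```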

Monday, 13 December, 6pm – 7.30pm, NAB 1.07, New Academic Building

Forum for European Philosophy Event

Daniel Hausman (University of Wisconsin-Madison)

Some Mistakes about Preferences

Preferences are the central notion in mainstream economic theory, yet economists say little about what preferences are. This talk argues that preferences in mainstream positive economics are comparative evaluations with respect to everything relevant to value or choice, and it argues against three mistaken views of preferences: (1) that they are matters of taste, concerning which rational assessment is inappropriate, (2) that preferences coincide with judgments of expected self-interested benefit, and (3) that preferences can be defined in terms of choices.

 
