Summer Term 2013
Wednesday, 12 June
No Choice Group meeting due to LSE Workshop on Free Will
Wednesday, 19 June, 5.30pm - 7pm
Harold Nax (CNRS Paris, Economics)
Learning in a Black Box
Many interactive environments can be represented as games, but they are so large and complex that individual players are in the dark about what others are doing and how their own payoffs are affected. This paper analyzes learning behavior in such 'black box' environments, where players' only source of information is their own history of actions taken and payoffs received. Specifically, we study repeated public goods games in which players must decide how much to contribute at each stage, but do not know how much others have contributed or how others' contributions affect their own payoffs. We identify two key features of the players' learning dynamics. First, if a player's realized payoff increases, he is less inclined to change his strategy; if it decreases, he is more inclined to change his strategy. Second, if increasing his own contribution results in higher payoffs, he will tend to increase his contribution still further, whereas the reverse holds if an increase in contribution leads to lower payoffs. These two effects are clearly present when players have no information about the game; moreover, they persist even when players have full information. Convergence to Nash equilibrium occurs at about the same rate in both situations. (joint work with M.N. Burton-Chellew, S.A. West, H.P. Young)
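The two learning effects described in the abstract lend themselves to a small simulation. The sketch below is only illustrative: the payoff function, the switching probabilities, and the adjustment rule are my own assumptions, not the authors' estimated model.

```python
import random

def payoff(own, total, n, r=1.6, endowment=10):
    # Linear public-goods payoff: keep what you don't contribute,
    # plus an equal share of the multiplied total contribution.
    return endowment - own + r * total / n

def simulate(n=4, rounds=200, seed=0):
    rng = random.Random(seed)
    contribs = [rng.randint(0, 10) for _ in range(n)]
    last_pay = [None] * n
    last_move = [0] * n  # each player's most recent change in contribution
    for _ in range(rounds):
        total = sum(contribs)
        pays = [payoff(c, total, n) for c in contribs]
        for i in range(n):
            if last_pay[i] is not None:
                improved = pays[i] > last_pay[i]
                # Effect 1: a payoff drop makes a strategy change more likely.
                if rng.random() < (0.2 if improved else 0.8):
                    # Effect 2 (directional learning): repeat a move that paid
                    # off, reverse one that didn't, otherwise move at random.
                    if last_move[i] and improved:
                        step = last_move[i]
                    elif last_move[i]:
                        step = -last_move[i]
                    else:
                        step = rng.choice([-1, 1])
                    new = min(10, max(0, contribs[i] + step))
                    last_move[i] = new - contribs[i]
                    contribs[i] = new
                else:
                    last_move[i] = 0
            last_pay[i] = pays[i]
    return contribs

print(simulate())  # final contribution profile after 200 rounds
```

With r/n < 1, as here, the stage-game Nash equilibrium is zero contribution, so a run typically drifts towards low contributions; the point of the sketch is only to make the two behavioural effects concrete.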
Wednesday, 26 June, 5.30pm - 7pm
Anna Mahtani (Oxford)
The Reflection Principle states that a coherent agent defers to his or her own future credence function. I consider an inter-personal version of this principle (similar to the ‘Group Reflection’ principle discussed by Luc Bovens) according to which a coherent agent defers to any credence function that meets certain criteria: roughly, the agent must recognise the credence function as ‘an improvement’ on his or her own. I argue that the relation ‘… defers to …’ should be understood as an intensional relation – i.e. an agent might defer to a credence function under one designation, but not under another. With this in mind, I refine the Group Reflection Principle and argue that the refined Principle is compelling. I show that it holds in a range of puzzle cases, including the Cable Guy Paradox and the Puzzle of the Hats.
Wednesday, 3 July, 5.30pm - 7pm
Kristof Madarasz (Economics, LSE)
Projection Equilibrium: Bargaining and Communication
Evidence from psychology and economics shows that the typical person exaggerates the extent to which an opponent conditions his choice on her private information. This paper incorporates such informational projections into a class of Bayesian games and applies it to bargaining and communication settings.
Lent Term 2013
Wednesday, 5 June, 5.30pm - 7pm
(Department of Economics, University of Exeter)
Randomization and Dynamic Consistency
Raiffa (1961) has suggested that ambiguity-aversion will cause a strict preference for randomization. We show, however, that dynamic consistency implies that individuals will be indifferent to ex ante randomizations. On the other hand, it is possible for a dynamically-consistent ambiguity-averse preference relation to exhibit a strict preference for certain ex post randomizations. We argue that our analysis throws some light on the Reflection paradox and the paradoxes for the smooth model of ambiguity. We show that these rest on whether the randomizations implicit in the set-up are viewed as being resolved before or after the (ambiguous) uncertainty. (joint work with Jurgen Eichberger and Simon Grant)
Wednesday, 29 May, 5.30pm - 7pm
Nick Baigent (Graz and LSE)
Total Violence: Concept and Measurement
Official statistics on the extent of violence nearly always focus on deaths. Yet, in most contexts it is total or aggregate violence that matters. This includes intervention in violent conflicts, conflict prevention, peace keeping, and violent crime, as well as policy towards domestic and gang violence. Typically, one wants to know the prospects for reducing total violence, not just deaths, with attention given to the risk that well-intentioned measures actually increase the level of total violence. Thus, this paper analyses a weighted-sum ranking of all violent acts committed in a conflict and discusses issues that may arise in applications.
Wednesday, 22 May, 5.30pm - 7pm
Bertil Tungodden (Department of Economics, Norwegian School of Economics)
Choice and the over-attribution of responsibility
The idea that people are personally responsible for the choices they make is often used to justify economic inequalities. The paper reports the results of an economic experiment designed to study whether people attach too much importance to choice, a phenomenon we refer to as over-attribution of individual responsibility. More precisely, we study whether people attach importance to the choice of a risky alternative even in situations where people have no real alternative. We find strong evidence for over-attribution of individual responsibility: the willingness to accept inequalities due to luck dramatically increases when choice is introduced, even when all alternatives are equally risky or when the safe alternative is very bad. We also find a close relationship between political party affiliation and over-attribution of responsibility, with liberals and right-wing voters being significantly more prone to over-attribution.
Wednesday, 15 May, 5.30pm - 7pm
Erik Schokkaert (KU Leuven, Economics)
Behavioral Welfare Economics and Redistribution
Behavioral economics has shaken the view that individuals have well-defined, consistent and stable preferences. This raises a challenge for welfare economics, which takes as a key postulate that individual preferences should be respected. We argue, in agreement with Bernheim (2009) and Bernheim and Rangel (2009), that behavioral economics is compatible with consistency of partial preferences. While Bernheim and Rangel have focused on how to incorporate insights from behavioral economics into traditional concepts of welfare economics (Pareto optimality, compensation tests), we explore how the approach can be extended to deal with distributive issues. We revisit some key results of the theory in a framework with partial preferences and show how one can derive partial orderings of individual and social situations. (joint work with Marc Fleurbaey)
Wednesday, 8 May
No Choice Group due to Lakatos Award week
Wednesday, 1 May, 5.30pm - 7pm
Elias Tsakas (Maastricht University, Economics)
Pairwise epistemic conditions for Nash equilibrium
We provide an epistemic foundation for Nash equilibrium in terms of pairwise epistemic conditions locally imposed on only some pairs of players. Our main result considerably weakens not only the standard sufficient conditions by Aumann and Brandenburger (1995), but also the subsequent generalization by Barelli (2009). Surprisingly, our conditions neither require nor imply mutual belief in rationality. (joint work with Christian Bach)
Wednesday, 20 March, 5.30pm - 7pm
(organised in collaboration with STICERD)
Marion Ott (RWTH Aachen University, Economics)
Hawk-Dove games on networks: an experimental study
How do people behave when interacting with several others in a network? We investigate this question for an experimental Hawk-Dove game in a network. Each subject is in a group of six players. The experiment is run in continuous time. Every subject can at any time change her links to others and her (single) action in a Hawk-Dove game, which she plays bilaterally with each of her linked partners. A subject incurs a cost per unit time for each (current) link she has established. Do subjects act myopically, maximising their short-term gains, or do they play in a forward-looking manner, anticipating how others will respond to their choices? The results indicate that forward-looking behaviour generally prevails, but that better-response (short-term) behaviour sometimes occurs. The prevailing forward-looking behaviour appears to be consistent with a choice rule which assumes that norms exist regarding who establishes and thus pays for links, and that players take these norms into account when deciding on their action and links.
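For readers unfamiliar with the stage game, the bilateral Hawk-Dove payoffs and the per-link cost can be sketched as follows. The values of V, C, and the link cost are placeholders of my own; the abstract does not state the experimental parameters.

```python
V, C = 4, 6  # resource value and fight cost (placeholder values, with V < C)

def hd_payoff(me, other):
    # My payoff from one bilateral Hawk-Dove interaction.
    if me == "H":
        return (V - C) / 2 if other == "H" else V
    return 0 if other == "H" else V / 2

def net_payoff(action, neighbour_actions, link_cost=1):
    # The same single action is played against every linked partner,
    # and each current link costs link_cost per unit of time.
    return (sum(hd_payoff(action, a) for a in neighbour_actions)
            - link_cost * len(neighbour_actions))

# Myopic comparison for a player linked to two Doves and one Hawk:
print(net_payoff("H", ["D", "D", "H"]))  # 4 + 4 - 1 - 3 links = 4.0
print(net_payoff("D", ["D", "D", "H"]))  # 2 + 2 + 0 - 3 links = 1.0
```

A myopic better-responder in this position would play Hawk; the experiment asks whether subjects instead look ahead to how partners will re-link and re-act in response.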
Wednesday, 13 March, 5.30pm - 7pm
No Choice Group due to Comte Lectures
Wednesday, 6 March, 5.30pm - 7pm
Special double session for PhD students
Conceptual relation between the agent and her action: three incompatible claims
This paper clarifies the conceptual relation between action and agency, and argues for the possibility of proxy agency and group agency. I present a set of commonly held conditions and show that they are jointly inconsistent, so that one of them must be abandoned. Once one of the conditions is given up, the possibility of proxy agency and group agency opens up, or so I will argue. I discuss the argument with cases of individual and collective action.
An Imposing Innocent Threat and You: Why It's a Toss-Up
Imagine that, through no fault of your own, you find yourself at the bottom of a deep well. Thugs have picked up an innocent person -- call him Bob -- and have thrown him down the well. Bob is now falling towards you. If you do nothing, your body will cushion Bob's otherwise lethal fall. This will guarantee his survival, but it will kill you. If you shoot your ray gun, you vaporize and kill Bob, thereby saving your life. Are you morally permitted to shoot your ray gun?
While most people believe that self-defence against an innocent threat such as Bob is permissible, moral philosophers remain deeply divided on this issue. In this paper, I contend that arguments both for and against the permissibility of shooting Bob rest on a misconception, in that they take the existence of an important asymmetry between Bob and you for granted. That way, they miss the fundamental symmetry that in my view characterizes the situation: both Bob and you happen to find yourselves, through no fault of your own, in a situation in which you can survive only if you kill an innocent person.
In this paper, I argue that despite certain differences, we are justified in focusing on the similarities between Bob and you, and that the right thing to do when deciding who should live is thus to flip a coin. By flipping a coin, the decision-maker ensures that an indivisible good -- continuing one's life -- is distributed as fairly as possible without being wasted. The central idea of this paper is that flipping a coin is the only solution to the problem of self-defence against innocent threats that is respectfully considerate of both the threat and his potential victim.
Wednesday, 27 February, 5.30pm - 7pm
David Etlin (Department of Philosophy, University of Groningen)
Affective Beliefs, Cognitive Desires
Wednesday, 20 February, 5.30pm - 7pm
(special session with CPNSS)
Stathis Psillos (Department of History and Philosophy of Science, University of Athens)
Realism as a Historical Thesis
In his widely circulated and discussed, but still unpublished, manuscript Realism and Scientific Epistemology, Richard Boyd (1971) viewed scientific realism as an historical thesis about the “operation of scientific methodology and the relation between scientific theories and the world”. As such, realism is not a thesis only about current science; it is also a thesis about the historical record of science: it claims that there has been convergence to a truer image of the world. History, however, became a serious player in the scientific realism debate in the 1980s with the advent of the pessimistic induction, which aimed to undermine realism.
It was not always like this! The realism battle has been fought twice over, as it were. The first time, it took place mostly in the European continent in the beginning of the twentieth century. The battlefield back then concerned the prospects of the atomic conception of matter and it took shape with the ‘bankruptcy of science’ debate. The major philosophical views that emerged were—to a large extent—responses to historical challenges to the operation and the limits of scientific methodology.
In this talk, I review the realism debate at the beginning of the twentieth century, looking in detail into the ‘bankruptcy of science’ controversy that took place in France towards the end of the nineteenth century, and examine the role of history in it.
Wednesday, 13 February, 5.30pm - 7pm
Peter Sozou (CPNSS, LSE)
When common interest and competition collide
Where individuals have a common interest, they may be expected to help each other. In biology, relatives have a common (genetic) interest in each other's reproductive success. In economics, a common interest occurs where payoffs are structured in such a way that the success of one individual (or agent) leads to a positive payoff for another. Conversely, where there is a contest for a limited resource, an individual may benefit from harming its competitors. In this talk, I will consider situations in biology and economics in which individuals with a common interest are in a competitive contest. I show that this can lead to outcomes where they tend to help each other, outcomes where they tend to harm each other, and to asymmetric outcomes in which A helps B while at the same time B harms A.
FRIDAY, 8 FEBRUARY, 2pm - 3.30pm
Franz Dietrich (CNRS, Paris, and UEA, U.K.) and Christian List (LSE, U.K.)
We introduce a new way to rationalize an agent's choice behaviour. A 'reason-based rationalization' explains behaviour in terms of the properties of the options and/or the context which are motivationally salient. This allows one to rationalize several kinds of 'paradoxical' behaviour. Behaviour can, to some extent, reveal which properties are motivationally salient, and allows us to distinguish between two kinds of context-sensitivity. Under one kind, the agent has context-related motivation, i.e., cares about the context; there is nothing ‘irrational’ about this kind per se. Under the other kind, the agent has context-dependent motivation, i.e., motivation which changes with the context. We axiomatically characterize the behavioural implications of several forms of reason-based rationalization, including reason-based rationalization with arbitrary motivation, with context-unrelated and/or context-independent motivation, and with 'revealed' motivation.
Wednesday, 30 January, 5.30pm – 7pm
Ivan Moscati (University of Insubria, Varese, Italy, and Bocconi University, Milan, Italy, Economics)
How cardinal utility entered economic analysis during the Ordinal Revolution
The paper shows that cardinal utility entered economic analysis during the Ordinal Revolution initiated by Pareto and not, as many popular histories of utility theory assume, before it. Cardinal utility was the outcome of a discussion begun by Pareto about the capacity to rank transitions among different combinations of goods. The discussion simmered away during the 1920s and early 1930s, underwent a decisive rise in temperature between 1934 and 1938, and continued with some final sparks until 1944. The paper illustrates the methodological and analytical issues and the measurement-theoretic problems, as well as the personal and institutional aspects that characterized this debate. Many eminent economists of the period contributed to it, with Samuelson in particular playing a pivotal role in defining and popularizing cardinal utility. Based on archival research in Samuelson’s papers at Duke University, the paper also addresses an issue of priority associated with the mathematical characterization of cardinal utility.
Wednesday, 23 January, 5.30pm – 7pm
Darren Bradley (City College New York, Philosophy)
Four problems about self-locating belief
In this article I defend the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. I will argue that all four problems have the same structure, and I give a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. I will argue that the troublesome feature of all these cases is not self-location but selection effects.
Wednesday, 16 January, 5.30pm – 7pm
No Choice Group due to other departmental activities
Wednesday, 12 December, 5.30pm – 7pm
Don Fallis (University of Arizona, School of Information Resources)
Kant versus Skyrms on Universal Deception
In the Groundwork, Immanuel Kant famously argued that universal deception is impossible. More precisely, he argued that it would be self-defeating for everyone to follow a maxim of lying whenever it is to her advantage. According to Kant, if everyone followed such a maxim, we would not trust what anybody said and there would be no point in lying.
In his recent book Signals, Brian Skyrms makes use of David Lewis’s notion of a signaling game in an attempt to show that universal deception is not always futile. First, he argues that there are signaling games in which, whenever it would be beneficial to deceive the receiver, the sender sends a signal that deceives the receiver. In addition, Skyrms argues that there are even signaling games in which the sender always sends a signal that deceives the receiver.
In this talk, I argue that Skyrms fails to show that Kant was wrong about the impossibility of universal deception. Since his analysis of deception is too broad, his purported counter-examples to Kant are not actually instances of universal deception. However, utilizing a more plausible analysis of deception, I suggest that there are indeed signaling games in which the sender always sends a signal that deceives the receiver.
Wednesday, 5 December, 5.30pm – 7pm
Till Grüne-Yanoff (KTH) and Sven Ove Hansson
Exclusion Criteria for Adaptive Preferences
A preference is said to be adaptive if it is adjusted in response to an agent's set of feasible options. That a preference is adaptive is often taken to be a sufficient criterion for it not to count towards any welfarist account of the good. We disagree. According to our intuition, while some preferences should be excluded because of their adaptiveness (for example, Stockholm-syndrome preferences), other preferences might be good candidates for such a welfarist account exactly because they are adapted to the agent's circumstances. Examples of the latter include preferences adapted to one's talents and abilities.
In order to make this intuition precise, we offer an account that determines which adaptive preferences must be excluded on procedural grounds. In particular, based on a framework that relates preference over states to preferences over worlds, we argue that adaptive preferences that do not satisfy certain reconstructability conditions within this framework should be excluded. Preferences that satisfy these conditions are good candidates for a welfarist account of the good, subject to substantial constraints.
Friday, 30 November, 2.00pm – 7pm
Choice group workshop with James M. Joyce
2.00pm – 3.00pm: Seamus Bradley and Katie Steele "Sophisticated Imprecise Choice"
3.10pm – 4.10pm: Foad Dizadji-Bahmani "The Objection from Verismilitude: A Challenge for Joyce"
4.20pm – 5.20pm: Christian List "TBC"
5.30pm – 7.00pm: James M. Joyce "TBC"
Wednesday, 28 November, 5.30pm – 7pm
No seminar due to workshop on Friday.
Wednesday, 21 November, 5.30pm – 7pm
Stefan Schubert (Lund, Philosophy)
Interpreting radical coherentism
According to a radical version of coherentism, testimonies that lack any justification or warrant individually are nevertheless justified to at least some degree if they cohere to a sufficient degree with each other. In recent years, a number of Bayesian coherence theorists (e.g. Michael Huemer) have given probabilistic interpretations of this claim. The standard view is that these endeavours show that radical coherentism is false: coherence cannot create justification out of thin air, as it were, but can only amplify pre-existing justification or warrant.
In a recent article, Michael Huemer has criticized this conclusion, arguing that his previous probabilistic interpretation of the radical coherentist’s claim was wrong and that given the interpretation that he now prefers, radical coherentism is at least possible (he stops short of claiming it to be true). In this talk, I offer a detailed critique of Huemer’s argument. It is argued that Huemer’s new probabilistic interpretation is misguided and must be replaced by an interpretation similar, but not identical, to his old position. Building on this interpretation, I give a defence of the received view which, as I will argue, is considerably stronger than Huemer’s previous defence of that view. I will also give some general remarks on Bayesian coherence theorists’ interpretations of various coherentist notions and claims.
Wednesday, 14 November, 5.30pm – 7pm
Joe Mazor (LSE Philosophy)
Rescue, Fair Play, and Global Justice
In his famous article "Famine, Affluence, and Morality" Peter Singer argues for an obligation to help the global poor. He gives the example of a drowning person in a pond and asks the following question: If we are obligated to aid the person in the pond, despite a significant cost to ourselves of muddying our clothes, don't we have a similar obligation to help the starving foreigner hundreds of miles away? Although Singer concedes that we are more psychologically motivated to help the drowning person, he argues that there is no morally significant difference between the two cases. In my talk, I intend to call Singer's analogy into question.
I argue that we have an obligation of fair play to help the drowning person that we do not have in the case of the distant foreigner. The psychological fact that humans are more motivated to rescue those who are suffering in their sight creates a reasonable expectation on the part of the potential rescuer that the drowning victim would have helped the rescuer if the tables were turned. In effect, potential rescuers benefit from a type of implicit insurance scheme with those with whom they share certain psychologically potent ties. Benefiting from these implicit insurance schemes generates moral obligations of fair play to reciprocate. I consider the issue of the moral arbitrariness of these psychological motivations, and argue that they can nevertheless serve as a basis for significant moral duties. Given that such psychologically potent ties are absent between us and foreigners, we do not have a reasonable expectation that they would help us if the tables were turned, and thus our obligations of rescue are not as strong in the context of global justice.
Wednesday, 7 November, 5.30pm – 7pm
Richard Bradley (LSE Philosophy)
Ellsberg's Paradox and the Value of Chance
In this talk I will explore two ideas. First, I will consider whether the typical pattern of choices exhibited by agents facing Ellsberg's problem can be rationalised by the hypothesis that they are risk averse with respect to the chances of winning money, a possibility that arises from a natural reframing of the problem they face. Second, I will explore the consequences of the more general hypothesis that chances matter, not just instrumentally in virtue of the outcomes with which they are associated, but also intrinsically. In particular I will consider the possibility that the hypothesis allows for a broadly welfarist explanation of the widely observed fact that we prefer to distribute an indivisible good between equally deserving individuals by using a lottery to determine who gets it (over simply giving it to one of them).
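The first idea can be illustrated with a toy calculation: a concave value function over chances reproduces the typical Ellsberg choice pattern in the classic one-urn setup (30 red balls, 60 black-or-yellow in unknown proportion). The function v and the crude two-point model of the ambiguity are my own sketch, not the talk's analysis.

```python
def v(p):
    # Concave value of a chance p of winning the prize:
    # risk aversion with respect to chances, not outcomes.
    return p ** 0.5

# Crude two-point model of the ambiguity: the 60 unknown balls are
# either all black or all yellow, with equal probability.
bet_red = v(30 / 90)                        # known 1/3 chance of winning
bet_black = 0.5 * v(0) + 0.5 * v(60 / 90)   # uncertain chance: 0 or 2/3
bet_red_yellow = 0.5 * v(30 / 90) + 0.5 * v(90 / 90)  # 1/3 or 1
bet_black_yellow = v(60 / 90)               # known 2/3 chance of winning

print(bet_red > bet_black)                  # typical Ellsberg choice 1
print(bet_black_yellow > bet_red_yellow)    # typical Ellsberg choice 2
```

Both comparisons come out the typical Ellsberg way: the agent prefers the bet with the known chance in each pair, which is the pattern that violates expected utility with a single subjective probability.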
Wednesday, 31 October, 5.30pm – 7pm
Philippe Mongin (CNRS, Paris)
The Utilitarian Relevance of the Aggregation Theorem
(Based on joint work with Marc Fleurbaey, Princeton University)
Harsanyi invested his Aggregation Theorem and Impartial Observer Theorem with utilitarian sense, but Sen redescribed them as "representation theorems" with little ethical import. This negative view has gained wide acquiescence among economists. Against it, we support the utilitarian interpretation by a novel argument relative to the Aggregation Theorem. We suppose that an exogenously defined utilitarian observer evaluates social states by the sum of individual utilities, and we apply the assumptions of the Aggregation Theorem to this observer. Adding technical conditions from microeconomics, we conclude that any social observer who is subjected to the assumptions of the Aggregation Theorem evaluates social states in terms of a weighted variant of the utilitarian sum. Hence, pace Sen, utilitarianism and the Aggregation Theorem are mutually relevant. The argument is conveyed by means of a main theorem, an algebraic refinement of this theorem, and a variant in which the utilitarian sum is unconventionally defined on lotteries. Each result encapsulates Harsanyi's original one as a particular step.
Wednesday, 24 October, 5.30pm – 7pm
Special Double Session with Soroush Rafiee-Rad (CPNSS Visiting Researcher) and Orri Stefansson (LSE, Philosophy)
Soroush Rafiee-Rad (CPNSS Visiting Researcher)
Reasoning From Inconsistency: A first order account
Dealing with inconsistency has always been an issue for mathematical logic, and there have been several attempts to address it, both in the literature on belief revision and in paraconsistent logics. Although the trivialisation of the consequence relation in the presence of inconsistency can be considered a virtue when working in the mathematical universe, there are certain aspects of reasoning where it is a vice. This is the case, in particular, when inconsistencies are associated not with the world but with the agent's knowledge of the world. In such cases one might expect the pathological scope of an inconsistency to be limited to the part of the agent's knowledge relevant to it. In this talk we investigate a probabilistic consequence relation for first-order languages that enables us to derive meaningful logical consequences from an inconsistent knowledge base. We adopt a probabilistic semantics for the consequence relation and provide a sound and complete proof system for this semantics.
Orri Stefansson (LSE, Philosophy)
The desirability of outcomes often depends not only on what is the case but also on what could have been. These dependencies between facts and counterfacts create well-known problems for standard decision theories; in particular, they are inconsistent with the state-independence axioms of these theories. In this paper I show that if we extend the Boolean algebra over which Richard Jeffrey's desirability function is defined so that it includes counterfactual propositions, then we can model the aforementioned dependencies in a way that makes preferences based on them consistent with the maximisation of an expected value: desirability. I also argue that my suggested way of modelling these preferences improves on earlier attempts, which have either failed to give a unified account of what I call "counterfactual desirability", or failed to explicitly model the importance of counterfactuals in the reasoning underlying these preferences.
Wednesday, 17 October, 5.30pm – 7pm
Ira Kiourti (Institute of Philosophy)
Paths Toward Genuine Impossible Worlds
Motivated by a rising demand for impossible worlds in philosophical theorising, I propose two alternative logical frameworks for genuine impossible worlds, alongside the possible worlds proposed by David Lewis. To do so, I must counter Lewis' reductio against such worlds on the basis that a contradiction in the scope of the modifier ‘at w’ amounts to a plain contradiction tout court – an unacceptable consequence. I propose two alternative solutions: (1) The first simply abandons classical logic when talking about the pluriverse. What remains to be shown is whether this move is justified. I show that it can be justified using Lewis’ very own methodology and that the resulting inconsistencies can be (a) systematised and (b) quarantined for most usual intents and purposes. (2) The second path presses the point that Lewis begs the question against impossibilia in his assumption that the classical truth-at-w conditions for negation 'at w (–A) iff –(at w A)’ hold in the extended theory. Having pointed this out, the challenge is to find an impossibilia-friendly conception of negation at impossible worlds, which nonetheless preserves Lewis' classical, extensional theoretical framework. I close by showing that further challenges against genuine impossibilia that turn on questions of consistency are special instances of Lewis’ argument, hence admit similar treatment.
Wednesday, 10 October, 5.30pm – 7pm
Alexandra Hill (University of Manchester, Philosophy)
The Chinese Principle of Analogy in Inductive Logic
Attempts by Carnap and others to incorporate reasoning by analogy into inductive logic have met with little success. In this talk I will present joint work with Jeff Paris in which we propose a new formalisation of reasoning by analogy inspired by ancient Chinese rational thought. Whereas previous work has only considered analogies that derive from the sharing of properties, ours is based on the idea of structural similarity. We show that the Chinese Principle follows from certain symmetry requirements on our language, hence is widely satisfied and consistent with many other popular principles of rational belief formation.
Wednesday, 3 October, 5.30pm – 7pm
Gloria Origgi (CNRS, Paris)
Kakonomics: The strange preference for low quality and its norms
Standard game-theoretical approaches posit that, whatever people are trading (ideas, services, or goods), each one wants to receive High-quality work from others. Let's stylize the situation so that goods can be exchanged only at two quality-levels: High and Low.
Kakonomics describes cases where people not only have the standard preference to receive a High-quality good while delivering a Low-quality one (the standard sucker's payoff) but actually prefer to deliver a Low-quality good and receive a Low-quality one; that is, they connive on a Low-Low exchange. We posit that this kind of interaction is sustained by an unusual, yet possible, preference ranking which differs from that associated with the Prisoners’ Dilemma and similar games, whereby self-interested rational agents prefer to dish out low quality in exchange for high quality. While equally ‘lazy’, agents in our low-quality worlds are oddly ‘pro-social’: rather than maximizing their raw self-interest, they prefer to receive low-quality goods and services, provided that they too can deliver low quality in exchange without embarrassment.
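The contrast between the standard and the kakonomic preference ranking can be made explicit in a small sketch. The orderings below are my stylization of the abstract, not the authors' formal model; outcomes are coded by two letters, the quality I deliver followed by the quality I receive.

```python
# Standard (Prisoners'-Dilemma-style) ranking over quality exchanges:
# exploiting > mutual High > mutual Low > being exploited.
pd_rank = ["LH", "HH", "LL", "HL"]
# Kakonomic ranking: the mutual Low exchange is preferred outright.
kako_rank = ["LL", "LH", "HH", "HL"]

def prefers(ranking, a, b):
    # True if outcome a is ranked strictly above outcome b.
    return ranking.index(a) < ranking.index(b)

# A standard agent wants to deliver Low while receiving High...
assert prefers(pd_rank, "LH", "LL")
# ...whereas a kakonomic agent connives on the mutual Low exchange.
assert prefers(kako_rank, "LL", "LH")
print("rankings match the abstract's description")
```

Both agents rank "HL" (delivering High while receiving Low) last; what distinguishes the kakonomic agent is only the promotion of "LL" to the top.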