2011-2012

Summer Term 2012

Wednesday, 27 June, 5.30pm – 7pm

Natalie Gold (Kings College, Philosophy)

Moral Judgments and Decisions in Trolley Problems

Hypothetical dilemmas, such as trolley problems, are used by philosophers and psychologists to probe intuitions about whether it is morally permissible to harm one person in order to prevent a greater harm to others. The dilemmas usually involve life-and-death decisions. I report the results of an experiment using hypothetical dilemmas in which the domain and severity of harm are varied, and of an experiment using a real-life trolley problem involving economic harms, which includes a cross-cultural comparison of British and Chinese behaviour and judgments. Philosophical ramifications of these experiments include implications for methodology in ethics and for varieties of moral internalism.

Wednesday, 20 June, 5.30pm – 7pm

Federico Picinali (LSE, Law)

Two meanings of 'reasonableness': explaining the fixity of the reasonable doubt standard

The 'reasonable doubt standard' is the controlling standard of proof for criminal fact finding in several common-law and civil-law countries. Drawing on decision theory, some scholars have argued that the stringency of this standard should vary according to the circumstances of the case, most notably the gravity of the crime and the likely punishment for it. While acknowledging that the adoption of the standard is generally justified on decision-theoretic grounds both in the academic literature and in judicial decisions, the paper contends that the standard does not lend itself to the 'sliding-scale' approach mandated by this theory. The basis of this claim lies in an investigation of the concept of 'reasonableness'. While this concept is mostly employed – and studied as it operates – in the domain of 'practical-conative reasoning', scant attention has been given to its functioning in the domain of 'theoretical-cognitive reasoning'. Unlike in the former domain, in the latter 'reasonableness' does not depend on value judgments regarding the outcomes of a decisional process, since the undertaking is purely descriptive. Thus, given that criminal fact finding involves theoretical reasoning, the question of whether a doubt is reasonable in this enterprise is indifferent to decision analysis. The upshot is that, owing to the Janus-faced concept of 'reasonableness', some scholars mistakenly understand the reasonable doubt standard as a practical, rather than a theoretical, rule.

Wednesday, 13 June, 5.30pm – 7pm

David W. Chambers (University of the Pacific)

Explorations in the Game of Life

The normative approach to ethics, the dominant view, has featured a 2500-year argument over which set of principles everyone should follow, and it interprets bad actions as the failure of one or another mechanism. All normative approaches begin from the standpoint of unique individuals claiming to have access to moral truth, often assumed to be universal truth. As an alternative, I have been exploring a naturalistic approach based on game theory, in which moral agents search for actions that mutually maximize expected future worlds. I have used traditional philosophical analysis and Markov-chain replicator simulation modeling. Preliminary results suggest:

  1. Moral choice in concrete situations is a satisfactory alternative to rational reflection on ethical theory as a basis for advancing human flourishing.
  2. Equilibrium, which always exists (per Nash), is a workable alternative to norms, agreement on which may very well not be possible (per Arrow).
  3. Equilibrium has a strong claim to being self-enforcing in communities.
  4. The standard Common Knowledge assumption of game theory can be replaced with a social empathy rule.
  5. "Cheating" involves contempt, deception, coercion, reneging, and collusion (each of which can be defined operationally as various corruptions of the normal play of games, the sapient feature that distinguishes humans from other animals).
  6. Normative theories that allow individual agents to decide what is right for others or in general are excessively paternalistic in denying moral agency to others.
  7. Individual moral agency appears always to produce inferior results, in the general case and over the long run, compared with systems built on mutual moral agency.
  8. Some level of cheating seems to be characteristic of human communities.
  9. Communities with the possibility of cheating flourish better, generally and in the long run, than communities where cheating is not permitted.

Significant issues still to be addressed include understanding how agents frame choices (as opposed to how researchers define them) and modeling how individuals and communities evolve over time when specific agents occupy roles in more than one overlapping game with different structures.
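As an illustration of the kind of replicator-dynamics simulation the abstract mentions, here is a minimal sketch; the payoff matrix, initial strategy shares, and function names are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of discrete-time replicator dynamics (illustrative only).
def replicator_step(shares, payoff):
    # Fitness of each strategy against the current population mix.
    fitness = [sum(payoff[i][j] * shares[j] for j in range(len(shares)))
               for i in range(len(shares))]
    avg = sum(f * x for f, x in zip(fitness, shares))
    # Strategies with above-average fitness grow in frequency; others shrink.
    return [x * f / avg for x, f in zip(shares, fitness)]

# Example: a two-strategy coordination-style game (payoffs are made up).
payoff = [[3, 0],
          [0, 2]]
shares = [0.5, 0.5]
for _ in range(50):
    shares = replicator_step(shares, payoff)
print([round(s, 3) for s in shares])  # the population settles on one strategy
```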

Wednesday, 6 June, 5.30pm – 7pm

No Choice Group meeting.

Thursday, 7 June to Saturday, 9 June

Workshop on Reductionism and Non-Reductionism in the Social Sciences

Wednesday, 30 May, 5.30pm – 7pm

Book Launch Event: Wulf Gaertner and Erik Schokkaert, Empirical Social Choice

Wednesday, 23 May, 5.30pm – 7pm

No Choice Group meeting.

Wednesday, 16 May, 6.30pm – 8pm

The Auguste Comte Memorial Lectures

Joshua Cohen (Stanford, Political Science, Philosophy, and Law)

Mobile for Development, Human-Centred Design, Global Justice: Reflections on Global Justice

Venue: Old Theatre, Old Building

Tuesday, 15 May, 6.30pm – 8pm

The Auguste Comte Memorial Lectures

Joshua Cohen (Stanford, Political Science, Philosophy, and Law)

Mobile for Development, Human-Centred Design, Global Justice: Mobile for Development Meets Human-Centred Design

Venue: Old Theatre, Old Building

The lecture will be followed by a reception in the Atrium.

Monday, 14 May, 5.30pm – 7pm, Room CON.1.06

Geoff Hodgson (Hertfordshire, Business School)

What are institutions?

The importance of institutions is now widely appreciated in economics, politics, sociology, geography, ecology and other disciplines. Unfortunately there is not yet full agreement on what an institution is. Although there are several different approaches to the understanding of institutions, it is possible to detect some shared themes. It is proposed that institutions are essentially systems of rules. The broadness of this definition is not an impediment if different kinds of institution are distinguished. In turn this definition raises the question of the nature of social rules.

Wednesday, 9 May, 5.30pm – 7pm

Adam Oliver (LSE, Health, Social Policy)

Are People Consistent When Trading Time For Health?

The conventional, or 'standard', time trade-off (TTO) procedure asks respondents to trade off fewer life years for better health. It is possible to 'reverse' the procedure and ask respondents to trade off health for more life years. Theoretically, these two procedures should generate the same TTO values for any given health state. Moreover, the standard TTO is a very abstract exercise. It is possible to revise the TTO instrument so that it better reflects respondents' actual life expectancies. As with the standard versus reverse TTO, the standard and revised (to better reflect reality) instruments ought to generate consistent values. The study reported here examines two questions: (i) due to loss aversion, does the standard TTO give significantly higher values than the reverse TTO?; (ii) will a reframing of the TTO so as to better reflect respondents' real life expectancies significantly affect the elicited values? The results for (i) suggest that the answer is yes, and those for (ii) suggest the answer is no. The article then asks whether we should control for the effect of loss aversion in the TTO. The answer depends on whether we want to move towards internal consistency in the instrument, or whether we think that loss aversion legitimately captures the extent to which people want to trade off life years for health. Given that the TTO is used extensively for public policy purposes, this is a debate that ought to be had.
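For readers unfamiliar with the instrument, the standard TTO value of a health state is conventionally computed as follows (the notation is mine, not taken from the abstract): if a respondent is indifferent between t years in health state h and x years in full health, the value assigned to h is

$$
v(h) = \frac{x}{t}.
$$

Question (i) above then turns on whether this value is inflated when the years given up are framed as a loss rather than the extra years being framed as a gain.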

Wednesday, 2 May, 5.30pm – 7pm

John Weymark (Vanderbilt, Economics)

Extensive Social Choice and the Measurement of Group Fitness in Biological Hierarchies

Based on joint work with Walter Bossert (Université de Montréal) and Chloe X. Qi (Vanderbilt University)

Michod, Viossat, Solari, Hurand, and Nedelcu (Journal of Theoretical Biology, 2006) introduced a measure of group fitness for a multicellular organism that is the product of a measure of group viability and a measure of group fecundity, which are in turn equal to the average viability and the average fecundity of the individual cells, respectively, and used it to analyze the unicellular-multicellular evolutionary transition. Samir Okasha (Biology and Philosophy, 2009) has used social choice theory to analyze the problem of measuring group fitness in a biological hierarchy. He reinterprets a social welfare functional as a group fitness functional whose arguments are fitness functions for the individual cells in a multicellular organism and whose output is an ordering of states of the world in terms of the overall fitness of the organism. We argue that it is more appropriate to model this measurement problem using extensive social choice theory, with the individual viability and fecundity functions used as inputs into the group fitness functional. Using this formulation of the problem, we provide an axiomatization of the ordering underlying the MVSHN measure. We use this axiomatization to discuss the appropriateness of the MVSHN measure for analyzing evolutionary transitions and to provide an alternative account to Okasha's of the decoupling of individual and group fitnesses during such a transition.
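Rendered schematically (the notation is mine), the MVSHN measure described above, for an organism of n cells with individual viabilities v_i and fecundities b_i, is

$$
V = \frac{1}{n}\sum_{i=1}^{n} v_i, \qquad
B = \frac{1}{n}\sum_{i=1}^{n} b_i, \qquad
F = V \cdot B,
$$

where V and B are group viability and group fecundity and F is group fitness.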

Wednesday, 25 April, 5.30pm – 7pm

H. Peyton Young (Oxford, Economics)

The dynamics of social innovation

Social norms and institutions are mechanisms that facilitate coordination between individuals. A social innovation is a novel mechanism that increases the welfare of the individuals who adopt it compared with the status quo. We model the dynamics of social innovation as a coordination game played on a network. Individuals experiment with a novel strategy that would increase their payoffs provided that it is also adopted by their neighbors. The rate at which a social innovation spreads depends on three factors: the topology of the network and in particular the extent to which agents interact in small local clusters, the payoff gain of the innovation relative to the status quo, and the amount of noise in the best response process. The analysis shows that local clustering greatly enhances the speed with which social innovations spread. It also suggests that the welfare gains from innovation are more likely to occur in large jumps than in a series of small incremental improvements.

The paper is available here.

Lent Term 2012

Wednesday, 14 March, 5.30pm – 7pm

Alvin Goldman (Rutgers)

Democracy, Knowledge, and Power

What is the relationship between democracy and the epistemic? Some epistemic approaches to democracy (those inspired by the Condorcet Jury Theorem, for example) seem to suggest that the fundamental goal or virtue of democracy should be understood in terms of truth-conduciveness or knowledge-conduciveness. I shall raise problems for this approach. I shall also argue, however, that once the primary virtue or value of democracy is identified, we can easily see that knowledge (of specifiable kinds) is absolutely essential to the kind of success of which democracy is capable. This primary virtue or value is associated with power, understood as the ability (or comparative ability) to satisfy one's preferences. The task for social epistemology in this arena is to identify (and/or design) the institutions that can best promote the requisite type of knowledge.

Wednesday, 7 March, 5.30pm – 7pm

Ben Ferguson (LSE)

Self-Defeating Theories of Exploitation

In this paper I show that theories of exploitation that depend upon 'rigid' duties toward the poor are self-defeating.

I consider Ruth Sample's (2003) account of exploitation where she characterises exploitation as a transaction that degrades---fails to respect---one of the transactors. Transactions may be degrading by:

[D1] Taking advantage of an injustice.

[D2] Neglecting what is necessary for that person's well-being or flourishing.

I present two game theoretic cases where D2 provides what Pogge (1992) terms 'reward incentives' for behaviour that leads to a moral loophole. In the first case, a sequential game, I show that one player may use D2 to 'exploit'---take unfair advantage of---the other player. In the second case, a simultaneous game, I show that the presence of D2 creates a prisoner's dilemma. Thus, when both players respond to the reward incentives created by D2, each does worse. In this sense I claim that D2 fails Kant's test of universalisation.

In the final section, I offer a brief diagnosis of the cause of the moral loophole as a failure to account for personal responsibility and suggest that successful accounts of exploitation must be sensitive to personal responsibility.

Wednesday, 29 February, 5.30pm – 7pm

Peter Dietsch (Université de Montréal, Philosophy)

Fiscal obligations to redistribute in an international setting

In order to think about obligations to redistribute, one needs to have a normative justification for the fiscal prerogatives of states to start with. The paper builds on a mixed justification of the state, appealing to both political participation and distributive justice. The desirable fiscal autonomy of states is legitimate if it respects a global justice constraint, that is, a minimal threshold of justice that is consensual among different theories of justice. Against this background, the paper asks two central questions. First, what fiscal obligations to redistribute do states have towards others? Second, how should they discharge these obligations?

States can incur fiscal obligations to redistribute towards others in two ways: First, because the global justice constraint is violated; and second, because the rules of international taxation (or the lack thereof) do not respect the fiscal autonomy of some states that should therefore be compensated. Taking the interdependence of fiscal policy between states as the phenomenon that triggers obligations makes the theory of global justice defended here a member of the family of views that go beyond mere humanitarian duties but stop short of egalitarian duties. One key question in this context is how to determine the magnitude of fiscal obligations to redistribute between states.

Part 2 distinguishes three potential recipients of a fiscal transfer: another state, a non-governmental organisation (NGO), or individual citizens of other countries. Traditional state-to-state transfers are appropriate in cases where there is no violation of the global justice constraint or, if there is one, where the recipient government is willing to spend the transfer in the most effective way to remedy the situation. However, in cases where there is a violation of the global justice constraint and the recipient government is not cooperative, donor states have to look for alternative channels to redistribute. Supporting NGOs or even direct transfers to individual citizens of the other country represent alternative ways to discharge their fiscal obligations to redistribute.

Wednesday, 22 February, 5.30pm – 7pm

Special Double Session with Wulf Gärtner (University of Osnabrück) and Yongsheng Xu (Georgia State University) and Richard Bradley (LSE)

Wulf Gärtner (University of Osnabrück) and Yongsheng Xu (Georgia State University)

A General Scoring Rule

This paper provides an axiomatic study of a ranking rule of the following type: each voter places k candidates into n categories, with ranks from n down to 1 attached to these categories; the candidate(s) with the highest aggregate score is (are) the winner(s). We show that the rule is characterized by a monotonicity condition and a multi-stage cancellation property.
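A minimal computational sketch of the aggregation step described above; the data layout and function name are my own assumptions, not part of the paper.

```python
# Each ballot assigns every candidate a category rank from n (best) to 1 (worst);
# a candidate's aggregate score is the sum of the ranks it receives.
def winners(ballots):
    totals = {}
    for ballot in ballots.values():
        for candidate, rank in ballot.items():
            totals[candidate] = totals.get(candidate, 0) + rank
    best = max(totals.values())
    return {c for c, score in totals.items() if score == best}

# Example: two voters, three candidates, n = 3 categories.
print(winners({
    "voter1": {"a": 3, "b": 2, "c": 1},
    "voter2": {"a": 2, "b": 3, "c": 1},
}))  # {'a', 'b'} -- both tie on an aggregate score of 5
```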

Richard Bradley (LSE)

Transitivity and Preference-based Choice

What is required of the agent who makes her choices on the basis of her preferences? What can be inferred about her preferences from the choices she makes? In considering these questions, it is worth distinguishing between the choices that are permissible given her preferences and those that she actually makes. When her preferences are incomplete, the former set will constrain but not determine the latter. Consequently we should not insist that observed choice be rationalisable in terms of a complete preference relation. But should it be transitive? In this paper I give necessary and sufficient conditions on choices for their rationalisability in terms of a Suzumura consistent preference relation, a form of consistency short of transitivity but stronger than acyclicity, and argue that this is all that rationalisation can and should deliver.
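For reference (this gloss is mine, not part of the abstract), Suzumura consistency is standardly defined as the requirement that no cycle in the relation contain a strict preference: writing P for the strict part of R,

$$
x_1\,R\,x_2,\; x_2\,R\,x_3,\;\dots,\; x_{k-1}\,R\,x_k \;\Longrightarrow\; \neg\, x_k\,P\,x_1 .
$$

Transitivity implies Suzumura consistency, which in turn implies acyclicity, matching the abstract's placement of the condition between the two.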

Wednesday, 15 February, 5.30pm – 7pm

Cancelled. No Choice Group meeting.

Wednesday, 8 February, 5.30pm – 7pm

Kai Spiekermann (LSE, Government)

Do I Want to Know? – Individual Strategic Manipulations of Belief Sets in Response to Entitlement Norms

This paper presents an experiment to test whether individuals manipulate their belief sets in order to avoid social norms and the associated compliance costs. A social norm sets out what an individual is expected to do, given that the world is in a certain state (Bicchieri 2006, ch. 1). Therefore, a social norm exerts normative force if a subject believes that a state obtains in which the norm applies (Rabin 1995). By contrast, if the subject does not know that the world is in such a state, non-compliance may be excusable. This opens up an opportunity for strategic norm avoidance: if individuals can influence their belief sets about normatively relevant facts, then they may choose to acquire only those beliefs that help them to avoid the force of a norm and the associated compliance costs, using what is sometimes called "moral wriggle room" (Dana 2007).

Our experiment is in three stages. First, a relevant norm is made salient by informing participants about the results of a survey conducted in advance. Second, all participants play a competitive game that will sort participants into high and low performers. Third, pairs of participants play a dictator game such that the dictator knows she is a high performer, but does not know by default whether the receiver is a high or a low performer. Before playing, the dictator can optionally acquire different lotteries for information about the receiver's type.

We hypothesize that dictators tend to avoid information that the receiver is a high performer, and tend to pursue information that the receiver is a low performer. We also conjecture that the information acquired influences the giving behaviour. In contrast to previous studies, we control carefully for the norm causing the described effect, and we separate the normative and the distributional implications of the information acquired. The project raises important questions about the nature of norms, especially about how individuals avoid ethical dilemmas of compliance by choosing to stay ignorant about potentially normatively relevant facts.

Wednesday, 1 February, 5.30pm – 7pm

Nir Eyal (Harvard, Medical Ethics)

Fair chances—the very notion

Commonsense morality has it that tossing ordinary coins to allocate goods is fair, and allocation based on personal preference or flips of "loaded" coins is unfair. Here, the notions of fairness and unfairness are applied to the allocation of prospects and risks, not directly to that of outcomes. Despite commonsense intuition, I maintain that fairness never (or only sometimes) applies to the distribution of chance. My argument is that the probabilistic notions of personal prospect and risk admit of several meanings, and none allows for fairness to apply to their allocation throughout. There are examples where objective prospects are distributed in paradigmatically unfair patterns but, intuitively, no serious unfairness arises; and there are examples where subjective prospects are distributed in paradigmatically unfair patterns but there is no good explanation for any alleged unfairness.

Wednesday, 25 January, 5.30pm – 7pm

Hykel Hosni (Pisa)

Rationality under second-order uncertainty

The consensus on the inadequacy of classical bayesianism is today virtually unanimous across the multifaceted field of uncertain reasoning. Yet, when it comes to putting forward justified alternatives, the consensus suddenly disappears. The purpose of this talk is to argue that the much needed extensions of the expressive power of classical bayesianism can go hand in hand with the foundational unity provided by bayesian epistemology.

I propose to tackle the problem from the point of view of the Choice norm which lies at the heart of bayesian epistemology, namely the prescription to never make dominated choices. This allows us to replace the philosophically challenging problem of providing a satisfactory taxonomy of uncertainty with the considerably more manageable task of analysing how certain modelling features of specific choice problems justify distinct norms of rational behaviour.

I will refer to the domain of choice problems for which the classical bayesian norms are fully justified as the class of first-order uncertainty problems. The central part of the talk is devoted to showing how this approach suggests rather natural extensions of classical bayesianism to second-order uncertainty, covering ambiguity, imprecision, ignorance and vagueness. In the last part of the talk I will argue that the proposed distinction between first- and second-order uncertainty is also useful to understand why some currently popular (especially in economic theory) anti-bayesian claims are intuitively appealing, yet foundationally fallacious. This will motivate my conclusion, namely that a bayesian theory of second-order uncertainty of the kind envisaged in this talk would allow us to have our (foundational) cake, and eat it too.

Wednesday, 18 January, 5.30pm – 7pm

Luc Bovens (LSE, Philosophy)

Concerns for the Poorly-Off in Ordering Prospects - Prioritarianism: an Ecumenical Approach

I construct a calculus for the evaluation of risky prospects on the basis of ex ante and ex post interpretations of prioritarianism by Diamond, Rabinowicz, McCarthy and Fleurbaey and consider how this calculus fares with respect to a range of hard cases.
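One common way to render the two interpretations (the notation is mine, not taken from the talk): for a prospect with states s occurring with probability p(s), individual utilities u_i, and a strictly concave priority weighting g,

$$
\text{ex post:}\ \sum_{s} p(s) \sum_{i} g\big(u_i(s)\big)
\qquad\qquad
\text{ex ante:}\ \sum_{i} g\Big(\sum_{s} p(s)\, u_i(s)\Big).
$$

The two orderings can disagree about risky prospects even when they agree about sure outcomes.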

Wednesday, 11 January, 5.30pm – 7pm

Magda Osman (Queen Mary, Psychology)

What is there to learn from coincidences?

"For anybody with a less accurate account of how the world works than a modern adult, such as an early scientist or a young child, coincidences are a rich source of information as to how a theory might be revised, and should be given great attention." (Griffiths & Tenenbaum, 2007, p 215).

In psychological research, the study of coincidences mostly shows that our perceptual, decision-making and reasoning processes are generally biased when it comes to detecting and interpreting particular kinds of low-probability events. Moreover, as the comment above implies, when we have fully formed "accurate" models of the world, we should really stop concerning ourselves with coincidences as sources of information for theory revision.

Aside from the apparent negative conclusions drawn from psychological research on coincidences, there are still a number of basic unanswered empirical questions. Is coincidentality (strength of coincidence) a stable psychological construct? Do people generally agree in their judgments of coincidental events? What are the factors that inform our judgments of coincidentality? We show that when asked to judge actual self-reported coincidental experiences, people converge on stable, consistent judgments of coincidentality. Also, there appears to be a monotonic relationship between the judged probability of occurrence of a coincidence and its coincidentality rating. These findings can be interpreted from a Bayesian approach (Griffiths & Tenenbaum, 2007) that explains the statistical inference process involved in judging the strength of a coincidence. The aim here is to discuss this account and evaluate it with respect to our claims. We propose that coincidental experiences are clusters of rare events. They illustrate new pattern repetitions that an individual is prepared to observe. These particular events invite a causal explanation, but no plausible explanation can be found. More specifically: "A coincidence is a pattern repetition which is observed to be unlikely by chance but is nonetheless ascribed to chance, since the set of candidate causal hypotheses available to explain it are deemed to be even less plausible than mere chance." The fact that we experience coincidences at all is part and parcel of a sophisticated contingency learning mechanism that functions to utilize new experiences for future prediction and control purposes.

Michaelmas Term 2011

Monday, 26 September, 5.30pm – 7pm

Margaret Gilbert (U. Connecticut)

Wednesday, 5 October, 5.30pm – 7pm

EPSA. No Choice Group meeting.

Wednesday, 12 October, 5.30pm – 7pm

Mat Coakley (LSE, Government)

Interpersonal Comparisons of the Good: Epistemic, not Impossible

To evaluate the overall good of any action, policy or institutional choice we need some way of comparing the benefits and losses to those affected: we need to make interpersonal comparisons of the good/welfare. Yet skeptics have worried: (1) that such comparisons are impossible, as they involve an impossible introspection across individuals; (2) that they are indeterminate, as individual-level information is compatible with a range of welfare numbers; or (3) that they are metaphysically mysterious, as they assume either the existence of a social mind or of absolute levels of welfare when no such things exist. This paper argues, however, that we should treat this as an epistemic problem – that is, as a problem of forming justified beliefs about the overall good based on evidence about the good of individuals – and that, if we do so, these critiques can potentially be addressed. The Annex proves that, for any non-dogmatic well-defined credence function, accepting Arrow's Independence of Irrelevant Alternatives entails the possibility of incoherent beliefs. If we seek justified beliefs about overall welfare or the overall good, we should not accept Arrow's Impossibility Theorem as validly characterising our task.

Wednesday, 19 October, 5.30pm – 7pm

Richard Pettigrew (Bristol, Philosophy)

In Praise of Determinate Credences

How should we model an agent's cognitive state at a given time? The traditional Bayesian uses a single credence function; the sophisticated Bayesian uses a set of such credence functions. The sophisticated Bayesian says that there are actual agents whose cognitive state is not sufficiently determinate to be modelled by a single credence function. Moreover, she says that there are evidential situations to which the only rational response is a cognitive state that cannot be modelled by a single credence function. I defend traditional Bayesianism against both of these claims. I argue that an agent's cognitive state can always be represented by a single credence function; and that the rational response (or responses) to a given evidential situation is (or are) always given by a single credence function.

Wednesday, 26 October, 5.30pm – 7pm

Conal Duddy (Research Fellow, LSE, Philosophy)

Shortlisting

An individual faced with a large set X of alternatives may find it difficult to make a choice. However, she may be able to identify a subset S of X such that the alternatives in S are better than those in X-S. She can then ignore those alternatives in X-S and focus her attention on those in S. In doing so, she may be able to identify a further subset T of S such that the alternatives in T are better than those in S-T.

In this way, through successive "narrowing" of the set of alternatives, the individual can arrive at a choice without having had to construct a complete ordering over the set X.
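A minimal sketch of the successive narrowing just described; the shortlist function here is hypothetical, standing in for the individual's ability to pick out a better subset at each stage.

```python
# Repeatedly narrow the set of alternatives until no further narrowing is possible.
def narrow(alternatives, shortlist):
    current = set(alternatives)
    while True:
        smaller = shortlist(current)
        if smaller == current:
            return current  # nothing left to discard; choose from this set
        current = smaller
```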

How might a group make a choice in this way?

Wednesday, 2 November, 5.30pm – 7pm

Seamus Bradley and Katie Steele (LSE, Philosophy)

Can Free Evidence Be Bad?: Value of Information for the Imprecise Probabilist

This paper considers a puzzling conflict between two premises that are each compelling: 1) it is irrational for an agent to pay to avoid `free' evidence before making a decision, and 2) rational agents may have imprecise beliefs and/or desires. The first part of the paper elaborates this prima facie conflict; we show that Good's (1967) theorem concerning the non-negative value of free evidence does not generalise to the imprecise realm, given the plausible existing decision theories for handling imprecision. (A `decision theory' here refers to a learning rule plus a choice rule.) The paper then considers two strategies for resolving the conflict. The first is conservative: we suggest that `free' evidence must be understood differently in the imprecise context, and many cases of apparently free evidence are not in fact so. The second strategy is more radical: we suggest a modification to imprecise decision theories so that an agent will prefer free evidence over payment.
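For orientation, Good's (1967) result for the precise Bayesian can be stated as follows (the notation is mine): with prior p over states s, actions a, and utility u, choosing after observing a cost-free signal e can never yield lower expected utility than choosing now,

$$
\sum_{e} p(e)\, \max_{a} \sum_{s} p(s \mid e)\, u(a, s)
\;\ge\;
\max_{a} \sum_{s} p(s)\, u(a, s).
$$

The first part of the paper shows that this inequality is not guaranteed to hold once beliefs are modelled by sets of probability functions and choices are made by the existing imprecise decision rules.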

Wednesday, 9 November, 5.30pm – 7pm

Philippe Mongin (CNRS Paris)

What the Bayesian Decision Theorist Could Tell the Bayesian Philosopher

An extended abstract is available here.

Wednesday, 16 November, 5.30pm – 7pm

Armin Schulz (LSE, Philosophy)

Preferences vs. Desires: Debating the Structure of Conative States

In this paper, I address a major, but quite overlooked, question about the structure of the cognitive/conative model of the mind that underlies much of the work in psychology, social science, and philosophy: namely, whether conative states are monistic (desire-like) or comparative (preference-like). I begin by making clear that deciding this issue is important, both for its inherent interest (it tells us more about what our minds – and those of other animals – are like) and for its implications for other questions (such as how conative states relate to other mental states – like cognitions and pleasures). I then argue that, on the one hand, none of the currently widespread theories that appeal to these states – the major decision theories being defended – can be used to answer this question, and, on the other, the only explicit attempt at settling this debate to date – that of John Pollock – is unsuccessful as well. Given this, I suggest that one consideration that speaks in favour of a preference-based view is the fact that it makes it easier to explain certain empirically observed patterns in decision making – namely, violations of choice transitivity – which have been hard to fully make sense of up to now. Overall, therefore, I hope to show that the debate between desires and preferences is an important and widely overlooked issue that needs to be further investigated – and that there are some plausible inroads to explore that might make its resolution possible.

Wednesday, 23 November, 5.30pm – 7pm

Chew Soo Hong (National University of Singapore, Economics)

Affective Saliency: Modeling Prospect Theory Thinking using Neurochemistry

Two strands of thinking have been attributed to prospect theory. One is built on a loss-averse reference-dependent value function which has recently been given a neurochemistry-based foundation (Zhong et al., 2009, Proceedings of the Royal Society B, 'A Neurochemical Approach to Valuation Sensitivity over Gains and Losses'). This talk focuses on the other strand of thinking, having to do with how people respond to probabilistic stimuli in a nonlinear manner. We posit a salience function to model the way affective aspects of a lottery influence the extent of nonlinear response to probabilities associated with lottery outcomes, and link its behavior to the underlying neurochemistry. At the level of revealed choice, our model can generate the fourfold pattern of risk attitudes including longshot bias, account for Allais behavior, and exhibit both Ellsbergian ambiguity aversion and Fox-Tversky familiarity bias. At the molecular level, this model has implications linking gene, brain, and revealed choice which can be tested in laboratory settings using imaging genetics.
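For context only: a standard example of the kind of nonlinear probability response mentioned above is the inverse-S weighting function of Tversky and Kahneman (1992),

$$
w(p) = \frac{p^{\gamma}}{\big(p^{\gamma} + (1-p)^{\gamma}\big)^{1/\gamma}},
$$

which for \(\gamma < 1\) overweights small probabilities and underweights moderate-to-large ones. The salience function proposed in the talk is a distinct construct, intended to tie such nonlinearity to affective responses and their neurochemistry.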

Wednesday, 30 November, 5.30pm – 7pm

Special Double Session on Measuring Freedom with Johan E. Gustafsson (CNRS Paris) and Hendrik Rommeswinkel (St. Gallen)

Johan E. Gustafsson (CNRS Paris)

Indifference between No-Choice Situations

One of the most discussed conditions on measures of freedom of choice is the principle of indifference between no-choice situations: the principle that all choice sets containing only one alternative offer equally much freedom of choice. A number of arguments for and against this principle have been proposed in the literature. This paper critically examines these arguments and develops some of them.
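Stated formally (the notation is mine), the principle requires of a freedom-of-choice measure F that

$$
F(\{x\}) = F(\{y\}) \quad \text{for all alternatives } x, y,
$$

i.e. any two singleton choice sets are ranked as offering the same amount of freedom of choice.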

Hendrik Rommeswinkel (St. Gallen)

A Causal Measure of Freedom

The paper axiomatizes a freedom measure based on the idea that an agent is free if she can causally influence relevant aspects of her life. To model this, causal networks are employed. It is shown that the measure is a generalization of several opportunity-set-based measures. The measure solves two fundamental problems in the literature on freedom of choice: the integration of freedom and welfare into a single measure, and the difficulty of interpreting opportunity sets in situations where agents interact.

Wednesday, 7 December, 5.30pm – 7pm

Oliver Walker and Simon Dietz (LSE, Grantham Institute)

A representation result for choice under conscious awareness

The abstract and paper are available on the Grantham Institute website.
