Wednesday, 15 January, 5.30 - 7pm
Joe Halpern (Cornell)
Causality, Responsibility, and Blame: A Structural-Model Approach
I first review the basic definition of causality introduced by Halpern and Pearl. This definition (like most in the literature) treats causality as an all-or-nothing concept; either A is a cause of B or it is not. I show how it can be extended to take into account the degree of responsibility of A for B. For example, if someone wins an election 11-0, then each person who votes for him is less responsible for the victory than if he had won 6-5. I then define a notion of degree of blame, which takes into account an agent's epistemic state. Roughly speaking, the degree of blame of A for B is the expected degree of responsibility of A for B, taken over the epistemic state of an agent. I also briefly discuss the extent to which these definitions reflect how people use notions like cause, blame, and responsibility in practice.
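The election example can be made concrete with the 1/(k+1) responsibility measure associated with this structural-model framework, where k is the minimal number of other votes that must change before a given vote becomes pivotal. A minimal sketch, assuming simple majority voting; the function name is illustrative, not the talk's own code:

```python
def responsibility_of_voter(votes_for, votes_against):
    """Degree of responsibility of one winning-side voter for the victory,
    assuming simple majority and votes_for > votes_against."""
    total = votes_for + votes_against
    majority = total // 2 + 1
    # k = how many other winning-side votes must switch before this
    # voter's own vote becomes critical to the outcome.
    k = votes_for - majority
    return 1 / (k + 1)

print(responsibility_of_voter(11, 0))  # 1/6: five others must switch first
print(responsibility_of_voter(6, 5))   # 1.0: every vote is already pivotal
```

On this measure each voter in the 11-0 election bears responsibility 1/6, against 1 in the 6-5 election, matching the intuition in the abstract.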
Wednesday, 22 January, 5.30 - 7pm
Wednesday, 29 January, 5.30 - 7pm
No seminar this week due to departmental affairs.
Wednesday, 5 February, 5.30 - 7pm
Jason Konek (Bristol)
Why Accuracy-First Epistemology Does Not Sanction Epistemic Bribe-Taking
Hilary Greaves (2013) and Selim Berker (2013) pose a serious challenge to a popular brand of epistemic consequentialism. On the view in question, which we call accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms — probabilism, conditionalisation, the principal principle, etc. — have their binding force in virtue of helping to secure this good. Greaves and Berker argue that accuracy-first epistemology sanctions an obviously irrational sort of epistemic bribe-taking. It sanctions coming to believe a small number of known falsehoods in order to increase overall accuracy. We defend accuracy-first epistemology. Greaves and Berker assume that proponents of this approach are committed to treating epistemic states and the action of adopting such a state as interchangeable. This is false. Epistemic states and acts are properly evaluated according to different standards. As a result, rational preferences over those states and acts do not, in general, agree. Neither do the choices that those preferences license. Distinguishing between epistemic states and acts, and carefully delineating how our evaluations of the two figure into an accuracy-first epistemology, makes clear why epistemic bribe-taking is irrational.
Wednesday, 12 February, 5.30 - 7pm
Luc Bovens (LSE)
The Affirmative Action Debate is Stuck in the First Moment
In the affirmative action debate it is implicitly assumed that we can make a precise assessment of the qualifications of the candidates. Anyone who has been involved in selection knows that there is much uncertainty in the process. I show that, in the face of uncertainty, it may be rational for an employer to favour candidates with lower expected qualifications over candidates with higher expected qualifications, and that this may benefit persons with protected characteristics. I discuss this result against the background of the UK Equality Act 2010.
Wednesday, 19 February, 5.30 - 7pm
Adam Oliver (LSE, Social Policy)
A Return to the Utility Analysis of Choices Involving Risk
For more than sixty years, the descriptive and predictive validity of orthodox utility theory has been subject to periodic critical examination. This scrutiny has not fundamentally undermined the orthodox model as a normative theory, but the assumption of universal risk aversion does not translate to observed choice, which is important when considering the predictive usefulness of the standard theory of rational choice as a tool to inform policy. In this article, a study is reported that tests whether the orthodox model, the Markowitz hypothesis or prospect theory is the best predictor of risk attitudes in a number of incentivised yet hypothetical litigation questions. Prospect theory emerges as an almost perfect predictive theory in relation to the answers given, and the tendency for people to be risk seeking in the face of high probabilities of losses emerges once more as a key consideration of economic irrationality in human decision making.
Wednesday, 26 February, 5.30 - 7pm
Joe Mazor (LSE Philosophy & Government)
Momentary Maximization, Anticipatory Feelings, and the Evolution of Human Intelligence
There is a general consensus among scholars who study animal behavior that animals cannot delay gratification for more than a few minutes at the very most. They are, according to some scholars, momentary maximizers. How, then, can human beings delay gratification? One possibility is that the maximization horizon of humans has simply increased steadily over time. But there is another possibility: namely, that humans are, like animals, momentary maximizers who, when making intertemporal choices, obtain immediate hedonic utility from anticipatory feelings.
That is, they get pleasure and pain now from the anticipation of future pleasure and pain. In this presentation, I present a model of human intertemporal choice that combines momentary maximization with anticipatory feelings. I show that this model is, under certain assumptions, functionally equivalent to the well-known quasi-hyperbolic beta-delta model of human intertemporal choice. I then present a (very tentative) hypothesis about the relationship between anticipatory feelings and the evolution of human intelligence and suggest (also very tentatively) how such feelings might have arisen in the first place.
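The quasi-hyperbolic beta-delta model the speaker compares his own model to can be sketched in a few lines. The parameter values below are illustrative, and the comparison simply shows the present-bias reversal that beta-delta discounting is known for; this is not the speaker's own model:

```python
def beta_delta_value(rewards, beta=0.6, delta=0.95):
    """Quasi-hyperbolic discounted value: u_0 + beta * sum(delta^t * u_t).
    rewards is a list of (delay_in_periods, utility) pairs."""
    return sum(u if t == 0 else beta * delta**t * u for t, u in rewards)

# Present bias: 10 units now beats 12 units one period later ...
beta_delta_value([(0, 10)])   # 10.0
beta_delta_value([(1, 12)])   # ≈ 6.84
# ... yet with both options delayed by one period, the larger reward wins:
beta_delta_value([(1, 10)])   # ≈ 5.70
beta_delta_value([(2, 12)])   # ≈ 6.50
```

The extra discount factor beta applies only to non-immediate rewards, which is what generates the preference reversal between the two pairs of options.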
Wednesday, 5 March, 5.30 - 7pm
Katya Tentori (University of Trento, Psychology)
Judging the probability of hypotheses versus the impact of evidence: Which form of inductive inference is more reliable?
Humans’ spectacular ability to draw inferences from limited information underpins perception, categorization, prediction, diagnoses, and scientific discovery. Such inferences are inductive: they venture beyond the information gained to draw conclusions that are probable, but not logically implied by the available evidence.
Inductive reasoning requires exploiting links between evidence and hypotheses. But how are these links best represented in our minds? A natural assumption is that they are well expressed by the posterior probability of hypotheses. An alternative view sees the assessment of evidential impact as a cognitively more reliable representation.
Research in cognitive science has focused almost exclusively on how people judge the probability of a hypothesis in light of the given evidence. Assessment of the impact of new evidence on the credibility of hypotheses has not received equal consideration. As a consequence, numerous basic questions still await an answer: When does an inference sound convincing? How should the weight of evidence be quantified? Are human reasoners good at these tasks? What are the relations between evidence assessment and other domains of reasoning?
In my talk, I will present some studies that my collaborators and I carried out by combining the refinement of Bayesian confirmation measures set out in the epistemology literature with the development of a new experimental paradigm for eliciting assessments of evidential impact. One of our main recent findings is that people’s inferences are more accurate and consistent when they concern evidential impact rather than hypothesis credibility. We have also found that it is possible to use evidential impact to reinterpret puzzling phenomena traditionally pertaining to probabilistic reasoning. These results raise the possibility that evidence assessments have greater normative merit than do probability judgments, which are often observed to be deficient.
Wednesday, 12 March, 5.30 - 7pm
Wednesday, 19 March, 5.30 - 7pm
Jim Franklin (UNSW, Maths)
Extreme risk: decision-making with data-free statistics
Evaluating the probability of extreme events (terrorist attacks, quarantine incursions, disasters ...) is hard. They haven't happened yet, so there is no data - or little data, or data of doubtful relevance. The statistical method of Extreme Value Theory is of some use in extrapolating beyond known values, but there is still a need to supplement sparse data with expert opinion. Unfortunately, expert opinion is unreliable for many reasons well known to psychologists, and especially so when there is no data to call it to account. Quarantine systems and the Basel II banking compliance system for operational risk have addressed this problem, with some degree of success. Their "advocacy model" depends on different teams of experts with different agendas who can keep each other honest.
Wednesday, 2 October, 5.30pm – 7pm
Roberto Fumagalli (Bayreuth)
Neural Findings and Economic Models: the Failure of Neuroeconomics
The proponents of neuroeconomics often argue that better knowledge of the human neural architecture enables economists to improve standard models of choice.
In their view, these improvements provide compelling reasons to use neural findings in constructing and evaluating economic models. In a recent paper, I criticized this view by pointing to the trade-offs between the modelling desiderata valued by neuroeconomists and other economists respectively. The present article complements my former critique by focusing on four modelling desiderata that figure prominently in economic and neuroeconomic modelling. For each desideratum, I examine findings that neuroeconomists deem to be especially relevant for economists and argue that neuroeconomists have failed to substantiate their calls to use such findings in constructing and evaluating economic models.
In doing so, I identify methodological and evidential constraints that will continue to hinder neuroeconomists’ attempts to improve economic models.
Moreover, I draw on the literature on scientific modelling to advance the ongoing philosophical discussion regarding interdisciplinary models of choice.
Wednesday, 9 October, 5.30pm – 7pm
Tom Cunningham (Institute for International Economic Studies, Stockholm)
Biases and Implicit Knowledge
A common explanation for biases in judgment and choice has been to postulate two separate processes in the brain: a “System 1” that generates judgments automatically, but using only a subset of the information available, and a “System 2” that uses the entire information set, but is only occasionally activated. This theory faces two important problems: that inconsistent judgments often persist even with high incentives, and that inconsistencies often disappear in within-subject studies. In this paper I argue that these behaviors are due to the existence of “implicit knowledge”, in the sense that our automatic judgments (System 1) incorporate information which is not directly available to our reflective system (System 2). System 2 now faces a signal extraction problem, and information will not always be efficiently aggregated. The model predicts that biases will exist whenever there is an interaction between the information private to System 1 and that private to System 2. Additionally it can explain other puzzling features of judgment: that judgments become consistent when they are made jointly, that biases diminish with experience, and that people are bad at predicting their own future judgments. Because System 1 and System 2 have perfectly aligned preferences, welfare is well-defined in this model, and it allows for a precise treatment of eliciting preferences in the presence of framing effects.
Wednesday, 16 October, 5.30pm – 7pm
No seminar due to Popper memorial lecture given by Helga Nowotny, President of the European Research Council, entitled 'The Cunning of Uncertainty' on 15 October at 18.30 in Sheikh Zayed Theatre, NAB.
Wednesday, 23 October, 5.30pm – 7pm
Matthew Williams (Imperial College, Medicine) and Tony Hunter (UCL, Computer Science)
Logic-based argumentation for aggregating clinical trials
There is a desire for medical practice to be evidence-based.
However, the increasing amount of medical evidence makes it difficult for clinicians to stay up to date, and this is exacerbated by the fact that evidence may be partial, incomplete and contradictory. Here we introduce a novel framework for logic-based argumentation for reasoning with the summarised results of clinical trials. We describe some of the motivations and features of our approach, and show its use in a complex, real-world domain, aggregating the evidence for use of chemo-radiotherapy in lung cancer. In particular, we show how argumentation and ontological reasoning are both important for evidence aggregation, and demonstrate how we can incorporate subjective criteria in the assessment of evidence.
Wednesday, 30 October, 5.30pm – 7pm
Larry Temkin (Rutgers, Philosophy)
Rethinking the Good - A Small Taste
Most people accept the Axiom of Transitivity: if, all things considered, A is better than B, and B is better than C, then all things considered, A is better than C. Moreover, importantly, most people believe that the Axiom of Transitivity is an analytic truth, or true as a matter of the logic of goodness. Most people also believe that except in cases where we have agent-relative duties, or special obligations to individuals, we should be neutral with respect to people, places, and times, and they also accept various Pareto-like principles, assuming that if one outcome is better than another for each person, or at each moment, or at every place, then it must be better, all things considered. In this talk, I shall present an impossibility result, and some key examples that challenge these standard assumptions. In doing this, I shall also distinguish between two rival conceptions of the good, an Internal Aspects View, and an Essentially Comparative View, and show that the latter will be difficult to reject, but that only the former guarantees the Axiom of Transitivity. Overall, my arguments raise deep questions about our understanding of the good, moral ideals, and the nature of practical reasoning.
Wednesday, 6 November, 5.30pm – 7pm
Adam Oliver (LSE)
Testing the Rate of Preference Reversal in Personal and Social Decision-Making
Classic preference reversal, where choice and valuation procedures generate inconsistent preference orderings, has rarely been tested in hypothetical health care treatment scenarios. Two studies – the first non-incentivised and the second incentivised – are reported in this article. In both studies, respondents are asked to make decisions that affect themselves (a personal decision making frame) and those for whom they are responsible (a social decision making frame). The results show non-negligible and systematic rates of preference reversal in both frames, although these rates are slightly, but non-significantly, lower in the incentivised condition. Moreover, in both studies, the rate of predicted preference reversal was higher in the social than in the personal decision making frame, a finding that is explained by greater risk aversion when choosing treatment options for others than when choosing treatments for oneself.
Wednesday, 13 November, 5.30pm – 7pm
John Wigglesworth (LSE and Institute of Philosophy)
Cognitive Biases and Nonmonotonic Logics
We look at the use of nonmonotonic logics to model certain cognitive biases involving deductive reasoning. Looking at two groups of experimental data, we see that people often claim invalid arguments to be valid, and that they fail to recognize valid arguments as valid. In these ways, participants deviate in their reasoning from the standards of classical logic. The question is whether they are using other, non-classical principles in their reasoning processes. And if so, what principles do they use? The data suggest that people tend to judge arguments to be valid when they have a prior acceptance of the conclusion. We discuss how nonmonotonic logics have been used to capture this kind of reasoning through the use of preferential models. We conclude by showing that nonmonotonic logics do not capture what is happening in all of these cases involving cognitive bias, with particular focus on new experimental data on reasoning in contradictory circumstances.
Wednesday, 20 November, 5.30pm – 7pm
Felix Pinkert (Oxford, Philosophy)
Solving coordination problems in the spirit of individual rational choice
This paper is an ambitious attempt to argue that traditional individual rational choice theory can explain why rationality requires agents to collectively achieve optimal outcomes in coordination games. Traditional rational choice theory only requires individuals to maximize their individual expected utility, and therefore struggles to direct agents away from Pareto-suboptimal and towards Pareto-optimal equilibria in coordination games. I argue that traditional rational choice theory can solve this problem without invoking any form of collective rationality or team reasoning. The core concepts invoked in my solution are those of rational motivation and collective ability. Rational motivation is understood solely in terms of traditional rational choice theory, namely as meaning that the agent cares about optimally satisfying her own preferences. Collective ability signifies those conditions under which a group of agents can reasonably be expected to bring about Pareto-optimal outcomes in coordination games. I argue that if collective ability is given, then any group of agents with rational motivation will select an optimal equilibrium. I argue that this result holds both in cases of sequential choice, and in the more difficult cases of synchronous choice.
Wednesday, 27 November, 5.30pm – 7pm
Hilary Greaves (Oxford, Philosophy)
Prioritarianism is supposed to be a theory of the overall good that captures the common intuition of "priority to the worse off". Over the past few decades, there has been a largely unannounced slide, from formulating prioritarianism in terms of an alleged primitive notion of quantity of well-being, to formulating it in terms of von Neumann-Morgenstern utility. The resulting two forms of prioritarianism (which I call, respectively, "Primitivist" and "Technical" prioritarianism) are not mere variants on a theme, but are entirely distinct theories, amenable to different motivating arguments and open to different objections. This talk argues, against an apparently sweeping current consensus, that the basic intuition of "priority to the worse off" provides no support for Technical Prioritarianism. The argument proceeds via the observation that insofar as an argument can be constructed leading from the intuition of priority to Technical Prioritarianism, an analogous and equally compelling intuition (one of caution in the face of risk) leads, via a precisely analogous line of argument, to a theory that is in a clear sense "opposite" to Technical Prioritarianism. (This is the “anti-prioritarianism” that gives this talk its title.) I conclude that those whose only motivation in this vicinity is that of the basic intuition of priority should be either Primitivist Prioritarians, or utilitarians (in the modern, minimal sense of the latter). A corollary is that much of the recent discussion of prioritarianism in the literature - in particular, the numerous attempts to defend Technical Prioritarianism's manner of violating the Ex Ante Pareto principle in the name of the priority intuition - is misguided.
Wednesday, 4 December, 5.30pm – 7pm
Alex Voorhoeve (LSE)
Ambiguity Aversion, the Hurwicz Criterion, and the Principle of Insufficient Reason: a Planned Experiment
How should you evaluate a gamble when you can't assign probabilities to winning? The Hurwicz criterion tells you to maximize (1-h)p + hp*, where p and p* are the lower and upper probabilities of winning, and h (0≤h≤1) is the Hurwicz "optimism-pessimism coefficient". People with h<1/2 are pessimistic, and display a form of ambiguity aversion - they favour a bet with a probability of winning of 1/2 over a bet with a probability of winning between 0 and 1. However, in a recent experiment, we found that subjects' behaviour was consistent only with h=1/2 (which implies ambiguity neutrality). Inspired by this finding, we propose a justification for h=1/2 as the only rational value in a particular type of ambiguous situation. We also outline a new experiment to distinguish subjects using an ambiguity-neutral version of the Hurwicz criterion from subjects using the Principle of Insufficient Reason.
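The criterion as stated in the abstract is easy to check numerically. A minimal sketch; the function name and the two bets are illustrative:

```python
def hurwicz_value(p_lower, p_upper, h):
    """Hurwicz value of a bet: (1-h)*p_lower + h*p_upper."""
    return (1 - h) * p_lower + h * p_upper

h = 0.3  # a pessimist (h < 1/2)
hurwicz_value(0.5, 0.5, h)    # ≈ 0.5: the unambiguous 50-50 bet
hurwicz_value(0.0, 1.0, h)    # ≈ 0.3: the ambiguous bet is valued lower
hurwicz_value(0.0, 1.0, 0.5)  # 0.5: an h = 1/2 agent is ambiguity-neutral
```

With h = 1/2 the two bets are valued identically, which is the ambiguity-neutral behaviour the abstract reports finding in the experiment.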
Wednesday, 11 December, 5.30pm – 7pm
Philipp Koralus (Oxford, Philosophy)
Decisions, Questions, and Illusory Reasons
It is a remarkable fact that we make seemingly irrational choices in systematic ways, amply documented in the experimental literature. It is an equally remarkable fact that we are capable of systematically rational decisions. An explanation of the nature of human decision-making that gives equal weight to these facts is elusive. I propose that we can make sense of these facts as the result of a cognitive strategy of raising questions and answering them in the most direct way. I have argued elsewhere that when made formally precise, this model can explain successes and failures in reasoning (Koralus and Mascarenhas, forthcoming). Rational inferences result from a kind of “erotetic equilibrium,” in which further questions will not change the conclusion in the absence of external influences. I sketch how to extend this model to decision-making as the erotetic theory of decision (ETD). I suggest that we could make sense of various irrational decision patterns as instances of what I call “illusory reasons,” analogous to so-called “illusory inferences” in reasoning. Illusory reasons disappear if the right kinds of questions are raised in making a decision.