On 2–3 June 2017, this conference will bring together researchers and graduate students in Philosophy, Psychology/Cognitive Science, Physics, Medicine, Computer Science and related fields to discuss issues in the philosophy of probability.
The 5th LSE Graduate Conference in Philosophy of Probability will take place on 2–3 June 2017 at the Centre for Philosophy of the Natural and Social Science (CPNSS). This is a philosophy of science graduate conference aimed at students in Philosophy, Psychology/Cognitive Science, Physics, Medicine, Computer Science and related fields. The conference will bring together four established researchers in philosophy of probability as keynote speakers, alongside eight graduate student speakers.
Programme
Day 1: Friday 2 June

| Time | Session | Room |
|---|---|---|
| 9:00–9:30 | Registration | LAK G.01C |
| 9:30–11:00 | Keynote – Maria Carla Galavotti (Bologna): “Probability and its interpretations” | LAK 2.06 |
| 11:00–11:30 | Tea & Coffee | LAK G.01C |
| 11:30–12:15 | Grzegorz Lisowski, Dean McHugh & Max Rapp (Amsterdam): “Winning Questions: inquisitive semantics and the lottery paradox” | LAK 2.06 |
| 12:15–13:00 | Pablo Zendejas Medina (Pittsburgh): “Better Not to Know” | LAK 2.06 |
| 13:00–14:30 | Lunch | LAK G.01C |
| 14:30–15:15 | Jeremy Steeger (Notre Dame): “Betting on quantum objects” | LAK 2.06 |
| 15:15–16:00 | Alexander Carver & Paolo Turrini (Imperial): “On the non-deterministic nature of segregation in Schelling’s model: a computer-aided analysis” | LAK 2.06 |
| 16:00–16:30 | Tea & Coffee | LAK G.01C |
| 16:30–18:00 | Keynote – Sylvia Wenmackers (KU Leuven): “Infinitesimal probabilities & ultra-additivity” | LAK 2.06 |
| 19:00– | Conference Dinner | Ciao Bella |

Day 2: Saturday 3 June

| Time | Session | Room |
|---|---|---|
| 10:00–11:30 | Keynote – Anna Mahtani (LSE): “Vague Credence” | LAK 2.06 |
| 11:30–12:00 | Tea & Coffee | LAK G.01C |
| 12:00–12:45 | Milana Kostic (MCMP): “Updating with Restrictor Conditionals” | LAK 2.06 |
| 12:45–13:30 | Gary Mullen (Leeds): “The No Option Puzzle: a puzzle for accounts of options in decision theory” | LAK 2.06 |
| 13:30–14:30 | Lunch | LAK G.01C |
| 14:30–15:15 | Boris Babic (Michigan): “Generalized Entropy and Epistemic Risk” | LAK 2.06 |
| 15:15–16:00 | James Wilson (Bristol): “Accuracy and Probability Kinematics” | LAK 2.06 |
| 16:00–16:30 | Tea & Coffee | LAK G.01C |
| 16:30–18:00 | Keynote – Julia Staffel (St Louis): “A Puzzle about Outright Belief” | LAK 2.06 |
| 18:15– | (Informal) drinks | |
Abstracts
Boris Babic (Michigan): “Generalized Entropy and Epistemic Risk”
This paper concerns Bayesian epistemology and statistical inference. I seek to answer the following questions: (A) what makes one probability distribution (or credence function) riskier than another (or, equivalently: what is epistemic risk?); (B) how much risk is it reasonable for an epistemic agent to assume?; and finally (C) how will attitudes to risk affect an agent’s inferential/epistemic behavior? In short, to construct a measure of epistemic risk I draw on the economic notion of a mean preserving spread and generalize this idea to credence functions by constructing an accuracy-based measure of central tendency and defining the risk-free probability distribution as the distribution that guarantees a particular accuracy score within a class of equal expected accuracy prospects. I then measure increases in risk in terms of the increase in variation from the risk-free prospect. Such an approach has several interesting consequences.
First, I explain that the risk-free credence is the maximum-information-entropy credence. As risk goes down, entropy goes up, and vice versa. Taken to its logical conclusion, this implies (as I explain in more detail in the extended abstract) that entropy reaches its maximum where risk reaches its minimum. This duality holds for all strictly proper scoring rules: risk is a scaled reflection of entropy. This result bears on the current debate over the “immodesty” condition presupposed by strictly proper measures of accuracy. Second, I examine several important applications to learning and statistical inference. For example, I show that minimizing epistemic risk under a symmetric score implies the Laplacean principle of indifference. Moreover, I show that we can use principles of epistemic risk to derive an important theorem from the theory of statistical estimation (the Rao–Blackwell Theorem).
[Commentator: Matt Parker]
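One claim from the abstract can be illustrated numerically under a standard accuracy measure. The following sketch (my own illustration, not the paper's construction) uses the Brier score: the uniform credence over n outcomes, which is also the maximum-entropy credence, incurs exactly the same penalty whichever outcome obtains, so its "spread" across worlds is zero, whereas a lower-entropy credence gambles on the true world.

```python
# Illustrative sketch (not the paper's construction): under the Brier score,
# the uniform -- i.e. maximum-entropy -- credence over n outcomes incurs the
# same penalty no matter which outcome obtains, so it has zero spread across
# worlds; a skewed, lower-entropy credence does not.

def brier_penalty(credence, world):
    """Brier penalty of a credence function when outcome `world` obtains."""
    return sum((p - (1.0 if i == world else 0.0)) ** 2
               for i, p in enumerate(credence))

def spread(credence):
    """Variance of the Brier penalty across the possible worlds."""
    penalties = [brier_penalty(credence, w) for w in range(len(credence))]
    mean = sum(penalties) / len(penalties)
    return sum((x - mean) ** 2 for x in penalties) / len(penalties)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximum-entropy credence
skewed  = [0.70, 0.10, 0.10, 0.10]   # lower-entropy, riskier credence

print(spread(uniform))  # 0.0: identical penalty in every world
print(spread(skewed))   # positive: penalty varies with the true world
```

This only exhibits the uniform case; the paper's general duality between entropy and risk for arbitrary strictly proper scores is, of course, not established by one example.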
Alexander Carver & Paolo Turrini (Imperial): “On the non-deterministic nature of segregation in Schelling’s model: a computer-aided analysis”
The Schelling Segregation Model is a spatial proximity model in which individuals have preferences over their neighbours and can change their location according to a specific order, Schelling’s turn function. We provide an example in which this order affects the properties of the final outcome, in particular its segregation level. Motivated by this finding, we introduce a tool for the analysis of different starting scenarios, which handles the complexity of the problem by means of a randomised Monte Carlo explorer. Challenging Schelling’s claim that small preferences for local uniformity inevitably lead to higher levels of global segregation, the explorer was able to find instances in which multiple paths, including the one induced by Schelling’s turn function, converge to a zero level of segregation.
[Commentator: Peter Sozou]
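The basic dynamics under discussion can be sketched in a few lines. The toy simulation below (my own sketch, not the authors' tool; the grid size, threshold, and move rule are invented for illustration) makes the turn order an explicit, shuffled parameter, echoing the abstract's point that the order in which agents move can shape the final segregation level.

```python
# A toy Schelling-style simulation (not the authors' tool). Agents of two
# types sit on a grid; an agent is unhappy if fewer than `threshold` of its
# neighbours share its type, and unhappy agents move to a random empty cell.
# The shuffled move order stands in for the "turn function" of the abstract.
import random

def neighbours(grid, n, i, j):
    cells = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]
    return [grid[x][y] for x, y in cells
            if 0 <= x < n and 0 <= y < n and grid[x][y] is not None]

def unhappy(grid, n, i, j, threshold):
    ns = neighbours(grid, n, i, j)
    return bool(ns) and sum(t == grid[i][j] for t in ns) / len(ns) < threshold

def step(grid, n, threshold, rng):
    movers = [(i, j) for i in range(n) for j in range(n)
              if grid[i][j] is not None and unhappy(grid, n, i, j, threshold)]
    rng.shuffle(movers)  # the turn order -- a modelling choice
    for i, j in movers:
        empties = [(x, y) for x in range(n) for y in range(n)
                   if grid[x][y] is None]
        if empties:
            x, y = rng.choice(empties)
            grid[x][y], grid[i][j] = grid[i][j], None

def segregation(grid, n):
    """Mean fraction of like-typed neighbours, a crude segregation index."""
    fracs = []
    for i in range(n):
        for j in range(n):
            if grid[i][j] is None:
                continue
            ns = neighbours(grid, n, i, j)
            if ns:
                fracs.append(sum(t == grid[i][j] for t in ns) / len(ns))
    return sum(fracs) / len(fracs)

rng = random.Random(0)
n = 10
cells = [0] * 40 + [1] * 40 + [None] * 20
rng.shuffle(cells)
grid = [cells[i * n:(i + 1) * n] for i in range(n)]
before = segregation(grid, n)
for _ in range(30):
    step(grid, n, threshold=0.4, rng=rng)
after = segregation(grid, n)
```

Re-running with different seeds for the shuffle changes the trajectory, which is precisely the order-dependence the paper investigates systematically.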
Maria Carla Galavotti: “Probability and its interpretations”
Far from being reducible to its mathematical aspects, the notion of probability carries a rich and still controversial philosophical content, which is the source of an ongoing debate. After recalling the birth of probability in the seventeenth century in connection with the work of Blaise Pascal and Pierre Fermat, the talk will survey the fundamental ideas underpinning the major interpretations of probability. The survey will include the “classical” theory of Pierre Simon de Laplace, the frequency interpretation embraced by Richard von Mises and Hans Reichenbach, the propensity interpretation of Karl Raimund Popper, the logical interpretation developed by John Maynard Keynes, Rudolf Carnap and Harold Jeffreys, and the subjective interpretation put forward by Frank Plumpton Ramsey and Bruno de Finetti.
Milana Kostic (MCMP): “Updating with Restrictor Conditionals”
Evidence used in support of scientific theories sometimes comes in conditional form. However, modelling the way such information impacts the information state of a Bayesian agent has proved a great challenge in Bayesian epistemology. No hitherto proposed account gives the right predictions for the impact of conditional information on the posterior probability distribution in all of the standard test cases (Douven, 2012).
In this paper I present a novel proposal for modelling learning from indicative conditionals, which is based on Kratzer’s semantics for conditionals. Even though it represents a near orthodoxy among linguistic semanticists, Kratzer’s semantics has not received comparable attention among Bayesian epistemologists. In Kratzer’s semantics the role of the antecedent is to restrict the domain of application of the probability operator introduced by the consequent. I demonstrate that the model which incorporates Kratzer’s account gives the right predictions in all of the test cases. Furthermore, I demonstrate that Kratzer’s account satisfies Adams’s thesis (which states that the probability of a natural-language indicative conditional equals the conditional probability of its consequent given its antecedent), while it simultaneously avoids the issues raised by Lewis’s triviality proofs. In the end, I discuss the prospects for extending this account to modelling the way nested conditionals impact the information state of the Bayesian agent, and the limitations of such an extended model.
Grzegorz Lisowski, Dean McHugh & Max Rapp (Amsterdam): “Winning Questions: inquisitive semantics and the lottery paradox”
Probability theory has come to play a central role in recent discussions about belief, once thought to be too nuanced to ever admit quantitative analysis. The lottery paradox poses a major hurdle to this approach, demonstrating that, together with some highly plausible assumptions, any probabilistic threshold for belief less than one is bound to lead to inconsistent beliefs. We aim to defend a probabilistic theory of belief by taking up an idea of Leitgeb (2017). Our approach focuses on the role of context in the formation of beliefs, and in particular on the alternative beliefs considered by an agent. To do so, we appeal to inquisitive semantics as a background framework. Inquisitive semantics seeks to formalise the role of questions in discourse, thereby furnishing a new perspective on the meaning of a sentence in terms of the questions it answers. We give an account of several aspects of questions in a framework based on Leitgeb’s theory, such as the difficulty of questions.
[Commentator: Roman Frigg]
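The inconsistency at the heart of the lottery paradox can be reproduced in a few lines. The sketch below (my own illustration, not the authors' framework) uses a fair 1000-ticket lottery: a threshold rule "believe p iff Pr(p) ≥ t" with any t < 1 licenses belief in "ticket i will lose" for every ticket, yet the conjunction of those beliefs contradicts the certainty that some ticket wins.

```python
# A numerical illustration of the lottery paradox (my sketch, not the authors'
# framework): with the rule "believe p iff Pr(p) >= threshold" for any
# threshold < 1, a large enough fair lottery yields jointly inconsistent beliefs.

n_tickets = 1000
threshold = 0.99

# Probability that a given ticket loses, in a fair one-winner lottery.
p_ticket_loses = 1 - 1 / n_tickets            # 0.999

# The threshold rule licenses belief in "ticket i loses" for every ticket i...
believed = [p_ticket_loses >= threshold for _ in range(n_tickets)]

# ...but "every ticket loses" has probability 0, since one ticket must win.
p_all_lose = 0.0

print(all(believed))              # every single-ticket belief is licensed
print(p_all_lose >= threshold)    # yet their conjunction is not believable
```

Raising the threshold does not help: for any t < 1 a lottery with more than 1/(1 - t) tickets regenerates the conflict.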
Pablo Zendejas Medina (Pittsburgh): “Better Not to Know”
You have an important decision to make, and I offer you some relevant evidence, free of charge. Should you take it? The natural response is that you should, and a famous argument due to I. J. Good (1967) is generally taken to have shown that Bayesian agents who are offered cost-free information should accept that information. Call that the Value of Evidence thesis (VoET). In this paper I argue, in a Bayesian framework, that VoET is false. I show that Good’s argument makes two crucial assumptions: first, that the possible propositions that the agent may learn form a partition, and second, that the agent is certain of what beliefs learning any proposition would rationalize. These assumptions entail an overly strong constraint on epistemic rationality: that it requires us to always be certain about what it requires of us. I then present three counterexamples to the Value of Evidence thesis when the assumptions are relaxed. The first is a case where one’s evidence does not settle the question of what one’s evidence is, and so does not settle what one’s rational beliefs are. The second is a case where one does not know what the rational prior is, and the third is a case where one knows that one will update by Jeffrey Conditionalization and so will not become certain of anything. Finally, I show that the counterexamples cannot be blamed on Conditionalization as opposed to VoET, since as long as the agent thinks that he may update by a rule that satisfies a very minimal constraint, there will be counterexamples.
[Commentator: Campbell Brown]
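The benign case that the abstract targets, Good's partition-based result, can be checked on a toy decision problem (all the numbers below are invented for illustration): when the agent conditionalizes on a partition of possible evidence, the expected utility of deciding after a free observation is at least that of deciding now.

```python
# A toy check of Good's (1967) value-of-information result in the benign
# partition case the abstract describes (states, acts, and numbers invented).

states = ["rain", "sun"]
prior  = {"rain": 0.3, "sun": 0.7}
acts   = {"umbrella":    {"rain": 1.0, "sun": 0.2},
          "no_umbrella": {"rain": 0.0, "sun": 1.0}}

# A noisy forecast: likelihoods Pr(signal | state) over a two-cell partition.
likelihood = {"rain": {"wet_signal": 0.8, "dry_signal": 0.2},
              "sun":  {"wet_signal": 0.1, "dry_signal": 0.9}}

def eu(act, probs):
    """Expected utility of an act relative to a probability over states."""
    return sum(probs[s] * acts[act][s] for s in states)

# Deciding now: pick the act with the highest prior expected utility.
eu_now = max(eu(a, prior) for a in acts)

# Deciding later: observe the free signal, conditionalize, then choose.
eu_later = 0.0
for signal in ["wet_signal", "dry_signal"]:
    p_signal = sum(prior[s] * likelihood[s][signal] for s in states)
    posterior = {s: prior[s] * likelihood[s][signal] / p_signal for s in states}
    eu_later += p_signal * max(eu(a, posterior) for a in acts)

print(eu_later >= eu_now)  # True: free evidence never hurts, in this setting
```

The paper's counterexamples arise precisely when the two assumptions behind this computation (a partition of possible learnings, and certainty about the rationalized response to each) are relaxed.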
Anna Mahtani: “Vague Credence”
In this talk I investigate an alternative to imprecise probabilism. Imprecise probabilism is a popular revision of orthodox Bayesianism: while the orthodox Bayesian claims that a rational agent’s belief state can be represented by a single credence function, the imprecise probabilist claims instead that a rational agent’s belief state can be represented by a set of such functions. The alternative that I put forward in this paper is to claim that the expression “credence” is vague, and then apply the theory of supervaluationism to sentences containing this expression. This gives us a viable alternative to imprecise probabilism. I show that supervaluationism has a simpler way of handling sentences relating the belief states of two different people, or of the same person at two different times; that both accounts may have the resources to develop plausible decision theories; and finally I show how the supervaluationist can accommodate higher-order vagueness in a way that is not available to the imprecise probabilist.
Gary Mullen (Leeds): “The No Option Puzzle: a puzzle for accounts of options in decision theory”
Decision theory says that you ought to choose an option of maximal expected value relative to your degrees of belief and desire. But what gets to count as an option in the first place? I discuss a puzzle – the No Option Puzzle – for accounts of options in decision theory. A similar puzzle is discussed in Pollock (2002) and Hedden (2012). The puzzle is that in cases where it’s uncertain what the agent’s abilities are, it seems that there is no suitable set of options. For instance, suppose Jane is uncertain whether she can ford a creek because she might get swept away downstream. What is the option corresponding to ford the creek? Ford the creek itself is not an option because decision theory’s evaluation of it ignores the outcome in which Jane gets swept away. You might conceive of the option as intend to ford the creek, but this won’t do in a case where the agent doubts whether she can intend to ford the creek. So in such a case it seems that there is no suitable option set. My solution to this puzzle says that the ford-the-creek-like option is the following counterfactual: if the agent were able to ford the creek, then she would ford the creek. I preempt any worries that this isn’t the right sort of thing to be an option. And I end by considering how the No Option Puzzle differs from the puzzle in Pollock (2002) and Hedden (2012).
[Commentator: Richard Bradley]
Julia Staffel: “A Puzzle about Outright Belief”
It is commonly assumed that people need outright beliefs in addition to credences to simplify their reasoning. Outright beliefs do this by allowing agents to ignore small error possibilities. What is outright believed can change between contexts. It has been claimed that our beliefs change via an updating procedure resembling conditionalization. However, conditionalization is notoriously complicated. This claim is thus in tension with the explanation that the function of beliefs is to simplify our reasoning. I propose to resolve this puzzle by endorsing a different hypothesis about how beliefs change across contexts that better accounts for the simplifying role of beliefs.
Jeremy Steeger (Notre Dame): “Betting on quantum objects”
We prove a quantum version of the probabilists’ Dutch book theorem: treating the projection lattice of a finite-dimensional Hilbert space as a quantum logic, if the possible ideal beliefs an agent should have regarding propositions in the lattice are given by the restrictions of unit vector states to the lattice, then all and only the Born-rule probabilities avoid Dutch books. We then demonstrate the implications of this theorem for several operational and realist quantum logics. In the latter case, we show that the defenders of the eigenstate-value orthodoxy face a trilemma. They must choose one of: having no beliefs regarding ascriptions of property-values to quantum objects not in appropriate eigenstates; not using Born’s rule to fix their beliefs; or succumbing to Dutch books. Contrariwise, those who favor vague properties avoid the first two horns of this trilemma and admit all and only those beliefs about quantum objects that avoid Dutch books.
[Commentator: Miklós Rédei]
Sylvia Wenmackers: “Infinitesimal probabilities & ultra-additivity”
In this presentation, I discuss the motivations for a particular set of axioms for non-Archimedean probability theory (developed together with Vieri Benci and Leon Horsten), which allows us to assign non-zero infinitesimal probabilities to remote contingencies. I also address some criticisms that have been raised against this approach in the recent literature.
James Wilson (UCL): “Accuracy and Probability Kinematics”
Accuracy-based arguments aim to justify Bayesian norms by showing that these norms further the epistemic goal of accuracy—having credences (or degrees of belief) that are as close as possible to the truth. The standard defence of Conditionalization appeals to expected accuracy: from a prior standpoint, the expected accuracy of Conditionalization is always at least as great as that of any alternative updating rule. Leitgeb and Pettigrew showed that in the situations normally assumed to be governed by Richard Jeffrey’s generalised Probability Kinematics, the expected accuracy argument fails. We present a new accuracy-based argument for Probability Kinematics that appeals to accuracy-dominance, rather than expected accuracy. Our argument provides a justification for the so-called rigidity condition, and we describe its role in those situations for which Probability Kinematics was designed.
[Commentator: Anna Mahtani]
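The update rule at issue, Jeffrey's Probability Kinematics, can be stated in a few lines: given a partition E_1, …, E_k whose probabilities shift to new values q_1, …, q_k, the new probability of any A is Σ_i q_i · P(A | E_i). The sketch below (a standard textbook formulation, not the authors' argument; the worlds and numbers are invented) also exhibits the rigidity condition the abstract mentions: each conditional probability P(A | E_i) is preserved by the update.

```python
# A minimal sketch of Jeffrey's Probability Kinematics (standard textbook
# rule, not the authors' argument). Worlds carry prior weights; experience
# shifts the partition's probabilities to q while, by rigidity, leaving
# each P(A | E_i) unchanged.

prior = {"w1": 0.2, "w2": 0.3, "w3": 0.4, "w4": 0.1}
partition = {"E1": {"w1", "w2"}, "E2": {"w3", "w4"}}
q = {"E1": 0.7, "E2": 0.3}   # new probabilities for the partition cells

def p(event, dist):
    """Probability of a set of worlds under a distribution."""
    return sum(dist[w] for w in event)

# Jeffrey update: rescale within each cell so that P_new(E_i) = q_i.
posterior = {}
for cell, worlds in partition.items():
    scale = q[cell] / p(worlds, prior)
    for w in worlds:
        posterior[w] = prior[w] * scale

A = {"w1", "w3"}
for cell, worlds in partition.items():
    # Rigidity: P(A | E_i) is the same before and after the update.
    before = p(A & worlds, prior) / p(worlds, prior)
    after  = p(A & worlds, posterior) / p(worlds, posterior)
    assert abs(before - after) < 1e-9
```

Ordinary Conditionalization is the special case where some q_i equals 1; the accuracy-dominance argument of the paper concerns why rational updates should take this rescaling form.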
Location
All talks will take place in room 2.06 of the Lakatos Building (marked “LAK” on the LSE Campus Map).
For further help finding your way please see the LSE Maps and Directions website.
Organising Committee
- Goreti Faria
- Silvia Milano
- Colin Elliot
- Aron Vallinder
- Chloé De Canson
Scientific Committee
- Goreti Faria
- Silvia Milano
- Colin Elliot
- Aron Vallinder
- Chloé De Canson
- Seamus Bradley
- James Nguyen
- Hamid Seyedsayamdost
- Alexandru Marcoci
- Márton Gömöri
Featured image credit: Jase Curtis / CC BY 2.0 (colour tinted from original)