Accepted Papers


Jean Baccelli (École Normale Supérieure): "Further Issues with the Identification of Subjective Probabilities by Choices"

ABSTRACT. This paper is about the elicitation of subjective probabilities by the simple observation of choices. It shows that even when choices are known to be made by a Bayesian decision maker, the identification of a unique subjective probability function may not be achieved. This is a mathematical fact that runs counter to received interpretations of, amongst others, Savage’s and Anscombe-Aumann’s theorems. Conceptually speaking, it has to do with state-dependent utility issues. The full scope of those issues is illustrated by five variations on a famous example that appeared in correspondence between Aumann and Savage: the case of Mr. Smith’s elicited beliefs regarding Mrs. Smith’s recovering from a potentially fatal operation. Set in Savage’s framework, the variations establish that there might not exist any, or there might exist more than one, subjective probability function suitably compatible with the choices of a Bayesian decision maker. They also suggest that this has less to do with Savage’s own axiomatic proposal than with the type of behavioral information available in any kind of Savage-like framework, that is, in a standard act set-up. Finally, the paper discusses one efficient way to solve this indetermination problem: the use of so-called hypothetical preferences. It shows that this approach cannot stumble on any of the indetermination issues previously listed, and gives an interpretation of this fact. In addition, the paper provides some arguments against the claim that, irrespective of whether this approach could technically overcome those indetermination issues, it should be rejected on principle on general methodological grounds.

Benjamin Bewersdorf (University of Groningen): "Total Evidence, Uncertainty and A Priori Beliefs"

ABSTRACT. Defining the rational belief state of an agent in terms of an a priori, hypothetical or initial belief state as well as the agent's total evidence can help to address a number of interesting philosophical problems. In this paper, I discuss how this strategy can be applied to cases in which evidence is uncertain. I also argue that taking evidence to be uncertain allows us to uniquely determine the subjective a priori belief state of an agent from her present belief state and her total evidence, given that evidence is understood in terms of update factors.

 

Catrin Campbell-Moore (MCMP, LMU München): "Imprecise Probabilities and Supervaluational Logic"

ABSTRACT. In this paper we shall show that there is an intimate connection between supervaluational logic and imprecise probabilities. We shall also present a new argument for imprecise probabilities which is very different from the existing justifications: giving a formal semantics for a language expressing embedded (precise) probabilities in a type-free manner can lead to paradoxes which are avoided when one instead considers imprecise probabilities. 

We shall first show that the notion of being a probability over supervaluational logic collapses naturally into being an imprecise probability. We then consider how one might give a language to talk about imprecise probabilities. This is interesting, for example, if one wants to formalize agents' beliefs about other agents' beliefs, where those beliefs are modelled by imprecise probabilities. We suggest that supervaluational logic is appropriate for this task.


We thus arrive at a two-way connection between supervaluational logic and imprecise probabilities. This allows one to iterate the process and develop a semantics for languages with embedded probabilities, which we do using a possible-worlds-style framework. By considering a type-free language for probability we achieve a vast gain in expressive power. A precise probabilities approach cannot give a semantics for such a language because of paradoxes analogous to the liar paradox. The imprecise probabilities approach, however, does not have this limitation.
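As background to the notion at work here, the following is a minimal sketch of an imprecise probability represented as a credal set of precise probability functions, with lower and upper envelopes for an event (an illustration only, with made-up numbers; it is not the paper's supervaluational construction):

    # Illustration only: an imprecise probability modelled as a finite credal set
    # of precise probability functions over a small outcome space, with lower and
    # upper envelopes computed for an event. The numbers are made up.

    credal_set = [
        {"a": 0.20, "b": 0.50, "c": 0.30},
        {"a": 0.30, "b": 0.40, "c": 0.30},
        {"a": 0.25, "b": 0.45, "c": 0.30},
    ]

    def lower_probability(event, credal_set):
        # Lower envelope: the smallest probability any member assigns to the event.
        return min(sum(p[w] for w in event) for p in credal_set)

    def upper_probability(event, credal_set):
        # Upper envelope: the largest probability any member assigns to the event.
        return max(sum(p[w] for w in event) for p in credal_set)

    event = {"a"}
    print(lower_probability(event, credal_set))  # 0.2
    print(upper_probability(event, credal_set))  # 0.3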

Hugh Desmond (KU Leuven, Belgium): "Probabilistic Explanation isn't about Probability"

ABSTRACT. In probabilistic explanations of events, to what extent is the size of the probability of the outcome event explanatorily relevant? The consensus approach has been to find the right combination of relative and absolute probabilistic difference-making to characterize genuine probabilistic explanation. Thus the ‘elitists’ have emphasized the absolute size of the probability of the explanandum, and the ‘egalitarians’ the relative probability. In this paper, I first argue that this approach is misguided, because there is no such thing as the probability of an event. An event’s probability can be defined in different, equally valid ways, depending on the conceptual framework introduced. In the constructive part of the paper, I show that two distinct measures of probability, the ‘outcome-based probability’ and the ‘initial condition-based probability’, can be used to identify which types of event are explainable. Further, I argue that the principles of equivalence necessary to define the measures on these respective probabilities are what do the explanatory work, not the relative or absolute magnitude of the probabilities.

Márton Gömöri (ELTE, Hungary): "What is a physical “and”?"

ABSTRACT. This paper analyzes the very concept of statistical correlation. Correlation requires the specification of a statistical ensemble. It will be argued that the choice of the statistical ensemble contains an ambiguous element, which will be called the conjunctive relation between the “members” of the ensemble. An example of the specification of this relation is when one considers the simultaneous tosses of two coins when calculating the correlation between the outcomes. On the basis of the argued ambiguity of the conjunctive relation, a new local hidden variable model of the EPR-Bell correlations in quantum mechanics will be outlined.

 

Teddy Groves (University of Kent): "Accuracy arguments for probabilism in the context of Carnapian inductive logic"

ABSTRACT. Several recent philosophical arguments seek to show that states of belief must be representable by probability spaces in order to avoid being needlessly inaccurate. I consider whether such accuracy arguments can be applied to the project of developing Carnapian inductive logic by supporting the claim I call `probabilistic necessity’. I argue that they cannot, as the arguments in question depend on dubious assumptions about which measures of inaccuracy are legitimate. 

The Carnapian tradition in inductive logic attempts to replace informal inductive assumptions with inductive-logical adequacy criteria. One such adequacy criterion is called probabilism: `probabilistic necessity’ is a meta-inductive-logical claim according to which all useful adequacy criteria are at least as strong as probabilism.

I then discuss accuracy arguments in general and show how they can be used to support probabilistic necessity. 

I dispute three assumptions about measures of inaccuracy, namely sum-decomposability, strict propriety and continuity. All accuracy arguments that I am aware of make at least one of these assumptions. Sum-decomposability and continuity seem to rule out potentially plausible inaccuracy measures, while strict propriety is difficult to justify without first assuming that rational agents have probabilistic states of belief, making it unsuitable as part of an argument for probabilistic necessity.

In light of the difficulties I find with assuming that measures of inaccuracy have these properties, I conclude that there has not yet been a successful accuracy argument for probabilism. Consequently, such arguments cannot currently be used to support probabilistic necessity in inductive logic. However, I note that this situation may change in the future with the discovery of new formal features of legitimate inaccuracy measures, new reasons to make as-yet unjustified suppositions about legitimate inaccuracy measures, or more general accuracy arguments that do not require these suppositions.
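For readers unfamiliar with the strict propriety assumption discussed above, here is a small numerical sketch using the standard quadratic (Brier) inaccuracy measure on a single proposition; it illustrates the property itself and is not part of the paper's argument: an agent with credence q minimises her q-expected inaccuracy only by reporting p = q.

    # Numerical sketch of strict propriety for the quadratic (Brier) inaccuracy
    # measure on a single proposition: the q-expected inaccuracy of reporting
    # credence p, namely q*(1-p)**2 + (1-q)*p**2, is minimised exactly at p = q.

    def expected_brier(p, q):
        return q * (1 - p) ** 2 + (1 - q) * p ** 2

    q = 0.7
    grid = [i / 100 for i in range(101)]
    best_p = min(grid, key=lambda p: expected_brier(p, q))
    print(best_p)  # 0.7: the honest report minimises expected inaccuracy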

Eric Johannesson (Stockholm University): "Centered Worlds and Conditional Probabilities: A Challenge for Thirders"

ABSTRACT. It's far from obvious how to conditionalize in a centered world context. Guided only by what we know about conditionalization in ordinary (uncentered) contexts, it seems that the choice will have to be somewhat arbitrary. Therefore it's not clear what to say about Sleeping Beauty. There's disagreement between halfers and thirders as to whether Beauty's probability of heads should be 1/2 or 1/3 on the Monday awakening. Be that as it may, it seems clear at least to me that her probability of heads on Wednesday should be 1/2. Thus, I'm challenging thirders to provide a general theory of conditionalization that explains the pattern 1/2-1/3-1/2 (probability 1/2 on Sunday, 1/3 on Monday, and 1/2 on Wednesday). Arguably, such a theory would at least have to preserve the conditionalization relation that obtains in the ordinary context. Perhaps it should also satisfy what I call the principle of evidentiality: that conditional probabilities (with respect to some centered or ordinary proposition) should be equal to the unconditional probabilities that the agent would have if he had actually learned the proposition in question. (There are, however, counterexamples to this principle even in the ordinary case.) Although these two principles alone do not determine a unique theory, my tentative conclusion is that any reasonable theory of conditionalization in a centered world context will either yield 1/2-1/3-1/3 or 1/2-1/2-1/2. If I'm right, this should speak in favor of the 1/2 solution.
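To make the 1/2-1/3-1/2 pattern concrete, here is a rough Monte Carlo rendering of the standard counting behind it (an illustration only, not the paper's proposed theory of centered conditionalization): the proportion of heads among awakenings tends to 1/3, while the proportion of heads among Wednesdays tends to 1/2.

    # Rough Monte Carlo rendering of the counting behind the 1/2-1/3-1/2 pattern
    # (illustration only): heads leads to one awakening (Monday), tails to two
    # (Monday and Tuesday); Beauty is awake again on Wednesday either way.

    import random

    TRIALS = 100_000
    heads_awakenings = 0
    total_awakenings = 0
    heads_wednesdays = 0

    for _ in range(TRIALS):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
            heads_wednesdays += 1

    print(heads_awakenings / total_awakenings)  # ~1/3: heads, per awakening
    print(heads_wednesdays / TRIALS)            # ~1/2: heads, per Wednesday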

Amanda Macaskill (NYU): "Safe Scoring Rules"

ABSTRACT. Traditionally, the epistemology of partial belief has employed proper scoring rules such as the Brier score in order to determine how well an agent has performed in a given decision situation. In this paper I argue that standard proper scoring rules should be supplemented with a safety condition across Savage consequences when we are assessing certain decision situations. I show that adding the safety condition I construct to the standard Brier score solves several cases that the standard Brier score seems to get wrong, and allows us to track good decision methods more accurately. In section 1 of the paper I outline the standard Brier score: the scoring rule most often employed in the epistemology of partial belief. In section 2 I present the problem of epistemic luck for partial beliefs, and outline my safety Brier score as a solution to this problem. In section 3 I show that the safety Brier score is also able to resolve another set of problems for the standard Brier score: epistemic Jackson cases. In section 4 I discuss objections to the view, as well as further applications (such as to the swamping problem for partial belief). I conclude that in light of these benefits, the safety Brier score I outline can be adopted instead of the standard Brier score as a measure of how well an agent has performed in the relevant decision situations.
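For reference, the following is a minimal sketch of the standard Brier score over a partition, the baseline measure the abstract proposes to supplement with a safety condition; the credences and outcomes below are invented examples, not the paper's cases.

    # Minimal sketch of the standard Brier score over a partition: the sum of
    # squared distances between the agent's credences and the outcome's
    # indicator values; lower scores are better. Numbers are invented examples.

    def brier_score(credences, outcome):
        return sum((credences[cell] - (1.0 if cell == outcome else 0.0)) ** 2
                   for cell in credences)

    credences = {"rain": 0.7, "no rain": 0.3}
    print(brier_score(credences, "rain"))     # (0.7-1)^2 + (0.3-0)^2 = 0.18
    print(brier_score(credences, "no rain"))  # (0.7-0)^2 + (0.3-1)^2 = 0.98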

Rossella Marrano (Scuola Normale Superiore): "Degrees of Truth as Objective Probabilities"

ABSTRACT. The conceptual distinction between truth and belief is generally taken for granted. In a logical setting, this distinction is marked by assuming that, unlike degrees of belief, truth-values are compositional. This paper investigates the robustness of the truth-values vs. belief-values distinction when we relax the assumption that the underlying semantics is classically two-valued. In particular we will focus on Łukasiewicz's real-valued logic, which occupies a central position in the current research on many-valued logics. We will argue that the conceptual distinction resists under this generalised logical setting, provided that we can put forward an interpretation of degrees of truth as objective probabilities. This will emerge as a consequence of giving Łukasiewicz's real-valued semantics an ordinal foundation based on the relation "no less true than".
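As background, here is a small sketch of the standard real-valued Łukasiewicz connectives, which are compositional: the value of a compound sentence is a function of the values of its parts, whereas the probability of a conjunction is not fixed by the probabilities of its conjuncts. This illustrates the standard semantics only, not the paper's ordinal construction.

    # Standard real-valued Lukasiewicz connectives (background illustration).
    # These are compositional: the value of a compound sentence is a function of
    # the values of its parts, unlike degrees of belief, where P(A & B) is not
    # determined by P(A) and P(B) alone.

    def luk_not(a):
        return 1.0 - a

    def luk_and(a, b):       # strong conjunction (Lukasiewicz t-norm)
        return max(0.0, a + b - 1.0)

    def luk_or(a, b):        # strong disjunction
        return min(1.0, a + b)

    def luk_implies(a, b):
        return min(1.0, 1.0 - a + b)

    a, b = 0.8, 0.6
    print(luk_and(a, b))      # ~0.4
    print(luk_implies(a, b))  # ~0.8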

Danny November (The Hebrew University of Jerusalem): "Bertrand's Paradox Resurrected - the Problem of Probability Assignment"

ABSTRACT. One of the most common ways for assigning an a-priori probability to a random variable is using the Indifference Principle (IP). According to this principle equi-possible events are equi-probable. In all IP related paradoxes, including the famous Bertrand's paradox (BER), there is a random variable, which is assigned at least two different probabilities using IP. Since a random variable can have only one probability function, a paradox arises. 

In this paper, I claim that IP related paradoxes are in fact specific cases of a more general problem of a-priori probability assignment and that similar paradoxes appear each time a random variable is assigned two (or more) different probabilities. Moreover, and contrary to the common view that paradoxes such as BER are rare and even unique, I claim that such paradoxes can be created each and every time one assigns a probability to a random variable. In other words, in any probability assignment one can assign the random variable at least two different and equally mathematically-justified probabilities, resulting in a contradiction.

In order to support my claims, I first describe a general scheme for creating BER-like paradoxes, then analyze several of the more famous paradoxes in light of this scheme, and finally reexamine some of the proposed solutions to these paradoxes. I hope to show that none of these solutions are complete, since in fact one cannot solve a BER-like paradox (if at all) without appealing to "external" non-mathematical arguments. 
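To illustrate the kind of conflict the abstract generalises, here is a Monte Carlo sketch of the textbook Bertrand chord problem (the classical example, not the paper's general scheme): two equally natural "uniform" ways of picking a chord of the unit circle assign different probabilities to the event that the chord is longer than the side of the inscribed equilateral triangle.

    # Monte Carlo sketch of the classical Bertrand chord problem: two natural
    # "uniform" ways of choosing a chord of the unit circle give different
    # probabilities that the chord exceeds sqrt(3), the side of the inscribed
    # equilateral triangle.

    import math
    import random

    TRIALS = 100_000
    THRESHOLD = math.sqrt(3)

    def chord_from_random_endpoints():
        # Pick two independent uniform points on the circle; return chord length.
        t1 = random.uniform(0, 2 * math.pi)
        t2 = random.uniform(0, 2 * math.pi)
        return 2 * math.sin(abs(t1 - t2) / 2)

    def chord_from_random_midpoint_on_radius():
        # Pick the chord's midpoint uniformly along a fixed radius; return length.
        d = random.uniform(0, 1)
        return 2 * math.sqrt(1 - d ** 2)

    p1 = sum(chord_from_random_endpoints() > THRESHOLD for _ in range(TRIALS)) / TRIALS
    p2 = sum(chord_from_random_midpoint_on_radius() > THRESHOLD for _ in range(TRIALS)) / TRIALS
    print(p1)  # ~1/3
    print(p2)  # ~1/2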
