In her second post in this series, Anna Mahtani explores two further points of contact between philosophy of language and decision theory: indexicals and vagueness.

In the previous post I introduced decision theory, and looked at one way that it connects with philosophy of language. In this post I’m looking at two other areas where we can see this connection: indexicals and vagueness.

Indexicals

The meaning of an utterance can sometimes depend on the context of utterance. Sentences containing indexicals – such as “I”, “here” and “now” – are good examples of this. If I say “I am hungry” and you say “I am hungry”, then at least in some sense we have said different things, for it may be that what I have said is true and what you have said is false.

Do we assign credences and utilities to indexical claims like these; that is, do we believe and value them to some degree? Plausibly we do: I currently have a high credence that I am hungry and I assign a low utility to this situation! Furthermore, it seems as though this object of my credence and utility function may be essentially indexical. For we may not be able to replace the indexical content with a non-indexical equivalent content without changing the credence and utility assigned. For example, while I attach a high credence and low utility to my being hungry, I may attach an intermediate credence and utility to Anna Mahtani’s being hungry, if for example I don’t know who Anna Mahtani is, and have no knowledge or interest in her hunger levels.

 

“The meaning of an utterance can sometimes depend on the context of utterance.”
Image credit: Sarah Page / CC BY 2.0

 

As I explained in my previous post, decision theorists often model decision problems by setting out the agent’s relevant credences and utilities. And for some decision problems, it looks like an agent’s credences and utilities in indexical claims may be relevant. For example, suppose that I am trying to decide whether to hurry to my train platform, or stop and buy a coffee on the way. One important factor in this decision is whether the train is there now. We can assume that in this example I don’t know with certainty what the time is, and so (let’s suppose) the claim that the train is there now does not correspond exactly to any non-indexical claim. If it is there now, then I want to go straight to the platform and get onto it before it leaves; if it is not there now, then I’d prefer to buy a coffee and get to the platform in a minute or two. If I don’t know whether it is there now or not, then this is the sort of uncertainty that we would expect to be represented in the model. But it is not obvious that the train’s being there now is a “state” or “event” – at least not if states and events correspond to sets of possible worlds.
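To make the structure of this example concrete, here is a minimal sketch (in Python) of how the decision might be modelled once we allow the indexical claim “the train is there now” to play the role of a state. The credence and the utility numbers are purely hypothetical, chosen only to illustrate the expected-utility calculation.

```python
# A minimal sketch of the train-or-coffee decision, with hypothetical numbers.
# The credence and the utilities below are illustrative only.

credence_train_there_now = 0.4  # my degree of belief that the train is there now

# utilities[act][state]: how much I value each act-state outcome
utilities = {
    "rush to platform": {"train there now": 10, "train not there now": 2},
    "buy a coffee":     {"train there now": -5, "train not there now": 6},
}

def expected_utility(act, p):
    """Expected utility of an act, given credence p that the train is there now."""
    return (p * utilities[act]["train there now"]
            + (1 - p) * utilities[act]["train not there now"])

for act in utilities:
    print(act, expected_utility(act, credence_train_there_now))
# Standard decision theory recommends whichever act has the higher expected utility.
```

The point of the sketch is that the calculation needs some object for the credence of 0.4 to attach to; the question raised above is whether that object can be a set of possible worlds.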

Decision theorists and formal epistemologists have made various moves to accommodate credences in indexical claims. Many theorists now reject the traditional way of modelling the objects of belief with sets of possible worlds, in favour of modelling them with sets of centred possible worlds. A centred possible world is a possible world, plus an individual and a time. Thus a claim like “the train is on the platform now” can be represented by the set of all centred worlds at which the train is on the platform at the time on which the world is centred. Theorists have also proposed new rules for how credences should be updated in the light of new evidence. Traditionally, decision theorists and formal epistemologists have claimed that rational agents update in the light of new evidence “by conditionalization”, but this doesn’t seem to work where credences involve indexicals.
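For reference, the traditional rule can be stated as follows. This is a standard formulation rather than anything specific to this post: on learning evidence E (and nothing stronger), your new credence in any claim A should equal your old credence in A conditional on E.

```latex
% Conditionalization (standard formulation):
P_{\mathrm{new}}(A) \;=\; P_{\mathrm{old}}(A \mid E) \;=\; \frac{P_{\mathrm{old}}(A \wedge E)}{P_{\mathrm{old}}(E)}
```

One familiar way of putting the difficulty with indexicals is that conditionalization can never move a credence away from 1; yet an agent who is now rationally certain that it is noon should, an hour later, no longer be certain of that indexical claim.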

Here there is an overlap with work in the philosophy of language exploring the nature of indexicals and demonstratives. In our seminar we looked at two papers that lie in this intersection: Michael Titelbaum’s “De Se Epistemology” and Ofra Magidor’s “The Myth of the De Se”.

Vagueness

On the standard decision theorist’s picture, a rational person assigns precise credences and utilities to each proposition that she entertains. But this seems implausible. Plausibly you have a precise credence (of ½) in the claim that the next time I toss a fair coin, it will land heads. But what is your credence in the claim that I have a fake coin in my pocket? I’m guessing that no particular number springs to mind. So the standard decision theorist – who maintains that you do have some precise credence in this claim – needs to explain why you don’t know what it is. Another thing that this standard decision theorist needs to explain is what it is that makes it the case that your credence is any particular value – e.g. why might it be 0.352 rather than 0.351?

These problems parallel the challenges levelled at the epistemic view of vagueness. A classic example of a vague term is “bald”. Some people are bald, and some people are not, but where does the boundary lie? Is there some number n such that a person with n hairs is bald but a person with n+1 hairs is not bald? Intuitively there is no such n. But this intuitive thought leads to paradox, for we can use it to argue from the true claim that a person with 0 hairs is bald, to the false claim that a person with 1,000,000 hairs is bald. The epistemic theorist responds by claiming that vague terms like “bald” do have sharp boundaries – it’s just that we can’t know where these sharp boundaries lie. Thus the epistemicist faces analogues of the problems raised above for the standard decision theorist: why can’t we know where these sharp boundaries lie? And what makes them lie in any particular place?[1]
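The paradoxical argument can be regimented as follows, where Bald(n) abbreviates “a person with exactly n hairs is bald” (this is one standard way of setting out a sorites, not anything peculiar to the epistemicist):

```latex
\text{(1)}\; \mathit{Bald}(0) \\
\text{(2)}\; \forall n \, \big( \mathit{Bald}(n) \rightarrow \mathit{Bald}(n+1) \big) \\
\text{(3)}\; \therefore\ \mathit{Bald}(1{,}000{,}000) \qquad \text{(from (1) and repeated applications of (2))}
```

Premise (2) is just the intuitive thought that there is no sharp cut-off n; the epistemicist blocks the argument by denying (2), and so owes us the two explanations mentioned above.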

One response to vagueness is a theory called “supervaluationism”. According to this theory, there are many acceptable ways of making our language precise, and the truth-value of a sentence containing a vague predicate depends on the truth-value of the sentence under each precisification. The supervaluationist faces the problem of “higher-order vagueness”: just as intuitively the term “bald” does not draw a sharp boundary, so intuitively there is no sharp boundary around this set of acceptable precisifications.
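Roughly, and abstracting from differences between formulations, the supervaluationist’s central idea can be put like this:

```latex
% Supertruth and superfalsity, roughly stated:
S \text{ is true} \iff S \text{ is true on every acceptable precisification} \\
S \text{ is false} \iff S \text{ is false on every acceptable precisification}
```

Sentences that come out true on some acceptable precisifications and false on others are neither true nor false; and the higher-order worry is then that the set of acceptable precisifications itself appears to have no sharp boundary.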

Many decision theorists have responded along the same lines as the supervaluationist, by adopting “imprecise probability theory”. According to this theory, instead of modelling an agent with a single credence function and a single utility function, we need a set of such pairs of functions. Then whether you are rationally permitted to carry out an action depends on the verdict of each such utility-credence function pair. The imprecise probabilist faces various challenges, including an analogue of the problem of higher-order vagueness.
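As an illustration, here is a minimal Python sketch of one way the imprecise picture can deliver verdicts about actions. The particular rule below, on which an act is permissible unless some alternative has higher expected utility on every member of the set, is just one of several rules discussed in the literature, and the numbers are made up.

```python
# A toy "representor": a set of credence-utility pairs standing in for the
# agent's imprecise state of mind. The credences and utilities are hypothetical.

acts = ["accept bet", "decline bet"]

utilities = {  # utilities[act][proposition is true?]
    "accept bet":  {True: 10, False: -8},
    "decline bet": {True: 0,  False: 0},
}

representor = [
    {"credence": 0.3, "utilities": utilities},
    {"credence": 0.7, "utilities": utilities},
]

def expected_utility(act, member):
    p, u = member["credence"], member["utilities"][act]
    return p * u[True] + (1 - p) * u[False]

def permissible(act):
    """Permissible iff no rival act does better on every member of the representor."""
    return not any(
        all(expected_utility(rival, m) > expected_utility(act, m) for m in representor)
        for rival in acts if rival != act
    )

for act in acts:
    print(act, "permissible:", permissible(act))
```

With these made-up numbers, accepting the bet looks good on one member of the set and bad on the other, so both acts come out permissible; with a single precise credence function, only one of them typically would.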

In our seminars we read Susanna Rinard’s paper “Imprecise Probability and Higher-Order Vagueness”, and Robbie Williams’ “Decision Making Under Indeterminacy”, both of which grapple with some of the problems at the intersection of these topics.

You may also be interested in this paper by our PhD student, Aron Vallinder, which discusses a more general problem for imprecise probabilism.

 

Each of these areas could be investigated in more depth, and there are still further topics to consider. For example, we will shortly be reading Daniel Rothschild’s “Game Theory and Scalar Implicatures”, which brings yet another area of philosophy of language into contact with decision theory. We look forward to investigating these connections further at the workshop in May and beyond.

 

By Anna Mahtani

Anna Mahtani is an Assistant Professor in LSE’s Department of Philosophy, Logic and Scientific Method. Anna’s research interests include formal epistemology, decision theory and the philosophy of language. She is currently working on projects concerning intensional contexts, vagueness, and probabilities. A selection of her published papers is available via the LSE Library.

 

Notes

[1] – I discuss vagueness in more detail in a previous blog post.

 

Featured Image: Hanoos, via Wikimedia Commons / CC-BY-SA 3.0 (cropped, transformed and filtered from original)