Abstracts of talks

Luc Bovens

Beta Functions and Decision-Making on the Basis of Uncertain Evidence

In risk analysis, the precautionary principle is held up as a counterweight to expected-utility maximization. But clearly, a strict application of the adage that it is better to be safe than sorry is stifling. We do not make policy by focusing strictly on the worst outcomes and choosing the policy that yields the best worst outcome. Although the probability of worst outcomes is not known with precision, we do make estimates of the risks and decide to accept certain risky prospects and not others. For Ellsberg, expected-utility maximization for decision-making under risk and the maximin solution for decision-making under uncertainty are two poles of a continuum. Between these poles we have varying degrees of confidence in our probability assessment. Ellsberg modeled this continuum by introducing a measure rho of our degree of confidence in our probability assessment: he maximizes the sum of the expected utility, weighted by rho, and the utility of the worst outcome, weighted by 1 - rho. I argue that the same results can be obtained by means of expected-utility maximization within a strictly Bayesian framework. We represent our degree of confidence in our probability assessment by means of a Beta density function. By letting our probability assessment be the value of p for which the density function reaches its maximum, and by calculating the expected utility by means of the upper bound of a confidence interval, expected-utility maximization enjoins us to be the more cautious, the lower our degree of confidence in our probability assessment. My approach has the following advantages: (i) it respects the intuition that we are more inclined to take account of the worst-case scenario when our degree of confidence in our probability assessment is low; (ii) it avoids the dogmatic stand of the precautionary principle; (iii) it does not bring in any machinery outside the Bayesian framework of expected-utility maximization.
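The decision rule described here can be made concrete in a short computation. The following Python sketch is illustrative only: it reads the 'upper bound of a confidence interval' as an upper quantile of the Beta density, and the parameters, utilities and coverage level are invented for the example rather than taken from Bovens's paper.

    # A minimal sketch of the Beta-based decision rule described above.
    # Parameters, utilities and the 95% coverage level are illustrative
    # assumptions, not Bovens's own numbers.
    from scipy import stats

    def cautious_expected_utility(alpha, beta, u_bad, u_good, coverage=0.95):
        # Our assessment of the probability p of the bad outcome is a
        # Beta(alpha, beta) density; its mode is the point estimate.
        # Instead of plugging in the mode, plug in the upper bound of a
        # credible interval: the flatter the density (the lower our
        # confidence), the higher that bound, and the more weight the
        # worst outcome receives.
        p_upper = stats.beta.ppf(coverage, alpha, beta)
        return p_upper * u_bad + (1.0 - p_upper) * u_good

    # Two densities with the same mode (0.1) but different confidence:
    print(cautious_expected_utility(10.0, 82.0, u_bad=-100, u_good=10))  # peaked
    print(cautious_expected_utility(1.9, 9.1, u_bad=-100, u_good=10))    # flat

The flat density yields a much lower expected utility for the same point estimate of p, which is exactly the greater caution under lower confidence that the abstract describes.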

Mike Redmayne

The Law of Evidence

Lawyers might be thought able to offer special insights into evidence. After all, we have a law of evidence, developed over centuries. Not many disciplines can boast a detailed set of rules for dealing with evidence! But in fact, a little familiarity suggests that the law of evidence offers limited insights into what evidence is or how we should reason about it. The trend in evidence law has been towards principles of 'free proof', which leave the assessment of evidence to the fact-finder (the judge or the jury). Modern rules of evidence tend to be rules of policy rather than rules of inference, for example rules excluding information which might bias the fact-finder. Nevertheless, lawyers' experience might give us some insights into concepts of evidence. One such insight is that Achinstein's account of evidence as a threshold concept is problematic, at least with respect to law. This alerts us to the importance of institutional context in the assessment of evidence. Another possibility is that close scrutiny of the rules of evidence, and of what judges and lawyers say about evidence, might tell us something about relatively unreflective, 'lay' understandings of evidence. Evidence law might provide us with useful information in a more roundabout manner, too. Psychologists have spent considerable effort studying how jurors reason about evidence. This research may point to the importance of narrative in the analysis of evidence in court. All this prompts broader questions: how universal are our concepts of evidence and evidential reasoning? Might Achinstein be right about our concept of evidence in science but wrong about our concept of evidence in law? Are there fundamental differences between narrative-based contexts (such as the legal trial) and non-narrative contexts, such as experimentation in the sciences?

Philip Dawid

Statistics and the Law

Recent cases involving multiple infant deaths and DNA profiling identification have highlighted some of the problematic issues that can arise when statistical evidence is introduced into legal proceedings. It might appear that the concerns of Statistics and those of Law have little common ground, but in fact both disciplines address the same fundamental task: the drawing out of sound inferences from evidence. I will describe the logic of probabilistic reasoning and its application to cases at law, and show how its all too frequent neglect or misapplication has led to serious errors and miscarriages of justice. Both Statistics and Law are faced with the problem of structuring and making sense of mixed masses of evidence. The modern technology of "Probabilistic Expert Systems" can be seen as an extension of the century-old "Wigmore chart" method, used by lawyers to organise the many items of evidence in a case and express the many kinds of relationship between them. This technology is now being used to provide a correct and efficient way of taking account of whatever limited evidence may be at hand, a task that could otherwise be impossible. An important area of application is the interpretation of DNA profiles taken from relatives when that of the suspect (in a criminal case) or putative father (in a paternity case) is unavailable. Finally I shall discuss the wider relevance of the use of formal methods of reasoning about evidence, in the context of an inter-disciplinary programme on "Evidence, Inference and Enquiry".
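One concrete instance of the 'serious errors' mentioned above is the prosecutor's fallacy: treating a tiny random-match probability as if it were the probability of innocence. A minimal, hedged illustration in Python, with all numbers invented for the example:

    # Posterior probability of guilt given a DNA match, via Bayes'
    # theorem. All numbers are invented for illustration.
    match_prob = 1e-6          # P(match | innocent): random-match probability
    prior_guilt = 1.0 / 10_000 # P(guilt) before the DNA evidence, e.g. one
                               # person drawn from a pool of 10,000 possibles
    prior_innocent = 1.0 - prior_guilt

    # Assume P(match | guilty) = 1: the true source always matches.
    posterior_guilt = prior_guilt / (prior_guilt + match_prob * prior_innocent)
    print(f"P(guilt | match) = {posterior_guilt:.4f}")   # about 0.99

The fallacy would report 1 - 10^-6 as the probability of guilt; the correct posterior (about 0.99 here) depends crucially on the prior, that is, on the rest of the evidence in the case.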

John Worrall

Evidence in Medicine

The almost universally held view in medicine is that if you want really 'valid', 'scientific' evidence for the efficacy of any therapy (or indeed, in principle at least, for any etiological claim such as 'smoking causes lung cancer'), you should perform an RCT (randomised controlled trial). Practical and ethical concerns may sometimes force you to settle for other kinds of evidence, but this is always, at best, second best - RCTs provide, as is so often said, the 'gold standard'. When looked at from a more fundamental perspective concerning what constitutes good evidence, these claims about RCTs are, I argue, often unfounded and misguided. I highlight a number of difficulties (epistemological, not merely practical) with the RCT methodology and advocate in particular a much more positive attitude toward (properly conducted) 'historically controlled trials' than is usual within medicine today.

Staffan Mueller-Wille

Evidence in Historical Perspective

What counts as evidence varies historically and contextually. I will discuss a philosophical argument brought forward by Ludwik Fleck that compels us to think so. In a paper published in 1936 under the title "The problem of epistemology", Fleck pointed out that forms of evidence alien to us must be treated and understood as empirical phenomena that cannot be discarded as irrational in themselves. Fleck called for a descriptive, rather than normative, epistemology.
The model of science he developed was, as I will argue, nevertheless far from being as relativistic as later interpreters, most famously Thomas S. Kuhn, have made it appear. For Fleck, there was no such thing as "the scientific mind". Knowledge production necessitates communication among individuals, and it is communicative processes that bring about often unprecedented transformations in science. What counts as evidence at a given place and time therefore depends on such communicative processes. Fleck's descriptive epistemology, which shows some surprising affinities with American pragmatism, not only offers a way out of the impasses of both rationalism and relativism; it also brings philosophy of science into contact with the political and ethical questions raised by the rapid advances of science during the twentieth century.

Lenny Smith

Evidence and Scientific Simulation Models: Known Unknowns in Weather Forecasts and Climate Modelling

How can we most usefully interpret an ensemble of model simulations of a physical system as evidence of likely future behaviour? This question will be explored in two distinct settings which, between them, cover much of scientific prediction of dynamic systems: the "weather scenario", where the quality of our model simulations is fairly well known (we see them fail regularly), and the "climate scenario", where, by the nature of the question asked, the evaluation of our models can only be done "in-sample". I will argue that, given merely the known unknowns, presenting information extracted from weather-like models as probability forecasts is unlikely to be helpful. And it is unclear how even to design experiments in the climate-modelling scenario so that they might be coherently interpreted in empirically relevant, probabilistic terms. The role of the unknown unknowns, both in science and in Bayesian statistics, is also touched upon, inasmuch as it significantly affects the weight given to evidence from scientific simulation models.
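A toy version of the "weather scenario" makes the issue concrete. In the Python sketch below, the Lorenz-63 system stands in for a weather model, and a probability forecast is read off naively as the fraction of ensemble members satisfying an event; the system, perturbation size and event are all illustrative assumptions, not Smith's own experimental setup.

    # Illustrative ensemble forecast on the Lorenz-63 system, a toy
    # stand-in for a weather model; all parameters are invented.
    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One Euler step of the Lorenz-63 equations.
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    rng = np.random.default_rng(0)
    truth = np.array([1.0, 1.0, 20.0])

    # Ensemble: perturb the imperfectly observed initial condition.
    ensemble = truth + 0.01 * rng.standard_normal((100, 3))
    for _ in range(500):                    # integrate forward in time
        ensemble = np.array([lorenz_step(m) for m in ensemble])

    # Naive 'probability forecast': fraction of members with x > 0.
    print("P(x > 0) =", (ensemble[:, 0] > 0.0).mean())

Note what the sketch leaves out, and what the abstract is about: the ensemble spread reflects only the sampled initial-condition uncertainty under a perfect model, so reading the output frequency as a probability ignores model error entirely.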

Relevant Background:

Stainforth, D.A., et al. (2005). Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433 (7024): 403-406.

Judd, K. & Smith, L.A. (2004). Indistinguishable states II: The imperfect model scenario. Physica D 196 (3-4): 224-242.

Smith, L.A., et al. (1999). Uncertainty dynamics and predictability in chaotic systems. Quarterly Journal of the Royal Meteorological Society 125 (560): 2855-2886.

Marcel Boumans

When Evidence is Not in the Mean


When observing or measuring phenomena, errors are inevitable; one can only aspire to reduce them as far as possible. An obvious strategy for achieving this reduction is to use more precise instruments. Another was to develop a theory of these errors that could indicate how to take them into account. One of the great achievements of statistics at the beginning of the 19th century was such a theory of error. It told practitioners that the best thing they could do was to take the arithmetic mean of their observations: this average would give them the most accurate estimate of the value they were seeking. Soon after its invention, the method made a triumphal march across the sciences. Not every science, however, embraced it, for the method only works well when the various observations are made under similar circumstances and when there are very many of them. Neither condition held in, for example, meteorology and actuarial science, the two sciences discussed in this paper.
In meteorology, each measurement came from a different instrument whose reliability was unclear. Buys Ballot - the leading Dutch meteorologist of his day - had to develop a calculus of observations based not on the method of means but on the method of residues in order to turn unreliable observations into accurate estimates.
In actuarial science, measurements are not produced by an instrument but result from counting. Still, errors are inevitable. Here we do not have many observations of a phenomenon at a certain point in time, made under various circumstances (similar or not); we have a time series that gives just one observation for each point in time. To find the most accurate estimate of the variable at a certain moment, the different observations should not be weighted equally: a simple arithmetic average would throw away the useful information carried by the (albeit erroneous) observation made at that very moment, as against observations made at other moments. Landré - the leading Dutch actuary of the time - designed a requirement for a weighted average that gives the greatest emphasis to the concurrent observation of the variable being measured.
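The contrast can be made concrete with a small computation. In the Python sketch below, the weights decay with temporal distance from the moment of interest, so the concurrent observation dominates; this decay scheme is an invented stand-in, since the abstract does not specify the weighting Landré actually required.

    # Simple mean versus a time-weighted estimate; the 1/(1 + distance)
    # weighting is an illustrative stand-in for Landré's requirement.
    import numpy as np

    observations = np.array([4.8, 5.1, 5.6, 6.0, 6.7])  # one value per year
    times = np.arange(len(observations))
    t_target = 4                               # estimate the latest moment

    # Arithmetic mean: every observation counts equally.
    simple = observations.mean()

    # Weighted estimate: weight falls off with distance from t_target,
    # so the concurrent (though error-laden) observation dominates.
    weights = 1.0 / (1.0 + np.abs(times - t_target))
    weighted = (weights * observations).sum() / weights.sum()

    print(f"simple mean = {simple:.2f}, weighted = {weighted:.2f}")
    # simple mean = 5.64, weighted = 6.04: the weighted estimate is
    # pulled towards the 6.7 actually observed at t_target.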

Susan Haack

Defending Science: the Question of Evidence

Dr. Haack will illustrate her views on scientific evidence as presented in her book 'Defending Science - Within Reason' (Prometheus Books, New York, 2003), and she will take questions from discussants and audience.

The honorific use of "scientific" notwithstanding, the evidence with respect to scientific claims and theories is like the evidence with respect to empirical claims generally -- only more so: its experiential parts are more dependent on instruments, for example, and the internal connections among reasons denser and more complex. It is, moreover, almost always a shared resource, pooled within and between generations of scientists.
We often speak of a theory's being more, or less, warranted at a time, but this has to be understood as shorthand. Warrant depends on quality of evidence; but we must begin with an account of the warrant of a claim for a person at a time; proceed to the warrant of a claim for a group of people at a time; and finally construct an account of the target concept, the degree of warrant of a claim at a time. We need, moreover, to articulate the difference between experiential evidence and reasons; the way in which experience contributes to warrant; and how experiential evidence and reasons work together -- like clues and completed entries in a crossword. This in turn requires an account of the multi-dimensional determinants of evidential quality: supportiveness, the independent security of reasons, and comprehensiveness.
This conception of evidence is worldly, not formal, since it insists on the relevance of scientists' interactions with the world. And it is social; but not socially-relativist. It thus lies between the narrowly logical conceptions of the Old Deferentialism, and the denial of objective evidential standards characteristic of the New Cynicism.

Peter Achinstein

Concepts of Evidence


Discussants: John Worrall (LSE); Hasok Chang (UCL)

Prof. Achinstein will illustrate his views on evidence as presented in his The Book of Evidence (Oxford University Press, Oxford/New York, 2001), and will take questions from discussants and audience.
