Current Events

Summer Term

28 April

This seminar has been cancelled.

12 May

J. Brian Pitts, Cambridge

Real Change Happens in Hamiltonian General Relativity

In General Relativity in Hamiltonian form, change has seemed to be missing, because the Hamiltonian is a sum of first-class constraints and thus supposedly generates gauge transformations. The gauge generator, a specially *tuned sum* of first-class constraints, facilitates seeing that a solitary first-class constraint generates not a gauge transformation, but a bad physical change in electromagnetism or General Relativity, spoiling the constraints (Gauss's law or the Gauss-Codazzi relations) in terms of the physically relevant velocities. Insistence on Hamiltonian-Lagrangian equivalence holds the key, including a reformed Bergmann-inspired notion of observables. Taking objective change to be ineliminable time dependence, there is change in GR without matter just in case there is no time-like Killing vector. The Hamiltonian formalism agrees.
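
For orientation, the tuned sum can be written out explicitly in the simplest case. The following is a standard textbook sketch for free Maxwell theory, not taken from the talk itself:

\[
  G[\varepsilon] = \int d^3x \left( \dot{\varepsilon}\,\pi^0 - \varepsilon\,\partial_i \pi^i \right),
  \qquad
  \delta A_\mu = \{ A_\mu , G[\varepsilon] \} = \partial_\mu \varepsilon ,
\]

where \(\pi^0 \approx 0\) is the primary and \(\partial_i \pi^i \approx 0\) the secondary (Gauss) constraint. Acting instead with a solitary first-class constraint, say \(\int d^3x\, \varepsilon\,\pi^0\), shifts \(A_0\) alone while leaving \(A_i\) and \(\dot{A}_i\) fixed, which changes the electric field as expressed in terms of the velocities and thus spoils Gauss's law in just the sense the abstract describes.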

19 May

Owen Maroney, Oxford

How epistemic can a quantum state be?

The “psi-epistemic” view is that the quantum state does not represent a state of the world, but a state of knowledge about the world. It draws its motivation, in part, from the observation of qualitative similarities between the characteristic properties of non-orthogonal quantum wave functions and those of overlapping classical probability distributions.
It might be suggested that this gives a natural explanation for these properties, which seem puzzling on the alternative “psi-ontic” view. However, looking at two such similarities, quantum state overlap and quantum state discrimination, it will be shown that the psi-epistemic view cannot account for the values exhibited by quantum theory, and must instead rely on the same kind of explanations as the “psi-ontic” view.
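
For concreteness (a standard result, not quoted from the abstract): for two equiprobable pure states, the optimal single-measurement discrimination probability is fixed entirely by their overlap,

\[
  P_{\mathrm{succ}} = \tfrac{1}{2}\left( 1 + \sqrt{1 - |\langle \psi | \phi \rangle|^2} \right),
\]

so a psi-epistemic model must recover this value from the classical overlap of the corresponding epistemic probability distributions.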

23 June

Luke Glynn, UCL

Unsharp Best System Chances

Much recent philosophical attention has been devoted to variants on the Best System Analysis of laws and chance. In particular, philosophers have been interested in the prospects of such Best System Analyses (BSAs) for yielding *high-level* laws and chances. Nevertheless, a foundational worry about BSAs lurks: there do not appear to be uniquely appropriate measures of the degree to which a system exhibits theoretical virtues, such as simplicity and strength. Nor does there appear to be a uniquely correct exchange rate at which the theoretical virtues of simplicity, strength, and likelihood (or *fit*) trade off against one another in the determination of a best system. Moreover, it may be that there is no *robustly* best system: no system that comes out best under *any* reasonable measures of the theoretical virtues and exchange rate between them. This worry has been noted by several philosophers, with some arguing that there is indeed plausibly a set of tied-for-best systems for our world (specifically, a set of very good systems, but no robustly *best* system). Some have even argued that this entails that there are no Best System laws or chances in our world. I argue that, while it *is* plausible that there is a set of tied-for-best systems for our world, it doesn't follow from this that there are no Best System chances. (As I will argue, the situation with regard to laws is more complex.) Rather, it follows that (some of) the Best System chances for our world are *unsharp*.

Thursday, 26 June

Katherine Brading, Notre Dame

Absolute, true and mathematical time in Newton’s Principia

In the famous scholium at the beginning of Newton’s Principia, Newton discusses time, space, place and motion, making use of three distinctions: absolute and relative; true and apparent; mathematical and common. The literature on the scholium has focused primarily on absolute versus relative motion, and on absolute space, with comparatively little discussion of time, and nothing that I know of addressing why Newton draws this three-fold set of distinctions. I discuss the three distinctions for the case of time, arguing that all three are important for the project of the Principia, and that all three become subject to empirical investigation through their relationship to that project. I conclude by explaining the significance of this for the philosophy of time.


Lent Term

20 January

Richard Pettigrew, Bristol

Accuracy, Risk, and the Principle of Indifference

In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? The Principle of Indifference gives a very restrictive answer. It demands that an agent with no evidence divide her credences equally over all possibilities. That is, she ought to adopt the uniform distribution. In this paper, I offer a novel argument for the Principle of Indifference. I call it the Argument from Accuracy. It shares much in common with Jim Joyce's “nonpragmatic vindication of probabilism” (Joyce 1998). Joyce showed that, for a broad range of accuracy measures, if an agent's credences do not satisfy the axioms of probability, then there are credences that do satisfy those axioms that are guaranteed to be more accurate. In my argument, I show that if an agent's credences do not satisfy the Principle of Indifference, then, for a similarly broad range of accuracy measures, they risk greater inaccuracy than they need to.
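
A toy version of the risk idea (an illustration assuming the Brier score, not the paper's own presentation): with two possibilities and credence \(c\) in the first, the inaccuracy at each world is

\[
  I(c, w_1) = (1-c)^2 + (1-c)^2 = 2(1-c)^2, \qquad
  I(c, w_2) = c^2 + c^2 = 2c^2,
\]

so the worst-case inaccuracy \(\max\{2(1-c)^2,\, 2c^2\}\) is uniquely minimised at \(c = 1/2\), the uniform distribution.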

3 February

Bryan W. Roberts, LSE

Three Merry Roads to T-Violation

This talk aims to give a general perspective on how the fundamental laws of nature can distinguish between the past and the future, or be T-violating. I argue that, in terms of basic analytic arguments, there are really just three approaches currently being explored. I show how each is characterised by a symmetry principle, which provides a template for detecting T-violating laws even without knowing the laws of physics themselves. Each approach is illustrated with an example, and the prospects for each are considered in extensions of particle physics beyond the standard model.
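To give the flavour of such a symmetry principle (an illustrative gloss, not the talk's own statement): on Kabir's principle, T-violation is witnessed by a difference between the probability of a process and that of its time-reverse, as in neutral kaon oscillation:

\[
  P(K^0 \to \bar{K}^0) \neq P(\bar{K}^0 \to K^0) \;\Longrightarrow\; \text{T-violation}.
\]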

17 February

James Ladyman, Bristol

What are Weak Values?

This paper is about the philosophical interpretation and implications of weak values in quantum mechanics. In particular, we present an argument for a subtle kind of retrocausality (which we define as the causal dependence of earlier events on later events), based on the probabilistic structure of the experiments used to determine weak values, which we explain. We consider what type of property weak values are, and explain some of their unusual features and how these relate to the retrocausality they exhibit. We also briefly consider the merits of the two-state vector approach to weak values, and explore how weak values should be understood in the context of Everettian quantum mechanics.
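
For reference, the standard definition (not specific to this paper): given a pre-selected state \(|\psi\rangle\) and a post-selected state \(|\phi\rangle\), the weak value of an observable \(\hat{A}\) is

\[
  A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle},
\]

which may be complex or lie outside the spectrum of \(\hat{A}\); its dependence on the later post-selection is what invites the retrocausal reading.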

17 March

Hans Halvorson, Princeton

Explanation via surplus structure

Our attempts to model physical systems seem to be cursed by the problem of surplus structure: our mathematical representations of such systems contain structure which apparently has no analogue in the system under study. When interpreting our theories, then, we invoke some notion of "physical equivalence" of models in order to wash out this surplus structure. At least that's how the story is typically told. I will argue, in contrast, that surplus structure can itself be explanatory. For concreteness, I will focus on the case of surplus structure in quantum field theory. I will show that explanations in QFT trade on the fact that there are multiple "representations" that are nonetheless mathematically isomorphic.


Michaelmas Term

7 October

Oswaldo Zapata, LSE

Is there still any hope for string theory?

In the first part of the talk we will cover how string theory is related to supersymmetry, and what we can learn from the experiments at the LHC. We will also touch upon how supersymmetry extends our notion of the standard relativistic four-dimensional space-time. A short discussion of supergravity, that is, local supersymmetry, follows. We will see that supergravity is in fact a special limit of superstring theory. Next, we will explore the string theory proposal stating that the universe is a sort of hologram. According to this idea, the universe we live in is in one-to-one correspondence with a lower-dimensional world where all the information of our higher-dimensional universe is stored. Some phenomenological consequences of this correspondence, such as the understanding and design of new materials with technological applications, will be discussed at the end.

21 October

Joseph Henson, Imperial College London

Locality Reinstated?

The assumptions of Bell's theorem were meant by its author to characterise a lack of any physical superluminal influences. The “quantum non-locality” he discovered does violence not only to our intuitions, but arguably also to any thoroughgoing attempt to apply causal explanation in QM.

It is natural to wonder if there is a better way to characterise locality (= lack of superluminal influence) that is consistent with QM, thereby resolving an apparent contradiction by discarding some conceptual “excess baggage.” However, in order to do this we must be careful to preserve what is essential in the idea of locality. In this talk, I will describe a particular view of Bell's condition, and point out a (hopefully uncontroversial) list of conditions that any reinstated “locality” should obey. I will then apply these conditions to a number of candidates in the literature, including some treatments of locality in many-worlds QM, Howard's non-separability and “super-determinism,” and find them to be violated. Finally I will speculate about the extent to which a useful concept of locality *can* be preserved in QM.
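
Bell's condition, for reference (the standard formulation, added here for readers): given a complete specification \(\lambda\) of the state in the joint causal past, a locally causal theory requires the outcome probabilities for spacelike-separated measurement settings \(a, b\) to factorise,

\[
  P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda),
\]

a condition whose consequences, the Bell inequalities, quantum mechanics violates.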

4 November

Samuel Fletcher, Logic and Philosophy of Science, University of California, Irvine

On the Reduction of General Relativity to Newtonian Gravitation

Accounts of the reduction of general relativity (GR) to Newtonian gravitation (NG) usually take one of two approaches. One considers the limit as the speed of light c → ∞, while the other focuses on the limits of formulae (e.g., three-momentum) in the low-velocity regime, where v/c ≈ 0. Although the first approach treats the reduction of relativistic spacetimes globally, many have argued that ‘c → ∞’ can at best be interpreted counterfactually, which is of limited value in explaining the past empirical success of NG. The second, on the other hand, while more applicable to explaining this success, only treats a small fragment of GR. Further, it usually applies only locally, hence is unable to account for the reduction of global structure. Building on work by Ehlers, I propose a different account of the reduction relation that offers the global applicability of the c → ∞ limit while maintaining the explanatory utility of the v/c ≈ 0 approximation. In doing so, I highlight the role that a topology on the collection of all spacetimes plays in defining the relation, and how the choice of topology corresponds with broader or narrower classes of observables that one demands be well-approximated in the limit.
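
To illustrate the second, formula-by-formula approach (an example added here, not drawn from the abstract): the relativistic three-momentum reduces to the Newtonian expression because

\[
  p = \gamma m v, \qquad
  \gamma = \left(1 - v^2/c^2\right)^{-1/2} = 1 + \frac{v^2}{2c^2} + O\!\left(\frac{v^4}{c^4}\right),
\]

so \(p \approx mv\) whenever v/c ≈ 0.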
