On 16–17 March 2016 this conference will bring together philosophers and economists to explore philosophical aspects of economic modelling.

The conference will explore the potential and limitations of economic modelling in relation to the following three questions:

1. Do economic models explain real-life socio-economic phenomena despite their idealised assumptions?
2. Are results derived with the help of decision theoretic and game theoretic models descriptive or normative in nature? Can they be both?
3. Do state-of-the-art economic models capture the uncertainties we face in economic decision and policy making?

Registration

Registration is now closed. Please be aware that due to high interest in the event, we cannot guarantee a seat in every session.

Programme


All talks will take place in room 2.06 of the Lakatos Building (marked “LAK” on the LSE Campus Map). The exception is the public lecture at 18:00–19:30, which will take place in lecture room G.01 on the ground floor of Tower One (marked “TW1”).

Day 1: Wednesday 16 March

12:30–13:00 Registration & Light Refreshments (LAK.G01C)
13:00–13:15 Introduction (LAK.206)
13:15–14:15 Anna Alexandrova and Robert Northcott: “Striking the Optimal Balance: Theoretical Versus Empirical Methods in Economics” (LAK.206)
14:15–15:15 Robert Sugden: “Preference Purification and the Inner Rational Agent: A Critique of the Conventional Wisdom of Behavioural Welfare Economics” (LAK.206)
15:15–15:45 Coffee (LAK.G01C)
15:45–16:15 Roberto Fumagalli: “How ‘Thin’ Rational Choice Theory Explains” (LAK.206)
16:15–16:45 Giulio Gipsy Crespi: “Models in a Cluster: Beyond Reiss’ Explanation Paradox” (LAK.206)
16:45–17:15 Jennifer Jhun Soyun: “What’s the Point of Ceteris Paribus? (Or: How to Understand Supply and Demand Curves)” (LAK.206)
17:15–17:45 Coffee (LAK.G01C)
18:00–19:30 Public Lecture: Kevin Hoover, “Models and Piecemeal Empirical Knowledge” (Chair: Richard Bradley) (TW1.G.01)
20:00 Dinner (20-minute walk from campus)

 

Day 2: Thursday 17 March

9:30–10:30 Caterina Marchionni: “Mechanisms, Explanation and Confirmation in Theoretical Economics” (LAK.206)
10:30–11:30 Philipp Wichardt: “Models and Fictions in (Micro-)Economics” (LAK.206)
11:30–11:50 Coffee (LAK.G01C)
11:50–12:20 Yang Liu: “A More Realistic Approach to Subjective Expected Utilities” (LAK.206)
12:20–12:50 Osman Çağlar Dede: “Assessing the Evidence-Base of Behavioural Policies: The Case of Smoking Cessation Programs” (LAK.206)
12:50–13:20 Patricia Rich: “Against the Process-Based Objection to Expected Utility Theory” (LAK.206)
13:20–14:00 Lunch (LAK.G01C)
14:00–15:00 Richard Bradley: “Managing Model Uncertainty” (LAK.206)
15:00–16:00 Wouter den Haan: “Uncertainty and Macroeconomics” (LAK.206)
16:00–16:30 Coffee (LAK.G01C)
16:30–17:00 Daniel Malinsky: “Decision Making under Causal Uncertainty” (LAK.206)
17:00–17:30 Roel Visser: “Inductive Risk in Macroeconomic Forecasting” (LAK.206)

 

Abstracts

Anna Alexandrova and Robert Northcott: “Striking the Optimal Balance: Theoretical Versus Empirical Methods in Economics”

When economics gets criticized for being insufficiently empirical, a common defence is that this misses the point. Rather, the role of economic models is to provide a toolbox or library of representations that can be used piecemeal, application by application. Philosophers have argued that models can provide local explanations or sketches of mechanisms even when some of their assumptions are false. Sceptics have adopted a weaker reading, on which models are mere heuristics for building explanations rather than evidence in their own right.

Here, we argue that this debate is no longer fruitful. What really matters is not whether models can play the roles that optimists insist they can, but rather how well, as a matter of fact, they do play these roles. Answering this question requires a systematic study of whether theoretical modelling is a productive method compared to alternatives, such as observational or historical studies, field or laboratory experiments, process tracing, and many others.

Robert Sugden (with Gerardo Infante and Guilhem Lecouteux): “Preference Purification and the Inner Rational Agent: A Critique of the Conventional Wisdom of Behavioural Welfare Economics”

Neoclassical economics assumes that individuals have stable and context-independent preferences, and uses preference satisfaction as a normative criterion. By calling this assumption into question, behavioural findings cause fundamental problems for normative economics. A common response to these problems is to treat deviations from conventional rational choice theory as mistakes, and to try to reconstruct the preferences that individuals would have acted on, had they reasoned correctly. We argue that this preference purification approach implicitly uses a dualistic model of the human being, in which an inner rational agent is trapped in an outer psychological shell. This model is psychologically and philosophically problematic.

Roberto Fumagalli: “How ‘Thin’ Rational Choice Theory Explains”

Several critics of rational choice theory (RCT) build on the contrast between ‘thick’ and ‘thin’ interpretations of RCT to argue that RCT falls prey to the following dilemma. Thick RCT can be used to explain choices, but is vulnerable to falsifying empirical evidence from neuro-psychology. Conversely, thin RCT is insulated from such evidence, but cannot explain choices. In this paper, I draw on influential RCT applications to demonstrate that thin RCT can and does explain choices. I then explicate this result’s implications for the ongoing debate about RCT’s explanatory potential and the merits of entrenched philosophical accounts of scientific explanation.

Giulio Gipsy Crespi: “Models in a Cluster: Beyond Reiss’ Explanation Paradox”

In this presentation, I comment on Julian Reiss’ explanation paradox, a set of three jointly inconsistent claims, each of which Reiss regards as true: (a) economic models are false; (b) economic models are nevertheless explanatory; (c) only true accounts can explain. I argue that Reiss’ approach has one main flaw: he analyses abstract models in isolation, failing to grasp their epistemic value and misrepresenting the way economists use models in scientific practice. I take into account recent contributions by Ylikoski & Aydinonat and by Rodrik, who endorse a “cluster view of models”: a model makes sense within a family of related models that provide competing explanations of the phenomenon at hand. The variety of models, which often suggest conflicting rather than univocal answers, is a key feature of economics; economic models are valuable to the extent that they expand the set of plausible explanations for a variety of social phenomena.

Jennifer Jhun Soyun: “What’s the Point of Ceteris Paribus? (Or: How to Understand Supply and Demand Curves)”

Philosophers sometimes claim that economics, with the idealizing strategies it employs, is ultimately unable to provide genuine laws of nature, and that it therefore, unlike physics, does not qualify as an actual science. Careful consideration of thermodynamics, a well-developed physical theory, reveals substantial parallels with economic methodology. The corrective account of scientific understanding I offer accommodates these parallels: understanding in terms of efficient performance.

Kevin Hoover: “Models and Piecemeal Empirical Knowledge”

Models are ubiquitous tools in economics and many other fields, though their epistemic status raises puzzles, such as how models that apparently misrepresent the world can serve our explanatory purposes. Such puzzles seem largely to be an artifact of a misconception of what it means for a model to represent the truth and what it means to acquire knowledge. This paper suggests a view of the function of models that tries to account for their successful role as instruments for the acquisition of knowledge, especially empirical knowledge in economics, in the face of a complex economic reality in which knowledge is acquired in a piecemeal fashion.

Caterina Marchionni: “Mechanisms, Explanation and Confirmation in Theoretical Economics”

In this talk I clarify, qualify and defend three main theses, which, although not novel, have been repeatedly challenged. First, an important class of theoretical models in economics can be conceptualised as models of mechanisms. Second, these models can be deployed to generate potential explanations of economic phenomena. Third, the move from potential to actual explanations involves both non-empirical and empirical forms of confirmation. I illustrate these theses with the help of two examples: the Prisoner’s Dilemma and Hotelling’s model of spatial competition.

Philipp Wichardt: “Models and Fictions in (Micro-)Economics”

In the literature, there is a recurrent comparison of economic models with items of literary fiction (fables, metaphors, parables, …), often made in a derogatory way. In the paper, I take this comparison seriously and argue that many of the concerns regarding economic modelling can be alleviated in a coherent picture if one adopts the fictional view of models proposed by Frigg (2010). In particular, the argument suggests distinctive roles for strong mathematical theories such as expected utility theory (setting limits to the fictional world to be imagined), for the often extensive storytelling in economic modelling (adapting the model to a context and suggesting comparisons with reality), and for empirical studies putting economic modelling to the test (exploring properties of the real world and how they relate to properties of the fictional model-world). Thus, the discussion suggests that a fictional view of economic models may indeed fit common practice, and it clarifies why there may still be something real to be learned from them.

Yang Liu: “A More Realistic Approach to Subjective Expected Utilities”

Savage’s theory of subjective expected utilities provides a paradigmatic derivation of personal probabilities and utilities within a general framework of rational decision-making. The system is based on a set of possible states of the world and on acts, which are functions that assign to each state a “consequence”. The upshot is a representation theorem which states that the agent’s preferences among acts (satisfying a series of postulated rationality and structural axioms) can be represented by their expected utilities, based on uniquely determined probabilities (assigned to sets of states) and numeric utilities (assigned to consequences). Savage’s derivation, however, rests on a well-known and highly problematic assumption not included among his postulates: for any consequence of an act in some state, there is a “constant act” which has that consequence in all states. This ability to transfer consequences from state to state is, in many cases, miraculous – including in simple scenarios suggested by Savage himself as natural cases for applying his theory. We propose simplifications of the system with two main features: (1) the representation theorem is derived without the constant-act assumption, and (2) the personal probability is defined over a countable algebra instead of a σ-algebra. This comes at the cost of reducing the set of acts included in the setup. The reduction excludes certain theoretical infinitary scenarios, but it retains the scenarios that a system modelling human decisions should handle.
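For orientation, the content of such a representation theorem can be sketched as follows (notation mine, not the talk’s; the finite-state case is shown for simplicity, whereas Savage’s own framework works with a σ-algebra of events): preferences ≿ over acts f, g : S → C satisfying the axioms are represented by a unique probability P on states and a utility u on consequences, unique up to positive affine transformation, such that

```latex
f \succsim g
\quad\Longleftrightarrow\quad
\sum_{s \in S} P(s)\, u\bigl(f(s)\bigr) \;\ge\; \sum_{s \in S} P(s)\, u\bigl(g(s)\bigr).
```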

Osman Çağlar Dede: “Assessing the Evidence-Base of Behavioural Policies: The Case of Smoking Cessation Programs”

The behavioural economics evidence usually cited by public policy designers is uninformative about the underlying mechanisms through which intended behavioural changes are achieved. When we lack such information, we are unable to determine whether an intervention that “works” in an experimental context would also be effective in changing policy contexts. In this paper, I investigate how behavioural public policy programs for smoking cessation (Halpern et al. 2015) fare with regard to this challenge. I argue that when we pursue the demand for more mechanistic information, we encounter deeper challenges regarding the evidence-base of behavioural policies. In particular, we also need to assess the theoretical underpinnings and methodological relevance of the mechanistic evidence used to justify behavioural policy interventions. This task, I argue, goes beyond the methodological challenge of external validity that is currently emphasized. I illustrate this issue by contrasting the behavioural and epidemiological evidence paradigms pertaining to smoking behaviour and its change.

Patricia Rich: “Against the Process-Based Objection to Expected Utility Theory”

A significant challenge to the normativity of Expected Utility Theory comes from proponents of ecological rationality, who argue that the rationality of a decision depends on the process that produced it, whereas Expected Utility Theory only considers the outcome of this choice process. I note that the rationality of processes and of outcomes are not mutually exclusive, and that ecological rationality evaluates a process according to its expected outcome. I argue that if we consider how best to evaluate the expected outcome of a process, we are forced to rehearse the tradition of decision theory resulting in the view that rational choice patterns conform to Expected Utility Theory. Using the example of a heuristic that chooses between lotteries, I show that ecological rationality has no viable alternative to Expected Utility Theory for assessing the rationality of the heuristic.
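The outcome-based evaluation at issue can be sketched numerically. The lotteries and the risk-averse utility function below are illustrative assumptions, not the speaker’s example; the point is only that Expected Utility Theory ranks options by their expected utility, whatever process produced the choice.

```python
# Illustrative sketch: ranking two lotteries by expected utility.

def expected_utility(lottery, utility):
    """A lottery is a list of (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in lottery)

u = lambda x: x ** 0.5          # an assumed risk-averse utility function

safe  = [(1.0, 100)]            # 100 for sure
risky = [(0.5, 0), (0.5, 225)]  # fair coin: 0 or 225

# EU theory evaluates the options themselves, not the process that
# generated them: pick whichever lottery has the higher expected utility.
best = max([safe, risky], key=lambda l: expected_utility(l, u))
```

Here the sure option wins (expected utility 10 versus 7.5), illustrating how a heuristic’s choices can be assessed against the expected-utility benchmark.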

Richard Bradley: “Managing Model Uncertainty”

Economic models capture some of the uncertainty there is about the state of the economy by giving probabilistic predictions for variables of interest. Policy makers can use them to make optimal choices by determining which policy option has the greatest expected benefit relative to these probabilities. But how should policy makers respond to model uncertainty: uncertainty about whether the model correctly captures the underlying structure of the economy? In this talk I’ll consider three approaches to the problem: (1) Bayesian model averaging, (2) robustness approaches of the kind proposed by Hansen and Sargent, and (3) approaches which employ a confidence measure on the probabilistic predictions.
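The first of these approaches can be sketched concretely. The two toy models, their priors, and their likelihoods below are made-up numbers for illustration only; the point is the mechanics of weighting each model’s prediction by the posterior probability of the model itself.

```python
# Illustrative sketch of Bayesian model averaging over probabilistic
# predictions. All numbers are invented for the example.

def bayesian_model_average(predictions, priors, likelihoods):
    """Combine each model's predicted probability of an event,
    weighting by the posterior probability of the model.

    predictions: p(event | model) for each model
    priors:      p(model) before seeing the data
    likelihoods: p(data | model), used to update the priors
    """
    posteriors = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(posteriors)
    posteriors = [w / total for w in posteriors]  # normalise
    return sum(w * q for w, q in zip(posteriors, predictions))

# Two toy macro models disagree about the probability of a recession.
p = bayesian_model_average(
    predictions=[0.10, 0.40],  # each model's p(recession)
    priors=[0.5, 0.5],
    likelihoods=[0.8, 0.2],    # how well each model fit past data
)
```

The averaged prediction leans towards the model that fit the data better, rather than committing to either model outright.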

Wouter den Haan: “Uncertainty and Macroeconomics”

Uncertainty is important for macroeconomics for many different reasons. First, the agents in the economy that we are trying to model face uncertainty. This could be uncertainty regarding the outcome of known stochastic processes, uncertainty about the parameters describing such processes, or more general uncertainty about what is affecting the outcome. Second, as model makers, macroeconomists themselves have to deal with the fact that there is a lot of uncertainty about what they are trying to model. This is especially important if the model is designed to give policy advice. This talk gives an overview of how macroeconomists deal with uncertainty.

Daniel Malinsky: “Decision Making under Causal Uncertainty”

Economists often construct causal models and estimate causal effects from non-experimental data in order to guide decisions about interventions or policies. However, usually the underlying causal structure cannot be uniquely pinned down by correlational data or conditional independence facts, and so the estimated intervention effect is not unique. Furthermore, often one cannot rule out the possibility of “hidden” causal structure (unmeasured variables which confound particular estimates). So, there is a kind of causal uncertainty which crops up in causal modeling, distinct from the familiar kind of statistical uncertainty that comes from inference on finite sample sizes (and which is usually characterized by confidence intervals or Bayesian credible intervals). What can a policy-maker — someone interested in comparatively evaluating different interventions to achieve particular goals — do in the face of such uncertainty? This paper considers what kind of decision rule(s) may be appropriate in the face of causal uncertainty, for the purpose of cost-benefit analysis.
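One candidate decision rule for this setting is maximin over the set of causal models the data cannot distinguish. The policies, the candidate effect estimates, and the choice of maximin itself are illustrative assumptions for this sketch, not the paper’s proposal.

```python
# Illustrative sketch: choosing a policy by its worst-case effect over
# the set of causal models consistent with the data. All numbers invented.

# Each candidate causal model implies a different effect for each policy.
# Keys: policies; values: estimated effects under each candidate model.
effects = {
    "subsidy":   [3.0, 1.0, -0.5],
    "tax_break": [2.0, 1.5,  1.0],
}

def maximin_choice(effects):
    """Pick the policy whose worst-case effect, over the candidate
    causal models, is best."""
    return max(effects, key=lambda policy: min(effects[policy]))
```

Here the subsidy looks best under one candidate model but could backfire under another, so the maximin rule prefers the tax break, whose worst case is still positive.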

Roel Visser: “Inductive Risk in Macroeconomic Forecasting”

How should economists deal with the existence of different attitudes towards inductive risk in society when they use statistical models? In this talk I call attention to a rich tradition in econometrics that, I argue, deals with the same questions as the philosophy of science literature on inductive risk and values in science. Econometricians in this tradition advocate the use of loss functions that are based on social welfare effects, rather than loss functions chosen for their computational simplicity. If successful, such loss functions provide a mapping from forecast errors to welfare effects, similar to how philosophers use “inductive risk” to refer to the harmful effects of erroneous inductions on the interests of stakeholders. I will discuss the historical development, contemporary problems and practical applications of this tradition. I conclude by suggesting two ways in which economists and philosophers of science can learn from each other.

 

Programme Committee

Jason Alexander, Anna Alexandrova, Philippe van Basshuysen, Richard Bradley, Kamilla Haworth Buchter, Goreti Faria, Roman Frigg, Conrad Heilmann, Jurgis Karpus, Alex Marcoci, Caterina Marchionni, James Nguyen, Mantas Radzvilas, H. Orri Stefánsson, Aron Valinder, Alex Voorhoeve, Philipp Wichardt, Nicolas Wüthrich

Contact

For any questions related to the conference please email philosophy.enuem@lse.ac.uk

Organisers

Jurgis Karpus, James Nguyen, Mantas Radzvilas, Nicolas Wüthrich

Funding

We are extremely grateful to the British Society for the Philosophy of Science, the Centre for Macroeconomics, the Centre for Philosophy of Natural and Social Science, the LSE Annual Fund, and the Swiss Study Foundation for their generous support.

 
