Information and insight regarding (a) the behaviour of computational models, (b) the properties of theoretical mathematical constructs and (c) observations of the world itself must always be clearly distinguished, especially when these three distinct entities share the same name.
Computational models often use variable names corresponding to the real-world quantities they represent, sometimes even to observable or measurable quantities. This is done for obvious reasons: it makes the modelling process more intuitive and explainable. While this simplifies coding the model significantly, it is critical to distinguish (for example) model temperature from observed temperature when presenting results. Variables in mathematical models correspond to yet another type of entity.
Model Intercomparison Projects (MIPs) can be useful for advancing the art of modelling, but model-model comparisons often tell us little about the real world. Indeed, some of the most popular CMIP graphs have no connection with reality. Reality Intercomparison Projects might better clarify the level of confidence we place in model simulations by comparing each of them with observations more directly.
For clarity, for good science, and to ensure that the strengths and weaknesses of model-based conclusions are made clear to those using them, it is important not to confuse the properties of a model with the properties of reality.
Example: the measurable viscosity of a physical fluid may not be the same as the “viscosity” variable used in a numerical integration scheme to model that fluid, as the latter may depend on grid spacing and time step. In climate simulations, an “eddy viscosity” or effective viscosity is used, which takes the role of the molecular viscosity but encompasses other sub-grid-scale dissipative processes.
Because the numerical value of the eddy viscosity differs from the molecular viscosity by several orders of magnitude, the two are always distinguished. In other cases, however, the physical variable and the model variable may have such similar values (one observable, the other derived from physical principles, calibrated against data, or simply assumed) that they are given the same name.
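The grid-spacing dependence can be made concrete with a minimal sketch of a Smagorinsky-type closure (a standard sub-grid-scale model, though not the scheme of any particular climate model); the strain rate and constant below are illustrative values, not calibrated ones:

```python
# Illustrative sketch: a Smagorinsky-type eddy viscosity depends on the
# grid spacing, unlike the molecular viscosity, which is a measured
# property of the fluid itself.

MOLECULAR_NU_WATER = 1.0e-6  # m^2/s, measured kinematic viscosity of water (~20 C)

def eddy_viscosity(grid_spacing_m, strain_rate_s, c_s=0.17):
    """Smagorinsky closure: nu_t = (C_s * dx)^2 * |S|.
    c_s and strain_rate_s are illustrative, not calibrated, values."""
    return (c_s * grid_spacing_m) ** 2 * strain_rate_s

for dx in (1.0e3, 1.0e4, 1.0e5):  # 1 km to 100 km grids
    nu_t = eddy_viscosity(dx, strain_rate_s=1.0e-5)
    print(f"dx = {dx:8.0f} m  ->  nu_t = {nu_t:10.3e} m^2/s "
          f"({nu_t / MOLECULAR_NU_WATER:.1e} x molecular)")
```

Halving the grid spacing cuts the "viscosity" by a factor of four: a clear sign that this variable is a property of the discretisation, not of the fluid.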
Properties of model ensembles, such as the “climate sensitivity”, are sometimes assumed to be informative about a real-world climate sensitivity, but the real world is a single system which is not generated in the same way as our class of models. Statistical methods are hamstrung if they assume that the real world is statistically indistinguishable from our class of models.
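The distinction can be seen in a short sketch using entirely made-up sensitivity values (not real CMIP results): the statistics below describe the ensemble of models, and reading them as probabilities about the real climate requires the extra, questionable assumption that reality is a draw from the same distribution.

```python
# Sketch with invented numbers: ensemble statistics characterise the
# model class, not the single real-world system.
import statistics

# Hypothetical equilibrium climate sensitivities (deg C) from an
# imagined model ensemble -- NOT real CMIP values.
ensemble_ecs = [2.1, 2.8, 3.0, 3.4, 3.9, 4.5]

mean = statistics.mean(ensemble_ecs)
stdev = statistics.stdev(ensemble_ecs)
print(f"Ensemble mean ECS:  {mean:.2f} C")
print(f"Ensemble std dev:   {stdev:.2f} C")
# The interval below summarises the spread of this model class; treating
# it as a probability statement about the real world assumes the real
# world is statistically indistinguishable from the models.
print(f"Mean +/- 2 sd:      [{mean - 2*stdev:.2f}, {mean + 2*stdev:.2f}] C")
```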
Simpler cases are worth considering, too. In a climate model, the "altitude above sea level" is a variable defined on grid scales of tens of kilometres, which averages out sharp peaks such as the Andes, whereas the "altitude above sea level" in the real world can be far more "spiky", with dynamical consequences for circulation and orographically-induced rainfall. The "freezing point of water" may be set to the measured value (zero Celsius) in a model, but how do we know that we might not get a better model by treating this as a calibration parameter? If this sounds like a crazy idea, consider that perhaps the widespread use of anomaly rather than absolute temperatures is already treating the freezing point of water as a calibration parameter, with results varying by around 3 Celsius (IPCC AR5 WG1, figure 9.8a).
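How anomaly referencing quietly absorbs an absolute offset can be sketched with invented numbers (these are not real model outputs): two "models" whose absolute temperatures disagree by 3 Celsius yield identical anomaly series once each is referenced to its own baseline.

```python
# Sketch with invented numbers: anomaly referencing removes each
# model's absolute offset, so a 3 C disagreement disappears.

def anomalies(series, baseline_len):
    """Subtract the series' own baseline mean -- the step that quietly
    absorbs offsets such as a miscalibrated freezing point."""
    baseline = sum(series[:baseline_len]) / baseline_len
    return [t - baseline for t in series]

model_a = [13.1, 13.2, 13.4, 13.7, 14.0]  # invented absolute temps (C)
model_b = [16.1, 16.2, 16.4, 16.7, 17.0]  # same trend, offset by 3 C

print([round(x, 2) for x in anomalies(model_a, baseline_len=2)])
print([round(x, 2) for x in anomalies(model_b, baseline_len=2)])
# both: [-0.05, 0.05, 0.25, 0.55, 0.85]
```

Comparing the two print-outs shows why anomaly plots can look consistent across models whose absolute climates are quite different.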
Comments on Principle 4
Luke Bevan - P04-2307
I think that to some extent the overlap in language can be caused by a difference in epistemological understanding of what a model is. There is some research showing that different people can understand modelling in very different ways. For example, some may equate modelling with a form of experimentation, some may see models as simplification tools, and others may unreflectively consider modelling a kind of replication or reconstruction of reality. I guess a useful question is how to communicate the intended relationship between a model and the theory, data or other elements that have been used in its construction?
There are indeed relevant, deep and open philosophy of science questions regarding the epistemological status of modelling versus experimentation (or observation) of the real world, on the one hand, and versus theory (or even pure mathematics), on the other hand. The aim of Lorentz Principle 4 is to sensitise modellers to the problem of unreflectively equating a computational model to analytic mathematics and to the real world. Our ‘philosophy of science’ is that making these equal is a (very problematic) assumption that should be avoided. Still, arguments can be made in some cases that models are ‘good enough’ for the purpose to inform real-world decisions and that they do not suffer from spurious results due to computational uncertainties. The point of the principle is to force practitioners to explicitly make their (fallible) arguments. (AP)
Hailiang Du - P04-0726
It would also be useful to clarify under what conditions (if any exist) (a), (b) and (c) will share the same or similar properties and behaviours.