# Principle 4

Information and insight regarding (a) the behaviours of computational models, (b) the properties of theoretical mathematical constructs and (c) observations of the world itself, must always be distinguished clearly, especially when these three distinct entities share the same name.

## WHY?

Computational models often use variable names corresponding to the real-world quantities they represent, sometimes even to directly observable or measurable quantities. This is done for obvious reasons: it makes the modelling process more intuitive and explainable. But while this naming simplifies coding the model significantly, it is critical to distinguish (for example) model-temperature from real-world temperature when presenting results. Variables in mathematical models correspond to yet another type of entity, distinct from both.

Model Intercomparison Projects (MIPs) can be useful for advancing the art of modelling, but model-model comparisons often tell us little about the real world. Indeed, some of the most popular CMIP graphs have no connection with reality. Reality Intercomparison Projects might better clarify the level of confidence we place in model simulations by comparing each of them with observations more directly.

For clarity, for good science, and to ensure that the strengths and weaknesses of model-based conclusions are made clear to those using them, it is important not to confuse the properties of a model with the properties of reality.

## TELL ME MORE

Example: the measurable viscosity of a physical fluid may not be the same as the “viscosity” variable used in a numerical integration scheme to model that fluid, as the latter may depend on grid spacing and time step. In climate simulations, an “eddy viscosity” or effective viscosity is used, which takes the role of the molecular viscosity but encompasses other sub-grid-scale dissipative processes.

As the numerical value of the eddy viscosity differs from the molecular viscosity by several orders of magnitude, the two are always distinguished. In other cases, however, the physical variable and the model variable may have such similar values (one observable, the other either derived from physical principles, calibrated with respect to data, or assumed) that they are given the same name.
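The grid-dependence of a model "viscosity" can be made concrete with a minimal sketch. The example below is an illustration, not taken from any particular climate model: it uses the standard modified-equation result that a first-order upwind advection scheme behaves as if it had an implicit diffusivity of (u·Δx/2)(1 − C), with Courant number C = u·Δt/Δx, and compares that to the molecular kinematic viscosity of water. The velocity and grid spacings are assumed values chosen for illustration.

```python
# Illustrative sketch: the implicit "viscosity" of a first-order upwind
# advection scheme depends on grid spacing and time step, so the model's
# effective diffusivity is not the fluid's molecular viscosity.

def numerical_diffusivity(u, dx, dt):
    """Modified-equation diffusivity of first-order upwind advection:
    nu_num = (u * dx / 2) * (1 - C), with Courant number C = u * dt / dx."""
    c = u * dt / dx
    return 0.5 * u * dx * (1.0 - c)

MOLECULAR_NU_WATER = 1.0e-6  # m^2/s, kinematic viscosity of water near 20 C

u = 1.0  # m/s, assumed advecting velocity
for dx in (1.0e5, 1.0e4, 1.0e3):  # grid spacings from 100 km down to 1 km
    dt = 0.5 * dx / u  # keep the Courant number fixed at 0.5
    nu = numerical_diffusivity(u, dx, dt)
    print(f"dx = {dx:8.0f} m  ->  nu_num = {nu:8.1f} m^2/s "
          f"({nu / MOLECULAR_NU_WATER:.0e} x molecular)")
```

At a 100 km grid spacing the implicit diffusivity is some ten orders of magnitude larger than the molecular value, which is why the eddy viscosity and the molecular viscosity are never confused in practice.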

Properties of model ensembles, such as the “climate sensitivity”, are sometimes assumed to be informative about a real-world climate sensitivity, but the real world is a single system which is not generated in the same way as our class of models. Statistical methods are hamstrung if they assume that the real world is statistically indistinguishable from our class of models.
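The point about ensembles can be sketched numerically. In the toy example below, all numbers are invented for illustration: an ensemble of model "sensitivities" is drawn from one distribution, while the real-world value is a single fixed number that was not generated by that distribution. The ensemble spread then describes diversity among the models, not the model-reality discrepancy.

```python
import random

# Illustrative sketch with invented numbers: ensemble spread measures
# model diversity, not distance to the (single, fixed) real world.
random.seed(1)
model_sensitivities = [random.gauss(3.0, 0.5) for _ in range(20)]  # deg C

ensemble_mean = sum(model_sensitivities) / len(model_sensitivities)
ensemble_sd = (sum((s - ensemble_mean) ** 2 for s in model_sensitivities)
               / (len(model_sensitivities) - 1)) ** 0.5

# Hypothetical "true" value; unknown in practice, and NOT a draw from the
# same process that generated the models.
real_world_sensitivity = 4.5

print(f"ensemble mean +/- sd:    {ensemble_mean:.2f} +/- {ensemble_sd:.2f}")
print(f"gap to assumed reality:  {abs(ensemble_mean - real_world_sensitivity):.2f}")
```

If a statistical method treats the real world as exchangeable with the ensemble members, the reported spread can badly understate the gap to reality.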

Simpler cases are worth considering, too. The “altitude above sea level” is a climate model-variable on the scale of tens of kilometres, which averages out sharp peaks such as the Andes, whereas the “altitude above sea level” in the real world can be more “spiky”, with dynamical consequences for circulation and orographically-induced rainfall. The “freezing point of water” may be set to the measured value (zero Celsius) in a model, but how do we know that we might not get a better model by treating this as a calibration parameter? If this sounds like a crazy idea, consider that perhaps the widespread use of anomaly rather than absolute temperatures is already treating the freezing point of water as a calibration parameter, with results varying by around 3 Celsius (IPCC AR5 WG1, figure 9.8a).
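The altitude example can be sketched with invented numbers. Below, hypothetical 1 km altitude samples across a single coarse model cell contain one sharp peak; the model-variable "altitude" for that cell is the mean, not the peak.

```python
# Illustrative sketch with invented numbers: coarse-graining a "spiky"
# altitude profile onto a model grid averages out peaks, so the
# model-variable "altitude above sea level" differs from the real-world
# quantity of the same name.

# Hypothetical altitude samples (metres) at 1 km spacing across one
# 8 km model grid cell containing a single sharp peak.
fine_altitudes = [200, 250, 300, 5800, 600, 350, 280, 220]

cell_mean = sum(fine_altitudes) / len(fine_altitudes)
print(f"real-world peak altitude:   {max(fine_altitudes)} m")
print(f"model-cell 'altitude':      {cell_mean:.0f} m")
```

In this toy case the cell-mean "altitude" is 1000 m while the real peak is 5800 m; air flowing over the model's smooth 1000 m hill behaves very differently from air forced over a 5800 m mountain, with the dynamical consequences noted above.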