Principle 9

Future-tuned simulations designed intentionally to illustrate some selected model property must be clearly distinguished from forecasts and projections (predictions), which, while conditioned on future forcing, are tuned using only the past.

WHY?

Tuning or calibrating a model (altering its parameters to improve its performance) should be done using only past observations and fundamental physical principles. Running the model forward then explores the space of what is possible given our modelling assumptions and consistency with past observations. If we additionally constrain on future behaviour (for example, by removing simulations which experience more than 0.2 K/year of global mean temperature change, or which fall outside our expectation of the climate sensitivity), then we artificially narrow the range of outcomes. Before removing such simulations, we must be confident that the behaviour really is unphysical, and we must understand how it could nevertheless have arisen from the model dynamics and our prior assumptions.
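As a minimal sketch of this narrowing effect, consider a toy ensemble (the model, the parameter values, and the 3.5 K threshold below are hypothetical illustrations, not taken from any real modelling exercise) in which every member is, by construction, equally consistent with the past, and members are then discarded on the basis of their projected warming:

```python
# Toy illustration: filtering an ensemble on *future* behaviour narrows the
# projected range, even though all members are consistent with the past.
import numpy as np

rng = np.random.default_rng(0)
n_members = 1000

# Hypothetical per-member sensitivity parameter (K per CO2 doubling),
# calibrated only against the past: all members here are assumed to be
# equally consistent with historical observations.
sensitivity = rng.normal(loc=3.0, scale=1.0, size=n_members)

# Toy projection: end-of-century warming proportional to sensitivity.
warming_2100 = 0.8 * sensitivity + rng.normal(0.0, 0.3, size=n_members)

# "Future tuning": discard members whose projected warming looks implausible.
keep = warming_2100 < 3.5
print(f"full ensemble:   mean={warming_2100.mean():.2f} K, "
      f"spread={warming_2100.std():.2f} K")
print(f"future-filtered: mean={warming_2100[keep].mean():.2f} K, "
      f"spread={warming_2100[keep].std():.2f} K")
# The filtered ensemble is narrower and shifted downwards, although every
# discarded member was, by construction, consistent with the past.
```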

TELL ME MORE

“Future tuning” need not be a deliberate choice; it can occur accidentally. For example, it is common to re-tune models several times to improve their outputs. How do you know when to stop this process? Suppose you re-tune once, run the simulation, and find that your previously well-behaved model now has a climate sensitivity well outside the range of the CMIP5 “pack”. Do you re-tune again? No doubt you will find an error or a bug, fix it, and confidently arrive at something more expected. But the observation that your climate sensitivity was very high prompted that search, so this is future tuning. It is always possible to correct one more error or remove one more bug, but you are more likely to look for them when the model is not performing “as expected”, and more likely to stop when it is doing what you expect. The result can be what has elsewhere been called “intellectual phase-locking”, whereby the expectation becomes a (model-)reality and reinforces future expectations.
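The stopping-rule bias described above can be made concrete with a small simulation (toy numbers throughout; the “expected” range and the random “bug fix” process are hypothetical stand-ins, the latter deliberately carrying no information about any true answer):

```python
# Toy illustration of a stop-when-expected debugging rule: development stops
# when the sensitivity looks "as expected", so results cluster there.
import numpy as np

rng = np.random.default_rng(1)
# Assumed "acceptable" sensitivity range (K), a stand-in for the CMIP5 "pack".
expected = (2.0, 4.5)

final_sensitivities = []
for _ in range(10_000):
    # Each development cycle yields a sensitivity; the "bug fixes" here just
    # redraw it at random, i.e. they are unrelated to the truth.
    s = rng.normal(3.5, 1.5)
    for _ in range(5):  # at most five further rounds of re-tuning
        if expected[0] <= s <= expected[1]:
            break       # looks "as expected" -> stop looking for bugs
        s = rng.normal(3.5, 1.5)  # find another "bug", re-run the model
    final_sensitivities.append(s)

final_sensitivities = np.array(final_sensitivities)
inside = np.mean((final_sensitivities >= expected[0])
                 & (final_sensitivities <= expected[1]))
print(f"fraction of models ending inside the expected range: {inside:.2f}")
# A single unfiltered draw lands inside the range only ~59% of the time;
# with the stop-when-expected rule, almost all models do.
```

Under these toy assumptions, nearly all “finished” models sit inside the expected range, not because the bug fixes found the truth, but because the effort stopped when the answer looked right.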

References and further reading

Hourdin, F., Mauritsen, T., Gettelman, A., Golaz, J. C., Balaji, V., Duan, Q., ... & Rauser, F. (2017). The art and science of climate model tuning. Bulletin of the American Meteorological Society, 98(3), 589-602.

Mauritsen, T., Stevens, B., Roeckner, E., Crueger, T., Esch, M., Giorgetta, M., ... & Mikolajewicz, U. (2012). Tuning the climate of a global model. Journal of Advances in Modeling Earth Systems, 4(3).

Schmidt, G. A., Bader, D., Donner, L. J., Elsaesser, G. S., Golaz, J. C., Hannay, C., ... & Saha, S. (2017). Practice and philosophy of climate model tuning across six US modeling centers. Geoscientific Model Development, 10(9), 3207.


Comments on Principle 9

Wilfran Moufouma-Okia - P09-0718
I am not sure what is meant by "future-tuned simulations designed intentionally to illustrate some selected model property must be clearly distinguished from forecasts and projections (predictions) which, while conditioned on future forcing, are tuned using only the past" in the climate science realm. It is my understanding that for climate modellers, once a model configuration/formulation is frozen/adopted in the development cycle, tuning consists mainly of selecting parameter values so that a measure of the discrepancy between observations and model outputs, or between a modelled process and theory, is minimized to a level acceptable in terms of process studies and process-oriented metrics. In this sense, tuning is often perceived as a way to compensate for model errors over the historic period for which observations exist.

We agree that tuning using the past is good practice, and further that the traditional statistical good practice of not using the same historical data many times must be relaxed in climate-like simulations, where one “cannot afford” to wait 50 years for truly out-of-sample observations. The objection here would be if one runs the model into the future, dislikes the result, and then tunes the model to achieve a model-future one finds more acceptable. In that case we question whether the model can be considered a forecast engine in any sense. (LAS)
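For concreteness, here is a minimal sketch of tuning as described in the comment above, using a hypothetical one-parameter model and synthetic "observations" (none of the numbers are real): the parameter is chosen to minimize a discrepancy measure over the historic period only, and the tuned model is then run forward, conditioned on assumed forcing, with no further adjustment based on what it produces.

```python
# Toy illustration: calibrate against the past only, then project forward.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Synthetic historical "observations": a warming trend plus noise.
t_past = np.arange(1900, 2015)
obs = 0.01 * (t_past - 1900) + rng.normal(0.0, 0.1, t_past.size)

def model(trend, t):
    """Hypothetical one-parameter 'model': a linear response to forcing."""
    return trend * (t - 1900)

def discrepancy(trend):
    # Mean squared mismatch between model output and the historic record.
    return np.mean((model(trend, t_past) - obs) ** 2)

# Tuning step: select the parameter value that minimizes the discrepancy,
# using past data only.
fit = minimize_scalar(discrepancy, bounds=(0.0, 0.1), method="bounded")

# Projection step: run the tuned model forward without further re-tuning.
t_future = np.arange(2015, 2101)
print(f"tuned trend: {fit.x:.4f} K/year")
print(f"projected 2100 anomaly: {model(fit.x, t_future)[-1]:.2f} K")
```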