Scientific information used to inform decisions about the future must always be accompanied by a quantitative statement of its expected robustness and potential irrelevance.
All models have a range of applicability beyond which it is inadvisable to go. Without quantitative statements about where this range of applicability ends, decision-makers are left with a binary choice between assuming that scientific information is perfect or rejecting it completely. For how long is the model likely to be useful? In what areas is it more or less reliable? Better understanding of the limits of relevance helps decision-makers incorporate scientific information into their decisions.
TELL ME MORE
When we make use of a weather forecast, we understand that tomorrow’s forecast is quite reliable, that next week’s forecast is indicative but liable to change, and that the forecast for a month’s time is usually not worth looking at (although there is value for certain users at this time scale).
Climate forecasts (projections), by contrast, are typically presented to 2100 without any indication of how robust or reliable the projections remain at that timescale. An example of good practice, which we recommend to others, is the IPCC AR5 headline projections for 2100 as summarised in WG1 Table SPM.2. This states that the range in which 90% of modelled global mean surface temperature changes fall, for each scenario, is assessed to be a likely (66-100%) range, after accounting for additional uncertainties or different levels of confidence in models. In plainer terms, they allow a chance of up to about one-in-three that the actual global mean surface temperature would fall outside the 90% range of the models, even if the scenario were followed exactly.
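The arithmetic behind that reinterpretation can be sketched as follows. This is a minimal illustration of the probability statement above, not IPCC methodology; the two numbers are taken directly from the text.

```python
# Minimal sketch of the AR5-style reinterpretation described above.
# The raw model ensemble gives a 90% range; the assessed confidence
# that reality falls inside that range is only "likely" (>= 66%).

model_range_coverage = 0.90   # fraction of modelled outcomes inside the range
assessed_coverage = 0.66      # "likely" lower bound that reality is inside it

# Implied upper bound on the chance that the real outcome
# falls OUTSIDE the models' 90% range:
p_outside = 1.0 - assessed_coverage
print(f"Up to {p_outside:.0%} chance outside the model range")  # prints "Up to 34% ..."
```

The point is simply that the 90% spread of model runs is not itself the assessed probability: the assessed coverage is what carries through to decisions.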
A similar approach could be taken for quantities derived from these global models, for example the outputs of climate impacts models, downscaling models, and integrated assessment models. In the first instance a simple propagation of the IPCC statement above through to the outputs of the second model would be informative even before the robustness of the second model itself has been quantified in a similar way.
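One way to carry out the "simple propagation" suggested above is a Monte Carlo sketch: sample the global warming input with the widened (assessed) coverage and pass each draw through the second model. The warming range, the tail widths, and the impact model below are all illustrative assumptions, not outputs of any real model.

```python
import random

random.seed(0)

def sample_warming():
    # Crude sampler, per the assessed-coverage idea above: 66% of draws fall
    # inside an assumed 5-95% model range of [1.5, 4.5] degC, the remainder
    # spread into wider tails [0.5, 1.5] and [4.5, 5.5] degC (all illustrative).
    if random.random() < 0.66:
        return random.uniform(1.5, 4.5)
    return random.choice([random.uniform(0.5, 1.5), random.uniform(4.5, 5.5)])

def impact_model(warming_degC):
    # Stand-in for a second model (e.g. an impacts model): a purely
    # illustrative quadratic response, not any real impacts relationship.
    return 2.0 * warming_degC + 0.5 * warming_degC ** 2

impacts = sorted(impact_model(sample_warming()) for _ in range(10_000))
lo, hi = impacts[len(impacts) // 20], impacts[-len(impacts) // 20]
print(f"central 90% of propagated impacts: {lo:.1f} to {hi:.1f}")
```

Even this crude propagation already conveys to a user that the spread of the second model's outputs understates the uncertainty unless the assessed (not raw) input range is used.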
References and further reading
Thompson, E., Frigg, R. and Helgeson, C. (2016) Expert judgment for climate change adaptation, Philosophy of Science, 83 (5), pp. 1110-1121. DOI: 10.1086/687942.
Tennekes, H. (2006) Protesting against dogma, Energy and Environment, 17 (4), pp. 609-612.
Comments on Principle 3
Reason Machete - P03-0724
On the WHY, I find the question of how long the model is likely to be useful a bit unclear. The TELL ME MORE appears to shed light on this question when it talks about lead times of weather and climate forecasts. Does the "how long" refer to lead times of projections (or forecasts)? If so, then I think the question should be "What is the range of lead times for which the model is likely to be useful?" or something along those lines. In the statement of the principle, there is mention of "potential irrelevance." It is not obvious to me what potential irrelevance means. Perhaps a line in the WHY or TELL ME MORE should be added to explain this point; an example could be helpful. Here is my faint idea of what potential irrelevance means: regional climate models might tell us that Botswana will have drier summers and wetter winters by 2050. Nonetheless, the information is irrelevant for a blanket water management policy over the entire country, because regional variations of climatic and socioeconomic factors have to be taken into account.
Thank you for your comment. I agree that the TELL ME MORE can be clarified. What we intend is something along the lines of your "What is the range of lead times for which the model is likely to be useful?". To take an extreme case of "potential irrelevance", consider the initial HadSM3 simulations run under climateprediction.net. In some runs a region in the model-Pacific reduced the temperature of the model-planet in an unphysical manner to very low values. Those runs were "irrelevant" in that the model did something that was physically hogwash: one would not want to include those simulations in model-based support for future planning tasks. Less dramatic examples are common, due to assumptions made (today) in all CMIP climate models. We were intending something more focused on the climate model's shortcomings: aspects of the model's mathematical structure that make the model unable to reproduce "the future" even after we observe what the future holds. (LAS)
Hailiang Du - P03-0726
Any quantitative statement about the future climate based on current climate models is likely to be questionable. But the climate models can still provide insight into "probable" future climate scenarios, which could be valuable to decision-makers.
I am not sure what exactly you mean by '"probable" future climate scenarios'. CMIP5 models differ significantly regarding the Earth's global mean temperature; nevertheless, each and every one of these very different model-planets shows significant warming over the past 100 years. It seems reasonable to take from this that anthropogenic impacts will warm Earth-like model-planets in general; but I do not see how this can give us quantitative scenarios for the Earth itself without significant additional assumptions regarding the fidelity of the model(s). (LAS)