"Traceable accounts of uncertainty" must be provided, covering all known significant sources of uncertainty including, but not limited to, those of simulation (imprecision, ambiguity, model inadequacy...) and those identified via expert judgement.
WHY?
By a “traceable account of uncertainty”, we mean a system which allows the reader to trace the path of uncertainty back to its ultimate sources; or conversely, from the (many) sources to the final statement. For example, some uncertainties may be due to measurement imprecision, some may be due to variation between models, and some may be due to inadequacies in models (among other things). The traceable account would identify the chain of uncertainty, showing how each contribution is propagated, quantifying where possible and offering descriptions elsewhere. This would give confidence in the final statement. Scientific uses of the traceable account include identifying the largest sources of uncertainty for further research.
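As a purely illustrative sketch (no particular format is being proposed here, and every name below is hypothetical), a traceable account might be represented as a chain of uncertainty sources, each carrying a quantified contribution where possible or a qualitative description otherwise, together with links to the upstream sources it inherits from:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class UncertaintySource:
    """One hypothetical entry in a traceable account of uncertainty."""
    name: str                                   # e.g. "measurement imprecision"
    provenance: str                             # where it enters: dataset, model, paper
    quantified: Optional[float] = None          # e.g. a standard deviation, where possible
    description: str = ""                       # qualitative description otherwise
    upstream: List["UncertaintySource"] = field(default_factory=list)

def trace(source: UncertaintySource, depth: int = 0) -> None:
    """Walk from a final statement back to its ultimate sources of uncertainty."""
    size = source.quantified if source.quantified is not None else source.description
    print("  " * depth + f"{source.name} [{source.provenance}]: {size}")
    for parent in source.upstream:
        trace(parent, depth + 1)

# A small example chain: an impact projection inheriting uncertainty from a driving climate model.
obs = UncertaintySource("measurement imprecision", "station records", quantified=0.2)
gcm = UncertaintySource("model inadequacy", "global climate model",
                        description="structural error, described but not quantified",
                        upstream=[obs])
impact = UncertaintySource("impact-model spread", "West Africa impact study",
                           quantified=0.5, upstream=[gcm])

trace(impact)   # prints the chain from the final statement down to its ultimate sources
```

The particular representation is unimportant; the point is only that each contribution is either quantified or described, and can be followed back to where it arises.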
TELL ME MORE
The traceable account is not a new concept, and although easy to describe, it is difficult to implement for large and complex bodies of work. For example, the IPCC offer a traceable account of their conclusions (not limited to uncertainty) by a system of signposting from summaries, to chapters, to journal papers in the literature on which the reports are based.
This can be detrimental to readability at all levels. It is also an extremely challenging task to create a traceable account of uncertainty retrospectively for a large body of work. For a traceable account of uncertainty to be achievable, it must be built up piece by piece, with contributions in a reasonably standardised format from the authors of every paper. It is unfeasible, for instance, for authors of a study of climate impacts in West Africa to have to recreate an uncertainty analysis for the global climate models on which those impacts are based, of which they probably have no more than a download of the relevant variables. But those authors must be responsible for cataloguing all of the input and output uncertainties relevant to their own study or model, so that this catalogue can be used by anyone building on the study.
We have repeatedly seen the resulting confusion among those using the outputs of global climate models (“which one is the best to use as an input for my impact model?” / “how can I decide which to use, if I only have time to do two?” / “so what is the uncertainty in the projection?” / etc.). There would be a huge community benefit from widespread adoption of some kind of standard, though we are not proposing a specific format here.
Lastly, traceable accounts of uncertainty allow identification of the Relevant Dominant Uncertainty (RDU), which may be of interest both to scientists (as a pointer towards fruitful avenues for further research) and to decision-makers (as an indication of where the uncertainty arises).
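To illustrate the last point (again a hypothetical sketch, not a definition of the RDU), once the contributions in such an account have been catalogued, the dominant quantified contribution can simply be read off, while unquantified sources are flagged as needing expert judgement:

```python
# Hypothetical catalogue of contributions from a traceable account.
# Quantified entries are illustrative standard deviations; unquantified ones are listed separately.
quantified = {
    "measurement imprecision": 0.2,
    "impact-model spread": 0.5,
}
unquantified = ["model inadequacy (structural error, described qualitatively)"]

rdu = max(quantified, key=quantified.get)
print(f"Relevant Dominant Uncertainty (among quantified sources): {rdu} = {quantified[rdu]}")
print("Needs expert judgement before it can be ranked:", unquantified)
```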
References and further reading
Mastrandrea, M. D., Field, C. B., Stocker, T. F., Edenhofer, O., Ebi, K. L., Frame, D. J., ... & Plattner, G. K. (2010). Guidance note for lead authors of the IPCC fifth assessment report on consistent treatment of uncertainties.
Comments on Principle 5
Hailiang Du - P05-0726
Model discrepancy is often the major source of uncertainty, yet it is difficult to account for with reasonable reliability, let alone to trace.
In my view, discrepancy is simply not well-defined in large-scale simulation modelling. The challenges include Lorenz’s “subtractability” and uniqueness: one model state corresponds to an infinity of initial conditions, and thus an infinity of outcomes. (LAS)