by Catherine Moore and John Doherty
We have talked about uncertainty before – in last year’s webinar series. We have been asked to talk about it some more. Our target audience for this webinar is those who are not necessarily modellers themselves, but who must work with modellers and assess the credibility of modellers’ work. However, those who are new to modelling, and who would like some insights into how to explore model predictive uncertainty, will also benefit.
The webinar will address concepts rather than details; details can be addressed in other webinars. Among other things, we will talk about the following.
- How do we characterize pre-calibration model predictive uncertainty?
- If a model is calibrated, how can its predictions still be uncertain?
- Which predictions made by a calibrated model are uncertain? All of them? Or just some of them?
- How can we evaluate these post-calibration uncertainties?
- Is there uncertainty in the uncertainty?
- How does exploration of model predictive uncertainty help decision-making?
Recognition of uncertainty doesn’t make decision-making any easier; that is for sure. But neither is decision-making well served by a narrative that conceals the truth about what we can, and cannot, say about the environmental future. On the other hand, recognizing that numerical simulation is not a crystal ball should not sound the death knell of model-based environmental management.
Our industry is still exploring how environmental management should operate when model-based data processing exposes our inability to know the exact consequences of our actions. Our webinar will try to explain the context in which environmental management must operate. In time, collective wisdom will find the best way for decisions to be made in this context.
Links to reports of hypothesis-testing use cases:
[to be updated]