
Electronic data

  • Hydrological Processes - 2022 - Beven - On in validating environmental models 1 Principles for formulating a Turing‐like

    Accepted author manuscript, 455 KB, PDF document

    Embargo ends: 1/01/50

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: 10.1002/hyp.14704

On (in)validating environmental models. 1. Principles for formulating a Turing‐like Test for determining when a model is fit‐for purpose

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

On (in)validating environmental models. 1. Principles for formulating a Turing‐like Test for determining when a model is fit‐for purpose. / Beven, Keith; Lane, Stuart.
In: Hydrological Processes, Vol. 36, No. 10, e14704, 31.10.2022.



Bibtex

@article{d612198383964faa857897aedb182b74,
title = "On (in)validating environmental models. 1. Principles for formulating a Turing‐like Test for determining when a model is fit‐for purpose",
abstract = "Model invalidation is a good thing. It means that we are forced to reconsider either model structures or the available data more closely, that is to challenge our fundamental understanding of the problem at hand. It is not easy, however, to decide when a model should be invalidated, when we expect that the sources of uncertainty in environmental modelling will often be epistemic rather than simply aleatory in nature. In particular, epistemic errors in model inputs may well exert a very strong control over how accurate we might expect model predictions to be when compared against evaluation data that might also be subject to epistemic uncertainties. We suggest that both modellers and referees should treat model validation as a form of Turing-like Test, whilst being more explicit about how the uncertainties in observed data and their impacts are assessed. Eight principles in formulating such tests are presented. Being explicit about the decisions made in framing an analysis is one important way to facilitate communication with users of model outputs, especially when it is intended to use a model simulator as a {\textquoteleft}model of everywhere{\textquoteright} or {\textquoteleft}digital twin{\textquoteright} of a catchment system. An example application of the concepts is provided in Part 2.",
keywords = "Water Science and Technology",
author = "Keith Beven and Stuart Lane",
year = "2022",
month = oct,
day = "31",
doi = "10.1002/hyp.14704",
language = "English",
volume = "36",
pages = "e14704",
journal = "Hydrological Processes",
issn = "0885-6087",
publisher = "John Wiley and Sons Ltd",
number = "10",
}

RIS

TY  - JOUR
T1  - On (in)validating environmental models. 1. Principles for formulating a Turing‐like Test for determining when a model is fit‐for purpose
AU  - Beven, Keith
AU  - Lane, Stuart
PY  - 2022/10/31
Y1  - 2022/10/31
N2  - Model invalidation is a good thing. It means that we are forced to reconsider either model structures or the available data more closely, that is to challenge our fundamental understanding of the problem at hand. It is not easy, however, to decide when a model should be invalidated, when we expect that the sources of uncertainty in environmental modelling will often be epistemic rather than simply aleatory in nature. In particular, epistemic errors in model inputs may well exert a very strong control over how accurate we might expect model predictions to be when compared against evaluation data that might also be subject to epistemic uncertainties. We suggest that both modellers and referees should treat model validation as a form of Turing-like Test, whilst being more explicit about how the uncertainties in observed data and their impacts are assessed. Eight principles in formulating such tests are presented. Being explicit about the decisions made in framing an analysis is one important way to facilitate communication with users of model outputs, especially when it is intended to use a model simulator as a ‘model of everywhere’ or ‘digital twin’ of a catchment system. An example application of the concepts is provided in Part 2.
AB  - Model invalidation is a good thing. It means that we are forced to reconsider either model structures or the available data more closely, that is to challenge our fundamental understanding of the problem at hand. It is not easy, however, to decide when a model should be invalidated, when we expect that the sources of uncertainty in environmental modelling will often be epistemic rather than simply aleatory in nature. In particular, epistemic errors in model inputs may well exert a very strong control over how accurate we might expect model predictions to be when compared against evaluation data that might also be subject to epistemic uncertainties. We suggest that both modellers and referees should treat model validation as a form of Turing-like Test, whilst being more explicit about how the uncertainties in observed data and their impacts are assessed. Eight principles in formulating such tests are presented. Being explicit about the decisions made in framing an analysis is one important way to facilitate communication with users of model outputs, especially when it is intended to use a model simulator as a ‘model of everywhere’ or ‘digital twin’ of a catchment system. An example application of the concepts is provided in Part 2.
KW  - Water Science and Technology
U2  - 10.1002/hyp.14704
DO  - 10.1002/hyp.14704
M3  - Journal article
VL  - 36
JO  - Hydrological Processes
JF  - Hydrological Processes
SN  - 0885-6087
IS  - 10
M1  - e14704
ER  -