On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes. / Beven, Keith; Lane, Stuart; Page, Trevor et al.
In: Hydrological Processes, Vol. 36, No. 10, e14703, 31.10.2022.

Harvard

Beven, K, Lane, S, Page, T, Kretzschmar, A, Hankin, B, Smith, P & Chappell, N 2022, 'On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes', Hydrological Processes, vol. 36, no. 10, e14703. https://doi.org/10.1002/hyp.14703

APA

Beven, K., Lane, S., Page, T., Kretzschmar, A., Hankin, B., Smith, P., & Chappell, N. (2022). On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes. Hydrological Processes, 36(10), Article e14703. https://doi.org/10.1002/hyp.14703

Vancouver

Beven K, Lane S, Page T, Kretzschmar A, Hankin B, Smith P et al. On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes. Hydrological Processes. 2022 Oct 31;36(10):e14703. Epub 2022 Oct 11. doi: 10.1002/hyp.14703

Author

Bibtex

@article{efe7d9aa068c4b40bb2be35896d5c961,
title = "On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes",
abstract = "Abstract: Part 1 of this study discussed the concept of using a form of Turing‐like test for model evaluation, together with eight principles for implementing such an approach. In this part, the framing of fitness‐for‐purpose as a Turing‐like test is discussed, together with an example application of trying to assess whether a rainfall‐runoff model might be an adequate representation of the discharge response in a catchment for predicting future natural flood management scenarios. It is shown that the variation between event runoff coefficients in the record can be used to create some limits of acceptability that implicitly take some account of the epistemic uncertainties arising from lack of knowledge about errors in rainfall and discharge observations. In the case study it is demonstrated that the model used cannot be validated in this way across all the range of observed discharges, but that behavioural models can be found for the peak flows that are the subject of interest in the application. Thinking in terms of the Turing‐like test focusses attention on the critical observations needed to test whether streamflow is being produced in the right way so that a model is considered as fit‐for‐purpose in predicting the impacts of future change scenarios. As is the case for uncertainty estimation in general, it is argued that the assumptions made in setting behavioural limits of acceptability should be stated explicitly to leave an audit trail in any application that can be reviewed by users of the model outputs.",
keywords = "Water Science and Technology",
author = "Keith Beven and Stuart Lane and Trevor Page and Ann Kretzschmar and Barry Hankin and Paul Smith and Nick Chappell",
year = "2022",
month = oct,
day = "31",
doi = "10.1002/hyp.14703",
language = "English",
volume = "36",
journal = "Hydrological Processes",
issn = "0885-6087",
publisher = "John Wiley and Sons Ltd",
number = "10",

}

RIS

TY - JOUR

T1 - On (in)validating environmental models. 2. Implementation of a Turing‐like Test to modelling hydrological processes

AU - Beven, Keith

AU - Lane, Stuart

AU - Page, Trevor

AU - Kretzschmar, Ann

AU - Hankin, Barry

AU - Smith, Paul

AU - Chappell, Nick

PY - 2022/10/31

Y1 - 2022/10/31

N2 - Part 1 of this study discussed the concept of using a form of Turing‐like test for model evaluation, together with eight principles for implementing such an approach. In this part, the framing of fitness‐for‐purpose as a Turing‐like test is discussed, together with an example application of trying to assess whether a rainfall‐runoff model might be an adequate representation of the discharge response in a catchment for predicting future natural flood management scenarios. It is shown that the variation between event runoff coefficients in the record can be used to create some limits of acceptability that implicitly take some account of the epistemic uncertainties arising from lack of knowledge about errors in rainfall and discharge observations. In the case study it is demonstrated that the model used cannot be validated in this way across all the range of observed discharges, but that behavioural models can be found for the peak flows that are the subject of interest in the application. Thinking in terms of the Turing‐like test focusses attention on the critical observations needed to test whether streamflow is being produced in the right way so that a model is considered as fit‐for‐purpose in predicting the impacts of future change scenarios. As is the case for uncertainty estimation in general, it is argued that the assumptions made in setting behavioural limits of acceptability should be stated explicitly to leave an audit trail in any application that can be reviewed by users of the model outputs.

AB - Part 1 of this study discussed the concept of using a form of Turing‐like test for model evaluation, together with eight principles for implementing such an approach. In this part, the framing of fitness‐for‐purpose as a Turing‐like test is discussed, together with an example application of trying to assess whether a rainfall‐runoff model might be an adequate representation of the discharge response in a catchment for predicting future natural flood management scenarios. It is shown that the variation between event runoff coefficients in the record can be used to create some limits of acceptability that implicitly take some account of the epistemic uncertainties arising from lack of knowledge about errors in rainfall and discharge observations. In the case study it is demonstrated that the model used cannot be validated in this way across all the range of observed discharges, but that behavioural models can be found for the peak flows that are the subject of interest in the application. Thinking in terms of the Turing‐like test focusses attention on the critical observations needed to test whether streamflow is being produced in the right way so that a model is considered as fit‐for‐purpose in predicting the impacts of future change scenarios. As is the case for uncertainty estimation in general, it is argued that the assumptions made in setting behavioural limits of acceptability should be stated explicitly to leave an audit trail in any application that can be reviewed by users of the model outputs.

KW - Water Science and Technology

U2 - 10.1002/hyp.14703

DO - 10.1002/hyp.14703

M3 - Journal article

VL - 36

JO - Hydrological Processes

JF - Hydrological Processes

SN - 0885-6087

IS - 10

M1 - e14703

ER -
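
The core idea described in the abstract, deriving limits of acceptability from the observed spread of event runoff coefficients and accepting a candidate simulation as behavioural only if its predicted peaks stay within those limits, can be sketched in code. The Python below is a minimal illustrative sketch, not the authors' implementation: the event data, the function names, and the particular rule used to set the width of the limits are all assumptions made for this example.

import numpy as np

# Illustrative sketch only: data values, names and the limit-width rule are assumptions.

def runoff_coefficients(event_rain_mm, event_runoff_mm):
    """Event runoff coefficient = event runoff depth / event rainfall depth."""
    rain = np.asarray(event_rain_mm, dtype=float)
    runoff = np.asarray(event_runoff_mm, dtype=float)
    return runoff / rain

def limits_of_acceptability(obs_peaks, relative_width):
    """Widen each observed peak by a relative amount reflecting the spread of
    runoff coefficients, as a crude stand-in for epistemic input/output errors."""
    obs = np.asarray(obs_peaks, dtype=float)
    lower = np.maximum(obs * (1.0 - relative_width), 0.0)  # keep the lower limit physical
    upper = obs * (1.0 + relative_width)
    return lower, upper

def is_behavioural(sim_peaks, lower, upper):
    """A candidate simulation is behavioural only if every peak lies within its limits."""
    sim = np.asarray(sim_peaks, dtype=float)
    return bool(np.all((sim >= lower) & (sim <= upper)))

if __name__ == "__main__":
    # Hypothetical event totals (mm) and observed event peak flows (m3/s).
    event_rain = [25.0, 40.0, 18.0, 60.0]
    event_runoff = [10.0, 22.0, 6.0, 39.0]
    obs_peaks = [3.2, 7.5, 1.8, 12.4]

    rc = runoff_coefficients(event_rain, event_runoff)
    # One possible width rule: the relative spread of the runoff coefficients.
    width = (rc.max() - rc.min()) / rc.mean()
    lower, upper = limits_of_acceptability(obs_peaks, width)

    sim_peaks = [3.0, 8.1, 1.6, 11.9]  # peaks from one candidate model run
    print("runoff coefficients:", np.round(rc, 2))
    print("behavioural:", is_behavioural(sim_peaks, lower, upper))

In the paper's terms, only the peak flows relevant to the natural flood management application would need to pass such a test, and the chosen width rule is exactly the kind of assumption that should be stated explicitly to leave an audit trail.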