
Quantifying Uncertainty for Estimates Derived from Error Matrices in Land Cover Mapping Applications: The Case for a Bayesian Approach

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 5/02/2020
Host publication: Environmental Software Systems. Data Science in Action: 13th IFIP WG 5.11 International Symposium, ISESS 2020, Wageningen, The Netherlands, February 5–7, 2020, Proceedings
Editors: Ioannis N. Athanasiadis, Steven P. Frysinger, Gerald Schimak, Willem Jan Knibbe
Place of publication: Cham
Publisher: Springer
Pages: 151-164
Number of pages: 14
ISBN (electronic): 9783030398156
ISBN (print): 9783030398149
Original language: English

Publication series

Name: IFIP Advances in Information and Communication Technology
Publisher: Springer
Volume: 554
ISSN (print): 1868-4238
ISSN (electronic): 1868-422X

Abstract

The use of land cover mappings built from remotely sensed imagery has become increasingly popular in recent years. However, these mappings are ultimately only models. Consequently, it is vital to be able to assess and verify the quality of a mapping and to reliably quantify the uncertainty of any estimates derived from it.

For this, the use of validation sets and error matrices is a long-established practice in land cover mapping applications. In this paper, we review current state-of-the-art methods for quantifying uncertainty for estimates obtained from error matrices in a land cover mapping context. Specifically, we assess methods in terms of their transparency, generalisability, suitability under stratified sampling, and suitability in low-count situations. This is done using a third-party case study as a motivating and demonstrative example throughout the paper.
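As a concrete illustration (not taken from the paper or its case study), the sketch below shows the standard error-matrix workflow: cross-tabulating mapped against reference labels from a validation set and deriving the usual point estimates. All class labels and counts here are made up for demonstration.

```python
# Minimal sketch of the error-matrix workflow, with hypothetical validation data.
import numpy as np

def error_matrix(mapped, reference, n_classes):
    """Cross-tabulate labels: rows = mapped (map) class, columns = reference class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for map_c, ref_c in zip(mapped, reference):
        m[map_c, ref_c] += 1
    return m

# Hypothetical validation sample with three land cover classes (0, 1, 2).
mapped    = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 1])
reference = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 1])

M = error_matrix(mapped, reference, n_classes=3)
overall_accuracy   = np.trace(M) / M.sum()
users_accuracy     = np.diag(M) / M.sum(axis=1)   # per mapped class (row totals)
producers_accuracy = np.diag(M) / M.sum(axis=0)   # per reference class (column totals)
print(M, overall_accuracy, users_accuracy, producers_accuracy)
```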

The main finding of this paper is that there is a major issue of transparency for methods that quantify uncertainty in terms of confidence intervals (frequentist methods). This is primarily because of the difficulty of analysing nominal coverage in common situations. In effect, this leaves one without the necessary tools to know when a frequentist method is reliable in all but a few niche situations. The paper then discusses how a Bayesian approach may be better suited as a default method for uncertainty quantification when judged against our criteria.
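To illustrate the kind of low-count behaviour at stake, the hedged sketch below contrasts a normal-approximation (Wald) confidence interval with a Bayesian credible interval from a Beta posterior for a single class accuracy. The counts and the uniform prior are assumptions chosen for demonstration; they are not the paper's case study or its recommended model.

```python
# Illustration only: a frequentist Wald interval vs. a Bayesian credible interval
# for an accuracy estimated from a small number of validation points.
import numpy as np
from scipy import stats

correct, n = 9, 10            # hypothetical: 9 of 10 validation points correct
p_hat = correct / n

# Wald (normal-approximation) 95% confidence interval.
se = np.sqrt(p_hat * (1 - p_hat) / n)
wald = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval with a uniform Beta(1, 1) prior.
posterior = stats.beta(1 + correct, 1 + n - correct)
credible = posterior.ppf([0.025, 0.975])

print(wald, credible)
```

With nine correct out of ten, the Wald interval extends above 1, while the Beta posterior interval stays within [0, 1]; this is the sort of small-sample behaviour that makes nominal coverage hard to assess and motivates considering a Bayesian alternative.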