
Electronic data

  • VIPSTF

    Rights statement: This is the author’s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 249, 2020 DOI: 10.1016/j.rse.2020.112009

    Accepted author manuscript, 1.48 MB, PDF document

    Available under license: CC BY-NC-ND


Virtual image pair-based spatio-temporal fusion

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Virtual image pair-based spatio-temporal fusion. / Wang, Q.; Tang, Y.; Tong, X. et al.
In: Remote Sensing of Environment, Vol. 249, 112009, 17.11.2020.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Wang, Q, Tang, Y, Tong, X & Atkinson, PM 2020, 'Virtual image pair-based spatio-temporal fusion', Remote Sensing of Environment, vol. 249, 112009. https://doi.org/10.1016/j.rse.2020.112009

APA

Wang, Q., Tang, Y., Tong, X., & Atkinson, P. M. (2020). Virtual image pair-based spatio-temporal fusion. Remote Sensing of Environment, 249, Article 112009. https://doi.org/10.1016/j.rse.2020.112009

Vancouver

Wang Q, Tang Y, Tong X, Atkinson PM. Virtual image pair-based spatio-temporal fusion. Remote Sensing of Environment. 2020 Nov 17;249:112009. Epub 2020 Aug 1. doi: 10.1016/j.rse.2020.112009

Author

Wang, Q. ; Tang, Y. ; Tong, X. et al. / Virtual image pair-based spatio-temporal fusion. In: Remote Sensing of Environment. 2020 ; Vol. 249.

Bibtex

@article{8fbdaa440543468d807cd1f88f648682,
title = "Virtual image pair-based spatio-temporal fusion",
abstract = "Spatio-temporal fusion is a technique used to produce images with both fine spatial and temporal resolution. Generally, the principle of existing spatio-temporal fusion methods can be characterized by a unified framework of prediction based on two parts: (i) the known fine spatial resolution images (e.g., Landsat images), and (ii) the fine spatial resolution increment predicted from the available coarse spatial resolution increment (i.e., a downscaling process), that is, the difference between the coarse spatial resolution images (e.g., MODIS images) acquired at the known and prediction times. Owing to seasonal changes and land cover changes, there always exist large differences between images acquired at different times, resulting in a large increment and, further, great uncertainty in downscaling. In this paper, a virtual image pair-based spatio-temporal fusion (VIPSTF) approach was proposed to deal with this problem. VIPSTF is based on the concept of a virtual image pair (VIP), which is produced based on the available, known MODIS-Landsat image pairs. We demonstrate theoretically that compared to the known image pairs, the VIP is closer to the data at the prediction time. The VIP can capture more fine spatial resolution information directly from known images and reduce the challenge in downscaling. VIPSTF is a flexible framework suitable for existing spatial weighting- and spatial unmixing-based methods, and two versions VIPSTF-SW and VIPSTF-SU are, thus, developed. Experimental results on a heterogeneous site and a site experiencing land cover type changes show that both spatial weighting- and spatial unmixing-based methods can be enhanced by VIPSTF, and the advantage is particularly noticeable when the observed image pairs are temporally far from the prediction time. Moreover, VIPSTF is free of the need for image pair selection and robust to the use of multiple image pairs. VIPSTF is also computationally faster than the original methods when using multiple image pairs. The concept of VIP provides a new insight to enhance spatio-temporal fusion by making fuller use of the observed image pairs and reducing the uncertainty of estimating the fine spatial resolution increment. {\textcopyright} 2020 Elsevier Inc.",
keywords = "Downscaling, Spatio-temporal fusion, Time-series images, Virtual image pair (VIP), Forecasting, Image acquisition, Image fusion, Image resolution, Radiometers, Uncertainty analysis, Downscaling process, Flexible framework, Land-cover change, Spatial and temporal resolutions, Spatial resolution, Spatial resolution images, Spatio-temporal fusions, Unified framework, Image enhancement, downscaling, image analysis, land cover, Landsat, MODIS, prediction, satellite imagery, spatial resolution, spatiotemporal analysis",
author = "Q. Wang and Y. Tang and X. Tong and P.M. Atkinson",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 249, 2020 DOI: 10.1016/j.rse.2020.112009",
year = "2020",
month = nov,
day = "17",
doi = "10.1016/j.rse.2020.112009",
language = "English",
volume = "249",
journal = "Remote Sensing of Environment",
issn = "0034-4257",
publisher = "Elsevier Inc.",

}

RIS

TY - JOUR

T1 - Virtual image pair-based spatio-temporal fusion

AU - Wang, Q.

AU - Tang, Y.

AU - Tong, X.

AU - Atkinson, P.M.

N1 - This is the author’s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 249, 2020 DOI: 10.1016/j.rse.2020.112009

PY - 2020/11/17

Y1 - 2020/11/17

N2 - Spatio-temporal fusion is a technique used to produce images with both fine spatial and temporal resolution. Generally, the principle of existing spatio-temporal fusion methods can be characterized by a unified framework of prediction based on two parts: (i) the known fine spatial resolution images (e.g., Landsat images), and (ii) the fine spatial resolution increment predicted from the available coarse spatial resolution increment (i.e., a downscaling process), that is, the difference between the coarse spatial resolution images (e.g., MODIS images) acquired at the known and prediction times. Owing to seasonal changes and land cover changes, there always exist large differences between images acquired at different times, resulting in a large increment and, further, great uncertainty in downscaling. In this paper, a virtual image pair-based spatio-temporal fusion (VIPSTF) approach was proposed to deal with this problem. VIPSTF is based on the concept of a virtual image pair (VIP), which is produced based on the available, known MODIS-Landsat image pairs. We demonstrate theoretically that compared to the known image pairs, the VIP is closer to the data at the prediction time. The VIP can capture more fine spatial resolution information directly from known images and reduce the challenge in downscaling. VIPSTF is a flexible framework suitable for existing spatial weighting- and spatial unmixing-based methods, and two versions VIPSTF-SW and VIPSTF-SU are, thus, developed. Experimental results on a heterogeneous site and a site experiencing land cover type changes show that both spatial weighting- and spatial unmixing-based methods can be enhanced by VIPSTF, and the advantage is particularly noticeable when the observed image pairs are temporally far from the prediction time. Moreover, VIPSTF is free of the need for image pair selection and robust to the use of multiple image pairs. VIPSTF is also computationally faster than the original methods when using multiple image pairs. The concept of VIP provides a new insight to enhance spatio-temporal fusion by making fuller use of the observed image pairs and reducing the uncertainty of estimating the fine spatial resolution increment. © 2020 Elsevier Inc.

AB - Spatio-temporal fusion is a technique used to produce images with both fine spatial and temporal resolution. Generally, the principle of existing spatio-temporal fusion methods can be characterized by a unified framework of prediction based on two parts: (i) the known fine spatial resolution images (e.g., Landsat images), and (ii) the fine spatial resolution increment predicted from the available coarse spatial resolution increment (i.e., a downscaling process), that is, the difference between the coarse spatial resolution images (e.g., MODIS images) acquired at the known and prediction times. Owing to seasonal changes and land cover changes, there always exist large differences between images acquired at different times, resulting in a large increment and, further, great uncertainty in downscaling. In this paper, a virtual image pair-based spatio-temporal fusion (VIPSTF) approach was proposed to deal with this problem. VIPSTF is based on the concept of a virtual image pair (VIP), which is produced based on the available, known MODIS-Landsat image pairs. We demonstrate theoretically that compared to the known image pairs, the VIP is closer to the data at the prediction time. The VIP can capture more fine spatial resolution information directly from known images and reduce the challenge in downscaling. VIPSTF is a flexible framework suitable for existing spatial weighting- and spatial unmixing-based methods, and two versions VIPSTF-SW and VIPSTF-SU are, thus, developed. Experimental results on a heterogeneous site and a site experiencing land cover type changes show that both spatial weighting- and spatial unmixing-based methods can be enhanced by VIPSTF, and the advantage is particularly noticeable when the observed image pairs are temporally far from the prediction time. Moreover, VIPSTF is free of the need for image pair selection and robust to the use of multiple image pairs. VIPSTF is also computationally faster than the original methods when using multiple image pairs. The concept of VIP provides a new insight to enhance spatio-temporal fusion by making fuller use of the observed image pairs and reducing the uncertainty of estimating the fine spatial resolution increment. © 2020 Elsevier Inc.

KW - Downscaling

KW - Spatio-temporal fusion

KW - Time-series images

KW - Virtual image pair (VIP)

KW - Forecasting

KW - Image acquisition

KW - Image fusion

KW - Image resolution

KW - Radiometers

KW - Uncertainty analysis

KW - Downscaling process

KW - Flexible framework

KW - Land-cover change

KW - Spatial and temporal resolutions

KW - Spatial resolution

KW - Spatial resolution images

KW - Spatio-temporal fusions

KW - Unified framework

KW - Image enhancement

KW - downscaling

KW - image analysis

KW - land cover

KW - Landsat

KW - MODIS

KW - prediction

KW - satellite imagery

KW - spatial resolution

KW - spatiotemporal analysis

U2 - 10.1016/j.rse.2020.112009

DO - 10.1016/j.rse.2020.112009

M3 - Journal article

VL - 249

JO - Remote Sensing of Environment

JF - Remote Sensing of Environment

SN - 0034-4257

M1 - 112009

ER -
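
The abstract above characterises existing spatio-temporal fusion methods by a unified prediction framework: a known fine spatial resolution image plus a fine spatial resolution increment downscaled from the coarse spatial resolution increment. As a minimal sketch of that framework, using illustrative symbols (a known fine image F at time t_k, coarse images C at the known time t_k and prediction time t_p, and a downscaling operator D; these are placeholders, not the paper's own notation), the prediction can be written in LaTeX as:

% Generic spatio-temporal fusion prediction, as described in the abstract.
% All symbols are illustrative placeholders, not the notation used in the paper.
\[
  \hat{F}(t_p) \;=\; F(t_k) \;+\; \mathcal{D}\bigl(C(t_p) - C(t_k)\bigr)
\]
% F(t_k): known fine spatial resolution image (e.g., Landsat) at time t_k
% C(t_k), C(t_p): coarse spatial resolution images (e.g., MODIS) at the known and prediction times
% \mathcal{D}: downscaling of the coarse increment to the fine spatial resolution

On this reading, the VIP idea in the abstract amounts to replacing the observed pair (F(t_k), C(t_k)) with a virtual pair constructed from the available MODIS-Landsat pairs, so that the coarse increment C(t_p) - C(t_k) becomes smaller and the downscaling step carries less uncertainty.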