
Electronic data

  • 10095020.2021

    Rights statement: This is an Accepted Manuscript of an article published by Taylor & Francis in Geo-spatial Information Science on 07/01/2022, available online: http://www.tandfonline.com/10.1080/10095020.2021.2017237

    Accepted author manuscript, 16.9 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: 10.1080/10095020.2021.2017237


Land cover classification from remote sensing images based on multi-scale fully convolutional network

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Rui Li
  • Shunyi Zheng
  • Chenxi Duan
  • Libo Wang
  • Ce Zhang
Journal publication date: 1/07/2022
Journal: Geo-spatial Information Science
Issue number: 2
Volume: 25
Number of pages: 17
Pages (from-to): 278-294
Publication status: Published
Early online date: 7/01/2022
Original language: English

Abstract

Although Convolutional Neural Networks (CNNs) have shown great potential for land cover classification, the frequently used single-scale convolution kernel limits the scope of information extraction. In this paper, we therefore propose a Multi-Scale Fully Convolutional Network (MSFCN) with a multi-scale convolution kernel, a Channel Attention Block (CAB), and a Global Pooling Module (GPM) to exploit discriminative representations from two-dimensional (2D) satellite images. Meanwhile, to explore the ability of the proposed MSFCN on spatio-temporal images, we extend MSFCN to three dimensions using three-dimensional (3D) CNNs, enabling it to harness each land cover category's time-series interactions from the reshaped spatio-temporal remote sensing images. To verify the effectiveness of the proposed MSFCN, we conduct experiments on two spatial datasets and two spatio-temporal datasets. The proposed MSFCN achieves an mIoU of 60.366% on the WHDLD dataset and 75.127% on the GID dataset, while the figures for the two spatio-temporal datasets are 87.753% and 77.156%. Extensive comparative experiments and ablation studies demonstrate the effectiveness of the proposed MSFCN. Code will be available at https://github.com/lironui/MSFCN.
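The core idea of the multi-scale kernel is to convolve the same input with several kernels of different receptive-field sizes in parallel and stack the responses. As a minimal, self-contained sketch of that idea (not the authors' implementation: the kernel sizes and the fixed averaging weights below are hypothetical stand-ins for learned filters), one could write:

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive 'same'-padded 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_features(img, kernel_sizes=(3, 5, 7)):
    """Stack responses from parallel kernels with different receptive fields,
    mimicking the multi-scale convolution idea (hypothetical simplification:
    fixed averaging kernels instead of learned weights)."""
    maps = [conv2d_same(img, np.ones((k, k)) / (k * k)) for k in kernel_sizes]
    return np.stack(maps, axis=0)  # shape: (num_scales, H, W)

img = np.arange(36, dtype=float).reshape(6, 6)
feats = multi_scale_features(img)
print(feats.shape)  # (3, 6, 6)
```

In a trained network the stacked multi-scale maps would then be fused (e.g. by a 1×1 convolution) and reweighted per channel by an attention mechanism such as the CAB described above; the sketch only illustrates why larger kernels see more spatial context per output pixel.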
