
Electronic data

  • STHPensemble_final

    Rights statement: This is the author’s version of a work that was accepted for publication in Information Fusion. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Fusion, 80, 2021 DOI: 10.1016/j.inffus.2021.11.014

    Accepted author manuscript, 2.92 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI:


A Self-Training Hierarchical Prototype-based Ensemble Framework for Remote Sensing Scene Classification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

A Self-Training Hierarchical Prototype-based Ensemble Framework for Remote Sensing Scene Classification. / Gu, Xiaowei; Zhang, Ce; Shen, Qiang et al.
In: Information Fusion, Vol. 80, 01.04.2022, p. 179-204.



Vancouver

Gu X, Zhang C, Shen Q, Han J, Angelov P, Atkinson P. A Self-Training Hierarchical Prototype-based Ensemble Framework for Remote Sensing Scene Classification. Information Fusion. 2022 Apr 1;80:179-204. Epub 2021 Nov 18. doi: 10.1016/j.inffus.2021.11.014

Author

Gu, Xiaowei ; Zhang, Ce ; Shen, Qiang et al. / A Self-Training Hierarchical Prototype-based Ensemble Framework for Remote Sensing Scene Classification. In: Information Fusion. 2022 ; Vol. 80. pp. 179-204.

Bibtex

@article{675e8a36930b4e01ad109f4b5aeb6f57,
title = "A Self-Training Hierarchical Prototype-based Ensemble Framework for Remote Sensing Scene Classification",
abstract = "Remote sensing scene classification plays a critical role in a wide range of real-world applications. Technically, however, scene classification is an extremely challenging task due to the huge complexity in remotely sensed scenes, and the difficulty in acquiring labelled data for model training such as supervised deep learning. To tackle these issues, a novel semi-supervised ensemble framework is proposed here using the self-training hierarchical prototype-based classifier as the base learner for chunk-by-chunk prediction. The framework has the ability to build a powerful ensemble model from both labelled and unlabelled images with minimum supervision. Different feature descriptors are employed in the proposed ensemble framework to offer multiple independent views of images. Thus, the diversity of base learners is guaranteed for ensemble classification. To further increase the overall accuracy, a novel cross-checking strategy was introduced to enable the base learners to exchange pseudo-labelling information during the self-training process, and maximize the correctness of pseudo-labels assigned to unlabelled images. Extensive numerical experiments on popular benchmark remote sensing scenes demonstrated the effectiveness of the proposed ensemble framework, especially where the number of labelled images available is limited. For example, the classification accuracy achieved on the OPTIMAL-31, PatternNet and RSI-CB256 datasets was up to 99.91%, 98.67% and 99.07% with only 40% of the image sets used as labelled training images, surpassing or at least on par with mainstream benchmark approaches trained with double the number of labelled images.",
keywords = "self-training, pseudo-labelling, prototypes, remote sensing, scene classification",
author = "Xiaowei Gu and Ce Zhang and Qiang Shen and Jungong Han and Plamen Angelov and Peter Atkinson",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Information Fusion. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Fusion, 80, 2021 DOI: 10.1016/j.inffus.2021.11.014",
year = "2022",
month = apr,
day = "1",
doi = "10.1016/j.inffus.2021.11.014",
language = "English",
volume = "80",
pages = "179--204",
journal = "Information Fusion",
issn = "1566-2535",
publisher = "Elsevier",

}

RIS

TY - JOUR

T1 - A Self-Training Hierarchical Prototype-based Ensemble Framework for Remote Sensing Scene Classification

AU - Gu, Xiaowei

AU - Zhang, Ce

AU - Shen, Qiang

AU - Han, Jungong

AU - Angelov, Plamen

AU - Atkinson, Peter

N1 - This is the author’s version of a work that was accepted for publication in Information Fusion. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Information Fusion, 80, 2021 DOI: 10.1016/j.inffus.2021.11.014

PY - 2022/4/1

Y1 - 2022/4/1

N2 - Remote sensing scene classification plays a critical role in a wide range of real-world applications. Technically, however, scene classification is an extremely challenging task due to the huge complexity in remotely sensed scenes, and the difficulty in acquiring labelled data for model training such as supervised deep learning. To tackle these issues, a novel semi-supervised ensemble framework is proposed here using the self-training hierarchical prototype-based classifier as the base learner for chunk-by-chunk prediction. The framework has the ability to build a powerful ensemble model from both labelled and unlabelled images with minimum supervision. Different feature descriptors are employed in the proposed ensemble framework to offer multiple independent views of images. Thus, the diversity of base learners is guaranteed for ensemble classification. To further increase the overall accuracy, a novel cross-checking strategy was introduced to enable the base learners to exchange pseudo-labelling information during the self-training process, and maximize the correctness of pseudo-labels assigned to unlabelled images. Extensive numerical experiments on popular benchmark remote sensing scenes demonstrated the effectiveness of the proposed ensemble framework, especially where the number of labelled images available is limited. For example, the classification accuracy achieved on the OPTIMAL-31, PatternNet and RSI-CB256 datasets was up to 99.91%, 98.67% and 99.07% with only 40% of the image sets used as labelled training images, surpassing or at least on par with mainstream benchmark approaches trained with double the number of labelled images.

AB - Remote sensing scene classification plays a critical role in a wide range of real-world applications. Technically, however, scene classification is an extremely challenging task due to the huge complexity in remotely sensed scenes, and the difficulty in acquiring labelled data for model training such as supervised deep learning. To tackle these issues, a novel semi-supervised ensemble framework is proposed here using the self-training hierarchical prototype-based classifier as the base learner for chunk-by-chunk prediction. The framework has the ability to build a powerful ensemble model from both labelled and unlabelled images with minimum supervision. Different feature descriptors are employed in the proposed ensemble framework to offer multiple independent views of images. Thus, the diversity of base learners is guaranteed for ensemble classification. To further increase the overall accuracy, a novel cross-checking strategy was introduced to enable the base learners to exchange pseudo-labelling information during the self-training process, and maximize the correctness of pseudo-labels assigned to unlabelled images. Extensive numerical experiments on popular benchmark remote sensing scenes demonstrated the effectiveness of the proposed ensemble framework, especially where the number of labelled images available is limited. For example, the classification accuracy achieved on the OPTIMAL-31, PatternNet and RSI-CB256 datasets was up to 99.91%, 98.67% and 99.07% with only 40% of the image sets used as labelled training images, surpassing or at least on par with mainstream benchmark approaches trained with double the number of labelled images.

KW - self-training

KW - pseudo-labelling

KW - prototypes

KW - remote sensing

KW - scene classification

U2 - 10.1016/j.inffus.2021.11.014

DO - 10.1016/j.inffus.2021.11.014

M3 - Journal article

VL - 80

SP - 179

EP - 204

JO - Information Fusion

JF - Information Fusion

SN - 1566-2535

ER -
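
The abstract's core idea of self-training with cross-checked pseudo-labels can be illustrated with a toy sketch. Note this is not the authors' implementation: it substitutes scikit-learn logistic regression for the paper's hierarchical prototype-based learners and uses synthetic data in place of real feature descriptors; the agreement/confidence thresholds are illustrative assumptions. Two base learners, each trained on a different feature "view", label the unlabelled pool; a pseudo-label is accepted only where both learners agree with high confidence, and both learners are then retrained on the enlarged set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for two independent feature descriptors ("views") of the
# same images; the class signal is embedded in the mean of each view.
n_lab, n_unlab = 40, 200
y_true = rng.integers(0, 2, n_lab + n_unlab)
view_a = y_true[:, None] + rng.normal(0, 0.4, (n_lab + n_unlab, 5))
view_b = y_true[:, None] - rng.normal(0, 0.4, (n_lab + n_unlab, 5))

X_a_lab, X_a_unlab = view_a[:n_lab], view_a[n_lab:]
X_b_lab, X_b_unlab = view_b[:n_lab], view_b[n_lab:]
y_lab = y_true[:n_lab]

# One base learner per feature view, trained on the small labelled set.
clf_a = LogisticRegression().fit(X_a_lab, y_lab)
clf_b = LogisticRegression().fit(X_b_lab, y_lab)

# Cross-checking: accept a pseudo-label for an unlabelled image only when
# both learners predict the same class and each is confident (threshold is
# an illustrative choice, not taken from the paper).
pa = clf_a.predict_proba(X_a_unlab)
pb = clf_b.predict_proba(X_b_unlab)
lab_a, lab_b = pa.argmax(1), pb.argmax(1)
keep = (lab_a == lab_b) & (pa.max(1) > 0.9) & (pb.max(1) > 0.9)

# Self-training step: retrain each learner on labelled + agreed pseudo-labels.
X_a_new = np.vstack([X_a_lab, X_a_unlab[keep]])
X_b_new = np.vstack([X_b_lab, X_b_unlab[keep]])
y_new = np.concatenate([y_lab, lab_a[keep]])
clf_a = LogisticRegression().fit(X_a_new, y_new)
clf_b = LogisticRegression().fit(X_b_new, y_new)

def ensemble_predict(xa, xb):
    """Fuse the two views by averaging class probabilities."""
    return (clf_a.predict_proba(xa) + clf_b.predict_proba(xb)).argmax(1)
```

In the paper the loop runs chunk-by-chunk over the unlabelled pool with multiple descriptors and hierarchical prototype-based base learners; the sketch above only captures the agreement-gated pseudo-labelling and probability-averaging fusion pattern.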