
Electronic data

  • OSDA

    Accepted author manuscript, 5.52 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


Prototypical Unknown-Aware Multiview Consistency Learning for Open-Set Cross-Domain Remote Sensing Image Classification

Research output: Contribution to Journal/MagazineJournal articlepeer-review

E-pub ahead of print
Article number: 5643616
Journal publication date: 31/12/2024
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 62
Publication status: E-pub ahead of print
Early online date: 8/10/24
Original language: English

Abstract

Developing a cross-domain classification model for remote sensing images has drawn significant attention in the literature. By leveraging the open-set unsupervised domain adaptation (UDA) technique, the generalization performance of deep learning models has been improved with the capability to recognize unknown categories. However, it remains challenging to explore distribution patterns in the target domain using uncertain category-wise supervision from unlabeled datasets while reducing negative transfer caused by unknown samples. To develop a robust open-set UDA framework, this article presents prototypical unknown-aware multiview consistency learning (PUMCL), designed for remote sensing scene classification across heterogeneous domains. Specifically, it employs a consistency learning scheme with multiview and multilevel perturbations to improve feature learning from unlabeled target samples. An entropy separation strategy is utilized to facilitate open-set detection and recognition during adaptation, enabling unknown-aware feature alignment. Furthermore, the introduction of prototypical constraints optimizes pseudo-label generation through online denoising and promotes a compact category-wise feature subspace for improved class separation across domains. Experiments conducted on six cross-domain scenarios using the AID, NWPU, and UCMD datasets demonstrate the method's superior performance compared to nine state-of-the-art approaches, achieving a gain of 4.5% to 21.2% in mIoU. More importantly, it shows promising class separability, with clear boundaries between different classes and compact clustering of unknown samples in the feature space. The source code will be available at https://github.com/zxk688.
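Two of the mechanisms named in the abstract, entropy-based separation of likely-unknown target samples and prototype-based pseudo-labeling, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (their code is at the linked repository); the threshold value and the cosine-similarity assignment rule here are illustrative assumptions.

```python
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of each row of class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def separate_by_entropy(probs, threshold=0.6):
    """Entropy separation: low-entropy predictions are treated as
    confident known-class candidates, high-entropy ones as likely
    unknown samples to be held out from class-wise alignment.
    (The threshold 0.6 is an illustrative choice, not from the paper.)"""
    return entropy(probs) < threshold  # True = known-class candidate

def prototype_pseudo_labels(features, prototypes):
    """Prototypical pseudo-labeling: assign each target feature to the
    nearest class prototype by cosine similarity; the similarity score
    can then be used to filter (denoise) low-confidence pseudo-labels."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T
    return sims.argmax(axis=1), sims.max(axis=1)
```

In a full open-set UDA loop, the known-candidate mask would gate which target samples contribute to class-wise feature alignment, while the prototype similarities would supply denoised pseudo-labels for the consistency-learning branches.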