

Recurrent Semantic Change Detection in VHR Remote Sensing Images Using Visual Foundation Models

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 5402314
Journal publication date: 31/12/2025
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 63
Publication status: E-pub ahead of print
Early online date: 17/03/25
Original language: English

Abstract

Semantic change detection (SCD) involves the simultaneous extraction of changed regions and their corresponding semantic classes (pre- and post-change) from remote sensing images (RSIs). Despite recent advances in vision foundation models (VFMs), the fast segment anything model (FastSAM) has shown insufficient performance in SCD. In this article, we propose a novel VFM architecture for SCD, designated VFM-ReSCD. This architecture integrates a side adapter (SA) to fine-tune the FastSAM network, enabling zero-shot transfer to novel image distributions and tasks and facilitating the extraction of spatial features from very high-resolution (VHR) RSIs. Moreover, we introduce a recurrent neural network (RNN) to model semantic correlation and capture feature changes. We evaluated the proposed method on two benchmark datasets. Extensive experiments show that it achieves state-of-the-art (SOTA) performance, outperforming existing CNN-based approaches on both RSI datasets.
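The abstract describes three components: a frozen foundation-model backbone, a lightweight side adapter that refines its features, and a recurrent cell that relates pre- and post-change features. The following is a minimal, illustrative numpy sketch of that data flow only; it is not the authors' implementation, and all function names, weight shapes, and the scalar change score are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(image, w):
    # Stand-in for frozen FastSAM features: a fixed (untrained) projection.
    return np.tanh(image @ w)

def side_adapter(feat, wa):
    # Lightweight trainable adapter alongside the frozen backbone,
    # applied as a residual refinement of the backbone features.
    return feat + np.tanh(feat @ wa)

def rnn_step(h, x, wh, wx):
    # Simple recurrent update standing in for the RNN that models
    # semantic correlation between the bi-temporal features.
    return np.tanh(h @ wh + x @ wx)

d = 8                                    # toy feature dimension (assumption)
w  = rng.normal(size=(d, d)) * 0.1       # frozen backbone weights
wa = rng.normal(size=(d, d)) * 0.1       # side-adapter weights
wh = rng.normal(size=(d, d)) * 0.1       # recurrent weights
wx = rng.normal(size=(d, d)) * 0.1       # input weights

pre, post = rng.normal(size=(2, d))      # toy "pre" and "post" image vectors

h = np.zeros(d)
for img in (pre, post):                  # feed the bi-temporal pair in order
    feat = side_adapter(backbone(img, w), wa)
    h = rnn_step(h, feat, wh, wx)

change_score = float(np.linalg.norm(h))  # toy scalar change response
print(change_score)
```

In the paper the adapter and recurrent stages operate on spatial feature maps of VHR images rather than toy vectors; the sketch only mirrors the order of operations the abstract states (backbone → side adapter → recurrent correlation).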