
Electronic data

  • remotesensing-13-05015

    Accepted author manuscript, 4.63 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI:


Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images

Research output: Contribution to journal › Journal article › peer-review

Article number: 5015
Journal publication date: 10/12/2021
Journal: Remote Sensing
Issue number: 24
Volume: 13
Number of pages: 19
Publication status: Published
Original language: English

Abstract

Assigning geospatial objects to specific categories at the pixel level is a fundamental task in remote sensing image analysis. Along with the rapid development of sensor technologies, remotely sensed images can be captured at multiple spatial resolutions (MSR), with information content manifested at different scales. Extracting information from these MSR images offers significant opportunities for enhanced feature representation and characterisation. However, MSR images suffer from two critical issues: (1) increased scale variation of geo-objects and (2) loss of detailed information at coarse spatial resolutions. To bridge these gaps, in this paper we propose a novel scale-aware neural network (SaNet) for the semantic segmentation of MSR remotely sensed imagery. SaNet deploys a densely connected feature fusion module (DCFFM) to capture high-quality multi-scale context, such that scale variation is handled properly and segmentation quality is increased for both large and small objects. A spatial feature recalibration module (SFRM) is further incorporated into the network to learn intact semantic content with enhanced spatial relationships, mitigating the negative effects of information loss. The combination of DCFFM and SFRM allows SaNet to learn a scale-aware feature representation, which outperforms existing multi-scale feature representations. Extensive experiments on three semantic segmentation datasets demonstrate the effectiveness of the proposed SaNet in cross-resolution segmentation.
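The abstract does not specify the internals of the DCFFM; as a purely illustrative sketch (all function and variable names here are hypothetical, not taken from the paper), the general idea of dense multi-scale context fusion can be shown as upsampling coarser feature maps to the finest resolution and concatenating all scales along the channel axis:

```python
import numpy as np

def upsample_nearest(feat, factor):
    # Nearest-neighbour upsampling of a (C, H, W) feature map.
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def dense_multiscale_fusion(features):
    """Fuse a feature pyramid ordered fine -> coarse.

    Each (C, H, W) map is assumed to be half the spatial size of the
    previous one. Coarser maps are upsampled to the finest resolution
    and all scales are concatenated along the channel axis, so the
    fused representation carries context from every scale.
    """
    target_h = features[0].shape[1]
    fused = [upsample_nearest(f, target_h // f.shape[1]) for f in features]
    return np.concatenate(fused, axis=0)

# Three pyramid levels: 64x64, 32x32, 16x16, with 8 channels each.
pyramid = [np.random.rand(8, 64 // 2**i, 64 // 2**i) for i in range(3)]
fused = dense_multiscale_fusion(pyramid)
print(fused.shape)  # (24, 64, 64)
```

This captures only the fusion-and-concatenation pattern common to multi-scale segmentation networks; the actual DCFFM in SaNet involves learned dense connections and is described in the full paper.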