
Electronic data

  • remotesensing-13-05015

    Accepted author manuscript, 4.63 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: 10.3390/rs13245015


Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images. / Wang, Libo; Zhang, Ce; Li, Rui et al.
In: Remote Sensing, Vol. 13, No. 24, 5015, 10.12.2021.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Wang, L, Zhang, C, Li, R, Duan, C, Meng, X & Atkinson, P 2021, 'Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images', Remote Sensing, vol. 13, no. 24, 5015. https://doi.org/10.3390/rs13245015

APA

Wang, L., Zhang, C., Li, R., Duan, C., Meng, X., & Atkinson, P. (2021). Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images. Remote Sensing, 13(24), Article 5015. https://doi.org/10.3390/rs13245015

Vancouver

Wang L, Zhang C, Li R, Duan C, Meng X, Atkinson P. Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images. Remote Sensing. 2021 Dec 10;13(24):5015. doi: 10.3390/rs13245015

Author

Wang, Libo ; Zhang, Ce ; Li, Rui et al. / Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images. In: Remote Sensing. 2021 ; Vol. 13, No. 24.

BibTeX

@article{e4775396e0c64205a2dbc27924f00f1d,
title = "Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images",
abstract = "Assigning specific categories to geospatial objects at the pixel level is a fundamental task in remote sensing image analysis. Along with the rapid development of sensor technologies, remotely sensed images can be captured at multiple spatial resolutions (MSR), with information content manifested at different scales. Extracting information from these MSR images presents significant opportunities for enhanced feature representation and characterisation. However, MSR images suffer from two critical issues: (1) increased scale variation of geo-objects and (2) loss of detailed information at coarse spatial resolutions. To bridge these gaps, in this paper we propose a novel scale-aware neural network (SaNet) for the semantic segmentation of MSR remotely sensed imagery. SaNet deploys a densely connected feature fusion module (DCFFM) to capture high-quality multi-scale context, such that scale variation is handled properly and segmentation quality is increased for both large and small objects. A spatial feature recalibration module (SFRM) is further incorporated into the network to learn intact semantic content with enhanced spatial relationships, where the negative effects of information loss are removed. The combination of DCFFM and SFRM allows SaNet to learn a scale-aware feature representation, which outperforms existing multi-scale feature representations. Extensive experiments on three semantic segmentation datasets demonstrated the effectiveness of the proposed SaNet in cross-resolution segmentation.",
keywords = "deep convolutional neural network, multiple spatial resolutions, remote sensing, scale-aware feature representation, semantic segmentation",
author = "Libo Wang and Ce Zhang and Rui Li and Chenxi Duan and Xiaoliang Meng and Peter Atkinson",
year = "2021",
month = dec,
day = "10",
doi = "10.3390/rs13245015",
language = "English",
volume = "13",
journal = "Remote Sensing",
issn = "2072-4292",
publisher = "MDPI AG",
number = "24",
pages = "5015",
}

RIS

TY - JOUR

T1 - Scale-Aware Neural Network for Semantic Segmentation of Multi-Resolution Remote Sensing Images

AU - Wang, Libo

AU - Zhang, Ce

AU - Li, Rui

AU - Duan, Chenxi

AU - Meng, Xiaoliang

AU - Atkinson, Peter

PY - 2021/12/10

Y1 - 2021/12/10

N2 - Assigning specific categories to geospatial objects at the pixel level is a fundamental task in remote sensing image analysis. Along with the rapid development of sensor technologies, remotely sensed images can be captured at multiple spatial resolutions (MSR), with information content manifested at different scales. Extracting information from these MSR images presents significant opportunities for enhanced feature representation and characterisation. However, MSR images suffer from two critical issues: (1) increased scale variation of geo-objects and (2) loss of detailed information at coarse spatial resolutions. To bridge these gaps, in this paper we propose a novel scale-aware neural network (SaNet) for the semantic segmentation of MSR remotely sensed imagery. SaNet deploys a densely connected feature fusion module (DCFFM) to capture high-quality multi-scale context, such that scale variation is handled properly and segmentation quality is increased for both large and small objects. A spatial feature recalibration module (SFRM) is further incorporated into the network to learn intact semantic content with enhanced spatial relationships, where the negative effects of information loss are removed. The combination of DCFFM and SFRM allows SaNet to learn a scale-aware feature representation, which outperforms existing multi-scale feature representations. Extensive experiments on three semantic segmentation datasets demonstrated the effectiveness of the proposed SaNet in cross-resolution segmentation.

AB - Assigning specific categories to geospatial objects at the pixel level is a fundamental task in remote sensing image analysis. Along with the rapid development of sensor technologies, remotely sensed images can be captured at multiple spatial resolutions (MSR), with information content manifested at different scales. Extracting information from these MSR images presents significant opportunities for enhanced feature representation and characterisation. However, MSR images suffer from two critical issues: (1) increased scale variation of geo-objects and (2) loss of detailed information at coarse spatial resolutions. To bridge these gaps, in this paper we propose a novel scale-aware neural network (SaNet) for the semantic segmentation of MSR remotely sensed imagery. SaNet deploys a densely connected feature fusion module (DCFFM) to capture high-quality multi-scale context, such that scale variation is handled properly and segmentation quality is increased for both large and small objects. A spatial feature recalibration module (SFRM) is further incorporated into the network to learn intact semantic content with enhanced spatial relationships, where the negative effects of information loss are removed. The combination of DCFFM and SFRM allows SaNet to learn a scale-aware feature representation, which outperforms existing multi-scale feature representations. Extensive experiments on three semantic segmentation datasets demonstrated the effectiveness of the proposed SaNet in cross-resolution segmentation.

KW - deep convolutional neural network

KW - multiple spatial resolutions

KW - remote sensing

KW - scale-aware feature representation

KW - semantic segmentation

U2 - 10.3390/rs13245015

DO - 10.3390/rs13245015

M3 - Journal article

VL - 13

JO - Remote Sensing

JF - Remote Sensing

SN - 2072-4292

IS - 24

M1 - 5015

ER -
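
Illustrative code sketch

The abstract describes a two-module design: a densely connected feature fusion module (DCFFM) that aggregates multi-scale context across a feature pyramid, and a spatial feature recalibration module (SFRM) that re-weights features to counter detail loss. The paper's implementation is not reproduced here; what follows is a minimal PyTorch sketch of how such modules might compose around a generic encoder. All class names, channel sizes, and internal designs (dense fusion by upsample-and-concatenate; recalibration by channel and spatial gating) are assumptions for illustration, not the authors' code.

# Hypothetical sketch of a SaNet-style model. NOT the authors' code:
# module internals (dense fusion by concatenation, recalibration by
# channel/spatial gating) are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCFFM(nn.Module):
    """Densely connected feature fusion (assumed): each pyramid level is
    fused with upsampled features from every coarser level."""
    def __init__(self, channels, levels=4):
        super().__init__()
        # Level i concatenates itself with the (levels - 1 - i) coarser maps.
        self.fuse = nn.ModuleList(
            [nn.Conv2d(channels * (levels - i), channels, 1)
             for i in range(levels)])

    def forward(self, feats):  # feats: list of maps, finest -> coarsest
        outs = []
        for i, f in enumerate(feats):
            coarser = [F.interpolate(c, size=f.shape[-2:], mode="bilinear",
                                     align_corners=False)
                       for c in feats[i + 1:]]
            outs.append(self.fuse[i](torch.cat([f] + coarser, dim=1)))
        return outs

class SFRM(nn.Module):
    """Spatial feature recalibration (assumed): squeeze-and-excitation
    style channel gating followed by a spatial attention map."""
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)     # re-weight channels
        return x * self.spatial_gate(x)  # re-weight spatial positions

class SaNetSketch(nn.Module):
    """Toy 4-stage encoder + DCFFM + SFRM + 1x1 classifier."""
    def __init__(self, in_ch=3, channels=64, num_classes=6):
        super().__init__()
        self.stages = nn.ModuleList()
        ch = in_ch
        for _ in range(4):  # stand-in for a real backbone (e.g. ResNet)
            self.stages.append(nn.Sequential(
                nn.Conv2d(ch, channels, 3, stride=2, padding=1),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True)))
            ch = channels
        self.dcffm = DCFFM(channels)
        self.sfrm = SFRM(channels)
        self.classifier = nn.Conv2d(channels, num_classes, 1)

    def forward(self, x):
        feats, h = [], x
        for stage in self.stages:
            h = stage(h)
            feats.append(h)           # finest -> coarsest pyramid
        fused = self.dcffm(feats)     # multi-scale context at every level
        out = self.sfrm(fused[0])     # recalibrate the finest level
        out = self.classifier(out)
        # Return per-pixel class logits at the input resolution.
        return F.interpolate(out, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

logits = SaNetSketch()(torch.randn(1, 3, 512, 512))
print(logits.shape)  # torch.Size([1, 6, 512, 512])

Dense upsample-and-concatenate fusion is one common way to realise multi-scale context, and gating is one common recalibration scheme; the published DCFFM and SFRM may differ substantially in their details.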