Electronic data

  • remotesensing-1063256

    Accepted author manuscript, 25.4 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Links

Text available via DOI: 10.3390/rs13040692

BARNet: Boundary-Aware Refined Network for Automatic Building Extraction in Very High-Resolution Urban Aerial Images

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

BARNet: Boundary-Aware Refined Network for Automatic Building Extraction in Very High-Resolution Urban Aerial Images. / Jin, Yuwei; Xu, Wenbo; Zhang, Ce et al.
In: Remote Sensing, Vol. 13, No. 4, 692, 14.02.2021.

Vancouver

Jin Y, Xu W, Zhang C, Luo X, Jia H. BARNet: Boundary-Aware Refined Network for Automatic Building Extraction in Very High-Resolution Urban Aerial Images. Remote Sensing. 2021 Feb 14;13(4):692. doi: 10.3390/rs13040692

Bibtex

@article{18f26c5cc707421fabd05a8409889de3,
title = "BARNet: Boundary-Aware Refined Network for Automatic Building Extraction in Very High-Resolution Urban Aerial Images",
abstract = "The convolutional neural networks (CNNs), such as U-Net, have shown competitive performance in automatic extraction of buildings from very high-resolution (VHR) remotely sensed imagery. However, due to the unstable multi-scale context aggregation, the insufficient combination of multi-level features, and the lack of consideration about semantic boundary, most existing CNNs produce incomplete segmentation for large-scale buildings and result in predictions with huge uncertainty at building boundaries. This paper presents a novel network embedded a special boundary-aware loss, called Boundary-aware Refined Network (BARNet), to address the gap above. The unique property of BARNet is the gated-attention refined fusion unit (GARFU), the denser atrous spatial pyramid pooling (DASPP) module, and the boundary-aware (BA) loss. The performance of BARNet is tested on two popular benchmark datasets that include various urban scenes and diverse patterns of buildings. Experimental results demonstrate that the proposed method outperforms several state-of-the-art (SOTA) benchmark approaches in both visual interpretation and quantitative evaluations.",
keywords = "VHR aerial images, building extraction, convolutional neural network, feature fusion, context aggregation, boundary",
author = "Yuwei Jin and Wenbo Xu and Ce Zhang and Xin Luo and Haitao Jia",
year = "2021",
month = feb,
day = "14",
doi = "10.3390/rs13040692",
language = "English",
volume = "13",
journal = "Remote Sensing",
issn = "2072-4292",
publisher = "MDPI AG",
number = "4",

}
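
Illustrative sketch: boundary-aware loss

The boundary-aware (BA) loss is only named in the abstract above; this record does not reproduce its formulation. Purely as an illustrative sketch of the general idea, the PyTorch snippet below re-weights a per-pixel binary cross-entropy near ground-truth building boundaries, with the boundary band derived from the mask by a morphological gradient. The function names, kernel size, and weighting factor are assumptions for illustration, not BARNet's actual loss.

# Minimal, illustrative sketch of a boundary-aware segmentation loss.
# NOTE: this is NOT the BA loss from the BARNet paper (the formula is not
# reproduced on this page); it only shows the common pattern of up-weighting
# the loss near building boundaries. All names and values are hypothetical.
import torch
import torch.nn.functional as F


def boundary_mask(gt: torch.Tensor, kernel: int = 3) -> torch.Tensor:
    """Approximate ground-truth boundaries with a morphological gradient:
    dilation(gt) - erosion(gt), both implemented with max-pooling."""
    pad = kernel // 2
    dilated = F.max_pool2d(gt, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-gt, kernel, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)


def boundary_aware_loss(logits: torch.Tensor,
                        gt: torch.Tensor,
                        boundary_weight: float = 5.0) -> torch.Tensor:
    """Binary cross-entropy with extra weight on pixels near boundaries.

    logits: (B, 1, H, W) raw network outputs
    gt:     (B, 1, H, W) binary building masks in {0, 1}
    """
    bce = F.binary_cross_entropy_with_logits(logits, gt, reduction="none")
    weights = 1.0 + boundary_weight * boundary_mask(gt)
    return (weights * bce).mean()


if __name__ == "__main__":
    logits = torch.randn(2, 1, 64, 64)
    gt = (torch.rand(2, 1, 64, 64) > 0.5).float()
    print(boundary_aware_loss(logits, gt))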

RIS

TY - JOUR

T1 - BARNet

T2 - Boundary-Aware Refined Network for Automatic Building Extraction in Very High-Resolution Urban Aerial Images

AU - Jin, Yuwei

AU - Xu, Wenbo

AU - Zhang, Ce

AU - Luo, Xin

AU - Jia, Haitao

PY - 2021/2/14

Y1 - 2021/2/14

N2 - Convolutional neural networks (CNNs), such as U-Net, have shown competitive performance in the automatic extraction of buildings from very high-resolution (VHR) remotely sensed imagery. However, owing to unstable multi-scale context aggregation, insufficient fusion of multi-level features, and a lack of attention to semantic boundaries, most existing CNNs produce incomplete segmentation of large buildings and highly uncertain predictions at building boundaries. This paper presents a novel network with an embedded boundary-aware loss, called the Boundary-Aware Refined Network (BARNet), to address these gaps. The distinguishing components of BARNet are the gated-attention refined fusion unit (GARFU), the denser atrous spatial pyramid pooling (DASPP) module, and the boundary-aware (BA) loss. The performance of BARNet is evaluated on two popular benchmark datasets covering varied urban scenes and diverse building patterns. Experimental results demonstrate that the proposed method outperforms several state-of-the-art (SOTA) benchmark approaches in both visual interpretation and quantitative evaluation.

AB - Convolutional neural networks (CNNs), such as U-Net, have shown competitive performance in the automatic extraction of buildings from very high-resolution (VHR) remotely sensed imagery. However, owing to unstable multi-scale context aggregation, insufficient fusion of multi-level features, and a lack of attention to semantic boundaries, most existing CNNs produce incomplete segmentation of large buildings and highly uncertain predictions at building boundaries. This paper presents a novel network with an embedded boundary-aware loss, called the Boundary-Aware Refined Network (BARNet), to address these gaps. The distinguishing components of BARNet are the gated-attention refined fusion unit (GARFU), the denser atrous spatial pyramid pooling (DASPP) module, and the boundary-aware (BA) loss. The performance of BARNet is evaluated on two popular benchmark datasets covering varied urban scenes and diverse building patterns. Experimental results demonstrate that the proposed method outperforms several state-of-the-art (SOTA) benchmark approaches in both visual interpretation and quantitative evaluation.

KW - VHR aerial images

KW - building extraction

KW - convolutional neural network

KW - feature fusion

KW - context aggregation

KW - boundary

U2 - 10.3390/rs13040692

DO - 10.3390/rs13040692

M3 - Journal article

VL - 13

JO - Remote Sensing

JF - Remote Sensing

SN - 2072-4292

IS - 4

M1 - 692

ER -
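
Illustrative sketch: DASPP module

Likewise, the denser atrous spatial pyramid pooling (DASPP) module is only named in the abstract, not specified on this page. The sketch below shows one plausible reading, assuming a DenseASPP-style block in which each dilated branch receives the concatenation of the input and all earlier branch outputs; the dilation rates and channel widths are illustrative assumptions, not values from the paper. The block preserves spatial size and channel count, so it could drop into an encoder-decoder without changing surrounding layers.

# Rough sketch of a "denser" atrous spatial pyramid pooling block, in the
# spirit of the DASPP module named in the abstract. The actual BARNet design
# may differ; dilation rates and channel sizes here are assumptions.
import torch
import torch.nn as nn


class DASPP(nn.Module):
    def __init__(self, in_ch: int = 256, mid_ch: int = 64,
                 rates=(1, 2, 4, 8, 12)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for r in rates:
            # Dense connectivity: each branch sees the input plus all
            # previous branch outputs (as in DenseASPP-style designs).
            self.branches.append(nn.Sequential(
                nn.Conv2d(ch, mid_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(mid_ch),
                nn.ReLU(inplace=True),
            ))
            ch += mid_ch
        # 1x1 projection back to the input channel count.
        self.project = nn.Conv2d(ch, in_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.project(torch.cat(feats, dim=1))


if __name__ == "__main__":
    out = DASPP()(torch.randn(1, 256, 32, 32))
    print(out.shape)  # torch.Size([1, 256, 32, 32])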