
Electronic data

  • remotesensing-1063256

    Accepted author manuscript, 25.4 MB, PDF document

    Available under license: Creative Commons Attribution 4.0 International (CC BY 4.0)

Links

Text available via DOI.


BARNet: Boundary-Aware Refined Network for Automatic Building Extraction in Very High-Resolution Urban Aerial Images

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Yuwei Jin
  • Wenbo Xu
  • Ce Zhang
  • Xin Luo
  • Haitao Jia
Article number: 692
Journal publication date: 14/02/2021
Journal: Remote Sensing
Issue number: 4
Volume: 13
Number of pages: 20
Publication status: Published
Original language: English

Abstract

Convolutional neural networks (CNNs) such as U-Net have shown competitive performance in the automatic extraction of buildings from very high-resolution (VHR) remotely sensed imagery. However, due to unstable multi-scale context aggregation, insufficient combination of multi-level features, and a lack of consideration of semantic boundaries, most existing CNNs produce incomplete segmentations of large buildings and highly uncertain predictions at building boundaries. This paper presents the Boundary-Aware Refined Network (BARNet), a novel network that embeds a dedicated boundary-aware loss to address these gaps. The distinguishing components of BARNet are the gated-attention refined fusion unit (GARFU), the denser atrous spatial pyramid pooling (DASPP) module, and the boundary-aware (BA) loss. The performance of BARNet is evaluated on two popular benchmark datasets covering varied urban scenes and diverse building patterns. Experimental results demonstrate that the proposed method outperforms several state-of-the-art (SOTA) benchmark approaches in both visual interpretation and quantitative evaluation.
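
The BA loss is only named in this abstract; its exact formulation is given in the paper itself. As a rough illustration of the general idea behind boundary-aware losses, the following PyTorch sketch up-weights pixels near ground-truth building boundaries in a standard binary cross-entropy term. It is a minimal, hypothetical example, not BARNet's actual loss: the distance-transform weighting, the `decay` parameter, and the function names are assumptions introduced here.

```python
# Minimal sketch of a boundary-aware segmentation loss (NOT BARNet's BA loss).
# Idea: weight each pixel's BCE term by its proximity to the ground-truth
# building boundary, so errors at boundaries are penalized more heavily.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt


def boundary_weight_map(mask: torch.Tensor, decay: float = 0.2) -> torch.Tensor:
    """Per-pixel weights >= 1 that peak near the boundary and decay with distance.

    mask: (B, 1, H, W) binary ground-truth building mask.
    """
    weights = []
    for m in mask.squeeze(1).cpu().numpy().astype(bool):
        # Distance of every pixel to the building/background boundary:
        # foreground pixels get their distance to background and vice versa.
        dist = distance_transform_edt(m) + distance_transform_edt(~m)
        weights.append(1.0 + np.exp(-decay * dist))
    w = torch.from_numpy(np.stack(weights)).unsqueeze(1).float()
    return w.to(mask.device)


def boundary_aware_bce(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy in which boundary pixels contribute more."""
    per_pixel = F.binary_cross_entropy_with_logits(
        logits, mask.float(), reduction="none"
    )
    return (boundary_weight_map(mask) * per_pixel).mean()
```

In a U-Net-style training loop, `boundary_aware_bce(logits, mask)` would take the place of a plain BCE term; combining such a term with a region-level loss such as Dice is a common pattern in building-extraction work, though whether BARNet does so is not stated in this abstract.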