
Electronic data

  • remotesensing-14-00305

    Accepted author manuscript, 3.77 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License

Superpixel-Based Attention Graph Neural Network for Semantic Segmentation in Aerial Images

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Qi Diao
  • Yaping Dai
  • Ce Zhang
  • Yan Wu
  • Xiaoxue Feng
  • Feng Pan
Article number: 305
Journal publication date: 10/01/2022
Journal: Remote Sensing
Issue number: 2
Volume: 14
Number of pages: 17
Pages (from-to): 1-17
Publication status: Published
Original language: English

Abstract

Semantic segmentation is one of the key tasks in understanding aerial images with high spatial resolution. Recently, Graph Neural Networks (GNNs) and attention mechanisms have achieved excellent performance on semantic segmentation of general images and have been applied to aerial images. In this paper, we propose a novel Superpixel-based Attention Graph Neural Network (SAGNN) for semantic segmentation of high-spatial-resolution aerial images. Our network constructs a K-Nearest Neighbor (KNN) graph for each image, where each node corresponds to a superpixel in the image and is associated with a hidden representation vector. The hidden representation vector is initialized with the appearance feature extracted from the image by a unary Convolutional Neural Network (CNN). Relying on the attention mechanism and recursive functions, each node then updates its hidden representation according to its current state and the incoming information from its neighbors. The final representation of each node is used to predict the semantic class of its superpixel. The attention mechanism enables graph nodes to aggregate neighbor information differentially, which extracts higher-quality features; furthermore, the superpixels not only save computational resources but also preserve object boundaries, yielding more accurate predictions. The accuracy of our model on the public Potsdam and Vaihingen datasets exceeds all benchmark approaches, reaching 90.23% and 89.32%, respectively.
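The two core operations the abstract describes, building a KNN graph over superpixel feature vectors and letting each node update its state from attention-weighted neighbor information, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature vectors stand in for CNN appearance features of superpixels, and plain dot-product attention is assumed where the paper's exact scoring function may differ.

```python
import numpy as np

def knn_graph(features, k):
    """Connect each superpixel node to its k nearest neighbors in feature space."""
    # Pairwise Euclidean distances between node feature vectors.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]    # (N, k) neighbor indices

def attention_update(h, neighbors):
    """One recursive update: each node aggregates neighbor states weighted
    by softmax attention scores (dot-product scoring assumed here)."""
    new_h = np.empty_like(h)
    for i, nbrs in enumerate(neighbors):
        scores = h[nbrs] @ h[i]            # (k,) attention logits
        w = np.exp(scores - scores.max())
        w /= w.sum()                       # softmax over neighbors
        new_h[i] = h[i] + w @ h[nbrs]      # residual aggregation of neighbors
    return new_h
```

In a full model, `attention_update` would be applied for several recursive steps with learned projection weights, and the final node states fed to a classifier that predicts one semantic class per superpixel.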