
CFNet: An Eigenvalue Preserved Approach to Multiscale Building Segmentation in High-Resolution Remote Sensing Images

Research output: Contribution to Journal/Magazine › Journal article › peer-review

  • Qi Liu
  • Yang Li
  • Muhammad Bilal
  • Xiaodong Liu
  • Yonghong Zhang
  • Huihui Wang
  • Xiaolong Xu
  • Hui Lu
Journal publication date: 10/03/2023
Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Number of pages: 11
Pages (from-to): 2481-2491
Publication status: Published
Early online date: 13/02/23
Original language: English


In recent years, AI and deep learning (DL) methods have been widely used for object classification, recognition, and segmentation in high-resolution multispectral remote sensing images. These DL-based solutions outperform traditional spectral algorithms but still suffer from insufficient optimization of the global and local features of object context. In addition, failures of code-data isolation and/or disclosure of detailed eigenvalues can cause serious privacy leakage, and even leakage of secrets, due to the sensitivity of high-resolution remote sensing data and its processing mechanisms. In this article, class feature modules are introduced in the decoder of an attention-based CNN to distinguish building from nonbuilding (background) areas. In this way, the context features of a focused object can be extracted in greater detail while the resolution of the images is maintained. The reconstructed local and global feature values and their dependencies are preserved in the proposed model by reconfiguring multiple effective attention modules with contextual dependencies, yielding better results for the eigenvalues. According to the quantitative results and their visualization, the proposed model outperforms prior work on two large-scale building remote sensing datasets: its F1-score reaches 87.91 on the WHU Buildings Dataset and 89.58 on the Massachusetts Buildings Dataset, exceeding the other semantic segmentation models.
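The decoder design sketched in the abstract can be illustrated with a minimal, hypothetical example. This is not the authors' implementation: it shows a squeeze-and-excitation-style channel attention step that reweights feature-map channels by their global context, followed by a 1x1 "class feature" projection to building/background logits. All function names, weight shapes, and the reduction ratio `r` are assumptions for illustration only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).
    feat: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) weights."""
    squeeze = feat.mean(axis=(1, 2))                       # global average pool -> (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # channel weights in (0, 1)
    return feat * excite[:, None, None]                    # reweight each channel

def class_feature_head(feat, w_cls):
    """1x1 convolution projecting C channels to 2 class logits per pixel."""
    c, h, w = feat.shape
    return (w_cls @ feat.reshape(c, -1)).reshape(2, h, w)

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                    # toy sizes, not the paper's configuration
feat = rng.standard_normal((C, H, W))      # decoder feature map
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
w_cls = rng.standard_normal((2, C)) * 0.1

att = channel_attention(feat, w1, w2)
logits = class_feature_head(att, w_cls)
mask = logits.argmax(axis=0)               # per-pixel building (1) / background (0)
```

In a full segmentation network, such an attention module would be applied at multiple decoder scales before upsampling, so that channel reweighting reflects context at each resolution.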