Spatial Gated Multi-Layer Perceptron for Land Use and Land Cover Mapping

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 15/01/2024
Journal: IEEE Geoscience and Remote Sensing Letters
Publication status: E-pub ahead of print
Original language: English

Abstract

Due to their capacity to capture fine spectral differences, hyperspectral data have been extensively used for precise Land Use Land Cover (LULC) mapping. However, recent multi-modal methods have shown superior classification performance over algorithms that use a single data source. Convolutional Neural Networks (CNNs) are extensively utilized for the hierarchical extraction of features, while Vision Transformers (ViTs), through a self-attention mechanism, have recently achieved superior modeling of global contextual information compared to CNNs. However, to harness their image classification strength, ViTs require substantial training data. In cases where the available training data are limited, recent advanced multi-layer perceptrons (MLPs) can provide viable alternatives to both deep CNNs and ViTs. In this paper, we developed SGU-MLP, a deep learning algorithm that effectively combines MLPs with spatial gating units (SGUs) for precise LULC mapping using multi-modal multispectral, LiDAR, and hyperspectral data. Results illustrated that the developed SGU-MLP classifier consistently outperformed several benchmark CNN- and CNN-ViT-based models, including HybridSN, ResNet, iFormer, EfficientFormer, and CoAtNet. The code will be made publicly available at https://github.com/aj1365/SGUMLP.
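The abstract does not detail the SGU-MLP architecture itself, but the spatial gating idea it builds on can be illustrated with a minimal sketch. The sketch below follows the standard gMLP-style spatial gating unit (split the channels in half, apply a learned linear projection across the token/spatial dimension to one half, and use it to gate the other half elementwise); the shapes, near-identity initialization, and normalization choices here are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def spatial_gating_unit(x, W, b):
    """Illustrative spatial gating unit (gMLP-style; assumed, not the
    paper's exact formulation).

    x: (tokens, channels) array, e.g. pixels in a patch x features.
    W: (tokens, tokens) spatial projection mixing information across tokens.
    b: (tokens, 1) bias for the spatial projection.
    """
    # split the channel dimension into a content half (u) and a gate half (v)
    u, v = np.split(x, 2, axis=-1)
    # layer-normalize the gate half over its channels before projecting
    v = (v - v.mean(-1, keepdims=True)) / np.sqrt(v.var(-1, keepdims=True) + 1e-6)
    # spatial projection: mix across the token (spatial) dimension
    v = W @ v + b
    # elementwise gating of the content half
    return u * v

# Toy example: 4 tokens, 8 channels.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# Near-zero weights with unit bias keep the gate close to identity at init,
# a common stabilizing choice for gating units.
W = np.zeros((4, 4)) + 0.01
b = np.ones((4, 1))
y = spatial_gating_unit(x, W, b)
print(y.shape)  # half the channels survive the split: (4, 4)
```

The output has half the input's channels because the other half is consumed as the gate; in a full block this sits between two channel-mixing MLP projections that restore the width.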