How to Learn More? Exploring Kolmogorov–Arnold Networks for Hyperspectral Image Classification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

How to Learn More? Exploring Kolmogorov–Arnold Networks for Hyperspectral Image Classification. / Jamali, Ali; Roy, Swalpa Kumar; Hong, Danfeng et al.
In: Remote Sensing, Vol. 16, No. 21, 4015, 29.10.2024.

Vancouver

Jamali A, Roy SK, Hong D, Lu B, Ghamisi P. How to Learn More? Exploring Kolmogorov–Arnold Networks for Hyperspectral Image Classification. Remote Sensing. 2024 Oct 29;16(21):4015. doi: 10.3390/rs16214015

Author

Jamali, Ali ; Roy, Swalpa Kumar ; Hong, Danfeng et al. / How to Learn More? Exploring Kolmogorov–Arnold Networks for Hyperspectral Image Classification. In: Remote Sensing. 2024 ; Vol. 16, No. 21.

BibTeX

@article{ffaad70b37174b0cb4cb8865470ec023,
title = "How to Learn More? Exploring Kolmogorov–Arnold Networks for Hyperspectral Image Classification",
abstract = "Convolutional neural networks (CNNs) and vision transformers (ViTs) have shown excellent capability in complex hyperspectral image (HSI) classification. However, these models require a significant amount of training data and computational resources. On the other hand, modern Multi-Layer Perceptrons (MLPs) have demonstrated great classification capability. These modern MLP-based models require significantly less training data compared with CNNs and ViTs, achieving state-of-the-art classification accuracy. Recently, Kolmogorov–Arnold networks (KANs) were proposed as viable alternatives to MLPs. Because of their internal similarity to splines and their external similarity to MLPs, KANs are able to optimize learned features with remarkable accuracy, in addition to being able to learn new features. Thus, in this study, we assessed the effectiveness of KANs for complex HSI data classification. Moreover, to enhance the HSI classification accuracy obtained by the KANs, we developed and proposed a hybrid architecture utilizing 1D, 2D, and 3D KANs. To demonstrate the effectiveness of the proposed KAN architecture, we conducted extensive experiments on three newly created HSI benchmark datasets: QUH-Pingan, QUH-Tangdaowan, and QUH-Qingyun. The results underscored the competitive or better capability of the developed hybrid KAN-based model across these benchmark datasets over several other CNN- and ViT-based algorithms, including 1D CNN, 2D CNN, 3D CNN, VGG-16, ResNet-50, EfficientNet, RNN, and ViT.",
author = "Ali Jamali and Roy, {Swalpa Kumar} and Danfeng Hong and Bing Lu and Pedram Ghamisi",
year = "2024",
month = oct,
day = "29",
doi = "10.3390/rs16214015",
language = "English",
volume = "16",
journal = "Remote Sensing",
issn = "2072-4292",
publisher = "MDPI AG",
number = "21",
pages = "4015",
}

RIS

TY - JOUR

T1 - How to Learn More?

T2 - Exploring Kolmogorov–Arnold Networks for Hyperspectral Image Classification

AU - Jamali, Ali

AU - Roy, Swalpa Kumar

AU - Hong, Danfeng

AU - Lu, Bing

AU - Ghamisi, Pedram

PY - 2024/10/29

Y1 - 2024/10/29

N2 - Convolutional neural networks (CNNs) and vision transformers (ViTs) have shown excellent capability in complex hyperspectral image (HSI) classification. However, these models require a significant amount of training data and computational resources. On the other hand, modern Multi-Layer Perceptrons (MLPs) have demonstrated great classification capability. These modern MLP-based models require significantly less training data compared with CNNs and ViTs, achieving state-of-the-art classification accuracy. Recently, Kolmogorov–Arnold networks (KANs) were proposed as viable alternatives to MLPs. Because of their internal similarity to splines and their external similarity to MLPs, KANs are able to optimize learned features with remarkable accuracy, in addition to being able to learn new features. Thus, in this study, we assessed the effectiveness of KANs for complex HSI data classification. Moreover, to enhance the HSI classification accuracy obtained by the KANs, we developed and proposed a hybrid architecture utilizing 1D, 2D, and 3D KANs. To demonstrate the effectiveness of the proposed KAN architecture, we conducted extensive experiments on three newly created HSI benchmark datasets: QUH-Pingan, QUH-Tangdaowan, and QUH-Qingyun. The results underscored the competitive or better capability of the developed hybrid KAN-based model across these benchmark datasets over several other CNN- and ViT-based algorithms, including 1D CNN, 2D CNN, 3D CNN, VGG-16, ResNet-50, EfficientNet, RNN, and ViT.

AB - Convolutional neural networks (CNNs) and vision transformers (ViTs) have shown excellent capability in complex hyperspectral image (HSI) classification. However, these models require a significant amount of training data and computational resources. On the other hand, modern Multi-Layer Perceptrons (MLPs) have demonstrated great classification capability. These modern MLP-based models require significantly less training data compared with CNNs and ViTs, achieving state-of-the-art classification accuracy. Recently, Kolmogorov–Arnold networks (KANs) were proposed as viable alternatives to MLPs. Because of their internal similarity to splines and their external similarity to MLPs, KANs are able to optimize learned features with remarkable accuracy, in addition to being able to learn new features. Thus, in this study, we assessed the effectiveness of KANs for complex HSI data classification. Moreover, to enhance the HSI classification accuracy obtained by the KANs, we developed and proposed a hybrid architecture utilizing 1D, 2D, and 3D KANs. To demonstrate the effectiveness of the proposed KAN architecture, we conducted extensive experiments on three newly created HSI benchmark datasets: QUH-Pingan, QUH-Tangdaowan, and QUH-Qingyun. The results underscored the competitive or better capability of the developed hybrid KAN-based model across these benchmark datasets over several other CNN- and ViT-based algorithms, including 1D CNN, 2D CNN, 3D CNN, VGG-16, ResNet-50, EfficientNet, RNN, and ViT.

U2 - 10.3390/rs16214015

DO - 10.3390/rs16214015

M3 - Journal article

VL - 16

JO - Remote Sensing

JF - Remote Sensing

SN - 2072-4292

IS - 21

M1 - 4015

ER -
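
The abstract describes KANs as networks that, unlike MLPs with fixed activations, place learnable univariate functions on each edge. A minimal sketch of that idea follows; it is not the authors' implementation (their model combines 1D, 2D, and 3D KANs), and it substitutes a Gaussian radial-basis parameterization for the B-splines KANs typically use. All class and parameter names here are illustrative.

```python
import numpy as np

class KANLayer:
    """Toy Kolmogorov-Arnold-style layer: every input->output edge applies
    a learnable univariate function, parameterized as a linear combination
    of Gaussian radial basis functions (a simplification of the B-spline
    basis used in KANs)."""

    def __init__(self, in_dim, out_dim, n_basis=8, x_range=(-2.0, 2.0), seed=0):
        rng = np.random.default_rng(seed)
        self.centers = np.linspace(x_range[0], x_range[1], n_basis)  # basis centers on a 1D grid
        self.width = (x_range[1] - x_range[0]) / n_basis             # shared basis width
        # One coefficient vector per edge: shape (in_dim, out_dim, n_basis)
        self.coef = rng.normal(0.0, 0.1, (in_dim, out_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim) -> basis activations phi: (batch, in_dim, n_basis)
        phi = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        # Each output sums the learned univariate functions over its input edges:
        # out[b, j] = sum_i sum_k coef[i, j, k] * phi[b, i, k]
        return np.einsum('bik,ijk->bj', phi, self.coef)

layer = KANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.zeros((4, 3)))  # shape (4, 2)
```

In a full KAN, the `coef` tensor is trained by gradient descent, and because each edge function is a smooth basis expansion, the learned univariate curves can be inspected or refined on a finer grid, which is the property the abstract refers to as optimizing learned features.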