
Electronic data

  • pan_2021_accepted

    Rights statement: This is the author’s version of a work that was accepted for publication in ISPRS Journal of Photogrammetry and Remote Sensing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in ISPRS Journal of Photogrammetry and Remote Sensing, 181, 2021 DOI: 10.1016/j.isprsjprs.2021.09.014

    Accepted author manuscript, 13.5 MB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI:


Simplified object-based deep neural network for very high resolution remote sensing image classification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Simplified object-based deep neural network for very high resolution remote sensing image classification. / Pan, Xin; Zhang, Ce; Xu, Jun et al.
In: ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 181, 30.11.2021, p. 218-237.

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Harvard

Pan, X, Zhang, C, Xu, J & Zhao, J 2021, 'Simplified object-based deep neural network for very high resolution remote sensing image classification', ISPRS Journal of Photogrammetry and Remote Sensing, vol. 181, pp. 218-237. https://doi.org/10.1016/j.isprsjprs.2021.09.014

APA

Pan, X., Zhang, C., Xu, J., & Zhao, J. (2021). Simplified object-based deep neural network for very high resolution remote sensing image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 181, 218-237. https://doi.org/10.1016/j.isprsjprs.2021.09.014

Vancouver

Pan X, Zhang C, Xu J, Zhao J. Simplified object-based deep neural network for very high resolution remote sensing image classification. ISPRS Journal of Photogrammetry and Remote Sensing. 2021 Nov 30;181:218-237. Epub 2021 Sept 24. doi: 10.1016/j.isprsjprs.2021.09.014

Author

Pan, Xin ; Zhang, Ce ; Xu, Jun et al. / Simplified object-based deep neural network for very high resolution remote sensing image classification. In: ISPRS Journal of Photogrammetry and Remote Sensing. 2021 ; Vol. 181. pp. 218-237.

Bibtex

@article{f7edcb216f27410c911a14af1e81ac40,
title = "Simplified object-based deep neural network for very high resolution remote sensing image classification",
abstract = "For the object-based classification of high resolution remote sensing images, many people expect that introducing deep learning methods can improve the classification accuracy. Unfortunately, the input shape for deep neural networks (DNNs) is usually rectangular, whereas the shapes of the segments output by segmentation methods usually follow those of the corresponding ground objects; this inconsistency can lead to confusion among different types of heterogeneous content when a DNN processes a segment. Currently, most object-based methods utilizing convolutional neural networks (CNNs) adopt additional models to overcome the detrimental influence of such heterogeneous content; however, these heterogeneity suppression mechanisms introduce additional complexity into the whole classification process, and these methods are usually unstable and difficult to use in real applications. To address the above problems, this paper proposes a simplified object-based deep neural network (SO-DNN) for very high resolution remote sensing image classification. In SO-DNN, a new segment category label inference method is introduced, in which a deep semantic segmentation neural network (DSSNN) is used as the classification model instead of a traditional CNN. Since the DSSNN can obtain a category label for each pixel in the input image patch, different types of content are not mixed together; therefore, SO-DNN does not require an additional heterogeneity suppression mechanism. Moreover, SO-DNN includes a sample information optimization method that allows the DSSNN model to be trained using only pixel-based training samples. Because only a single model is used and only a pixel-based training set is needed, the whole classification process of SO-DNN is relatively simple and direct.
In experiments, we use very high-resolution aerial images from Vaihingen and Potsdam from the ISPRS WG II/4 dataset as test data and compare SO-DNN with six traditional methods: O-MLP, O+CNN, OHSF-CNN, 2-CNN, JDL and U-Net. Compared with the best-performing method among these traditional methods, the classification accuracy of SO-DNN is improved by up to 7.71% and 10.78% for single images from Vaihingen and Potsdam, respectively, and the average classification accuracy is improved by 2.46% and 2.91% for the Vaihingen and Potsdam images, respectively. SO-DNN relies on fewer models and easier-to-obtain samples than traditional methods, and its stable performance makes SO-DNN more valuable for practical applications.",
keywords = "CNN, Very high resolution, Semantic segmentation, Classification, OBIA",
author = "Xin Pan and Ce Zhang and Jun Xu and Jian Zhao",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in ISPRS Journal of Photogrammetry and Remote Sensing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in ISPRS Journal of Photogrammetry and Remote Sensing, 181, 2021 DOI: 10.1016/j.isprsjprs.2021.09.014",
year = "2021",
month = nov,
day = "30",
doi = "10.1016/j.isprsjprs.2021.09.014",
language = "English",
volume = "181",
pages = "218--237",
journal = "ISPRS Journal of Photogrammetry and Remote Sensing",
issn = "0924-2716",
publisher = "Elsevier Science B.V.",
}

RIS

TY - JOUR

T1 - Simplified object-based deep neural network for very high resolution remote sensing image classification

AU - Pan, Xin

AU - Zhang, Ce

AU - Xu, Jun

AU - Zhao, Jian

N1 - This is the author’s version of a work that was accepted for publication in ISPRS Journal of Photogrammetry and Remote Sensing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in ISPRS Journal of Photogrammetry and Remote Sensing, 181, 2021 DOI: 10.1016/j.isprsjprs.2021.09.014

PY - 2021/11/30

Y1 - 2021/11/30

N2 - For the object-based classification of high resolution remote sensing images, many people expect that introducing deep learning methods can improve the classification accuracy. Unfortunately, the input shape for deep neural networks (DNNs) is usually rectangular, whereas the shapes of the segments output by segmentation methods usually follow those of the corresponding ground objects; this inconsistency can lead to confusion among different types of heterogeneous content when a DNN processes a segment. Currently, most object-based methods utilizing convolutional neural networks (CNNs) adopt additional models to overcome the detrimental influence of such heterogeneous content; however, these heterogeneity suppression mechanisms introduce additional complexity into the whole classification process, and these methods are usually unstable and difficult to use in real applications. To address the above problems, this paper proposes a simplified object-based deep neural network (SO-DNN) for very high resolution remote sensing image classification. In SO-DNN, a new segment category label inference method is introduced, in which a deep semantic segmentation neural network (DSSNN) is used as the classification model instead of a traditional CNN. Since the DSSNN can obtain a category label for each pixel in the input image patch, different types of content are not mixed together; therefore, SO-DNN does not require an additional heterogeneity suppression mechanism. Moreover, SO-DNN includes a sample information optimization method that allows the DSSNN model to be trained using only pixel-based training samples. Because only a single model is used and only a pixel-based training set is needed, the whole classification process of SO-DNN is relatively simple and direct.
In experiments, we use very high-resolution aerial images from Vaihingen and Potsdam from the ISPRS WG II/4 dataset as test data and compare SO-DNN with six traditional methods: O-MLP, O+CNN, OHSF-CNN, 2-CNN, JDL and U-Net. Compared with the best-performing method among these traditional methods, the classification accuracy of SO-DNN is improved by up to 7.71% and 10.78% for single images from Vaihingen and Potsdam, respectively, and the average classification accuracy is improved by 2.46% and 2.91% for the Vaihingen and Potsdam images, respectively. SO-DNN relies on fewer models and easier-to-obtain samples than traditional methods, and its stable performance makes SO-DNN more valuable for practical applications.

AB - For the object-based classification of high resolution remote sensing images, many people expect that introducing deep learning methods can improve the classification accuracy. Unfortunately, the input shape for deep neural networks (DNNs) is usually rectangular, whereas the shapes of the segments output by segmentation methods usually follow those of the corresponding ground objects; this inconsistency can lead to confusion among different types of heterogeneous content when a DNN processes a segment. Currently, most object-based methods utilizing convolutional neural networks (CNNs) adopt additional models to overcome the detrimental influence of such heterogeneous content; however, these heterogeneity suppression mechanisms introduce additional complexity into the whole classification process, and these methods are usually unstable and difficult to use in real applications. To address the above problems, this paper proposes a simplified object-based deep neural network (SO-DNN) for very high resolution remote sensing image classification. In SO-DNN, a new segment category label inference method is introduced, in which a deep semantic segmentation neural network (DSSNN) is used as the classification model instead of a traditional CNN. Since the DSSNN can obtain a category label for each pixel in the input image patch, different types of content are not mixed together; therefore, SO-DNN does not require an additional heterogeneity suppression mechanism. Moreover, SO-DNN includes a sample information optimization method that allows the DSSNN model to be trained using only pixel-based training samples. Because only a single model is used and only a pixel-based training set is needed, the whole classification process of SO-DNN is relatively simple and direct.
In experiments, we use very high-resolution aerial images from Vaihingen and Potsdam from the ISPRS WG II/4 dataset as test data and compare SO-DNN with six traditional methods: O-MLP, O+CNN, OHSF-CNN, 2-CNN, JDL and U-Net. Compared with the best-performing method among these traditional methods, the classification accuracy of SO-DNN is improved by up to 7.71% and 10.78% for single images from Vaihingen and Potsdam, respectively, and the average classification accuracy is improved by 2.46% and 2.91% for the Vaihingen and Potsdam images, respectively. SO-DNN relies on fewer models and easier-to-obtain samples than traditional methods, and its stable performance makes SO-DNN more valuable for practical applications.

KW - CNN

KW - Very high resolution

KW - Semantic segmentation

KW - Classification

KW - OBIA

U2 - 10.1016/j.isprsjprs.2021.09.014

DO - 10.1016/j.isprsjprs.2021.09.014

M3 - Journal article

VL - 181

SP - 218

EP - 237

JO - ISPRS Journal of Photogrammetry and Remote Sensing

JF - ISPRS Journal of Photogrammetry and Remote Sensing

SN - 0924-2716

ER -
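
The abstract describes inferring one category label per segment from the DSSNN's per-pixel predictions. The paper defines the exact inference rule; as a minimal sketch of the general idea, assuming a simple majority vote over each segment's pixels (the function name and vote rule are illustrative, not the authors' method):

```python
import numpy as np

def infer_segment_labels(pixel_labels, segment_ids):
    """Assign each segment the majority class among its pixels'
    semantic-segmentation labels, then paint that label back onto
    the image grid. Both inputs are 2-D integer arrays of equal shape."""
    out = np.zeros_like(pixel_labels)
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        # per-class pixel counts inside this segment
        classes, counts = np.unique(pixel_labels[mask], return_counts=True)
        out[mask] = classes[np.argmax(counts)]
    return out
```

Because every pixel in a segment receives the same class, heterogeneous content inside a rectangular input patch no longer leaks across object boundaries, which is the simplification the abstract attributes to SO-DNN.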