
Electronic data

  • OCNN_Manuscript_RSE_Ce_Accepted

    Rights statement: This is the author’s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 216, 2018 DOI: 10.1016/j.rse.2018.06.034

    Accepted author manuscript, 847 KB, PDF document

    Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Links

Text available via DOI:


An object-based convolutional neural network (OCNN) for urban land use classification

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

An object-based convolutional neural network (OCNN) for urban land use classification. / Zhang, Ce; Sargent, Isabel; Pan, Xin et al.
In: Remote Sensing of Environment, Vol. 216, 10.2018, p. 57-70.


Harvard

Zhang, C, Sargent, I, Pan, X, Li, H, Gardiner, A, Hare, J & Atkinson, PM 2018, 'An object-based convolutional neural network (OCNN) for urban land use classification', Remote Sensing of Environment, vol. 216, pp. 57-70. https://doi.org/10.1016/j.rse.2018.06.034

APA

Zhang, C., Sargent, I., Pan, X., Li, H., Gardiner, A., Hare, J., & Atkinson, P. M. (2018). An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sensing of Environment, 216, 57-70. https://doi.org/10.1016/j.rse.2018.06.034

Vancouver

Zhang C, Sargent I, Pan X, Li H, Gardiner A, Hare J et al. An object-based convolutional neural network (OCNN) for urban land use classification. Remote Sensing of Environment. 2018 Oct;216:57-70. Epub 2018 Jul 3. doi: 10.1016/j.rse.2018.06.034

Author

Zhang, Ce ; Sargent, Isabel ; Pan, Xin et al. / An object-based convolutional neural network (OCNN) for urban land use classification. In: Remote Sensing of Environment. 2018 ; Vol. 216. pp. 57-70.

Bibtex

@article{1a7a025dd38a4b1c857ffc01e413e32c,
title = "An object-based convolutional neural network (OCNN) for urban land use classification",
abstract = "Urban land use information is essential for a variety of urban-related applications such as urban planning and regional administration. The extraction of urban land use from very fine spatial resolution (VFSR) remotely sensed imagery has, therefore, drawn much attention in the remote sensing community. Nevertheless, classifying urban land use from VFSR images remains a challenging task, due to the extreme difficulties in differentiating complex spatial patterns to derive high-level semantic labels. Deep convolutional neural networks (CNNs) offer great potential to extract high-level spatial features, thanks to its hierarchical nature with multiple levels of abstraction. However, blurred object boundaries and geometric distortion, as well as huge computational redundancy, severely restrict the potential application of CNN for the classification of urban land use. In this paper, a novel object-based convolutional neural network (OCNN) is proposed for urban land use classification using VFSR images. Rather than pixel-wise convolutional processes, the OCNN relies on segmented objects as its functional units, and CNN networks are used to analyse and label objects such as to partition within-object and between-object variation. Two CNN networks with different model structures and window sizes are developed to predict linearly shaped objects (e.g. Highway, Canal) and general (other non-linearly shaped) objects. Then a rule-based decision fusion is performed to integrate the class-specific classification results. The effectiveness of the proposed OCNN method was tested on aerial photography of two large urban scenes in Southampton and Manchester in Great Britain. The OCNN combined with large and small window sizes achieved excellent classification accuracy and computational efficiency, consistently outperforming its sub-modules, as well as other benchmark comparators, including the pixel-wise CNN, contextual-based MRF and object-based OBIA-SVM methods. 
The proposed method provides the first object-based CNN framework to effectively and efficiently address the complicated problem of urban land use classification from VFSR images.",
keywords = "convolutional neural network, OBIA, urban land use classification, VFSR remotely sensed imagery, high-level feature representations",
author = "Ce Zhang and Isabel Sargent and Xin Pan and Huapeng Li and Andy Gardiner and Jonathon Hare and Atkinson, {Peter Michael}",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 216, 2018 DOI: 10.1016/j.rse.2018.06.034",
year = "2018",
month = oct,
doi = "10.1016/j.rse.2018.06.034",
language = "English",
volume = "216",
pages = "57--70",
journal = "Remote Sensing of Environment",
issn = "0034-4257",
publisher = "Elsevier Inc.",

}

RIS

TY - JOUR

T1 - An object-based convolutional neural network (OCNN) for urban land use classification

AU - Zhang, Ce

AU - Sargent, Isabel

AU - Pan, Xin

AU - Li, Huapeng

AU - Gardiner, Andy

AU - Hare, Jonathon

AU - Atkinson, Peter Michael

N1 - This is the author’s version of a work that was accepted for publication in Remote Sensing of Environment. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Remote Sensing of Environment, 216, 2018 DOI: 10.1016/j.rse.2018.06.034

PY - 2018/10

Y1 - 2018/10

N2 - Urban land use information is essential for a variety of urban-related applications such as urban planning and regional administration. The extraction of urban land use from very fine spatial resolution (VFSR) remotely sensed imagery has, therefore, drawn much attention in the remote sensing community. Nevertheless, classifying urban land use from VFSR images remains a challenging task, due to the extreme difficulties in differentiating complex spatial patterns to derive high-level semantic labels. Deep convolutional neural networks (CNNs) offer great potential to extract high-level spatial features, thanks to its hierarchical nature with multiple levels of abstraction. However, blurred object boundaries and geometric distortion, as well as huge computational redundancy, severely restrict the potential application of CNN for the classification of urban land use. In this paper, a novel object-based convolutional neural network (OCNN) is proposed for urban land use classification using VFSR images. Rather than pixel-wise convolutional processes, the OCNN relies on segmented objects as its functional units, and CNN networks are used to analyse and label objects such as to partition within-object and between-object variation. Two CNN networks with different model structures and window sizes are developed to predict linearly shaped objects (e.g. Highway, Canal) and general (other non-linearly shaped) objects. Then a rule-based decision fusion is performed to integrate the class-specific classification results. The effectiveness of the proposed OCNN method was tested on aerial photography of two large urban scenes in Southampton and Manchester in Great Britain. The OCNN combined with large and small window sizes achieved excellent classification accuracy and computational efficiency, consistently outperforming its sub-modules, as well as other benchmark comparators, including the pixel-wise CNN, contextual-based MRF and object-based OBIA-SVM methods. 
The proposed method provides the first object-based CNN framework to effectively and efficiently address the complicated problem of urban land use classification from VFSR images.

AB - Urban land use information is essential for a variety of urban-related applications such as urban planning and regional administration. The extraction of urban land use from very fine spatial resolution (VFSR) remotely sensed imagery has, therefore, drawn much attention in the remote sensing community. Nevertheless, classifying urban land use from VFSR images remains a challenging task, due to the extreme difficulties in differentiating complex spatial patterns to derive high-level semantic labels. Deep convolutional neural networks (CNNs) offer great potential to extract high-level spatial features, thanks to its hierarchical nature with multiple levels of abstraction. However, blurred object boundaries and geometric distortion, as well as huge computational redundancy, severely restrict the potential application of CNN for the classification of urban land use. In this paper, a novel object-based convolutional neural network (OCNN) is proposed for urban land use classification using VFSR images. Rather than pixel-wise convolutional processes, the OCNN relies on segmented objects as its functional units, and CNN networks are used to analyse and label objects such as to partition within-object and between-object variation. Two CNN networks with different model structures and window sizes are developed to predict linearly shaped objects (e.g. Highway, Canal) and general (other non-linearly shaped) objects. Then a rule-based decision fusion is performed to integrate the class-specific classification results. The effectiveness of the proposed OCNN method was tested on aerial photography of two large urban scenes in Southampton and Manchester in Great Britain. The OCNN combined with large and small window sizes achieved excellent classification accuracy and computational efficiency, consistently outperforming its sub-modules, as well as other benchmark comparators, including the pixel-wise CNN, contextual-based MRF and object-based OBIA-SVM methods. 
The proposed method provides the first object-based CNN framework to effectively and efficiently address the complicated problem of urban land use classification from VFSR images.

KW - convolutional neural network

KW - OBIA

KW - urban land use classification

KW - VFSR remotely sensed imagery

KW - high-level feature representations

U2 - 10.1016/j.rse.2018.06.034

DO - 10.1016/j.rse.2018.06.034

M3 - Journal article

VL - 216

SP - 57

EP - 70

JO - Remote Sensing of Environment

JF - Remote Sensing of Environment

SN - 0034-4257

ER -
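
The abstract above describes a three-stage pipeline: image segmentation into objects, prediction by two CNNs with different window sizes (one for linearly shaped objects such as highways and canals, one for general objects), and a rule-based decision fusion of the two branches. The following is a minimal illustrative sketch of that decision flow only; the routing rule, the elongation threshold, and both classifier stubs are hypothetical stand-ins, not the paper's trained networks.

```python
# Illustrative sketch of the OCNN decision flow described in the abstract.
# All names and thresholds are hypothetical; the two "CNN" functions are
# stand-ins, since the paper's trained networks are not reproduced here.

def is_linear(obj):
    """Route elongated objects (e.g. highways, canals) to the
    linear-object branch, mimicking the rule-based fusion step."""
    length, width = obj["length"], obj["width"]
    return length / max(width, 1e-9) > 5.0  # hypothetical elongation threshold

def linear_cnn(obj):
    # Stand-in for the CNN whose window size suits linear objects.
    return "Highway"

def general_cnn(obj):
    # Stand-in for the CNN handling general (non-linear) objects.
    return "Residential"

def classify_objects(objects):
    """Per-object prediction followed by rule-based selection
    between the two class-specific branches."""
    return [linear_cnn(o) if is_linear(o) else general_cnn(o)
            for o in objects]

objects = [
    {"length": 120.0, "width": 8.0},   # elongated  -> linear branch
    {"length": 30.0, "width": 25.0},   # compact    -> general branch
]
print(classify_objects(objects))  # ['Highway', 'Residential']
```

In the actual method each branch is a convolutional network applied to an image patch centred on the segmented object, and the fusion rule integrates class-specific results rather than a simple geometric test.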