Electronic data

  • An_Improved_eXplainable_Point_Cloud_Classifier_XPCC

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 2.84 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

An Improved eXplainable Point Cloud Classifier (XPCC)

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

An Improved eXplainable Point Cloud Classifier (XPCC). / Arnold, Nicholas; Angelov, Plamen; Atkinson, Peter.
In: IEEE Transactions on Artificial Intelligence, Vol. 4, No. 1, 01.02.2023, p. 71-80.
Vancouver

Arnold N, Angelov P, Atkinson P. An Improved eXplainable Point Cloud Classifier (XPCC). IEEE Transactions on Artificial Intelligence. 2023 Feb 1;4(1):71-80. Epub 2022 Feb 15. doi: 10.1109/TAI.2022.3150647

Author

Arnold, Nicholas ; Angelov, Plamen ; Atkinson, Peter. / An Improved eXplainable Point Cloud Classifier (XPCC). In: IEEE Transactions on Artificial Intelligence. 2023 ; Vol. 4, No. 1. pp. 71-80.

Bibtex

@article{ded1d7ef235e4f47855964efffd06cf3,
  title     = "An Improved eXplainable Point Cloud Classifier (XPCC)",
  abstract  = "Classification of objects from 3-D point clouds has become an increasingly relevant task across many computer-vision applications. However, few studies have investigated explainable methods. In this article, a new prototype-based and explainable classification method called eXplainable point cloud classifier (XPCC) is proposed. The XPCC method offers several advantages over previous explainable and nonexplainable methods. First, the XPCC method uses local densities and global multivariate generative distributions. Therefore, the XPCC provides comprehensive and interpretable object-based classification. Furthermore, the proposed method is built on recursive calculations, thus, is computationally very efficient. Second, the model learns continuously without the need for complete retraining and is domain transferable. Third, the proposed XPCC expands on the underlying learning method explainable deep neural networks (xDNN), and is specific to 3-D. As such, the following three new layers are added to the original xDNN architecture: 1) the 3-D point cloud feature extraction, 2) the global compound prototype weighting, and 3) the SoftMax function. Experiments were performed with the ModelNet40 benchmark, which demonstrated that XPCC is the only one to increase classification accuracy relative to the base algorithm when applied to the same problem. In addition, this article proposes a novel prototype-based visual representation that provides model- and object-based explanations. The prototype objects are superimposed to create a prototypical class representation of their data density within the feature space, called the compound prototype cloud. They allow a user to visualize the explainable aspects of the model and identify object regions that contribute to the classification in a human-understandable way.",
  keywords  = "3D, AI, Classification, Deep learning, Explainable AI, Point cloud data",
  author    = "Nicholas Arnold and Plamen Angelov and Peter Atkinson",
  note      = "{\textcopyright}2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.",
  year      = "2023",
  month     = feb,
  day       = "1",
  doi       = "10.1109/TAI.2022.3150647",
  language  = "English",
  volume    = "4",
  number    = "1",
  pages     = "71--80",
  journal   = "IEEE Transactions on Artificial Intelligence",
  issn      = "2691-4581",
  publisher = "IEEE",
}

RIS

TY - JOUR

T1 - An Improved eXplainable Point Cloud Classifier (XPCC)

AU - Arnold, Nicholas

AU - Angelov, Plamen

AU - Atkinson, Peter

N1 - ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

PY - 2023/2/1

Y1 - 2023/2/1

N2 - Classification of objects from 3-D point clouds has become an increasingly relevant task across many computer-vision applications. However, few studies have investigated explainable methods. In this article, a new prototype-based and explainable classification method called eXplainable point cloud classifier (XPCC) is proposed. The XPCC method offers several advantages over previous explainable and nonexplainable methods. First, the XPCC method uses local densities and global multivariate generative distributions. Therefore, the XPCC provides comprehensive and interpretable object-based classification. Furthermore, the proposed method is built on recursive calculations, thus, is computationally very efficient. Second, the model learns continuously without the need for complete retraining and is domain transferable. Third, the proposed XPCC expands on the underlying learning method explainable deep neural networks (xDNN), and is specific to 3-D. As such, the following three new layers are added to the original xDNN architecture: 1) the 3-D point cloud feature extraction, 2) the global compound prototype weighting, and 3) the SoftMax function. Experiments were performed with the ModelNet40 benchmark, which demonstrated that XPCC is the only one to increase classification accuracy relative to the base algorithm when applied to the same problem. In addition, this article proposes a novel prototype-based visual representation that provides model- and object-based explanations. The prototype objects are superimposed to create a prototypical class representation of their data density within the feature space, called the compound prototype cloud. They allow a user to visualize the explainable aspects of the model and identify object regions that contribute to the classification in a human-understandable way.

AB - Classification of objects from 3-D point clouds has become an increasingly relevant task across many computer-vision applications. However, few studies have investigated explainable methods. In this article, a new prototype-based and explainable classification method called eXplainable point cloud classifier (XPCC) is proposed. The XPCC method offers several advantages over previous explainable and nonexplainable methods. First, the XPCC method uses local densities and global multivariate generative distributions. Therefore, the XPCC provides comprehensive and interpretable object-based classification. Furthermore, the proposed method is built on recursive calculations, thus, is computationally very efficient. Second, the model learns continuously without the need for complete retraining and is domain transferable. Third, the proposed XPCC expands on the underlying learning method explainable deep neural networks (xDNN), and is specific to 3-D. As such, the following three new layers are added to the original xDNN architecture: 1) the 3-D point cloud feature extraction, 2) the global compound prototype weighting, and 3) the SoftMax function. Experiments were performed with the ModelNet40 benchmark, which demonstrated that XPCC is the only one to increase classification accuracy relative to the base algorithm when applied to the same problem. In addition, this article proposes a novel prototype-based visual representation that provides model- and object-based explanations. The prototype objects are superimposed to create a prototypical class representation of their data density within the feature space, called the compound prototype cloud. They allow a user to visualize the explainable aspects of the model and identify object regions that contribute to the classification in a human-understandable way.

KW - 3D

KW - AI

KW - Classification

KW - Deep learning

KW - Explainable AI

KW - Point cloud data

U2 - 10.1109/TAI.2022.3150647

DO - 10.1109/TAI.2022.3150647

M3 - Journal article

VL - 4

SP - 71

EP - 80

JO - IEEE Transactions on Artificial Intelligence

JF - IEEE Transactions on Artificial Intelligence

SN - 2691-4581

IS - 1

ER -