
Electronic data

  • An_Improved_eXplainable_Point_Cloud_Classifier_XPCC

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 2.84 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


An Improved eXplainable Point Cloud Classifier (XPCC)

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
Journal publication date: 1/02/2023
Journal: IEEE Transactions on Artificial Intelligence
Issue number: 1
Volume: 4
Number of pages: 10
Pages (from-to): 71-80
Publication status: Published
Early online date: 15/02/22
Original language: English

Abstract

Classification of objects from 3-D point clouds has become an increasingly relevant task across many computer-vision applications. However, few studies have investigated explainable methods. In this article, a new prototype-based and explainable classification method called the eXplainable point cloud classifier (XPCC) is proposed. The XPCC method offers several advantages over previous explainable and nonexplainable methods. First, the XPCC method uses local densities and global multivariate generative distributions, so it provides comprehensive and interpretable object-based classification. Furthermore, the proposed method is built on recursive calculations and is therefore computationally very efficient. Second, the model learns continuously without the need for complete retraining and is domain transferable. Third, the proposed XPCC expands on the underlying learning method, explainable deep neural networks (xDNN), and is specific to 3-D data. As such, three new layers are added to the original xDNN architecture: 1) 3-D point cloud feature extraction, 2) global compound prototype weighting, and 3) the SoftMax function. Experiments on the ModelNet40 benchmark demonstrate that XPCC is the only method to increase classification accuracy relative to the base algorithm when applied to the same problem. In addition, this article proposes a novel prototype-based visual representation that provides model- and object-based explanations. The prototype objects are superimposed to create a prototypical class representation of their data density within the feature space, called the compound prototype cloud. This representation allows a user to visualize the explainable aspects of the model and to identify object regions that contribute to the classification in a human-understandable way.
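To illustrate the general idea behind prototype-based classification with a SoftMax output layer, the sketch below scores a feature vector against per-class prototype sets and converts the scores to class probabilities. This is a minimal, hypothetical illustration, not the paper's implementation: the Cauchy-type similarity, the nearest-prototype reduction, and the optional per-class `weights` argument (standing in for the paper's global compound prototype weighting) are all simplifying assumptions made here.

```python
import math


def softmax(scores):
    # Numerically stable softmax over a list of per-class scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def classify(feature, class_prototypes, weights=None):
    """Return class probabilities for `feature`.

    `class_prototypes` maps class name -> list of prototype feature vectors.
    `weights` (optional) maps class name -> a scalar weight, a stand-in for
    the compound prototype weighting described in the article.
    """
    names = sorted(class_prototypes)
    scores = []
    for name in names:
        # Cauchy-type similarity to each prototype of this class
        # (an assumed, xDNN-style form; the paper's densities differ).
        sims = [
            1.0 / (1.0 + sum((f - p) ** 2 for f, p in zip(feature, proto)))
            for proto in class_prototypes[name]
        ]
        w = weights.get(name, 1.0) if weights else 1.0
        # Score the class by its best-matching (nearest) prototype.
        scores.append(w * max(sims))
    return dict(zip(names, softmax(scores)))
```

For example, a feature vector lying near the single "chair" prototype receives a higher probability for that class: `classify([0.9, 0.1], {"chair": [[1.0, 0.0]], "table": [[0.0, 1.0]]})` assigns more mass to `"chair"` than to `"table"`.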
