Electronic data

  • xDNN_Neural_Network_Journal_Revised

    Rights statement: This is the author’s version of a work that was accepted for publication in Neural Networks. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neural Networks, 130, 2020 DOI: 10.1016/j.neunet.2020.07.010

    Accepted author manuscript, 2.46 MB, PDF document

    Available under license: CC BY-NC-ND

Links

Text available via DOI: 10.1016/j.neunet.2020.07.010

Towards explainable deep neural networks (xDNN)

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Towards explainable deep neural networks (xDNN). / Angelov, Plamen; Soares, Eduardo.
In: Neural Networks, Vol. 130, 01.10.2020, p. 185-194.

Vancouver

Angelov P, Soares E. Towards explainable deep neural networks (xDNN). Neural Networks. 2020 Oct 1;130:185-194. Epub 2020 Jul 11. doi: 10.1016/j.neunet.2020.07.010

Author

Angelov, Plamen ; Soares, Eduardo. / Towards explainable deep neural networks (xDNN). In: Neural Networks. 2020 ; Vol. 130. pp. 185-194.

Bibtex

@article{9b21e7da813943c9b3bb993be6807a41,
title = "Towards explainable deep neural networks (xDNN)",
abstract = "In this paper, we propose an elegant solution that is directly addressing the bottlenecks of the traditional deep learning approaches and offers an explainable internal architecture that can outperform the existing methods, requires very little computational resources (no need for GPUs) and short training times (in the order of seconds). The proposed approach, xDNN is using prototypes. Prototypes are actual training data samples (images), which are local peaks of the empirical data distribution called typicality as well as of the data density. This generative model is identified in a closed form and equates to the pdf but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on challenging problems as the classification of different lighting conditions for driving scenes (iROADS), object detection (Caltech-256, and Caltech-101), and SARS-CoV-2 identification via computed tomography scan (COVID CT-scans dataset). xDNN outperforms the other methods including deep learning in terms of accuracy, time to train and offers an explainable classifier.",
keywords = "Explainable AI, Interpretability, Prototype-based models, Deep-learning",
author = "Plamen Angelov and Eduardo Soares",
note = "This is the author{\textquoteright}s version of a work that was accepted for publication in Neural Networks. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neural Networks, 130, 2020 DOI: 10.1016/j.neunet.2020.07.010",
year = "2020",
month = oct,
day = "1",
doi = "10.1016/j.neunet.2020.07.010",
language = "English",
volume = "130",
pages = "185--194",
journal = "Neural Networks",
issn = "0893-6080",
publisher = "Elsevier Ltd",

}

RIS

TY - JOUR

T1 - Towards explainable deep neural networks (xDNN)

AU - Angelov, Plamen

AU - Soares, Eduardo

N1 - This is the author’s version of a work that was accepted for publication in Neural Networks. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Neural Networks, 130, 2020 DOI: 10.1016/j.neunet.2020.07.010

PY - 2020/10/1

Y1 - 2020/10/1

N2 - In this paper, we propose an elegant solution that directly addresses the bottlenecks of traditional deep learning approaches, offers an explainable internal architecture that can outperform existing methods, requires very little computational resources (no need for GPUs), and needs only short training times (of the order of seconds). The proposed approach, xDNN, uses prototypes. Prototypes are actual training data samples (images) that are local peaks of the empirical data distribution, called typicality, as well as of the data density. This generative model is identified in closed form and equates to the probability density function (pdf), but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on challenging problems such as the classification of different lighting conditions for driving scenes (iROADS), object detection (Caltech-256 and Caltech-101), and SARS-CoV-2 identification from computed tomography scans (COVID CT-scans dataset). xDNN outperforms the other methods, including deep learning, in terms of accuracy and time to train, and offers an explainable classifier.

AB - In this paper, we propose an elegant solution that directly addresses the bottlenecks of traditional deep learning approaches, offers an explainable internal architecture that can outperform existing methods, requires very little computational resources (no need for GPUs), and needs only short training times (of the order of seconds). The proposed approach, xDNN, uses prototypes. Prototypes are actual training data samples (images) that are local peaks of the empirical data distribution, called typicality, as well as of the data density. This generative model is identified in closed form and equates to the probability density function (pdf), but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on challenging problems such as the classification of different lighting conditions for driving scenes (iROADS), object detection (Caltech-256 and Caltech-101), and SARS-CoV-2 identification from computed tomography scans (COVID CT-scans dataset). xDNN outperforms the other methods, including deep learning, in terms of accuracy and time to train, and offers an explainable classifier.

KW - Explainable AI

KW - Interpretability

KW - Prototype-based models

KW - Deep-learning

U2 - 10.1016/j.neunet.2020.07.010

DO - 10.1016/j.neunet.2020.07.010

M3 - Journal article

C2 - 32682084

VL - 130

SP - 185

EP - 194

JO - Neural Networks

JF - Neural Networks

SN - 0893-6080

ER -
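
Illustrative sketch

The abstract describes xDNN's core idea: prototypes are actual training samples that sit at local peaks of the empirical data density (typicality), and classification is carried out with reference to those prototypes, without iterative training. The short Python sketch below illustrates that idea only; it is not the authors' implementation. The Cauchy-type density formula, the fixed number of prototypes per class, and the nearest-prototype decision rule are simplifying assumptions made here for illustration, standing in for xDNN's online prototype identification over deep feature vectors.

import numpy as np

def empirical_density(X):
    # Cauchy-type density of each row of X relative to the whole set:
    # density_i = 1 / (1 + ||x_i - mean||^2 / mean_scatter).
    # This closed-form choice is an assumption made here for illustration.
    mu = X.mean(axis=0)
    scatter = np.mean(np.sum((X - mu) ** 2, axis=1)) + 1e-12
    return 1.0 / (1.0 + np.sum((X - mu) ** 2, axis=1) / scatter)

def select_prototypes(X, y, per_class=3):
    # Keep the `per_class` highest-density training samples of each class
    # as prototypes (a stand-in for xDNN's online prototype identification).
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        top = np.argsort(empirical_density(Xc))[::-1][:per_class]
        protos.append(Xc[top])
        labels.extend([c] * len(top))
    return np.vstack(protos), np.array(labels)

def predict(X, protos, proto_labels):
    # Nearest-prototype ("winner takes all") decision rule.
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Toy usage: random feature vectors stand in for deep features of images.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(3.0, 1.0, (50, 8))])
y_train = np.array([0] * 50 + [1] * 50)
protos, proto_labels = select_prototypes(X_train, y_train)
print(predict(X_train[:5], protos, proto_labels))  # labels of the first 5 samples

The point of the sketch is that, under these assumptions, training reduces to a single pass of density estimation and prototype selection (non-iterative, no gradient descent), and every decision can be traced back to specific, human-viewable training samples, which is what makes the classifier explainable.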