Towards Explainable Deep Neural Networks (xDNN)

Research output: Contribution to journal › Journal article › peer-review

Published

Standard

Towards Explainable Deep Neural Networks (xDNN). / Angelov, Plamen; Soares, Eduardo.

In: arXiv, 05.12.2019, p. 1-9.

Bibtex

@article{ca514e54b16543aa9b54d6a60481621f,
title = "Towards Explainable Deep Neural Networks (xDNN)",
abstract = "In this paper, we propose an elegant solution that is directly addressing the bottlenecks of the traditional deep learning approaches and offers a clearly explainable internal architecture that can outperform the existing methods, requires very little computational resources (no need for GPUs) and short training times (in the order of seconds). The proposed approach, xDNN is using prototypes. Prototypes are actual training data samples (images), which are local peaks of the empirical data distribution called typicality as well as of the data density. This generative model is identified in a closed form and equates to the pdf but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on some well-known benchmark data sets such as iRoads and Caltech-256. xDNN outperforms the other methods including deep learning in terms of accuracy, time to train and offers a clearly explainable classifier. In fact, the result on the very hard Caltech-256 problem (which has 257 classes) represents a world record.",
keywords = "cs.LG, cs.AI, cs.CV",
author = "Plamen Angelov and Eduardo Soares",
note = "Preprint submitted to the Neural Networks Journal for publication",
year = "2019",
month = dec,
day = "5",
language = "English",
pages = "1--9",
journal = "arXiv",
issn = "2331-8422",
}
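The abstract above describes classification by prototypes: actual training samples that are local peaks of the empirical data density, with no iterative training. As a rough illustration only — the function names, the per-class prototype count, and the simplified Cauchy-type density below are assumptions of this sketch, not the authors' released implementation — the core idea can be outlined in NumPy:

```python
import numpy as np

def empirical_density(X):
    """Cauchy-type empirical density of each sample in X, computed
    from the data mean and a scalar variance. This is a simplified
    stand-in for the density/typicality used in the paper."""
    mu = X.mean(axis=0)
    var = np.mean(np.sum((X - mu) ** 2, axis=1))
    d2 = np.sum((X - mu) ** 2, axis=1)
    return 1.0 / (1.0 + d2 / (var + 1e-12))

def select_prototypes(X, y, per_class=3):
    """Per class, keep the samples with the highest empirical density
    as prototypes. Prototypes are real training samples, not averages,
    which is what makes the resulting classifier inspectable."""
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        dens = empirical_density(Xc)
        idx = np.argsort(dens)[::-1][:per_class]
        protos.append(Xc[idx])
        labels.extend([c] * len(idx))
    return np.vstack(protos), np.array(labels)

def classify(x, protos, labels):
    """Winner-takes-all: return the label of the nearest prototype."""
    d2 = np.sum((protos - x) ** 2, axis=1)
    return labels[np.argmin(d2)]
```

Because prototype selection is a single pass over the data (no gradient descent), training is non-iterative, which is consistent with the abstract's claim of seconds-scale training without GPUs; the paper applies this on top of deep feature representations rather than raw pixels.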

RIS

TY - JOUR

T1 - Towards Explainable Deep Neural Networks (xDNN)

AU - Angelov, Plamen

AU - Soares, Eduardo

N1 - Preprint submitted to the Neural Networks Journal for publication

PY - 2019/12/5

Y1 - 2019/12/5

N2 - In this paper, we propose an elegant solution that is directly addressing the bottlenecks of the traditional deep learning approaches and offers a clearly explainable internal architecture that can outperform the existing methods, requires very little computational resources (no need for GPUs) and short training times (in the order of seconds). The proposed approach, xDNN is using prototypes. Prototypes are actual training data samples (images), which are local peaks of the empirical data distribution called typicality as well as of the data density. This generative model is identified in a closed form and equates to the pdf but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on some well-known benchmark data sets such as iRoads and Caltech-256. xDNN outperforms the other methods including deep learning in terms of accuracy, time to train and offers a clearly explainable classifier. In fact, the result on the very hard Caltech-256 problem (which has 257 classes) represents a world record.

AB - In this paper, we propose an elegant solution that is directly addressing the bottlenecks of the traditional deep learning approaches and offers a clearly explainable internal architecture that can outperform the existing methods, requires very little computational resources (no need for GPUs) and short training times (in the order of seconds). The proposed approach, xDNN is using prototypes. Prototypes are actual training data samples (images), which are local peaks of the empirical data distribution called typicality as well as of the data density. This generative model is identified in a closed form and equates to the pdf but is derived automatically and entirely from the training data with no user- or problem-specific thresholds, parameters or intervention. The proposed xDNN offers a new deep learning architecture that combines reasoning and learning in a synergy. It is non-iterative and non-parametric, which explains its efficiency in terms of time and computational resources. From the user perspective, the proposed approach is clearly understandable to human users. We tested it on some well-known benchmark data sets such as iRoads and Caltech-256. xDNN outperforms the other methods including deep learning in terms of accuracy, time to train and offers a clearly explainable classifier. In fact, the result on the very hard Caltech-256 problem (which has 257 classes) represents a world record.

KW - cs.LG

KW - cs.AI

KW - cs.CV

M3 - Journal article

SP - 1

EP - 9

JO - arXiv

JF - arXiv

SN - 2331-8422

ER -