Explainable-by-Design Deep Learning

Research output: Thesis › Doctoral Thesis

Published

Standard

Explainable-by-Design Deep Learning. / Almeida Soares, Eduardo.
Lancaster University, 2022. 160 p.

Harvard

Almeida Soares, E 2022, Explainable-by-Design Deep Learning. Doctoral Thesis, Lancaster University. https://doi.org/10.17635/lancaster/thesis/1724

APA

Almeida Soares, E. (2022). Explainable-by-Design Deep Learning. [Doctoral Thesis, Lancaster University]. Lancaster University. https://doi.org/10.17635/lancaster/thesis/1724

Vancouver

Almeida Soares E. Explainable-by-Design Deep Learning. Lancaster University, 2022. 160 p. doi: 10.17635/lancaster/thesis/1724

Author

Almeida Soares, Eduardo. / Explainable-by-Design Deep Learning. Lancaster University, 2022. 160 p.

Bibtex

@phdthesis{88919effaf014b93b49a40a83085b825,
title = "Explainable-by-Design Deep Learning",
abstract = "Machine learning, and more specifically deep learning, has attracted the attention of the media and the broader public in the last decade due to its potential to revolutionize industries, public services, and society. Deep learning has achieved, or even surpassed, human experts{\textquoteright} accuracy on challenging problems such as image recognition, speech recognition, and language translation. However, deep learning models are often characterized as a “black box” because they are composed of many millions of parameters, which are extremely difficult for specialists to interpret. Complex “black box” models can easily fool users who are unable to inspect the algorithm{\textquoteright}s decisions, which can lead to dangerous or catastrophic events. Auditable, explainable AI approaches are therefore crucial for developing safe systems, complying with regulations, and securing society{\textquoteright}s acceptance of this new technology. This thesis addresses the following research question: is it possible to provide an approach whose performance is comparable to that of deep learning while, at the same time, having a transparent (non-black-box) structure? To this end, it introduces a novel framework of explainable-by-design deep learning architectures that offers both transparency and high accuracy, helping humans understand why a particular machine decision has been reached and whether or not it is trustworthy. Moreover, the proposed prototype-based framework has a flexible structure that allows the unsupervised detection of new classes and situations. The approaches proposed in this thesis have been applied to multiple use cases, including image classification, fairness, deep recursive learning interpretation, and novelty detection.",
author = "{Almeida Soares}, Eduardo",
year = "2022",
month = aug,
day = "15",
doi = "10.17635/lancaster/thesis/1724",
language = "English",
publisher = "Lancaster University",
school = "Lancaster University",

}

RIS

TY - THES

T1 - Explainable-by-Design Deep Learning

AU - Almeida Soares, Eduardo

PY - 2022/8/15

Y1 - 2022/8/15

N2 - Machine learning, and more specifically deep learning, has attracted the attention of the media and the broader public in the last decade due to its potential to revolutionize industries, public services, and society. Deep learning has achieved, or even surpassed, human experts’ accuracy on challenging problems such as image recognition, speech recognition, and language translation. However, deep learning models are often characterized as a “black box” because they are composed of many millions of parameters, which are extremely difficult for specialists to interpret. Complex “black box” models can easily fool users who are unable to inspect the algorithm’s decisions, which can lead to dangerous or catastrophic events. Auditable, explainable AI approaches are therefore crucial for developing safe systems, complying with regulations, and securing society’s acceptance of this new technology. This thesis addresses the following research question: is it possible to provide an approach whose performance is comparable to that of deep learning while, at the same time, having a transparent (non-black-box) structure? To this end, it introduces a novel framework of explainable-by-design deep learning architectures that offers both transparency and high accuracy, helping humans understand why a particular machine decision has been reached and whether or not it is trustworthy. Moreover, the proposed prototype-based framework has a flexible structure that allows the unsupervised detection of new classes and situations. The approaches proposed in this thesis have been applied to multiple use cases, including image classification, fairness, deep recursive learning interpretation, and novelty detection.

AB - Machine learning, and more specifically deep learning, has attracted the attention of the media and the broader public in the last decade due to its potential to revolutionize industries, public services, and society. Deep learning has achieved, or even surpassed, human experts’ accuracy on challenging problems such as image recognition, speech recognition, and language translation. However, deep learning models are often characterized as a “black box” because they are composed of many millions of parameters, which are extremely difficult for specialists to interpret. Complex “black box” models can easily fool users who are unable to inspect the algorithm’s decisions, which can lead to dangerous or catastrophic events. Auditable, explainable AI approaches are therefore crucial for developing safe systems, complying with regulations, and securing society’s acceptance of this new technology. This thesis addresses the following research question: is it possible to provide an approach whose performance is comparable to that of deep learning while, at the same time, having a transparent (non-black-box) structure? To this end, it introduces a novel framework of explainable-by-design deep learning architectures that offers both transparency and high accuracy, helping humans understand why a particular machine decision has been reached and whether or not it is trustworthy. Moreover, the proposed prototype-based framework has a flexible structure that allows the unsupervised detection of new classes and situations. The approaches proposed in this thesis have been applied to multiple use cases, including image classification, fairness, deep recursive learning interpretation, and novelty detection.

U2 - 10.17635/lancaster/thesis/1724

DO - 10.17635/lancaster/thesis/1724

M3 - Doctoral Thesis

PB - Lancaster University

ER -
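
Illustrative sketch

The abstract's central idea, a prototype-based classifier whose decisions are traceable by construction and which can flag inputs belonging to unseen classes, can be illustrated with a minimal sketch. This is not the thesis's actual method: the similarity function, the threshold, and all names below are hypothetical assumptions for illustration only.

import numpy as np

def predict_with_prototypes(x, prototypes, labels, novelty_threshold=0.3):
    """Nearest-prototype prediction with a novelty check (hypothetical sketch).

    The decision is traceable to a single, inspectable prototype, and an
    input unlike every prototype is flagged as a new class/situation.
    """
    # Squared Euclidean distance from x to each class prototype.
    dists = np.sum((prototypes - x) ** 2, axis=1)
    # Map distances to similarities in (0, 1]; the thesis's framework
    # may use a different kernel entirely.
    sims = 1.0 / (1.0 + dists)
    best = int(np.argmax(sims))
    if sims[best] < novelty_threshold:
        # No prototype is similar enough: report unsupervised novelty.
        return None, sims[best]
    return labels[best], sims[best]

# Toy usage: two 2-D prototypes, one per class.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["cat", "dog"]
print(predict_with_prototypes(np.array([0.2, -0.1]), prototypes, labels))   # ('cat', ~0.95)
print(predict_with_prototypes(np.array([50.0, 50.0]), prototypes, labels))  # (None, ~0.0): novel

Because the prediction is explained by pointing at one concrete prototype, a user can inspect why a decision was reached, which is the transparency property the abstract contrasts with black-box models.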