Electronic data

  • 2105.10172v1

    Submitted manuscript, 421 KB, PDF document

Explainable Machine Learning with Prior Knowledge: An Overview

Research output: Working paper › Preprint

Published

Standard

Explainable Machine Learning with Prior Knowledge: An Overview. / Beckh, Katharina; Müller, Sebastian; Jakobs, Matthias et al.
arXiv, 2021.

Harvard

Beckh, K, Müller, S, Jakobs, M, Toborek, V, Tan, H, Fischer, R, Welke, P, Houben, S & von Rueden, L 2021 'Explainable Machine Learning with Prior Knowledge: An Overview' arXiv. <https://arxiv.org/abs/2105.10172v1>

APA

Beckh, K., Müller, S., Jakobs, M., Toborek, V., Tan, H., Fischer, R., Welke, P., Houben, S., & von Rueden, L. (2021). Explainable Machine Learning with Prior Knowledge: An Overview. arXiv. https://arxiv.org/abs/2105.10172v1

Vancouver

Beckh K, Müller S, Jakobs M, Toborek V, Tan H, Fischer R et al. Explainable Machine Learning with Prior Knowledge: An Overview. arXiv. 2021 May 21.

Author

Beckh, Katharina; Müller, Sebastian; Jakobs, Matthias et al. / Explainable Machine Learning with Prior Knowledge: An Overview. arXiv, 2021.

BibTeX

@techreport{433de91113df4a0ebe339af3d15dd7c0,
title = "Explainable Machine Learning with Prior Knowledge: An Overview",
abstract = "This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve explainability. The complexity of machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data, requiring additional information about the context. We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models. In this paper, we present a categorization of current research into three main categories which either integrate knowledge into the machine learning pipeline, into the explainability method or derive knowledge from explanations. To classify the papers, we build upon the existing taxonomy of informed machine learning and extend it from the perspective of explainability. We conclude with open challenges and research directions. ",
keywords = "cs.LG",
author = "Katharina Beckh and Sebastian M{\"u}ller and Matthias Jakobs and Vanessa Toborek and Hanxiao Tan and Raphael Fischer and Pascal Welke and Sebastian Houben and {von Rueden}, Laura",
year = "2021",
month = may,
day = "21",
language = "English",
publisher = "Arxiv",
type = "WorkingPaper",
institution = "Arxiv",

}

RIS

TY - UNPB

T1 - Explainable Machine Learning with Prior Knowledge

T2 - An Overview

AU - Beckh, Katharina

AU - Müller, Sebastian

AU - Jakobs, Matthias

AU - Toborek, Vanessa

AU - Tan, Hanxiao

AU - Fischer, Raphael

AU - Welke, Pascal

AU - Houben, Sebastian

AU - von Rueden, Laura

PY - 2021/5/21

Y1 - 2021/5/21

N2 - This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve explainability. The complexity of machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data, requiring additional information about the context. We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models. In this paper, we present a categorization of current research into three main categories which either integrate knowledge into the machine learning pipeline, into the explainability method or derive knowledge from explanations. To classify the papers, we build upon the existing taxonomy of informed machine learning and extend it from the perspective of explainability. We conclude with open challenges and research directions.

AB - This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve explainability. The complexity of machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data, requiring additional information about the context. We propose to harness prior knowledge to improve upon the explanation capabilities of machine learning models. In this paper, we present a categorization of current research into three main categories which either integrate knowledge into the machine learning pipeline, into the explainability method or derive knowledge from explanations. To classify the papers, we build upon the existing taxonomy of informed machine learning and extend it from the perspective of explainability. We conclude with open challenges and research directions.

KW - cs.LG

M3 - Preprint

BT - Explainable Machine Learning with Prior Knowledge

PB - arXiv

ER -