Licence: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
| Article number | e1424 |
|---|---|
| Journal publication date | 1/09/2021 |
| Journal | WIREs Data Mining and Knowledge Discovery |
| Issue number | 5 |
| Volume | 11 |
| Number of pages | 13 |
| Publication status | Published |
| Early online date | 12/07/21 |
| Original language | English |
This paper provides a brief analytical review of the current state of the art in the explainability of artificial intelligence, in the context of recent advances in machine learning and deep learning. The paper starts with a brief historical introduction and a taxonomy, and formulates the main challenges of explainability, building on the four principles of explainability recently formulated by the National Institute of Standards and Technology (NIST). Recently published methods related to the topic are then critically reviewed and analyzed. Finally, future directions for research are suggested.

This article is categorized under:

- Technologies > Artificial Intelligence
- Fundamental Concepts of Data and Knowledge > Explainable AI