
A critical appraisal on deep neural networks: Bridge the gap between deep learning and neuroscience via XAI

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Chapter

Publication date: 30/09/2022
Host publication: Handbook on Computer Learning and Intelligence: Vol. 2: Deep Learning, Intelligent Control and Evolutionary Computation
Editors: Plamen Angelov
Publisher: World Scientific Publishing Co.
Number of pages: 16
ISBN (Electronic): 9789811247323
ISBN (Print): 9789811245145
Original language: English


Starting in the early 1940s, artificial intelligence (AI) has come a long way, and today it is a powerful research area with many possibilities. Deep neural networks (DNNs) are part of AI and consist of several layers: the input layer, the so-called hidden layers, and the output layer. The input layer receives data; the data are converted into computable variables (i.e., vectors) and passed on to the hidden layers, where they are processed. Each neuron is connected to neurons in adjacent layers, passing information between them. By adjusting the weights and biases at each hidden layer over several iterations, such a network maps input to output, thereby generalizing (learning) from its data. In the end, the deep neural network should have seen enough input to predict results for specific tasks successfully. The history of DNNs, and of neural networks in general, is closely related to neuroscience, as a motivation of AI is to teach human intelligence to a machine. It is therefore possible to use knowledge of the human brain to develop algorithms that simulate it, which is what DNNs do. The brain can be considered an electrical network that sets off electrical impulses; during this process, information is carried from one synapse to another, much as it is within artificial neural networks. However, AI systems should be used carefully: researchers should always be capable of understanding the systems they create, an issue discussed within explainable AI (XAI) and DNNs.
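The layered structure described in the abstract (input vector, hidden layers with weights and biases, output layer) can be sketched as a minimal forward pass. The layer sizes, random parameters, and activation choice here are illustrative assumptions, not details from the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied at each hidden layer (an assumed choice).
    return np.maximum(0, x)

def forward(x, weights, biases):
    """Pass an input vector through each layer: affine map, then nonlinearity."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)           # hidden layers
    W, b = weights[-1], biases[-1]
    return W @ a + b                   # linear output layer

# A tiny illustrative 3-4-2 network with random weights and biases.
sizes = [3, 4, 2]
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]

y = forward(np.array([1.0, -0.5, 0.2]), weights, biases)
print(y.shape)  # one value per output neuron: (2,)
```

In training, the weights and biases would be adjusted iteratively (e.g., by gradient descent on a loss) so that this mapping from input to output improves, which is the "generalizing (learning)" step the abstract refers to.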