Electronic data

  • Activation_TAI_Final

    Rights statement: ©2022 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

    Accepted author manuscript, 6.92 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


Delve into Neural Activations: Towards Understanding Dying Neurons

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Article number: 4
Journal publication date: 1/08/2023
Journal: IEEE Transactions on Artificial Intelligence
Issue number: 4
Volume: 4
Number of pages: 13
Pages (from-to): 959-971
Publication status: Published
Early online date: 9/06/22
Original language: English

Abstract

Theoretically, a deep neural network with nonlinear activations can approximate any function, yet in practice the performance of models with different activations varies widely. In this work, we investigate the expressivity of the network from an activation perspective. In particular, we introduce a generalized activation region/pattern to describe the functional relationship of a model with an arbitrary activation function, and we illustrate its fundamental properties. We then propose a metric named pattern similarity to evaluate the practical expressivity of neural networks on a given dataset, based on neuron-level reactions to the input. We find an undocumented dying neuron issue: the post-activation values of most neurons remain in the same region for data with different labels, implying that the expressivity of networks with certain activations is greatly constrained. For instance, around 80% of the post-activation values of a well-trained Sigmoid or Tanh net are clustered in the same region for any test sample. This means that most of the neurons fail to provide any useful information for distinguishing data with different labels, suggesting that the practical expressivity of those networks falls far below the theoretical one. By evaluating our metrics against the test accuracy of the model, we show that the severity of the dying neuron issue is strongly related to model performance. Finally, we discuss the cause of the dying neuron issue, offering an explanation for the performance gap caused by the choice of activation.
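
The paper's generalized activation regions and pattern-similarity metric are defined formally in the full text; the short PyTorch sketch below is only a simplified illustration of the dying-neuron diagnostic the abstract describes. It counts hidden sigmoid units whose coarsely binned post-activation region never changes across a batch of inputs. The three-region binning, the 0.1/0.9 thresholds, and the toy MLP are assumptions made for illustration, not the authors' definitions.

import torch
import torch.nn as nn

# A small sigmoid MLP, used only to illustrate the diagnostic.
class MLP(nn.Module):
    def __init__(self, d_in=20, d_hidden=64, d_out=2):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.Sigmoid(),
            nn.Linear(d_hidden, d_hidden), nn.Sigmoid(),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, x):
        return self.layers(x)

def region_of(post_act, lo=0.1, hi=0.9):
    # Coarse three-way binning of sigmoid outputs:
    # 0 = saturated low, 1 = active middle, 2 = saturated high.
    # The thresholds are illustrative, not the paper's definition.
    return (post_act > lo).long() + (post_act > hi).long()

@torch.no_grad()
def stuck_neuron_fraction(model, x):
    # Fraction of hidden units whose region never changes across the batch.
    # A unit stuck in one region for every input carries no information for
    # separating the samples -- a rough proxy for the paper's dying neurons.
    acts = []
    hooks = [m.register_forward_hook(lambda _m, _i, out: acts.append(out))
             for m in model.modules() if isinstance(m, nn.Sigmoid)]
    model(x)
    for h in hooks:
        h.remove()
    stuck, total = 0, 0
    for a in acts:                                  # a: (batch, width)
        regions = region_of(a)                      # per-sample region indices
        same = (regions == regions[0]).all(dim=0)   # one region for all inputs?
        stuck += int(same.sum())
        total += regions.shape[1]
    return stuck / total

model = MLP()
x = torch.randn(256, 20)
print(f"stuck-neuron fraction: {stuck_neuron_fraction(model, x):.2%}")

A unit reported as stuck contributes the same region to every input's activation pattern, so it cannot help distinguish differently labelled samples; this is the intuition behind the abstract's observation that roughly 80% of units in well-trained Sigmoid or Tanh nets behave this way.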
