Rights statement: This is the author’s version of a work that was accepted for publication in Cognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Cognition, 230, 2023 DOI: 10.1016/j.cognition.2022.105176
Available under license: CC BY-NC-ND: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - No Need to Forget, Just Keep the Balance
T2 - Hebbian Neural Networks for Statistical Learning
AU - Tovar, Angel E.
AU - Westermann, Gert
PY - 2023/1/31
Y1 - 2023/1/31
N2 - Language processing in humans has long been proposed to rely on sophisticated learning abilities including statistical learning. Endress and Johnson (E&J, 2021) recently presented a neural network model for statistical learning based on Hebbian learning principles. This model accounts for word segmentation tasks, a primary paradigm in statistical learning. In this discussion paper we review this model and compare it with the Hebbian model previously presented by Tovar and Westermann (T&W, 2017a; 2017b; 2018) that has accounted for serial reaction time tasks, cross-situational learning, and categorization paradigms, all relevant in the study of statistical learning. We discuss the similarities and differences between both models, and their key findings. From our analysis, we question the concept of “forgetting” in the model of E&J and their suggestion that forgetting is the critical ingredient for successful statistical learning. We instead suggest that a set of simple but well-balanced mechanisms including spreading activation, activation persistence, and synaptic weight decay, all based on biologically grounded principles, allows modeling statistical learning in Hebbian neural networks, as demonstrated in the T&W model, which successfully covers learning of nonadjacent dependencies and accounts for differences between typical and atypical populations, both aspects that have not been fully demonstrated in the E&J model. We outline the main computational and theoretical differences between the E&J and T&W approaches, present new simulation results, and discuss implications for the development of a computational cognitive theory of statistical learning.
AB - Language processing in humans has long been proposed to rely on sophisticated learning abilities including statistical learning. Endress and Johnson (E&J, 2021) recently presented a neural network model for statistical learning based on Hebbian learning principles. This model accounts for word segmentation tasks, a primary paradigm in statistical learning. In this discussion paper we review this model and compare it with the Hebbian model previously presented by Tovar and Westermann (T&W, 2017a; 2017b; 2018) that has accounted for serial reaction time tasks, cross-situational learning, and categorization paradigms, all relevant in the study of statistical learning. We discuss the similarities and differences between both models, and their key findings. From our analysis, we question the concept of “forgetting” in the model of E&J and their suggestion that forgetting is the critical ingredient for successful statistical learning. We instead suggest that a set of simple but well-balanced mechanisms including spreading activation, activation persistence, and synaptic weight decay, all based on biologically grounded principles, allows modeling statistical learning in Hebbian neural networks, as demonstrated in the T&W model, which successfully covers learning of nonadjacent dependencies and accounts for differences between typical and atypical populations, both aspects that have not been fully demonstrated in the E&J model. We outline the main computational and theoretical differences between the E&J and T&W approaches, present new simulation results, and discuss implications for the development of a computational cognitive theory of statistical learning.
KW - Statistical learning
KW - Hebbian learning
KW - Artificial neural networks
KW - Language processing
KW - Computational modeling
U2 - 10.1016/j.cognition.2022.105176
DO - 10.1016/j.cognition.2022.105176
M3 - Journal article
VL - 230
JO - Cognition
JF - Cognition
SN - 0010-0277
M1 - 105176
ER -
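The abstract names three mechanisms whose balance is claimed to support statistical learning in a Hebbian network: spreading activation, activation persistence, and synaptic weight decay. The following is a minimal illustrative sketch of how such a balance can capture transition statistics; all parameter names and values are assumptions for demonstration, not the published T&W implementation.

```python
import numpy as np

# Minimal sketch: a Hebbian network combining spreading activation,
# activation persistence, and synaptic weight decay. Parameters are
# illustrative assumptions, not values from Tovar & Westermann (2017).

n_units = 4
W = np.zeros((n_units, n_units))   # symmetric weights, no self-connections
act = np.zeros(n_units)            # current unit activations

LEARN_RATE = 0.1   # Hebbian increment for co-active units (assumed)
DECAY = 0.01       # passive synaptic weight decay per step (assumed)
PERSIST = 0.5      # fraction of activation persisting to the next step
SPREAD = 0.8       # gain on activation spreading through the weights

def step(stimulus):
    """Present a one-hot stimulus; update activations, then weights."""
    global act, W
    # Activation: persisting trace + external input + spread via weights
    act = np.clip(PERSIST * act + stimulus + SPREAD * (W @ act), 0.0, 1.0)
    # Hebbian learning: strengthen connections between co-active units...
    dW = LEARN_RATE * np.outer(act, act)
    np.fill_diagonal(dW, 0.0)
    # ...balanced against weight decay; weights bounded in [0, 1]
    W = np.clip(W + dW - DECAY * W, 0.0, 1.0)

# Expose the network to a frequent A->B transition and a rarer B->C one;
# the weights should come to reflect these transition statistics.
A, B, C = (np.eye(n_units)[i] for i in range(3))
for _ in range(20):
    step(A); step(B)   # frequent pair
for _ in range(5):
    step(B); step(C)   # infrequent pair
print(W[0, 1] > W[1, 2])   # frequent pair binds more strongly
```

Because activation persists across adjacent presentations, co-occurring items become co-active and their connection is strengthened; the decay term keeps rarely reinforced weights weak without any dedicated "forgetting" process, which is the balance the abstract argues for.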