

Neurocomputational models capture the effect of learned labels on infants' object and category representations

Research output: Contribution to journal › Journal article

Journal publication date: 29/11/2018
Journal: IEEE Transactions on Cognitive and Developmental Systems
Number of pages: 9
Publication status: E-pub ahead of print
Early online date: 29/11/18
Original language: English

Abstract

The effect of labels on non-linguistic representations is the focus of substantial theoretical debate in the developmental literature. A recent empirical study demonstrated that ten-month-old infants respond differently to objects for which they know a label than to unlabeled objects. One account of these results is that infants' label representations are incorporated into their object representations, such that when an object is seen without its label, a novelty response is elicited. These data are compatible with two recent theories of integrated label–object representations: one assumes labels are features of object representations, and the other assumes labels are represented separately but become closely associated with objects over learning. Here, we implement both accounts in an autoencoder neurocomputational model. Simulation data support the account in which labels are features of objects, with the same representational status as the objects' visual and haptic characteristics. We then use our model to make predictions about the effect of labels on infants' broader category representations. Overall, we show that the generally accepted link between internal representations and looking times may be more complex than previously thought.
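Purely as an illustration of the labels-as-features account described above (not the authors' implementation; the dimensions, learning rate, and novelty measure are all assumptions), a minimal autoencoder can be trained on an object vector whose input includes a label channel, and reconstruction error when the object later appears without its label can serve as a crude proxy for a novelty response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 "visual" features, a 4-unit label channel, 6 hidden units.
VIS, LAB, HID = 8, 4, 6


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class Autoencoder:
    """Single-hidden-layer autoencoder trained with plain gradient descent on MSE."""

    def __init__(self):
        d = VIS + LAB
        self.W1 = rng.normal(0, 0.5, (d, HID))
        self.W2 = rng.normal(0, 0.5, (HID, d))

    def forward(self, x):
        h = sigmoid(x @ self.W1)
        return h, sigmoid(h @ self.W2)

    def train(self, X, epochs=2000, lr=0.5):
        for _ in range(epochs):
            h, y = self.forward(X)
            err = y - X                          # reconstruction error
            d2 = err * y * (1 - y)               # backprop through output sigmoid
            d1 = (d2 @ self.W2.T) * h * (1 - h)  # backprop through hidden sigmoid
            self.W2 -= lr * h.T @ d2 / len(X)
            self.W1 -= lr * X.T @ d1 / len(X)

    def error(self, x):
        _, y = self.forward(x[None, :])
        return float(np.mean((y - x) ** 2))


# One "labelled" object: a binary visual pattern plus its one-hot label,
# concatenated into a single input vector (labels as features).
visual = rng.integers(0, 2, VIS).astype(float)
label = np.eye(LAB)[0]
labelled = np.concatenate([visual, label])

net = Autoencoder()
net.train(labelled[None, :])

# Novelty proxy: the familiar object presented WITHOUT its label should be
# reconstructed worse than the familiar labelled input.
unlabelled = np.concatenate([visual, np.zeros(LAB)])
print(net.error(labelled) < net.error(unlabelled))
```

Because the label units are ordinary input features here, omitting the label at test time changes the input away from the trained pattern, and the reconstruction error rises; under the alternative account sketched in the abstract, the label would instead live in a separate but associated representation.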