
Generating realistic semantic codes for use in neural network models

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2012
Host publication: Proceedings of the 34th Annual Conference of the Cognitive Science Society
Editors: Naomi Miyake, David Peebles, Richard Cooper
Place of publication: Austin, TX
Publisher: Cognitive Science Society
Pages: 198-203
Number of pages: 6
ISBN (print): 978-0-9768318-8-4
Original language: English

Abstract

Many psychologically interesting tasks (e.g., reading, lexical decision, semantic categorisation and synonym judgement) require the manipulation of semantic representations. To produce a good computational model of these tasks, it is important to represent semantic information in a realistic manner. This paper aimed to find a method for generating artificial semantic codes suitable for modelling semantic knowledge. The desired computational criteria for semantic representations were: (1) binary coding; (2) sparse coding; (3) a fixed number of active units in a semantic vector; (4) scalable semantic vectors; and (5) preservation of realistic internal semantic structure. Several existing methods for generating semantic representations were evaluated against these criteria. The correlated occurrence analogue to lexical semantics (COALS) system (Rohde, Gonnerman & Plaut, 2006) was selected as the most suitable candidate because it satisfied most of the desired criteria. Semantic vectors generated from the COALS system were converted into binary representations and assessed on their ability to reproduce human semantic category judgements using stimuli from a previous study (Garrard, Lambon Ralph, Hodges & Patterson, 2001). Intriguingly, the best-performing sets of semantic vectors included 5 positive features and 15 negative features. Positive features are elements that encode the likely presence of a particular attribute, whereas negative features encode its absence. These results suggest that including both positive and negative attributes generates a better category structure than the more traditional method of selecting only positive attributes.
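The binarisation scheme described above (a fixed number of active units drawn from both the strongest positive and strongest negative components of a real-valued vector) can be sketched as follows. This is an illustrative reconstruction, not the authors' actual code: the function name, the separate positive/negative unit banks, and the use of NumPy are all assumptions; only the 5-positive/15-negative split comes from the abstract.

```python
import numpy as np

def binarise_semantic_vector(vec, n_pos=5, n_neg=15):
    """Convert a real-valued semantic vector (e.g., one row of a
    COALS co-occurrence matrix) into a sparse binary code.

    Hypothetical sketch: activates one unit per selected feature,
    using the n_pos largest components as 'positive' features
    (evidence an attribute is present) and the n_neg smallest as
    'negative' features (evidence it is absent). The output uses
    separate banks of units for positive and negative features,
    so exactly n_pos + n_neg units are active.
    """
    vec = np.asarray(vec, dtype=float)
    binary = np.zeros(2 * len(vec), dtype=int)  # [positive bank | negative bank]
    order = np.argsort(vec)                     # ascending
    pos_idx = order[-n_pos:]                    # strongest positive evidence
    neg_idx = order[:n_neg]                     # strongest negative evidence
    binary[pos_idx] = 1
    binary[len(vec) + neg_idx] = 1
    return binary

# Usage: a random 100-dimensional vector yields a 200-unit binary code
# with a fixed count of 20 active units (5 positive + 15 negative),
# satisfying criteria (1)-(3) from the abstract.
rng = np.random.default_rng(0)
code = binarise_semantic_vector(rng.standard_normal(100))
print(code.sum())  # 20
```

Keeping the number of active units fixed regardless of the input vector is what makes the resulting codes comparable across words, which matters for criteria (2) and (3).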