Integrating conceptual knowledge within and across representational modalities

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Integrating conceptual knowledge within and across representational modalities. / McNorgan, Chris; Reid, Jackie; McRae, Ken.
In: Cognition, Vol. 118, No. 2, 02.2011, pp. 211-233.

Harvard

McNorgan, C, Reid, J & McRae, K 2011, 'Integrating conceptual knowledge within and across representational modalities', Cognition, vol. 118, no. 2, pp. 211-233. https://doi.org/10.1016/j.cognition.2010.10.017

APA

McNorgan, C., Reid, J., & McRae, K. (2011). Integrating conceptual knowledge within and across representational modalities. Cognition, 118(2), 211-233. https://doi.org/10.1016/j.cognition.2010.10.017

Vancouver

McNorgan C, Reid J, McRae K. Integrating conceptual knowledge within and across representational modalities. Cognition. 2011 Feb;118(2):211-233. doi: 10.1016/j.cognition.2010.10.017

Author

McNorgan, Chris ; Reid, Jackie ; McRae, Ken. / Integrating conceptual knowledge within and across representational modalities. In: Cognition. 2011 ; Vol. 118, No. 2. pp. 211-233.

Bibtex

@article{7a3480697c694eb0960b995e5cf2c26c,
title = "Integrating conceptual knowledge within and across representational modalities",
abstract = "Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants{\textquoteright} knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.",
keywords = "Semantic memory, Multimodal representations, Binding problem, Embodied cognition",
author = "Chris McNorgan and Jackie Reid and Ken McRae",
year = "2011",
month = feb,
doi = "10.1016/j.cognition.2010.10.017",
language = "English",
volume = "118",
pages = "211--233",
journal = "Cognition",
issn = "0010-0277",
publisher = "Elsevier",
number = "2",
}

RIS

TY  - JOUR
T1  - Integrating conceptual knowledge within and across representational modalities
AU  - McNorgan, Chris
AU  - Reid, Jackie
AU  - McRae, Ken
PY  - 2011/2
Y1  - 2011/2
N2  - Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.
AB  - Research suggests that concepts are distributed across brain regions specialized for processing information from different sensorimotor modalities. Multimodal semantic models fall into one of two broad classes differentiated by the assumed hierarchy of convergence zones over which information is integrated. In shallow models, communication within- and between-modality is accomplished using either direct connectivity, or a central semantic hub. In deep models, modalities are connected via cascading integration sites with successively wider receptive fields. Four experiments provide the first direct behavioral tests of these models using speeded tasks involving feature inference and concept activation. Shallow models predict no within-modal versus cross-modal difference in either task, whereas deep models predict a within-modal advantage for feature inference, but a cross-modal advantage for concept activation. Experiments 1 and 2 used relatedness judgments to tap participants’ knowledge of relations for within- and cross-modal feature pairs. Experiments 3 and 4 used a dual-feature verification task. The pattern of decision latencies across Experiments 1–4 is consistent with a deep integration hierarchy.
KW  - Semantic memory
KW  - Multimodal representations
KW  - Binding problem
KW  - Embodied cognition
UR  - http://www.scopus.com/inward/record.url?scp=78651067359&partnerID=8YFLogxK
U2  - 10.1016/j.cognition.2010.10.017
DO  - 10.1016/j.cognition.2010.10.017
M3  - Journal article
VL  - 118
SP  - 211
EP  - 233
JO  - Cognition
JF  - Cognition
SN  - 0010-0277
IS  - 2
ER  -