Electronic data

  • fpsyg-04-00528

    Rights statement: Copyright © 2013 Smith, Monaghan and Huettig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

    Final published version, 2.61 MB, PDF document

    Available under license: CC BY

Links

Text available via DOI: https://doi.org/10.3389/fpsyg.2013.00528

An amodal shared resource model of language-mediated visual attention

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

An amodal shared resource model of language-mediated visual attention. / Smith, Alastair C; Monaghan, Padraic; Huettig, Falk.
In: Frontiers in Psychology, Vol. 4, 528, 16.08.2013, p. 1-16.

Harvard

Smith, AC, Monaghan, P & Huettig, F 2013, 'An amodal shared resource model of language-mediated visual attention', Frontiers in Psychology, vol. 4, 528, pp. 1-16. https://doi.org/10.3389/fpsyg.2013.00528

APA

Smith, A. C., Monaghan, P., & Huettig, F. (2013). An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology, 4, 1-16. Article 528. https://doi.org/10.3389/fpsyg.2013.00528

Vancouver

Smith AC, Monaghan P, Huettig F. An amodal shared resource model of language-mediated visual attention. Frontiers in Psychology. 2013 Aug 16;4:1-16. 528. doi: 10.3389/fpsyg.2013.00528

Author

Smith, Alastair C; Monaghan, Padraic; Huettig, Falk. / An amodal shared resource model of language-mediated visual attention. In: Frontiers in Psychology. 2013; Vol. 4. pp. 1-16.

Bibtex

@article{642a3f95f54b4761a74b123fde21fd22,
title = "An amodal shared resource model of language-mediated visual attention",
abstract = "Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.",
keywords = "language, vision, computational modeling, attention, eye movements, semantics, SPOKEN-WORD-RECOGNITION, ANTERIOR TEMPORAL-LOBES, EYE-MOVEMENTS, SEMANTIC DEMENTIA, CONNECTIONIST MODEL, SPEECH-PERCEPTION, TIME-COURSE, MEMORY, CONSTRAINTS, PARADIGM",
author = "Smith, {Alastair C} and Padraic Monaghan and Falk Huettig",
note = "Copyright {\textcopyright} 2013 Smith, Monaghan and Huettig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.",
year = "2013",
month = aug,
day = "16",
doi = "10.3389/fpsyg.2013.00528",
language = "English",
volume = "4",
pages = "1--16",
journal = "Frontiers in Psychology",
issn = "1664-1078",
publisher = "Frontiers Media S.A.",

}
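
The abstract above describes an architecture in which modality-specific inputs from language and vision converge on a single shared (amodal) resource whose activation drives eye gaze. As a rough illustration only, and not the authors' implementation, the Python sketch below shows that hub topology: two input layers feed one shared hidden layer, which is read out as a gaze distribution over display locations. All layer sizes, parameter names, and weights here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

N_VISUAL = 40    # assumed visual feature input, e.g., 4 objects x 10 features
N_PHONO = 30     # assumed phonological input, e.g., one time-slice of speech
N_HUB = 50       # assumed size of the shared amodal layer
N_LOCATIONS = 4  # gaze output: one unit per display location

# Random weights stand in for weights a trained model would have learned.
W_vis = rng.normal(0.0, 0.1, (N_VISUAL, N_HUB))
W_pho = rng.normal(0.0, 0.1, (N_PHONO, N_HUB))
W_gaze = rng.normal(0.0, 0.1, (N_HUB, N_LOCATIONS))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gaze_distribution(visual_input, phono_input):
    # Both modalities pass through the same shared hub layer; the output
    # is a probability distribution of gaze over the display locations.
    hub = sigmoid(visual_input @ W_vis + phono_input @ W_pho)
    return softmax(hub @ W_gaze)

# Usage: a random display and one slice of spoken input.
print(gaze_distribution(rng.random(N_VISUAL), rng.random(N_PHONO)))

A trained version would learn these weights from paired visual, phonological, and gaze examples; the point here is only the shared-hub topology, in which gaze effects emerge from a single resource rather than from separate processing channels.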

RIS

TY  - JOUR
T1  - An amodal shared resource model of language-mediated visual attention
AU  - Smith, Alastair C
AU  - Monaghan, Padraic
AU  - Huettig, Falk
N1  - Copyright © 2013 Smith, Monaghan and Huettig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
PY  - 2013/8/16
Y1  - 2013/8/16
N2  - Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
AB  - Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze.
KW  - language
KW  - vision
KW  - computational modeling
KW  - attention
KW  - eye movements
KW  - semantics
KW  - SPOKEN-WORD-RECOGNITION
KW  - ANTERIOR TEMPORAL-LOBES
KW  - EYE-MOVEMENTS
KW  - SEMANTIC DEMENTIA
KW  - CONNECTIONIST MODEL
KW  - SPEECH-PERCEPTION
KW  - TIME-COURSE
KW  - MEMORY
KW  - CONSTRAINTS
KW  - PARADIGM
U2  - 10.3389/fpsyg.2013.00528
DO  - 10.3389/fpsyg.2013.00528
M3  - Journal article
C2  - 23966967
VL  - 4
SP  - 1
EP  - 16
JO  - Frontiers in Psychology
JF  - Frontiers in Psychology
SN  - 1664-1078
M1  - 528
ER  -