The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech. / Trotter, Anthony S.; Banks, Briony; Adank, Patti.
In: Journal of Speech, Language, and Hearing Research, Vol. 64, No. 7, 16.07.2021, p. 2513-2528.

Harvard

Trotter, AS, Banks, B & Adank, P 2021, 'The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech', Journal of Speech, Language, and Hearing Research, vol. 64, no. 7, pp. 2513-2528. https://doi.org/10.1044/2021_JSLHR-20-00575

APA

Trotter, A. S., Banks, B., & Adank, P. (2021). The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech. Journal of Speech, Language, and Hearing Research, 64(7), 2513-2528. https://doi.org/10.1044/2021_JSLHR-20-00575

Vancouver

Trotter AS, Banks B, Adank P. The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech. Journal of Speech, Language, and Hearing Research. 2021 Jul 16;64(7):2513-2528. Epub 2021 Jun 23. doi: 10.1044/2021_JSLHR-20-00575

Author

Trotter, Anthony S.; Banks, Briony; Adank, Patti. / The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech. In: Journal of Speech, Language, and Hearing Research. 2021; Vol. 64, No. 7. pp. 2513-2528.

Bibtex

@article{b549feae6f544d9594e93469030962d2,
title = "The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech",
abstract = "Purpose This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared to viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, this study also aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup.Method We monitored recognition accuracy online while participants were listening to noise-vocoded sentences. We first established if participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: a group in which participants viewed the moving lower part of the speaker's face (AV Mouth), a group in which participants only see the moving upper part of the face (AV Eyes), a group in which participants could not see the moving lower or upper face (AV Blocked), and a group in which participants saw an image of a still face (AV Still).Results Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions when the moving mouth was occluded.Conclusions The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. Second, the results also demonstrated that it is feasible to run speech perception and adaptation studies online, but that not all findings reported for lab studies replicate.",
author = "Trotter, {Anthony S.} and Briony Banks and Patti Adank",
year = "2021",
month = jul,
day = "16",
doi = "10.1044/2021_JSLHR-20-00575",
language = "English",
volume = "64",
pages = "2513--2528",
journal = "Journal of Speech, Language, and Hearing Research",
issn = "1092-4388",
publisher = "American Speech-Language-Hearing Association (ASHA)",
number = "7",

}

RIS

TY - JOUR

T1 - The relevance of the availability of visual speech cues during adaptation to noise-vocoded speech

AU - Trotter, Anthony S.

AU - Banks, Briony

AU - Adank, Patti

PY - 2021/7/16

Y1 - 2021/7/16

N2 - Purpose: This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared to viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, this study aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup. Method: We monitored recognition accuracy online while participants listened to noise-vocoded sentences. We first established whether participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: a group in which participants viewed the moving lower part of the speaker's face (AV Mouth), a group in which participants saw only the moving upper part of the face (AV Eyes), a group in which participants could see neither the moving lower nor the moving upper face (AV Blocked), and a group in which participants saw an image of a still face (AV Still). Results: Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions in which the moving mouth was occluded. Conclusions: The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. The results also demonstrate that it is feasible to run speech perception and adaptation studies online, but that not all findings reported for lab studies replicate.

AB - Purpose: This study first aimed to establish whether viewing specific parts of the speaker's face (eyes or mouth), compared to viewing the whole face, affected adaptation to distorted noise-vocoded sentences. Second, this study aimed to replicate results on processing of distorted speech from lab-based experiments in an online setup. Method: We monitored recognition accuracy online while participants listened to noise-vocoded sentences. We first established whether participants were able to perceive and adapt to audiovisual four-band noise-vocoded sentences when the entire moving face was visible (AV Full). Four further groups were then tested: a group in which participants viewed the moving lower part of the speaker's face (AV Mouth), a group in which participants saw only the moving upper part of the face (AV Eyes), a group in which participants could see neither the moving lower nor the moving upper face (AV Blocked), and a group in which participants saw an image of a still face (AV Still). Results: Participants repeated around 40% of the key words correctly and adapted during the experiment, but only when the moving mouth was visible. In contrast, performance was at floor level, and no adaptation took place, in conditions in which the moving mouth was occluded. Conclusions: The results show the importance of being able to observe relevant visual speech information from the speaker's mouth region, but not the eyes/upper face region, when listening and adapting to distorted sentences online. The results also demonstrate that it is feasible to run speech perception and adaptation studies online, but that not all findings reported for lab studies replicate.

U2 - 10.1044/2021_JSLHR-20-00575

DO - 10.1044/2021_JSLHR-20-00575

M3 - Journal article

VL - 64

SP - 2513

EP - 2528

JO - Journal of Speech, Language, and Hearing Research

JF - Journal of Speech, Language, and Hearing Research

SN - 1092-4388

IS - 7

ER -
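
The abstract describes adaptation to four-band noise-vocoded sentences. For readers unfamiliar with the manipulation: a noise vocoder splits speech into frequency bands, extracts each band's slow amplitude envelope, and uses those envelopes to modulate band-limited noise, preserving temporal cues while discarding spectral fine structure. The Python/SciPy sketch below illustrates the general technique only; the band edges, filter orders, and 30 Hz envelope cutoff are illustrative assumptions, not the parameters used in the study.

import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(speech, fs, band_edges=(100, 500, 1500, 3000, 6000)):
    # Illustrative four-band noise vocoder; all parameters are assumptions,
    # not those of Trotter, Banks & Adank (2021).
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(speech))     # broadband noise carrier
    env_lp = butter(2, 30, btype="low", fs=fs, output="sos")
    out = np.zeros(len(speech))
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        bp = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(bp, speech)               # speech limited to this band
        env = sosfiltfilt(env_lp, np.abs(band))  # slow amplitude envelope
        out += np.clip(env, 0.0, None) * sosfilt(bp, noise)  # modulated noise
    return out

Fewer bands make the vocoded speech harder to recognize; with four bands, participants in this study repeated around 40% of key words correctly when the speaker's moving mouth was visible.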