Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech. / Banks, Briony; Gowen, Emma; Munro, Kevin et al.
In: Journal of Speech, Language, and Hearing Research, Vol. 64, No. 9, 14.09.2021, p. 3432-3445.

Harvard

Banks, B, Gowen, E, Munro, K & Adank, P 2021, 'Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech', Journal of Speech, Language, and Hearing Research, vol. 64, no. 9, pp. 3432-3445. https://doi.org/10.1044/2021_JSLHR-21-00106

APA

Banks, B., Gowen, E., Munro, K., & Adank, P. (2021). Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech. Journal of Speech, Language, and Hearing Research, 64(9), 3432-3445. https://doi.org/10.1044/2021_JSLHR-21-00106

Vancouver

Banks B, Gowen E, Munro K, Adank P. Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech. Journal of Speech, Language, and Hearing Research. 2021 Sep 14;64(9):3432-3445. Epub 2021 Aug 31. doi: 10.1044/2021_JSLHR-21-00106

Author

Banks, Briony ; Gowen, Emma ; Munro, Kevin et al. / Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech. In: Journal of Speech, Language, and Hearing Research. 2021 ; Vol. 64, No. 9. pp. 3432-3445.

BibTeX

@article{2aadeafe9ab144d3a7ecb46966cc2725,
title = "Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech",
abstract = "PurposeVisual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation.MethodA group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group.ResultsPrevious studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time.ConclusionsThe results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.",
author = "Briony Banks and Emma Gowen and Kevin Munro and Patti Adank",
year = "2021",
month = sep,
day = "14",
doi = "10.1044/2021_JSLHR-21-00106",
language = "English",
volume = "64",
pages = "3432--3445",
journal = "Journal of Speech, Language, and Hearing Research",
issn = "1092-4388",
publisher = "American Speech-Language-Hearing Association (ASHA)",
number = "9",

}

RIS

TY - JOUR

T1 - Eye Gaze and Perceptual Adaptation to Audiovisual Degraded Speech

AU - Banks, Briony

AU - Gowen, Emma

AU - Munro, Kevin

AU - Adank, Patti

PY - 2021/9/14

Y1 - 2021/9/14

N2 - Purpose: Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation. Method: A group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group. Results: Previous studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time. Conclusions: The results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.

AB - Purpose: Visual cues from a speaker's face may benefit perceptual adaptation to degraded speech, but current evidence is limited. We aimed to replicate results from previous studies to establish the extent to which visual speech cues can lead to greater adaptation over time, extending existing results to a real-time adaptation paradigm (i.e., without a separate training period). A second aim was to investigate whether eye gaze patterns toward the speaker's mouth were related to better perception, hypothesizing that listeners who looked more at the speaker's mouth would show greater adaptation. Method: A group of listeners (n = 30) was presented with 90 noise-vocoded sentences in audiovisual format, whereas a control group (n = 29) was presented with the audio signal only. Recognition accuracy was measured throughout and eye tracking was used to measure fixations toward the speaker's eyes and mouth in the audiovisual group. Results: Previous studies were partially replicated: The audiovisual group had better recognition throughout and adapted slightly more rapidly, but both groups showed an equal amount of improvement overall. Longer fixations on the speaker's mouth in the audiovisual group were related to better overall accuracy. An exploratory analysis further demonstrated that the duration of fixations to the speaker's mouth decreased over time. Conclusions: The results suggest that visual cues may not benefit adaptation to degraded speech as much as previously thought. Longer fixations on a speaker's mouth may play a role in successfully decoding visual speech cues; however, this will need to be confirmed in future research to fully understand how patterns of eye gaze are related to audiovisual speech recognition. All materials, data, and code are available at https://osf.io/2wqkf/.

U2 - 10.1044/2021_JSLHR-21-00106

DO - 10.1044/2021_JSLHR-21-00106

M3 - Journal article

VL - 64

SP - 3432

EP - 3445

JO - Journal of Speech, Language, and Hearing Research

JF - Journal of Speech, Language, and Hearing Research

SN - 1092-4388

IS - 9

ER -