
Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation

Research output: Contribution to Journal/Magazine › Journal article › peer-review
Article number: 422
Journal publication date: 3/08/2015
Journal: Frontiers in Human Neuroscience
Volume: 9
Number of pages: 13
Publication status: Published
Original language: English

Abstract

Perceptual adaptation allows humans to recognize different varieties of accented speech. We investigated whether perceptual adaptation to accented speech is facilitated when listeners can see a speaker's facial and mouth movements. In Study 1, participants listened to sentences in a novel accent and underwent a period of training with audiovisual or audio-only speech cues, presented in quiet or in background noise. A control group also underwent training with visual-only (speech-reading) cues. We observed no significant difference in perceptual adaptation between any of the groups. To address a number of remaining questions, we carried out a second study using a different accent, speaker, and experimental design, in which participants listened to sentences in a non-native (Japanese) accent with audiovisual or audio-only cues, without separate training. Participants' eye gaze was recorded to verify that they looked at the speaker's face during audiovisual trials. Recognition accuracy was significantly better for audiovisual than for audio-only stimuli; however, no statistical difference in perceptual adaptation was observed between the two modalities. Furthermore, Bayesian analysis suggested that the data supported the null hypothesis. Our results suggest that although the availability of visual speech cues may be immediately beneficial for recognition of unfamiliar accented speech in noise, it does not improve perceptual adaptation.