
Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters

Research output: Contribution to Journal/Magazine › Meeting abstract › peer-review

Published

Standard

Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters. / Volkova, E. P.; Linkenauger, S. A.; Alexandrova, I. et al.
In: Perception, Vol. 40, No. ECVP Abstract Supplement, 2011, p. 138-138.


Harvard

Volkova, EP, Linkenauger, SA, Alexandrova, I, Buelthoff, HH & Mohler, BJ 2011, 'Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters', Perception, vol. 40, no. ECVP Abstract Supplement, pp. 138-138. https://doi.org/10.1068/v110451

APA

Volkova, E. P., Linkenauger, S. A., Alexandrova, I., Buelthoff, H. H., & Mohler, B. J. (2011). Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters. Perception, 40(ECVP Abstract Supplement), 138-138. https://doi.org/10.1068/v110451

Vancouver

Volkova EP, Linkenauger SA, Alexandrova I, Buelthoff HH, Mohler BJ. Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters. Perception. 2011;40(ECVP Abstract Supplement):138-138. doi: 10.1068/v110451

Author

Volkova, E. P. ; Linkenauger, S. A. ; Alexandrova, I. et al. / Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters. In: Perception. 2011 ; Vol. 40, No. ECVP Abstract Supplement. pp. 138-138.

Bibtex

@article{e9a228e329c5478881e7841eb826fe0e,
title = "Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters",
abstract = "Virtual characters are a potentially valuable tool for creating stimuli for research investigating the perception of emotion. We conducted an audio-visual experiment to investigate the effectiveness of our stimuli to convey the intended emotion. We used dynamic virtual faces in addition to pre-recorded (Burkhardt et al, 2005, Interspeech'2005, 1517–1520) and synthesized speech to create audio-visual stimuli which conveyed all possible combinations of stimuli. Each voice and face stimuli aimed to express one of seven different emotional categories. The participants made judgments of the prevalent emotion. For the pre-recorded voice, the vocalized emotion influenced participants{\textquoteright} emotion judgment more than the facial expression. However, for the synthesized voice, facial expression influenced participants{\textquoteright} emotion judgment more than vocalized emotion. While participants rather accurately labeled (>76%) the stimuli when face and voice emotion were the same, they performed worse overall on correctly identifying the stimuli when the voice was synthesized. We further analyzed the difference between the emotional categories in each stimulus and found that valence distance in the emotion of the face and voice significantly impacted recognition of the emotion judgment for both natural and synthesized voices. This experimental design provides a method to improve virtual character emotional expression.",
author = "Volkova, {E. P.} and Linkenauger, {S. A.} and I. Alexandrova and Buelthoff, {H. H.} and Mohler, {B. J.}",
year = "2011",
doi = "10.1068/v110451",
language = "English",
volume = "40",
pages = "138--138",
journal = "Perception",
issn = "0301-0066",
publisher = "Pion Ltd.",
number = "ECVP Abstract Supplement",
}

RIS

TY - JOUR
T1 - Integration of visual and auditory stimuli in the perception of emotional expression in virtual characters
AU - Volkova, E. P.
AU - Linkenauger, S. A.
AU - Alexandrova, I.
AU - Buelthoff, H. H.
AU - Mohler, B. J.
PY - 2011
Y1 - 2011
N2 - Virtual characters are a potentially valuable tool for creating stimuli for research investigating the perception of emotion. We conducted an audio-visual experiment to investigate the effectiveness of our stimuli to convey the intended emotion. We used dynamic virtual faces in addition to pre-recorded (Burkhardt et al, 2005, Interspeech'2005, 1517–1520) and synthesized speech to create audio-visual stimuli which conveyed all possible combinations of stimuli. Each voice and face stimuli aimed to express one of seven different emotional categories. The participants made judgments of the prevalent emotion. For the pre-recorded voice, the vocalized emotion influenced participants’ emotion judgment more than the facial expression. However, for the synthesized voice, facial expression influenced participants’ emotion judgment more than vocalized emotion. While participants rather accurately labeled (>76%) the stimuli when face and voice emotion were the same, they performed worse overall on correctly identifying the stimuli when the voice was synthesized. We further analyzed the difference between the emotional categories in each stimulus and found that valence distance in the emotion of the face and voice significantly impacted recognition of the emotion judgment for both natural and synthesized voices. This experimental design provides a method to improve virtual character emotional expression.
AB - Virtual characters are a potentially valuable tool for creating stimuli for research investigating the perception of emotion. We conducted an audio-visual experiment to investigate the effectiveness of our stimuli to convey the intended emotion. We used dynamic virtual faces in addition to pre-recorded (Burkhardt et al, 2005, Interspeech'2005, 1517–1520) and synthesized speech to create audio-visual stimuli which conveyed all possible combinations of stimuli. Each voice and face stimuli aimed to express one of seven different emotional categories. The participants made judgments of the prevalent emotion. For the pre-recorded voice, the vocalized emotion influenced participants’ emotion judgment more than the facial expression. However, for the synthesized voice, facial expression influenced participants’ emotion judgment more than vocalized emotion. While participants rather accurately labeled (>76%) the stimuli when face and voice emotion were the same, they performed worse overall on correctly identifying the stimuli when the voice was synthesized. We further analyzed the difference between the emotional categories in each stimulus and found that valence distance in the emotion of the face and voice significantly impacted recognition of the emotion judgment for both natural and synthesized voices. This experimental design provides a method to improve virtual character emotional expression.
U2 - 10.1068/v110451
DO - 10.1068/v110451
M3 - Meeting abstract
VL - 40
SP - 138
EP - 138
JO - Perception
JF - Perception
SN - 0301-0066
IS - ECVP Abstract Supplement
ER -