

Integrating Gaze and Speech for Enabling Implicit Interactions

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 29/04/2022
Host publication: CHI '22: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems
Place of publication: New York
Publisher: ACM
Pages: 1-14
Number of pages: 14
ISBN (electronic): 9781450391573
Original language: English

Abstract

Gaze and speech are rich contextual sources of information that, when combined, can enable effective and rich multimodal interactions. This paper proposes a machine learning-based pipeline that leverages and combines users’ natural gaze activity, the semantic knowledge from their vocal utterances, and the synchronicity between gaze and speech data to facilitate user interaction. We evaluated our proposed approach on an existing dataset, in which 32 participants recorded voice notes while reading an academic paper. Using a Logistic Regression classifier, we demonstrate that our proposed multimodal approach maps voice notes to the correct text passages with an average F1-score of 0.90. Our proposed pipeline motivates the design of multimodal interfaces that combine natural gaze and speech patterns to enable robust interactions.
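
The following is a minimal illustrative sketch, not the authors' released pipeline: it shows how a Logistic Regression classifier of the kind mentioned in the abstract could score candidate (voice note, text passage) pairs from combined gaze and speech features. The feature names (gaze dwell, semantic similarity, gaze-speech synchronicity) are hypothetical placeholders for the kinds of signals the abstract describes, and the data here is synthetic.

```python
# Illustrative sketch only (synthetic data, hypothetical features);
# not the authors' implementation from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 500  # number of candidate (voice note, passage) pairs

# Each row is one candidate pair described by three assumed features:
#   gaze_dwell     - normalised fixation time on the candidate passage
#   semantic_sim   - text similarity between the utterance and the passage
#   synchronicity  - temporal overlap between gaze on the passage and speech
X = np.column_stack([
    rng.random(n),  # gaze_dwell
    rng.random(n),  # semantic_sim
    rng.random(n),  # synchronicity
])

# Synthetic label: 1 if the passage is the one the voice note refers to.
y = (X.sum(axis=1) + 0.3 * rng.standard_normal(n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("F1-score:", round(f1_score(y_test, clf.predict(X_test)), 2))
```

In a setup like this, the classifier's per-pair probabilities could be used to rank candidate passages for each voice note; how the real pipeline extracts and aligns the gaze and speech features is described in the paper itself.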