

Speech-Augmented Cone-of-Vision for Exploratory Data Analysis

Research output: Contribution to conference (without ISBN/ISSN) › Conference paper › peer-review

Published
Publication date: 19/04/2023
Number of pages: 18
Pages: 162:1-162:18
Original language: English
Event: 2023 ACM CHI Conference on Human Factors in Computing Systems - Congress Center Hamburg (CCH), Hamburg, Germany
Duration: 23/04/2023 – 28/04/2023
https://chi2023.acm.org/

Conference

Conference: 2023 ACM CHI Conference on Human Factors in Computing Systems
Abbreviated title: CHI 2023
Country/Territory: Germany
City: Hamburg
Period: 23/04/23 – 28/04/23
Internet address: https://chi2023.acm.org/

Abstract

Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and eye-tracking-based cursor visualizations, but these methods have limitations. Verbal communication is often used as a complementary strategy to overcome these limitations. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than head direction alone. Furthermore, we release the first dataset of collaborative head, eye, and verbal behaviour. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.