
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis

Research output: Contribution to conference - Without ISBN/ISSN › Conference paper › peer-review

Published

Standard

Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. / Bovo, Riccardo; Giunchi, Daniele; Sidenmark, Ludwig et al.
2023. 162:1-162:18. Paper presented at 2023 ACM CHI Conference on Human Factors in Computing Systems, Hamburg, Germany.

Research output: Contribution to conference - Without ISBN/ISSN › Conference paper › peer-review

Harvard

Bovo, R, Giunchi, D, Sidenmark, L, Newn, J, Gellersen, H, Costanza, E & Heinis, T 2023, 'Speech-Augmented Cone-of-Vision for Exploratory Data Analysis', Paper presented at 2023 ACM CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23/04/23 - 28/04/23, pp. 162:1-162:18. https://doi.org/10.1145/3544548.3581283

APA

Bovo, R., Giunchi, D., Sidenmark, L., Newn, J., Gellersen, H., Costanza, E., & Heinis, T. (2023). Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. 162:1-162:18. Paper presented at 2023 ACM CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. https://doi.org/10.1145/3544548.3581283

Vancouver

Bovo R, Giunchi D, Sidenmark L, Newn J, Gellersen H, Costanza E et al. Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. 2023. Paper presented at 2023 ACM CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. doi: 10.1145/3544548.3581283

Author

Bovo, Riccardo ; Giunchi, Daniele ; Sidenmark, Ludwig et al. / Speech-Augmented Cone-of-Vision for Exploratory Data Analysis. Paper presented at 2023 ACM CHI Conference on Human Factors in Computing Systems, Hamburg, Germany. 18 p.

Bibtex

@conference{4b71ea2965014a799b2d9f7527fd9011,
title = "Speech-Augmented Cone-of-Vision for Exploratory Data Analysis",
abstract = "Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.",
author = "Riccardo Bovo and Daniele Giunchi and Ludwig Sidenmark and Joshua Newn and Hans Gellersen and Enrico Costanza and Thomas Heinis",
year = "2023",
month = apr,
day = "19",
doi = "10.1145/3544548.3581283",
language = "English",
pages = "162:1--162:18",
note = "2023 ACM CHI Conference on Human Factors in Computing Systems, CHI 2023 ; Conference date: 23-04-2023 Through 28-04-2023",
url = "https://chi2023.acm.org/",

}

RIS

TY - CONF

T1 - Speech-Augmented Cone-of-Vision for Exploratory Data Analysis

AU - Bovo, Riccardo

AU - Giunchi, Daniele

AU - Sidenmark, Ludwig

AU - Newn, Joshua

AU - Gellersen, Hans

AU - Costanza, Enrico

AU - Heinis, Thomas

PY - 2023/4/19

Y1 - 2023/4/19

N2 - Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.

AB - Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.

U2 - 10.1145/3544548.3581283

DO - 10.1145/3544548.3581283

M3 - Conference paper

SP - 162:1-162:18

T2 - 2023 ACM CHI Conference on Human Factors in Computing Systems

Y2 - 23 April 2023 through 28 April 2023

ER -