Available under license: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays
AU - Sidenmark, Ludwig
AU - Prummer, Franziska
AU - Newn, Joshua
AU - Gellersen, Hans
PY - 2023/11/30
Y1 - 2023/11/30
N2 - This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.
AB - This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.
KW - 3D Interaction
KW - Lenses
KW - Mathematical models
KW - Performance evaluation
KW - Pointing
KW - Resists
KW - Selection Performance
KW - Task analysis
KW - Three-dimensional displays
KW - Virtual Reality
KW - Visualization
U2 - 10.1109/TVCG.2023.3320235
DO - 10.1109/TVCG.2023.3320235
M3 - Journal article
VL - 29
SP - 4740
EP - 4750
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
SN - 1077-2626
IS - 11
ER -