Final published version
Licence: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - Gaze, Wall, and Racket
T2 - Combining Gaze and Hand-Controlled Plane for 3D Selection in Virtual Reality
AU - Wagner, Uta
AU - Albrecht, Matthias
AU - Jacobsen, Andreas Asferg
AU - Wang, Haopeng
AU - Gellersen, Hans
AU - Pfeuffer, Ken
PY - 2024/12/31
Y1 - 2024/12/31
N2 - Ray pointing, the status-quo pointing technique for virtual reality, becomes challenging when many objects are occluded or overlapping. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection of the user's gaze with a hand-controlled plane specifies a 3D position. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, which features a hand-held, rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compare the best-performing techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research is relevant to spatial interaction, specifically advanced techniques for complex 3D tasks.
KW - complex 3D tasks
KW - disambiguation
KW - eye-tracking
KW - gaze interaction
KW - object selection
KW - occlusion
U2 - 10.1145/3698134
DO - 10.1145/3698134
M3 - Journal article
VL - 8
SP - 189
EP - 213
JO - Proceedings of the ACM on Human-Computer Interaction
JF - Proceedings of the ACM on Human-Computer Interaction
IS - ISS
M1 - 534
ER -