Final published version, 4.62 MB, PDF document
Available under license: CC BY: Creative Commons Attribution 4.0 International License
Research output: Contribution to Journal/Magazine › Journal article › peer-review
TY - JOUR
T1 - GazeSwitch
T2 - Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing
AU - Hou, Baosheng James
AU - Newn, Joshua
AU - Sidenmark, Ludwig
AU - Khan, Anam Ahmad
AU - Gellersen, Hans
PY - 2024/5/28
Y1 - 2024/5/28
N2 - This paper contributes GazeSwitch, an ML-based technique that optimises real-time switching between eye and head modes for fast and precise hands-free pointing. GazeSwitch reduces false positives from natural head movements and efficiently detects head gestures for input, resulting in an effective, hands-free, and adaptive interaction technique. We conducted two user studies to evaluate its performance and user experience. Comparative analyses with baseline switching techniques, Eye+Head Pinpointing (manual) and BimodalGaze (threshold-based), revealed several trade-offs. We found that GazeSwitch provides a natural and effortless experience but trades off control and stability compared to manual mode switching, and requires less head movement than BimodalGaze. This work demonstrates the effectiveness of a machine learning approach that learns and adapts to patterns in head movement, allowing us to better leverage the synergistic relationship between eye and head input modalities for interaction in mixed and extended reality.
AB - This paper contributes GazeSwitch, an ML-based technique that optimises real-time switching between eye and head modes for fast and precise hands-free pointing. GazeSwitch reduces false positives from natural head movements and efficiently detects head gestures for input, resulting in an effective, hands-free, and adaptive interaction technique. We conducted two user studies to evaluate its performance and user experience. Comparative analyses with baseline switching techniques, Eye+Head Pinpointing (manual) and BimodalGaze (threshold-based), revealed several trade-offs. We found that GazeSwitch provides a natural and effortless experience but trades off control and stability compared to manual mode switching, and requires less head movement than BimodalGaze. This work demonstrates the effectiveness of a machine learning approach that learns and adapts to patterns in head movement, allowing us to better leverage the synergistic relationship between eye and head input modalities for interaction in mixed and extended reality.
KW - Gaze interaction
KW - Refinement
KW - Eye Tracking
KW - Eye-head Coordination
KW - Computational Interaction
KW - Machine Learning
U2 - 10.1145/3655601
DO - 10.1145/3655601
M3 - Journal article
VL - 8
SP - 1
EP - 20
JO - Proceedings of the ACM on Human-Computer Interaction
JF - Proceedings of the ACM on Human-Computer Interaction
M1 - 227
ER -
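
As a rough illustration of the idea described in the abstract, the sketch below contrasts a fixed-threshold mode switch (the kind of approach BimodalGaze represents) with a learned switch trained to separate deliberate head gestures from natural head motion. This is a minimal sketch, not the paper's method: the feature choice, window size, classifier, threshold value, and synthetic training data are all assumptions made for illustration.

# Hypothetical sketch of threshold-based vs. learned eye-head mode switching.
# Nothing here reproduces GazeSwitch's actual features or model; all names,
# feature choices, and data are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def head_features(angular_velocity, window=10):
    """Summarise recent head angular velocity (deg/s) into simple features."""
    w = np.asarray(angular_velocity[-window:])
    return np.array([w.mean(), w.std(), w.max()])

def threshold_switch(angular_velocity, threshold=15.0):
    """Threshold baseline: enter head mode whenever velocity exceeds a cutoff.
    Fast natural head movements can trigger this, i.e. false positives."""
    return angular_velocity[-1] > threshold

# Learned switch: a classifier trained on labelled windows distinguishes
# deliberate refinement gestures from natural head motion.
rng = np.random.default_rng(0)
# Synthetic training data: natural motion (label 0) vs. deliberate gestures (label 1).
natural = rng.normal(5, 3, size=(200, 10)).clip(min=0)
gesture = rng.normal(25, 8, size=(200, 10)).clip(min=0)
X = np.array([head_features(row) for row in np.vstack([natural, gesture])])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression().fit(X, y)

def learned_switch(angular_velocity):
    """Enter head mode only when the classifier predicts a deliberate gesture."""
    return clf.predict(head_features(angular_velocity).reshape(1, -1))[0] == 1

# A slow natural drift should stay in eye mode; a fast deliberate nudge
# should switch to head mode for precise refinement.
drift = list(rng.normal(5, 2, size=10).clip(min=0))
nudge = list(rng.normal(25, 5, size=10).clip(min=0))
print("drift -> head mode?", learned_switch(drift))   # expected: False
print("nudge -> head mode?", learned_switch(nudge))   # expected: True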