
GazeSwitch: Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Forthcoming

Standard

GazeSwitch: Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing. / Hou, Baosheng James; Newn, Joshua; Sidenmark, Ludwig et al.
In: Proceedings of the ACM on Human-Computer Interaction, Vol. 8, 22.03.2024.

Vancouver

Hou BJ, Newn J, Sidenmark L, Khan AA, Gellersen H. GazeSwitch: Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing. Proceedings of the ACM on Human-Computer Interaction. 2024 Mar 22;8. doi: 10.1145/3655601

Bibtex

@article{10ac972044e64db79c41f06b32528351,
title = "GazeSwitch: Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing",
abstract = "This paper contributes GazeSwitch, an ML-based technique that optimises the real-time switching between eye and head modes for fast and precise hands-free pointing. GazeSwitch reduces false positives from natural head movements and efficiently detects head gestures for input, resulting in an effective hands-free and adaptive technique for interaction. We conducted two user studies to evaluate its performance and user experience. Comparative analyses with baseline switching techniques, Eye+Head Pinpointing (manual) and BimodalGaze (threshold-based) revealed several trade-offs. We found that GazeSwitch provides a natural and effortless experience but trades off control and stability compared to manual mode switching, and requires less head movement compared to BimodalGaze. This work demonstrates the effectiveness of machine learning approach to learn and adapt to patterns in head movement, allowing us to better leverage the synergistic relation between eye and head input modalities for interaction in mixed and extended reality.",
keywords = "Gaze interaction, Refinement, Eye Tracking, Eye-head Coordination, Computational Interaction, Machine Learning",
author = "Hou, {Baosheng James} and Joshua Newn and Ludwig Sidenmark and Khan, {Anam Ahmad} and Hans Gellersen",
year = "2024",
month = mar,
day = "22",
doi = "10.1145/3655601",
language = "English",
volume = "8",
journal = "Proceedings of the ACM on Human-Computer Interaction",
publisher = "ACM",

}

RIS

TY  - JOUR
T1  - GazeSwitch
T2  - Automatic Eye-Head Mode Switching for Optimised Hands-Free Pointing
AU  - Hou, Baosheng James
AU  - Newn, Joshua
AU  - Sidenmark, Ludwig
AU  - Khan, Anam Ahmad
AU  - Gellersen, Hans
PY  - 2024/3/22
Y1  - 2024/3/22
N2  - This paper contributes GazeSwitch, an ML-based technique that optimises the real-time switching between eye and head modes for fast and precise hands-free pointing. GazeSwitch reduces false positives from natural head movements and efficiently detects head gestures for input, resulting in an effective hands-free and adaptive technique for interaction. We conducted two user studies to evaluate its performance and user experience. Comparative analyses with baseline switching techniques, Eye+Head Pinpointing (manual) and BimodalGaze (threshold-based), revealed several trade-offs. We found that GazeSwitch provides a natural and effortless experience but trades off control and stability compared to manual mode switching, and requires less head movement compared to BimodalGaze. This work demonstrates the effectiveness of a machine learning approach to learn and adapt to patterns in head movement, allowing us to better leverage the synergistic relation between eye and head input modalities for interaction in mixed and extended reality.
AB  - This paper contributes GazeSwitch, an ML-based technique that optimises the real-time switching between eye and head modes for fast and precise hands-free pointing. GazeSwitch reduces false positives from natural head movements and efficiently detects head gestures for input, resulting in an effective hands-free and adaptive technique for interaction. We conducted two user studies to evaluate its performance and user experience. Comparative analyses with baseline switching techniques, Eye+Head Pinpointing (manual) and BimodalGaze (threshold-based), revealed several trade-offs. We found that GazeSwitch provides a natural and effortless experience but trades off control and stability compared to manual mode switching, and requires less head movement compared to BimodalGaze. This work demonstrates the effectiveness of a machine learning approach to learn and adapt to patterns in head movement, allowing us to better leverage the synergistic relation between eye and head input modalities for interaction in mixed and extended reality.
KW  - Gaze interaction
KW  - Refinement
KW  - Eye Tracking
KW  - Eye-head Coordination
KW  - Computational Interaction
KW  - Machine Learning
U2  - 10.1145/3655601
DO  - 10.1145/3655601
M3  - Journal article
VL  - 8
JO  - Proceedings of the ACM on Human-Computer Interaction
JF  - Proceedings of the ACM on Human-Computer Interaction
ER  -
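
The abstract contrasts three switching strategies: manual (Eye+Head Pinpointing), threshold-based (BimodalGaze), and learned (GazeSwitch). The minimal Python sketch below illustrates that distinction only: a fixed head-velocity threshold versus a classifier over short windows of head motion. The features, threshold value, synthetic data, and model are assumptions made for this sketch and do not reflect the paper's actual implementation.

# Hypothetical sketch: fixed-threshold switching (in the spirit of
# BimodalGaze) vs. learned switching (in the spirit of GazeSwitch).
# All names, values, and data here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def threshold_switch(head_velocity_deg_s, threshold=10.0):
    # Threshold-based: enter head mode whenever head angular velocity
    # exceeds a fixed value; a single natural head jerk can trip it.
    return "head" if head_velocity_deg_s > threshold else "eye"

def window_features(window):
    # Summary features over a short window of head angular velocity.
    v = np.asarray(window, dtype=float)
    return [v.mean(), v.std(), v.max(), np.abs(np.diff(v)).mean()]

# Synthetic training data: incidental head motion (label 0) vs.
# deliberate refinement gestures (label 1). A real system would train
# on labelled eye/head traces from user studies.
rng = np.random.default_rng(0)
incidental = [np.abs(rng.normal(2.0, 1.0, 30)) for _ in range(200)]
deliberate = [np.abs(rng.normal(12.0, 3.0, 30)) for _ in range(200)]
X = [window_features(w) for w in incidental + deliberate]
y = [0] * 200 + [1] * 200
clf = LogisticRegression(max_iter=1000).fit(X, y)

def learned_switch(window):
    # Learned: classify the whole window, so a brief spike amid
    # otherwise-calm motion need not trigger a mode switch.
    return "head" if clf.predict([window_features(window)])[0] == 1 else "eye"

# A single jerk in an otherwise still window: the fixed threshold
# fires (a false positive), while the classifier likely does not.
spiky = np.abs(rng.normal(2.0, 1.0, 30))
spiky[10] = 15.0
print(threshold_switch(spiky.max()))  # 'head'
print(learned_switch(spiky))          # likely 'eye'

The point of the sketch is the design choice the abstract describes: deciding from patterns of head movement rather than from a single velocity threshold, which is what reduces false positives from natural head motion.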