Final published version
Licence: CC BY-NC-SA 4.0 (Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International)
Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review
TY - GEN
T1 - Online-EYE: Multimodal Implicit Eye Tracking Calibration for XR
AU - Hou, Baosheng James
AU - Abramyan, Lucy
AU - Gurumurthy, Prasanthi
AU - Adams, Haley
AU - Tosic Rodgers, Ivana
AU - Gonzalez, Eric J
AU - Patel, Khushman
AU - Colaço, Andrea
AU - Pfeuffer, Ken
AU - Gellersen, Hans
AU - Ahuja, Karan
AU - Gonzalez-Franco, Mar
PY - 2025/4/25
Y1 - 2025/4/25
N2 - Unlike other inputs for extended reality (XR) that work out of the box, eye tracking typically requires custom calibration per user or session. We present a multimodal-input approach for implicit calibration of eye trackers in VR, leveraging UI interaction for continuous, background calibration. Our method analyzes gaze data alongside controller interactions with UI elements and employs ML techniques to continuously refine the calibration matrix without interrupting users' current tasks, potentially eliminating the need for explicit calibration. We demonstrate the accuracy and effectiveness of this implicit approach across various tasks and real-time applications, achieving eye-tracking accuracy comparable to native, explicit calibration. While our evaluation focuses on VR and controller-based interactions, we anticipate the broader applicability of this approach to various XR devices and input modalities.
AB - Unlike other inputs for extended reality (XR) that work out of the box, eye tracking typically requires custom calibration per user or session. We present a multimodal-input approach for implicit calibration of eye trackers in VR, leveraging UI interaction for continuous, background calibration. Our method analyzes gaze data alongside controller interactions with UI elements and employs ML techniques to continuously refine the calibration matrix without interrupting users' current tasks, potentially eliminating the need for explicit calibration. We demonstrate the accuracy and effectiveness of this implicit approach across various tasks and real-time applications, achieving eye-tracking accuracy comparable to native, explicit calibration. While our evaluation focuses on VR and controller-based interactions, we anticipate the broader applicability of this approach to various XR devices and input modalities.
KW - Gaze estimation
KW - implicit calibration
KW - eye tracking
U2 - 10.1145/3706598.3713461
DO - 10.1145/3706598.3713461
M3 - Conference contribution/Paper
T3 - CHI '25
SP - 1
EP - 16
BT - Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
PB - The Association for Computing Machinery
CY - New York, NY, USA
ER -