
Electronic data

  • vor_depth_CHI-2

    Accepted author manuscript, 1.1 MB, PDF document


Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper

Published

Abstract

Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects overlapping in the field of view, as a result of their positioning at different depths with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex of the eyes in compensation for head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head when users look at a target and deliberately rotate their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, one that is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
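The geometric idea behind estimating depth from VOR eye movements can be illustrated with a small sketch. The code below is not the authors' implementation; it assumes a simplified model in which the eye sits at an offset r from the head's rotation axis and the VOR gain (eye rotation per unit of head rotation during fixation) is approximately 1 + r/d for a target at depth d, giving d ≈ r / (gain − 1). The function name, the eye_offset_m parameter, and the least-squares gain fit are illustrative choices, not details taken from the paper.

    import numpy as np

    # Illustrative sketch only -- not the authors' implementation.
    # Assumed model: while the user fixates a target and rotates the head,
    # the eye counter-rotates by more than the head for near targets.
    # With eye-to-rotation-axis offset r and target depth d, the VOR gain
    # g = eye_rotation / head_rotation is roughly 1 + r/d, so d = r / (g - 1).

    def estimate_gaze_depth(head_angles_deg, eye_angles_deg, eye_offset_m=0.10):
        """Estimate target depth (metres) from paired head/eye yaw samples
        recorded while the user fixates a target and deliberately rotates
        the head. Angles are rotation magnitudes over time, in degrees."""
        head = np.asarray(head_angles_deg, dtype=float)
        eye = np.asarray(eye_angles_deg, dtype=float)

        # Use frame-to-frame increments to cancel constant offsets and drift.
        dhead = np.diff(head)
        deye = np.diff(eye)

        # Least-squares VOR gain: eye rotation per unit of head rotation.
        gain = np.dot(deye, dhead) / np.dot(dhead, dhead)

        if gain <= 1.0:
            return np.inf  # gain near 1 corresponds to a target at (near) infinity
        return eye_offset_m / (gain - 1.0)

    if __name__ == "__main__":
        # Synthetic example: 20 deg head rotation, target 0.5 m away, 0.10 m offset.
        true_depth = 0.5
        head = np.linspace(0, 20, 200)            # head yaw over time (degrees)
        eye = (1 + 0.10 / true_depth) * head      # eye counter-rotation magnitude, gain = 1.2
        print(round(estimate_gaze_depth(head, eye), 3))  # -> 0.5

In practice the estimate would be applied to noisy tracker data, so the gain is fitted over the whole head movement rather than computed from a single sample pair; the depth estimate can then be compared against candidate targets' depths to resolve which of several overlapping objects the user is looking at.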

Bibliographic note

© ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems http://doi.acm.org/10.1145/3290605.3300842