
Electronic data

  • vor_depth_CHI-2

    Rights statement: © ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems http://doi.acm.org/10.1145/3290605.3300842

    Accepted author manuscript, 1.1 MB, PDF document


Resolving Target Ambiguity in 3D Gaze Interaction through VOR Depth Estimation

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 4/05/2019
Host publication: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems
Publisher: ACM
Number of pages: 12
ISBN (print): 9781450359702
Original language: English
Event: 2019 CHI Conference on Human Factors in Computing Systems, CHI EA 2019 - Glasgow, United Kingdom
Duration: 4/05/2019 – 9/05/2019

Conference

Conference: 2019 CHI Conference on Human Factors in Computing Systems, CHI EA 2019
Country/Territory: United Kingdom
City: Glasgow
Period: 4/05/19 – 9/05/19


Abstract

Target disambiguation is a common problem in gaze interfaces, as eye tracking has accuracy and precision limitations. In 3D environments this is compounded by objects that overlap in the field of view because they are positioned at different depths with partial occlusion. We introduce VOR depth estimation, a method based on the vestibulo-ocular reflex of the eyes in compensating for head movement, and explore its application to resolving target ambiguity. The method estimates gaze depth by comparing the rotations of the eye and the head while the user looks at a target and deliberately rotates their head. We show that VOR eye movement presents an alternative to vergence for gaze depth estimation, one that is also feasible with monocular tracking. In an evaluation of its use for target disambiguation, our method outperforms vergence for targets presented at greater depth.
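The idea behind the method can be illustrated with a simplified geometric sketch (our own illustration, not the paper's formulation): under the VOR, a fixating eye counter-rotates against the head, and because the eye sits offset from the head's rotation axis, the ratio of eye rotation to head rotation (the VOR gain) grows as the target gets closer. The function name and the default offset value below are assumptions.

```python
import math

def estimate_gaze_depth(eye_rotation_deg, head_rotation_deg, eye_axis_offset_m=0.1):
    """Estimate fixation depth (metres) from compensatory eye rotation.

    Simplified small-angle model (illustrative only, not the paper's method):
    VOR gain g = eye_rotation / head_rotation ~ 1 + r/d, where r is the
    eye's offset from the head's rotation axis and d is the target depth,
    so d = r / (g - 1).
    """
    gain = eye_rotation_deg / head_rotation_deg
    if gain <= 1.0:
        # A gain of ~1 means near-perfect 1:1 compensation: under this
        # model the target is effectively at optical infinity.
        return math.inf
    return eye_axis_offset_m / (gain - 1.0)
```

For a 10° head turn met by a 15° compensatory eye rotation (gain 1.5) and a 10 cm eye-to-axis offset, this model places the target at 0.2 m; as the gain approaches 1, the estimate tends to infinity, matching the intuition that VOR-based depth cues vanish for distant targets.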
