
Electronic data

  • vor_depth_ETRA (4)

    Rights statement: © ACM, 2019. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ETRA '19 Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications http://doi.acm.org/10.1145/3314111.3319822

    Accepted author manuscript, 1.86 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License


Monocular gaze depth estimation using the vestibulo-ocular reflex

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 25/06/2019
Host publication: Proceedings - ETRA 2019: 2019 ACM Symposium on Eye Tracking Research and Applications
Editors: Stephen N. Spencer
Place of publication: New York
Publisher: ACM
Number of pages: 9
ISBN (electronic): 9781450367097
ISBN (print): 9781450367097
Original language: English

Publication series

Name: Eye Tracking Research and Applications Symposium (ETRA)

Abstract

Gaze depth estimation presents a challenge for eye tracking in 3D. This work investigates a novel approach to the problem based on eye movement mediated by the vestibulo-ocular reflex (VOR). The VOR stabilises gaze on a target during head movement by rotating the eye in the opposite direction, and VOR gain increases the closer the fixated target is to the viewer. We present a theoretical analysis of the relationship between VOR gain and depth, which we investigate with empirical data collected in a user study (N=10). We show that VOR gain can be captured using pupil centres, and we propose and evaluate a practical method for gaze depth estimation based on a generic function of VOR gain and a two-point depth calibration. The results show that VOR gain is comparable with vergence in capturing depth while requiring only one eye, and they provide insight into open challenges in harnessing VOR gain as a robust measure.
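The two-point calibration described in the abstract can be sketched as follows. This is a minimal illustration, assuming a hypothetical idealized gain model g(d) = a + b/d (gain falls off with the reciprocal of target depth d); the paper's actual generic function and parameters are not reproduced here, and `fit_two_point` / `estimate_depth` are illustrative names, not the authors' API:

```python
def fit_two_point(g1: float, d1: float, g2: float, d2: float) -> tuple[float, float]:
    """Fit the assumed model g(d) = a + b/d through two calibration
    samples (gain g1 at known depth d1, gain g2 at known depth d2)."""
    b = (g1 - g2) / (1.0 / d1 - 1.0 / d2)
    a = g1 - b / d1
    return a, b

def estimate_depth(gain: float, a: float, b: float) -> float:
    """Invert g = a + b/d to recover depth: d = b / (g - a)."""
    return b / (gain - a)

# Example: two calibration fixations at 0.5 m and 2.0 m with measured
# VOR gains of 1.20 and 1.05 (hypothetical values for illustration).
a, b = fit_two_point(1.20, 0.5, 1.05, 2.0)
depth = estimate_depth(1.10, a, b)  # depth of a new fixation with gain 1.10
```

With these example values the fit gives a = 1.0 and b = 0.1, so a measured gain of 1.10 maps to a depth of 1.0 m; higher gains map to nearer targets, matching the abstract's observation that gain increases as the fixated target approaches the viewer.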
