
Electronic data

  • nocopyright-145-CameraReady_Gaze_Hand_Alignment_ETRA2022

    Rights statement: © ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction, 2022 http://doi.acm.org/10.1145/3530886

    Accepted author manuscript, 5.91 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI: http://doi.acm.org/10.1145/3530886


Gaze-Hand Alignment: Combining Eye Gaze and Mid-Air Pointing for Interacting with Menus in Augmented Reality

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published
  • Mathias Lystbæk
  • Peter Rosenberg
  • Ken Pfeuffer
  • Jens Emil Grønbæk
  • Hans Gellersen
Journal publication date: 13/05/2022
Journal: Proceedings of the ACM on Human-Computer Interaction
Issue number: ETRA
Volume: 6
Number of pages: 18
Pages (from-to): 145:1-145:18
Publication status: Published
Original language: English

Abstract

Gaze and freehand gestures suit Augmented Reality as users can interact with objects at a distance without the need for a separate input device. We propose Gaze-Hand Alignment as a novel multimodal selection principle, defined by the concurrent use of gaze and hand for pointing, with alignment of their input on an object serving as the selection trigger. Gaze naturally precedes manual action and is leveraged for pre-selection, and manual crossing of a pre-selected target completes the selection. We demonstrate the principle in two novel techniques: Gaze&Finger, for input by direct alignment of hand and finger raised into the line of sight, and Gaze&Hand, for input by indirect alignment of a cursor with relative hand movement. In a menu selection experiment, we evaluate the techniques in comparison with Gaze&Pinch and a hands-only baseline. The study showed the gaze-assisted techniques to outperform hands-only input and gave insight into the trade-offs of combining gaze with direct or indirect, and spatial or semantic, freehand gestures.
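
The abstract describes the alignment-based selection trigger at a high level: gaze pre-selects a target, and the hand crossing that same target completes the selection. The sketch below illustrates that per-frame logic under stated assumptions: a hypothetical scene object with a raycast method and hypothetical gaze and hand rays stand in for the actual system; this is an illustration of the principle, not the authors' implementation.

```python
# Minimal sketch of the Gaze-Hand Alignment selection principle.
# `scene.raycast(...)`, the ray inputs and the Target type are hypothetical
# stand-ins, not the authors' implementation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Target:
    name: str


def gaze_target(scene, gaze_ray) -> Optional[Target]:
    """Return the menu item currently hit by the gaze ray (pre-selection)."""
    return scene.raycast(gaze_ray)


def hand_target(scene, hand_ray) -> Optional[Target]:
    """Return the menu item hit by the hand input: a finger raised into the
    line of sight (Gaze&Finger) or a cursor driven by relative hand movement
    (Gaze&Hand)."""
    return scene.raycast(hand_ray)


def update(scene, gaze_ray, hand_ray) -> Optional[Target]:
    """One frame of the alignment logic: a selection fires only when the
    hand input crosses the target that gaze has pre-selected."""
    pre_selected = gaze_target(scene, gaze_ray)
    if pre_selected is None:
        return None
    crossed = hand_target(scene, hand_ray)
    if crossed is pre_selected:   # gaze and hand input are aligned
        return pre_selected       # selection trigger
    return None
```

In Gaze&Finger the hand ray would come directly from the finger raised into the line of sight, while in Gaze&Hand relative hand movement would drive a cursor instead; in both cases the alignment test on the pre-selected target is the same.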

Bibliographic note

© ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the ACM on Human-Computer Interaction, 2022 http://doi.acm.org/10.1145/3530886