
Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 30/11/2023
Journal: IEEE Transactions on Visualization and Computer Graphics
Volume: 29
Issue number: 11
Number of pages: 11
Pages (from-to): 4740–4750
Publication status: Published
Early online date: 2/10/23
Original language: English

Abstract

This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.
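The amplitude-and-width manipulation described above follows the classic Fitts' law framing of pointing difficulty. As rough context only, and not the two-component models evaluated in the article (their form and fitted parameters are given in the paper itself), here is a minimal sketch of the standard one-component Fitts prediction; the constants `a` and `b` are hypothetical placeholders for empirically fitted values:

```python
import math

def fitts_id(amplitude: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

def predicted_time(amplitude: float, width: float, a: float, b: float) -> float:
    """One-component Fitts model: T = a + b * ID.

    a (intercept, seconds) and b (slope, seconds per bit) are
    hypothetical constants here, not values from the paper.
    """
    return a + b * fitts_id(amplitude, width)

# Example: a target 0.32 m away and 0.04 m wide gives ID = log2(9) bits.
id_bits = fitts_id(0.32, 0.04)
t = predicted_time(0.32, 0.04, a=0.2, b=0.1)
```

Two-component models of the kind the study examines additionally separate the total selection time into distinct phases (for example, an initial visual search or reaction phase and a subsequent movement phase), which is why prior knowledge of the target position matters in the comparison.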