
Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays. / Sidenmark, Ludwig; Prummer, Franziska; Newn, Joshua et al.
In: IEEE Transactions on Visualization and Computer Graphics, Vol. 29, No. 11, 30.11.2023, p. 4740-4750.

Vancouver

Sidenmark L, Prummer F, Newn J, Gellersen H. Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays. IEEE Transactions on Visualization and Computer Graphics. 2023 Nov 30;29(11):4740-4750. Epub 2023 Oct 2. doi: 10.1109/TVCG.2023.3320235

Author

Sidenmark, Ludwig ; Prummer, Franziska ; Newn, Joshua et al. / Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays. In: IEEE Transactions on Visualization and Computer Graphics. 2023 ; Vol. 29, No. 11. pp. 4740-4750.

BibTeX

@article{0515aff60eb14dca9665b431616632d0,
title = "Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays",
abstract = "This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.",
keywords = "3D Interaction, Lenses, Mathematical models, Performance evaluation, Pointing, Resists, Selection Performance, Task analysis, Three-dimensional displays, Virtual Reality, Visualization",
author = "Ludwig Sidenmark and Franziska Prummer and Joshua Newn and Hans Gellersen",
year = "2023",
month = nov,
day = "30",
doi = "10.1109/TVCG.2023.3320235",
language = "English",
volume = "29",
pages = "4740--4750",
journal = "IEEE Transactions on Visualization and Computer Graphics",
issn = "1077-2626",
publisher = "IEEE Computer Society",
number = "11",

}

RIS

TY - JOUR

T1 - Comparing Gaze, Head and Controller Selection of Dynamically Revealed Targets in Head-mounted Displays

AU - Sidenmark, Ludwig

AU - Prummer, Franziska

AU - Newn, Joshua

AU - Gellersen, Hans

PY - 2023/11/30

Y1 - 2023/11/30

N2 - This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.

AB - This paper presents a head-mounted virtual reality study that compared gaze, head, and controller pointing for selection of dynamically revealed targets. Existing studies on head-mounted 3D interaction have focused on pointing and selection tasks where all targets are visible to the user. Our study compared the effects of screen width (field of view), target amplitude and width, and prior knowledge of target location on modality performance. Results show that gaze and controller pointing are significantly faster than head pointing and that increased screen width only positively impacts performance up to a certain point. We further investigated the applicability of existing pointing models. Our analysis confirmed the suitability of previously proposed two-component models for all modalities while uncovering differences for gaze at known and unknown target positions. Our findings provide new empirical evidence for understanding input with gaze, head, and controller and are significant for applications that extend around the user.

KW - 3D Interaction

KW - Lenses

KW - Mathematical models

KW - Performance evaluation

KW - Pointing

KW - Resists

KW - Selection Performance

KW - Task analysis

KW - Three-dimensional displays

KW - Virtual Reality

KW - Visualization

U2 - 10.1109/TVCG.2023.3320235

DO - 10.1109/TVCG.2023.3320235

M3 - Journal article

VL - 29

SP - 4740

EP - 4750

JO - IEEE Transactions on Visualization and Computer Graphics

JF - IEEE Transactions on Visualization and Computer Graphics

SN - 1077-2626

IS - 11

ER -