
Electronic data

  • Fitts_Law_Uta

    Accepted author manuscript, 7.01 MB, PDF document


A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces. / Wagner, Uta; Lystbæk, Mathias; Manakhov, Pavel et al.
2023 CHI Conference on Human Factors in Computing Systems. New York: ACM, 2023.

Harvard

Wagner, U, Lystbæk, M, Manakhov, P, Grønbæk, JE, Pfeuffer, K & Gellersen, H 2023, A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces. in 2023 CHI Conference on Human Factors in Computing Systems. ACM, New York. https://doi.org/10.1145/3544548.3581423

APA

Wagner, U., Lystbæk, M., Manakhov, P., Grønbæk, J. E., Pfeuffer, K., & Gellersen, H. (2023). A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces. In 2023 CHI Conference on Human Factors in Computing Systems. ACM. https://doi.org/10.1145/3544548.3581423

Vancouver

Wagner U, Lystbæk M, Manakhov P, Grønbæk JE, Pfeuffer K, Gellersen H. A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces. In 2023 CHI Conference on Human Factors in Computing Systems. New York: ACM; 2023. doi: 10.1145/3544548.3581423

Author

Wagner, Uta ; Lystbæk, Mathias ; Manakhov, Pavel et al. / A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces. 2023 CHI Conference on Human Factors in Computing Systems. New York : ACM, 2023.

Bibtex

@inproceedings{e3232139afc0425b8f6f634d5924c501,
title = "A Fitts{\textquoteright} Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces",
abstract = "Gaze-Hand Alignment has recently been proposed for multimodal selection in 3D. The technique takes advantage of gaze for target pre-selection, as it naturally precedes manual input. Selection is then completed when manual input aligns with gaze on the target, without need for an additional click method. In this work we evaluate two alignment techniques, Gaze&Finger and Gaze&Handray, combining gaze with image plane pointing versus raycasting, in comparison with hands-only baselines and Gaze&Pinch as established multimodal technique. We used Fitts{\textquoteright} Law study design with targets presented at different depths in the visual scene, to assess effect of parallax on performance. The alignment techniques outperformed their respective hands-only baselines. Gaze&Finger is efficient when targets are close to the image plane but less performant with increasing target depth due to parallax. ",
author = "Uta Wagner and Mathias Lystb{\ae}k and Pavel Manakhov and Gr{\o}nb{\ae}k, {Jens Emil} and Ken Pfeuffer and Hans Gellersen",
year = "2023",
month = apr,
day = "19",
doi = "10.1145/3544548.3581423",
language = "English",
booktitle = "2023 CHI Conference on Human Factors in Computing Systems",
publisher = "ACM",
}

RIS

TY - GEN
T1 - A Fitts’ Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces
AU - Wagner, Uta
AU - Lystbæk, Mathias
AU - Manakhov, Pavel
AU - Grønbæk, Jens Emil
AU - Pfeuffer, Ken
AU - Gellersen, Hans
PY - 2023/4/19
Y1 - 2023/4/19
N2 - Gaze-Hand Alignment has recently been proposed for multimodal selection in 3D. The technique takes advantage of gaze for target pre-selection, as it naturally precedes manual input. Selection is then completed when manual input aligns with gaze on the target, without need for an additional click method. In this work we evaluate two alignment techniques, Gaze&Finger and Gaze&Handray, combining gaze with image plane pointing versus raycasting, in comparison with hands-only baselines and Gaze&Pinch as established multimodal technique. We used Fitts’ Law study design with targets presented at different depths in the visual scene, to assess effect of parallax on performance. The alignment techniques outperformed their respective hands-only baselines. Gaze&Finger is efficient when targets are close to the image plane but less performant with increasing target depth due to parallax.
AB - Gaze-Hand Alignment has recently been proposed for multimodal selection in 3D. The technique takes advantage of gaze for target pre-selection, as it naturally precedes manual input. Selection is then completed when manual input aligns with gaze on the target, without need for an additional click method. In this work we evaluate two alignment techniques, Gaze&Finger and Gaze&Handray, combining gaze with image plane pointing versus raycasting, in comparison with hands-only baselines and Gaze&Pinch as established multimodal technique. We used Fitts’ Law study design with targets presented at different depths in the visual scene, to assess effect of parallax on performance. The alignment techniques outperformed their respective hands-only baselines. Gaze&Finger is efficient when targets are close to the image plane but less performant with increasing target depth due to parallax.
U2 - 10.1145/3544548.3581423
DO - 10.1145/3544548.3581423
M3 - Conference contribution/Paper
BT - 2023 CHI Conference on Human Factors in Computing Systems
PB - ACM
CY - New York
ER -
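The abstract refers to a Fitts’ Law study design for comparing selection techniques. As background only, a minimal sketch of the standard Shannon formulation commonly used in such studies (the distances, widths, and times below are illustrative values, not data from the paper):

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of the Fitts' Law index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits/s: index of difficulty over movement time (seconds)."""
    return index_of_difficulty(distance, width) / movement_time

# Illustrative example: a target 0.30 m away and 0.10 m wide,
# acquired in 0.8 s.
id_bits = index_of_difficulty(0.30, 0.10)  # log2(4) = 2.0 bits
tp = throughput(0.30, 0.10, 0.8)           # 2.0 / 0.8 = 2.5 bits/s
```

Higher throughput indicates better pointing performance; comparing throughput across techniques is the usual way Fitts’ Law studies rank selection methods.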