
Electronic data

  • Accepted Manuscript

    Rights statement: © ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ETRA '22: 2022 Symposium on Eye Tracking Research and Applications https://dl.acm.org/doi/10.1145/3517031.3529642

    Accepted author manuscript, 1.52 MB, PDF document

    Available under license: CC BY-NC: Creative Commons Attribution-NonCommercial 4.0 International License

Links

Text available via DOI:


Real-time head-based deep-learning model for gaze probability regions in collaborative VR

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Real-time head-based deep-learning model for gaze probability regions in collaborative VR. / Bovo, Riccardo; Giunchi, Daniele; Sidenmark, Ludwig et al.
ACM Symposium on Eye Tracking Research and Applications. New York: ACM, 2022. 6.
Harvard

Bovo, R, Giunchi, D, Sidenmark, L, Costanza, E, Gellersen, H & Heinis, T 2022, Real-time head-based deep-learning model for gaze probability regions in collaborative VR. in ACM Symposium on Eye Tracking Research and Applications., 6, ACM, New York, ETRA '22: 2022 Symposium on Eye Tracking Research and Applications, Seattle, Washington, United States, 8/06/22. https://doi.org/10.1145/3517031.3529642

APA

Bovo, R., Giunchi, D., Sidenmark, L., Costanza, E., Gellersen, H., & Heinis, T. (2022). Real-time head-based deep-learning model for gaze probability regions in collaborative VR. In ACM Symposium on Eye Tracking Research and Applications (Article 6). ACM. https://doi.org/10.1145/3517031.3529642

Vancouver

Bovo R, Giunchi D, Sidenmark L, Costanza E, Gellersen H, Heinis T. Real-time head-based deep-learning model for gaze probability regions in collaborative VR. In: ACM Symposium on Eye Tracking Research and Applications. New York: ACM; 2022. 6. Epub 2022 Jun 8. doi: 10.1145/3517031.3529642

Author

Bovo, Riccardo ; Giunchi, Daniele ; Sidenmark, Ludwig et al. / Real-time head-based deep-learning model for gaze probability regions in collaborative VR. ACM Symposium on Eye Tracking Research and Applications. New York : ACM, 2022.

Bibtex

@inproceedings{1c85559b2deb42498e7907d9d9040ff6,
title = "Real-time head-based deep-learning model for gaze probability regions in collaborative VR",
abstract = "Eye behaviour has gained much interest in the VR research community as an interaction input and support for collaboration. Researchers implemented gaze inference models when eye-tracking is missing by using head behavior and saliency. However, these solutions are resource-demanding and thus unfit for untethered devices, and their angle accuracy is around 7°, which can be a problem in high-density informative areas. To address this issue, we propose a lightweight deep learning model that generates the probability density function of the gaze as a percentile contour. This solution allows us to introduce a visual attention representation based on a region rather than a point and manage a trade-off between the ambiguity of a region and the error of a point. We tested our model in untethered devices with real-time performances; we evaluated its accuracy which outperforms our identified baselines (average fixation map and head direction).",
keywords = "Neural networks, Visual attention, Gaze inference, Gaze prediction",
author = "Riccardo Bovo and Daniele Giunchi and Ludwig Sidenmark and Enrico Costanza and Hans Gellersen and Thomas Heinis",
note = "{\textcopyright} ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ETRA '22: 2022 Symposium on Eye Tracking Research and Applications https://dl.acm.org/doi/10.1145/3517031.3529642; ETRA '22: 2022 Symposium on Eye Tracking Research and Applications ; Conference date: 08-06-2022 Through 11-06-2022",
year = "2022",
month = jun,
day = "11",
doi = "10.1145/3517031.3529642",
language = "English",
booktitle = "ACM Symposium on Eye Tracking Research and Applications",
publisher = "ACM",
url = "https://etra.acm.org/2022/",
}

RIS

TY - GEN

T1 - Real-time head-based deep-learning model for gaze probability regions in collaborative VR

AU - Bovo, Riccardo

AU - Giunchi, Daniele

AU - Sidenmark, Ludwig

AU - Costanza, Enrico

AU - Gellersen, Hans

AU - Heinis, Thomas

N1 - © ACM, 2022. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ETRA '22: 2022 Symposium on Eye Tracking Research and Applications https://dl.acm.org/doi/10.1145/3517031.3529642

PY - 2022/6/11

Y1 - 2022/6/11

N2 - Eye behaviour has gained much interest in the VR research community as an interaction input and support for collaboration. Researchers implemented gaze inference models when eye-tracking is missing by using head behavior and saliency. However, these solutions are resource-demanding and thus unfit for untethered devices, and their angle accuracy is around 7°, which can be a problem in high-density informative areas. To address this issue, we propose a lightweight deep learning model that generates the probability density function of the gaze as a percentile contour. This solution allows us to introduce a visual attention representation based on a region rather than a point and manage a trade-off between the ambiguity of a region and the error of a point. We tested our model in untethered devices with real-time performances; we evaluated its accuracy which outperforms our identified baselines (average fixation map and head direction).

AB - Eye behaviour has gained much interest in the VR research community as an interaction input and support for collaboration. Researchers implemented gaze inference models when eye-tracking is missing by using head behavior and saliency. However, these solutions are resource-demanding and thus unfit for untethered devices, and their angle accuracy is around 7°, which can be a problem in high-density informative areas. To address this issue, we propose a lightweight deep learning model that generates the probability density function of the gaze as a percentile contour. This solution allows us to introduce a visual attention representation based on a region rather than a point and manage a trade-off between the ambiguity of a region and the error of a point. We tested our model in untethered devices with real-time performances; we evaluated its accuracy which outperforms our identified baselines (average fixation map and head direction).

KW - Neural networks

KW - Visual attention

KW - Gaze inference

KW - Gaze prediction

U2 - 10.1145/3517031.3529642

DO - 10.1145/3517031.3529642

M3 - Conference contribution/Paper

BT - ACM Symposium on Eye Tracking Research and Applications

PB - ACM

CY - New York

T2 - ETRA '22: 2022 Symposium on Eye Tracking Research and Applications

Y2 - 8 June 2022 through 11 June 2022

ER -