
Cross-device gaze-supported point-to-point content transfer

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Cross-device gaze-supported point-to-point content transfer. / Turner, Jayson; Bulling, Andreas; Alexander, Jason et al.
ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications. New York: ACM, 2014. p. 19-26.

Harvard

Turner, J, Bulling, A, Alexander, J & Gellersen, H 2014, Cross-device gaze-supported point-to-point content transfer. in ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications. ACM, New York, pp. 19-26. https://doi.org/10.1145/2578153.2578155

APA

Turner, J., Bulling, A., Alexander, J., & Gellersen, H. (2014). Cross-device gaze-supported point-to-point content transfer. In ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications (pp. 19-26). ACM. https://doi.org/10.1145/2578153.2578155

Vancouver

Turner J, Bulling A, Alexander J, Gellersen H. Cross-device gaze-supported point-to-point content transfer. In ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications. New York: ACM. 2014. p. 19-26. doi: 10.1145/2578153.2578155

Author

Turner, Jayson ; Bulling, Andreas ; Alexander, Jason et al. / Cross-device gaze-supported point-to-point content transfer. ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications. New York : ACM, 2014. pp. 19-26

Bibtex

@inproceedings{18d99d36b6f34e479c789b483262a484,
title = "Cross-device gaze-supported point-to-point content transfer",
abstract = "Within a pervasive computing environment, we see content on shared displays that we wish to acquire and use in a specific way, i.e., with an application on a personal device, transferring from point-to-point. The eyes as input can indicate intention to interact with a service, providing implicit pointing as a result. In this paper we investigate the use of gaze and manual input for the positioning of gaze-acquired content on personal devices. We evaluate two main techniques: (1) Gaze Positioning, transfer of content using gaze with manual input to confirm actions; (2) Manual Positioning, content is selected with gaze but final positioning is performed by manual input, involving a switch of modalities from gaze to manual input. A first user study compares these techniques applied to direct and indirect manual input configurations: a tablet with touch input and a laptop with mouse input. A second study evaluated our techniques in an application scenario involving distractor targets. Our overall results showed general acceptance and understanding of all conditions, although there were clear individual user preferences dependent on familiarity and preference toward gaze, touch, or mouse input.",
author = "Jayson Turner and Andreas Bulling and Jason Alexander and Hans Gellersen",
year = "2014",
month = mar,
day = "26",
doi = "10.1145/2578153.2578155",
language = "English",
isbn = "9781450327510",
pages = "19--26",
booktitle = "ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications",
publisher = "ACM",
address = "New York",
}

RIS

TY - GEN

T1 - Cross-device gaze-supported point-to-point content transfer

AU - Turner, Jayson

AU - Bulling, Andreas

AU - Alexander, Jason

AU - Gellersen, Hans

PY - 2014/3/26

Y1 - 2014/3/26

N2 - Within a pervasive computing environment, we see content on shared displays that we wish to acquire and use in a specific way, i.e., with an application on a personal device, transferring from point-to-point. The eyes as input can indicate intention to interact with a service, providing implicit pointing as a result. In this paper we investigate the use of gaze and manual input for the positioning of gaze-acquired content on personal devices. We evaluate two main techniques: (1) Gaze Positioning, transfer of content using gaze with manual input to confirm actions; (2) Manual Positioning, content is selected with gaze but final positioning is performed by manual input, involving a switch of modalities from gaze to manual input. A first user study compares these techniques applied to direct and indirect manual input configurations: a tablet with touch input and a laptop with mouse input. A second study evaluated our techniques in an application scenario involving distractor targets. Our overall results showed general acceptance and understanding of all conditions, although there were clear individual user preferences dependent on familiarity and preference toward gaze, touch, or mouse input.

AB - Within a pervasive computing environment, we see content on shared displays that we wish to acquire and use in a specific way, i.e., with an application on a personal device, transferring from point-to-point. The eyes as input can indicate intention to interact with a service, providing implicit pointing as a result. In this paper we investigate the use of gaze and manual input for the positioning of gaze-acquired content on personal devices. We evaluate two main techniques: (1) Gaze Positioning, transfer of content using gaze with manual input to confirm actions; (2) Manual Positioning, content is selected with gaze but final positioning is performed by manual input, involving a switch of modalities from gaze to manual input. A first user study compares these techniques applied to direct and indirect manual input configurations: a tablet with touch input and a laptop with mouse input. A second study evaluated our techniques in an application scenario involving distractor targets. Our overall results showed general acceptance and understanding of all conditions, although there were clear individual user preferences dependent on familiarity and preference toward gaze, touch, or mouse input.

U2 - 10.1145/2578153.2578155

DO - 10.1145/2578153.2578155

M3 - Conference contribution/Paper

SN - 9781450327510

SP - 19

EP - 26

BT - ETRA '14 Proceedings of the Symposium on Eye Tracking Research and Applications

PB - ACM

CY - New York

ER -