Eye Drop: an interaction concept for gaze-supported point-to-point content transfer

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published

Standard

Eye Drop: an interaction concept for gaze-supported point-to-point content transfer. / Turner, Jayson; Bulling, Andreas; Alexander, Jason et al.
MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia. New York: ACM, 2013.

Harvard

Turner, J, Bulling, A, Alexander, J & Gellersen, H 2013, Eye Drop: an interaction concept for gaze-supported point-to-point content transfer. in MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia. ACM, New York, 12th International Conference on Mobile and Ubiquitous Multimedia, Luleå, Sweden, 2/12/13. https://doi.org/10.1145/2541831.2541868

APA

Turner, J., Bulling, A., Alexander, J., & Gellersen, H. (2013). Eye Drop: an interaction concept for gaze-supported point-to-point content transfer. In MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia. ACM. https://doi.org/10.1145/2541831.2541868

Vancouver

Turner J, Bulling A, Alexander J, Gellersen H. Eye Drop: an interaction concept for gaze-supported point-to-point content transfer. In MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia. New York: ACM. 2013. doi: 10.1145/2541831.2541868

Author

Turner, Jayson ; Bulling, Andreas ; Alexander, Jason et al. / Eye Drop : an interaction concept for gaze-supported point-to-point content transfer. MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia. New York : ACM, 2013.

Bibtex

@inproceedings{83a9a45cbe7a4850814cda70f14be75c,
title = "Eye Drop: an interaction concept for gaze-supported point-to-point content transfer",
abstract = "The shared displays in our environment contain content that we desire. Furthermore, we often acquire content for a specific purpose, i.e., the acquisition of a phone number to place a call. We have developed a content transfer concept, Eye Drop. Eye Drop provides techniques that allow fluid content acquisition, transfer from shared displays, and local positioning on personal devices using gaze combined with manual input. The eyes naturally focus on content we desire. Our techniques use gaze to point remotely, removing the need for explicit pointing on the user's part. A manual trigger from a personal device confirms selection. Transfer is performed using gaze or manual input to smoothly transition content to a specific location on a personal device. This work demonstrates how techniques can be applied to acquire and apply actions to content through a natural sequence of interaction. We demonstrate a proof of concept prototype through five implemented application scenarios.",
author = "Jayson Turner and Andreas Bulling and Jason Alexander and Hans Gellersen",
year = "2013",
month = dec,
day = "2",
doi = "10.1145/2541831.2541868",
language = "English",
isbn = "9781450326483",
booktitle = "MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia",
publisher = "ACM",
note = "12th International Conference on Mobile and Ubiquitous Multimedia ; Conference date: 02-12-2013 Through 05-12-2013",

}

RIS

TY - GEN

T1 - Eye Drop: an interaction concept for gaze-supported point-to-point content transfer

T2 - 12th International Conference on Mobile and Ubiquitous Multimedia

AU - Turner, Jayson

AU - Bulling, Andreas

AU - Alexander, Jason

AU - Gellersen, Hans

PY - 2013/12/2

Y1 - 2013/12/2

N2 - The shared displays in our environment contain content that we desire. Furthermore, we often acquire content for a specific purpose, i.e., the acquisition of a phone number to place a call. We have developed a content transfer concept, Eye Drop. Eye Drop provides techniques that allow fluid content acquisition, transfer from shared displays, and local positioning on personal devices using gaze combined with manual input. The eyes naturally focus on content we desire. Our techniques use gaze to point remotely, removing the need for explicit pointing on the user's part. A manual trigger from a personal device confirms selection. Transfer is performed using gaze or manual input to smoothly transition content to a specific location on a personal device. This work demonstrates how techniques can be applied to acquire and apply actions to content through a natural sequence of interaction. We demonstrate a proof of concept prototype through five implemented application scenarios.

AB - The shared displays in our environment contain content that we desire. Furthermore, we often acquire content for a specific purpose, i.e., the acquisition of a phone number to place a call. We have developed a content transfer concept, Eye Drop. Eye Drop provides techniques that allow fluid content acquisition, transfer from shared displays, and local positioning on personal devices using gaze combined with manual input. The eyes naturally focus on content we desire. Our techniques use gaze to point remotely, removing the need for explicit pointing on the user's part. A manual trigger from a personal device confirms selection. Transfer is performed using gaze or manual input to smoothly transition content to a specific location on a personal device. This work demonstrates how techniques can be applied to acquire and apply actions to content through a natural sequence of interaction. We demonstrate a proof of concept prototype through five implemented application scenarios.

U2 - 10.1145/2541831.2541868

DO - 10.1145/2541831.2541868

M3 - Conference contribution/Paper

SN - 9781450326483

BT - MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia

PB - ACM

CY - New York

Y2 - 2 December 2013 through 5 December 2013

ER -