
Eye Drop: an interaction concept for gaze-supported point-to-point content transfer

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 2/12/2013
Host publication: MUM '13 Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia
Place of publication: New York
Publisher: ACM
Number of pages: 4
ISBN (print): 9781450326483
Original language: English
Event: 12th International Conference on Mobile and Ubiquitous Multimedia - Luleå, Sweden
Duration: 2/12/2013 - 5/12/2013

Conference

Conference: 12th International Conference on Mobile and Ubiquitous Multimedia
Country/Territory: Sweden
City: Luleå
Period: 2/12/13 - 5/12/13


Abstract

Shared displays in our environment contain content that we desire, and we often acquire content for a specific purpose, e.g., a phone number to place a call. We have developed Eye Drop, a content transfer concept that provides techniques for fluid content acquisition, transfer from shared displays, and local positioning on personal devices using gaze combined with manual input. The eyes naturally focus on content we desire, so our techniques use gaze to point remotely, removing the need for explicit pointing on the user's part. A manual trigger from a personal device confirms selection, and transfer is performed using gaze or manual input to smoothly transition content to a specific location on the personal device. This work demonstrates how these techniques can be applied to acquire content and apply actions to it through a natural sequence of interaction. We demonstrate a proof-of-concept prototype through five implemented application scenarios.
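The interaction sequence the abstract describes (gaze rests on remote content, a manual trigger on the personal device confirms selection, then the content is dropped at a location on that device) can be sketched as a small state machine. This is a hypothetical illustration, not the paper's implementation; all class, state, and method names (`EyeDropTransfer`, `on_gaze`, `on_trigger`, `on_drop`) are invented for this sketch.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # no content under gaze
    FOCUSED = auto()    # gaze rests on shared-display content (remote pointing)
    ACQUIRED = auto()   # manual trigger on the personal device confirmed selection
    DROPPED = auto()    # content positioned locally on the personal device

class EyeDropTransfer:
    """Hypothetical state machine for a gaze-then-trigger transfer sequence."""

    def __init__(self):
        self.state = State.IDLE
        self.content = None
        self.drop_position = None

    def on_gaze(self, content):
        # Gaze points remotely: whatever the eyes rest on becomes the candidate,
        # with no explicit pointing required from the user.
        if self.state in (State.IDLE, State.FOCUSED):
            self.content = content
            self.state = State.FOCUSED

    def on_trigger(self):
        # A manual trigger from the personal device confirms the selection.
        if self.state is State.FOCUSED:
            self.state = State.ACQUIRED

    def on_drop(self, position):
        # Transfer: the acquired content transitions to a specific
        # location on the personal device.
        if self.state is State.ACQUIRED:
            self.drop_position = position
            self.state = State.DROPPED
            return self.content
        return None
```

A usage pass, e.g. acquiring a phone number from a shared display and dropping it into a dialer field, would call `on_gaze("phone-number")`, then `on_trigger()`, then `on_drop((x, y))`.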