
Electronic data

  • AVI2016_3Point_Interaction

    Rights statement: © 2016 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in AVI '16 Proceedings of the International Working Conference on Advanced Visual Interfaces http://dx.doi.org/10.1145/2909132.2909251

    Accepted author manuscript, 498 KB, PDF document

    Available under license: None


Three-point interaction: combining bi-manual direct touch with gaze

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 7/06/2016
Host publication: AVI '16 Proceedings of the International Working Conference on Advanced Visual Interfaces
Place of publication: New York
Publisher: ACM
Pages: 168-175
Number of pages: 8
ISBN (print): 9781450341318
Original language: English

Abstract

The benefits of two-point interaction for tasks that require users to manipulate multiple entities or dimensions simultaneously are widely known. Two-point interaction has become common, e.g., when pinching to zoom with two fingers on a smartphone. We propose a novel interaction technique that implements three-point interaction by augmenting two-finger direct touch with gaze as a third input channel. We evaluate two key characteristics of our technique in two user studies. In the first, participants used the technique for object selection; in the second, they performed a 3D matching task that required simultaneous continuous input from the fingers and the eyes. Our results show that in both cases participants learned to interact with three input channels without cognitive overload. Participants' performance tended towards fast selection times in the first study and exhibited parallel interaction in the second. These results are promising and show that there is scope for additional input channels beyond two-point interaction.
