
Gaze+RST: Integrating Gaze and Multitouch for Remote Rotate-Scale-Translate Tasks

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Status: Published
Publication date: 2015
Host publication: CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems
Place of publication: New York
Publisher: ACM
Pages: 4179-4188
Number of pages: 10
ISBN (print): 9781450331456
Original language: English

Abstract

Our work investigates the use of gaze and multitouch to fluidly perform rotate-scale-translate (RST) tasks on large displays. The work specifically aims to understand whether gaze can provide a benefit in such tasks, how task complexity affects performance, and how gaze and multitouch can be combined to create an integral input structure suited to RST. We present four techniques that each strike a different balance between gaze-based and touch-based translation while maintaining concurrent rotation and scaling operations. A 16-participant empirical evaluation revealed that three of our four techniques present viable options for this scenario, and that larger distances and rotation/scaling operations can significantly affect a gaze-based translation configuration. Furthermore, we uncover new insights regarding multimodal integrality, finding that gaze and touch can be combined into configurations that correspond to integral or separable input structures.
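The abstract describes the techniques at the level of input mappings rather than code, but the core division of labour (gaze supplies the translation target while a two-finger touch gesture concurrently drives rotation and scale) can be sketched. The following is a minimal, hypothetical Python sketch of one gaze-dominant configuration, assuming per-frame gaze and two-touch samples; the names (Transform, update_rst) and the frame-based sampling are illustrative assumptions, not the authors' implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Transform:
    x: float = 0.0      # translation on the display (pixels)
    y: float = 0.0
    angle: float = 0.0  # rotation (radians)
    scale: float = 1.0

def update_rst(t: Transform, gaze, touch_prev, touch_cur) -> Transform:
    """One frame of a gaze-translates / touch-rotates-and-scales mapping.

    gaze:       (x, y) current gaze point on the display
    touch_prev: ((x1, y1), (x2, y2)) two touch points, previous frame
    touch_cur:  ((x1, y1), (x2, y2)) the same two touches, current frame
    """
    # Gaze drives translation: move the object to the gaze point.
    t.x, t.y = gaze

    # Touch drives rotation and scale via the two-finger vector.
    (p1, p2), (q1, q2) = touch_prev, touch_cur
    v_prev = (p2[0] - p1[0], p2[1] - p1[1])
    v_cur = (q2[0] - q1[0], q2[1] - q1[1])

    # Scale: ratio of finger separations between frames.
    d_prev = math.hypot(*v_prev)
    d_cur = math.hypot(*v_cur)
    if d_prev > 0:
        t.scale *= d_cur / d_prev

    # Rotation: change in the angle of the two-finger vector.
    t.angle += math.atan2(v_cur[1], v_cur[0]) - math.atan2(v_prev[1], v_prev[0])
    return t

# Example frame: gaze fixates the target while the fingers rotate slightly.
t = update_rst(Transform(), gaze=(640, 360),
               touch_prev=((100, 100), (200, 100)),
               touch_cur=((100, 100), (200, 120)))
```

Snapping translation directly to the gaze point is only the simplest gaze-dominant variant; per the abstract, the paper's four techniques differ precisely in how translation is divided between gaze and touch while rotation and scaling remain on touch.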