
Electronic data

  • TraceMatch

    Rights statement: Copyright is held by the owner/author(s)

    Accepted author manuscript, 1.15 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Text available via DOI:


TraceMatch: a computer vision technique for user input by tracing of animated controls

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Publication date: 12/09/2016
Host publication: UbiComp '16: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing
Place of publication: New York
Number of pages: 6
ISBN (Print): 9781450344616
Original language: English


Recent works have explored the concept of movement correlation interfaces, in which moving objects can be selected by matching the movement of the input device to that of the desired object. Previous techniques relied on a single modality (e.g. gaze or mid-air gestures) and specific hardware to issue commands. TraceMatch is a computer vision technique that enables input by movement correlation while abstracting from any particular input modality. The technique relies only on a conventional webcam and enables users to produce matching gestures with any body part, even whilst holding objects. We describe an implementation of the technique for acquisition of orbiting targets, evaluate algorithm performance for different target sizes and frequencies, and demonstrate use of the technique for remote control of graphical as well as physical objects with different body parts.
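The abstract does not include source code, but the core idea of movement correlation can be illustrated with a minimal sketch: tracked motion (e.g. of a hand in the webcam image) is compared against each on-screen target's known trajectory over a sliding window, and a target is selected when the correlation exceeds a threshold. The function names, the use of Pearson correlation, and the 0.8 threshold below are illustrative assumptions, not the paper's actual algorithm or parameters.

```python
import math

def pearson(a, b):
    # Pearson correlation coefficient of two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return 0.0 if va == 0 or vb == 0 else cov / (va * vb)

def match_score(observed_xy, target_xy):
    # Correlate horizontal and vertical components separately;
    # require both to agree by taking the weaker of the two.
    ox, oy = [p[0] for p in observed_xy], [p[1] for p in observed_xy]
    tx, ty = [p[0] for p in target_xy], [p[1] for p in target_xy]
    return min(pearson(ox, tx), pearson(oy, ty))

def select_target(observed_xy, targets, threshold=0.8):
    # Return the index of the best-matching target, or None if
    # no target's score reaches the (hypothetical) threshold.
    best_i, best_s = None, threshold
    for i, t in enumerate(targets):
        s = match_score(observed_xy, t)
        if s >= best_s:
            best_i, best_s = i, s
    return best_i

# Simulated orbiting target at 0.5 Hz, sampled at 30 fps for 2 s.
fps, freq = 30, 0.5
target = [(math.cos(2 * math.pi * freq * t / fps),
           math.sin(2 * math.pi * freq * t / fps)) for t in range(2 * fps)]
# Observed motion: the same orbit, scaled and offset in image space
# (correlation is invariant to such linear transforms).
observed = [(0.3 * x + 5.0, 0.3 * y + 2.0) for x, y in target]
# Distractor target orbiting at a different frequency (1.5 Hz).
distractor = [(math.cos(2 * math.pi * 1.5 * t / fps),
               math.sin(2 * math.pi * 1.5 * t / fps)) for t in range(2 * fps)]

print(select_target(observed, [distractor, target]))  # → 1
```

Because correlation discards absolute position and scale, the same matching works whether the user traces the orbit with a fingertip, the head, or a held object, which is the modality-agnostic property the abstract emphasises.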
