
Vision-Based Detection of Mobile Smart Objects

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 10/2008
Host publication: Lecture Notes in Computer Science: Smart Sensing and Context
Publisher: Springer
Pages: 27-40
Number of pages: 14
Volume: 5279/2008
Original language: English
Event: EuroSSC 2008 - Zurich, CH

Conference

Conference: EuroSSC 2008
City: Zurich, CH

Conference

ConferenceEuroSSC 2008
CityZurich, CH
Period1/01/00 → …

Abstract

We evaluate an approach in which mobile smart objects cooperate with projector-camera systems to achieve interactive projected displays on their surfaces without changing their appearance or function. Smart objects describe their appearance directly to the projector-camera system, enabling vision-based detection based on their natural appearance. This detection is a significant challenge, as objects differ in appearance and appear at varying distances and orientations with respect to the tracking camera. We investigate four detection approaches representing different appearance cues and contribute three experimental studies analysing the impact on detection performance of, firstly, scale and rotation, secondly, the combination of multiple appearance cues and, thirdly, the use of context information from the smart object. We find that appearance descriptions must be trained at the scales and orientations that give the best detection performance, that multiple cues provide a clear performance gain over a single cue, and that context sensing masks distractions and clutter, further improving detection performance.
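
For illustration only, the sketch below shows the general idea the abstract describes: fusing confidence scores from several appearance cues and using context the smart object reports about itself (here, whether it is moving) to reject inconsistent candidates. The cue names, weights, and data structures are hypothetical assumptions for this sketch, not the authors' method or API.

```python
# Hypothetical sketch: multi-cue score fusion with context gating.
# Cue names, weights, and structures are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Candidate:
    """A candidate detection of a smart object in the camera image."""
    object_id: str
    cue_scores: Dict[str, float]   # per-cue confidence in [0, 1]
    is_moving: bool                # apparent motion, e.g. from frame differencing


@dataclass
class Context:
    """Context the smart object senses and reports about itself."""
    object_id: str
    is_moving: bool                # e.g. from an onboard accelerometer


def fuse_cues(candidate: Candidate, weights: Dict[str, float]) -> float:
    """Weighted average of the available appearance-cue scores."""
    total = sum(weights.get(cue, 0.0) for cue in candidate.cue_scores)
    if total == 0.0:
        return 0.0
    return sum(weights.get(cue, 0.0) * score
               for cue, score in candidate.cue_scores.items()) / total


def detect(candidates: List[Candidate], contexts: Dict[str, Context],
           weights: Dict[str, float], threshold: float = 0.5) -> List[str]:
    """Keep candidates whose fused score passes the threshold and whose
    apparent motion agrees with the motion the object itself reports."""
    accepted = []
    for cand in candidates:
        ctx = contexts.get(cand.object_id)
        if ctx is not None and ctx.is_moving != cand.is_moving:
            continue  # context masks distractors inconsistent with the object's state
        if fuse_cues(cand, weights) >= threshold:
            accepted.append(cand.object_id)
    return accepted


if __name__ == "__main__":
    weights = {"colour": 0.3, "texture": 0.4, "shape": 0.3}   # hypothetical cues
    candidates = [
        Candidate("mug-01", {"colour": 0.8, "texture": 0.7, "shape": 0.6}, is_moving=False),
        Candidate("mug-01", {"colour": 0.9, "texture": 0.2, "shape": 0.3}, is_moving=True),
    ]
    contexts = {"mug-01": Context("mug-01", is_moving=False)}
    print(detect(candidates, contexts, weights))   # -> ['mug-01']
```

In this toy example, only the candidate whose apparent motion matches the object's self-reported state and whose fused cue score exceeds the threshold is accepted, mirroring the abstract's finding that multiple cues and context sensing both improve detection.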