

Vision-Based Detection of Mobile Smart Objects

Research output: Contribution in Book/Report/Proceedings › Paper


Publication date: 10/2008
Host publication: Lecture Notes in Computer Science: Smart Sensing and Context
Number of pages: 14
Original language: English


Conference: EuroSSC 2008
City: Zurich, CH
Period: 1/01/00 → …




Abstract

We evaluate an approach for mobile smart objects to cooperate with projector-camera systems to achieve interactive projected displays on their surfaces without changing their appearance or function. Smart objects describe their appearance directly to the projector-camera system, enabling vision-based detection based on their natural appearance. This detection is a significant challenge, as objects differ in appearance and appear at varying distances and orientations with respect to a tracking camera. We investigate four detection approaches representing different appearance cues and contribute three experimental studies analysing the impact on detection performance: firstly of scale and rotation, secondly of the combination of multiple appearance cues, and thirdly of the use of context information from the smart object. We find that the training of appearance descriptions must coincide with the scales and orientations that give the best detection performance, that multiple cues provide a clear performance gain over a single cue, and that context sensing masks distractions and clutter, further improving detection performance.
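The abstract's two main findings, that combining multiple appearance cues outperforms any single cue and that context sensed by the smart object can mask clutter, can be illustrated with a minimal sketch. The cue names, weights, and the "is moving" context report below are hypothetical placeholders, not the paper's actual detection pipeline:

```python
# Hypothetical sketch of multi-cue score fusion with context masking.
# Each appearance cue yields a normalized confidence in [0, 1] for a
# candidate detection; a weighted sum combines them. Context reported by
# the smart object (here, a motion state) discards contradictory candidates.

def fuse_cues(cue_scores, weights):
    """Weighted combination of per-cue confidences, normalized by total weight."""
    total = sum(weights.values())
    return sum(weights[cue] * score for cue, score in cue_scores.items()) / total

def apply_context(candidates, object_is_moving):
    """Keep only candidates whose observed motion state matches the
    smart object's own context report (assumed sensing, for illustration)."""
    return [c for c in candidates if c["moving"] == object_is_moving]

candidates = [
    {"id": "smart_object", "moving": True,
     "cues": {"colour": 0.8, "keypoints": 0.6}},
    {"id": "background_clutter", "moving": False,
     "cues": {"colour": 0.7, "keypoints": 0.2}},
]
weights = {"colour": 0.4, "keypoints": 0.6}  # illustrative cue weights

# The smart object reports that it is currently moving, so static
# clutter with a similar colour distribution is masked out up front.
visible = apply_context(candidates, object_is_moving=True)
scores = {c["id"]: fuse_cues(c["cues"], weights) for c in visible}
```

In this toy setup the clutter candidate scores well on colour alone, so the single-cue detector would be distracted; the weighted fusion and the context mask together leave only the true object as a detection candidate.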