
Utilizing sensor fusion in markerless mobile augmented reality

Research output: Contribution in Book/Report/Proceedings › Paper

Published

Publication date: 08/2011
Host publication: MobileHCI '11 Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services
Place of publication: New York
Publisher: ACM
Number of pages: 4
ISBN (Print): 9781450305419
Original language: English

Conference

Conference: 13th International Conference on Human-Computer Interaction with Mobile Devices and Services
Country: Sweden
City: Stockholm
Period: 30/08/11 – 2/09/11


Abstract

One of the key challenges of markerless Augmented Reality (AR) systems, where no a priori information about the environment is available, is map and scale initialization. In such systems the scale is unknown, as it is impossible to determine scale from a sequence of images alone. Establishing scale is vital for ensuring that augmented objects are contextually sensitive to the environment they are projected upon. In this paper we demonstrate a sensor and vision fusion approach for robust and user-friendly initialization of map and scale. The map is initialized using the inbuilt accelerometers, whilst scale is initialized using the camera's auto-focus capability. The latter is possible by applying the Depth From Focus (DFF) method, which was, until now, limited to high-precision camera systems. The demonstration illustrates the benefits of such a system, running on a commercially available mobile phone, the Nokia N900.
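The scale-initialization idea in the abstract can be sketched in a few lines: the Depth From Focus step recovers a metric depth from the lens position at which autofocus locks (via the thin-lens equation), and dividing that by the same point's depth in the vision-only map yields the map's metric scale. This is a minimal illustrative sketch, not the paper's implementation; the function names and all numeric values (focal length, image distance, map depth) are hypothetical.

```python
# Illustrative sketch of scale initialization via Depth From Focus (DFF).
# Assumes a thin-lens camera model; all names and values are hypothetical,
# not taken from the paper or the Nokia N900 camera driver.

def depth_from_focus(focal_length_m: float, lens_to_sensor_m: float) -> float:
    """Thin-lens equation 1/f = 1/d_obj + 1/d_img, solved for d_obj:
    the metric distance to the point the autofocus locked onto."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / lens_to_sensor_m)

def metric_scale(metric_depth_m: float, map_depth_units: float) -> float:
    """Ratio between the metric depth from DFF and the depth of the same
    point in the vision-only map, whose units are otherwise arbitrary."""
    return metric_depth_m / map_depth_units

# Hypothetical numbers: a 5.2 mm focal length and an autofocus lock at a
# 5.3 mm lens-to-sensor distance place the focused point ~0.28 m away.
d = depth_from_focus(0.0052, 0.0053)   # metric depth of the focused point
s = metric_scale(d, 2.0)               # map reports the point at 2.0 units
```

Multiplying every map coordinate by `s` would then express the map in metres, which is what makes augmented objects render at a size consistent with the real scene.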