
Utilizing sensor fusion in markerless mobile augmented reality

Research output: Contribution in Book/Report/Proceedings with ISBN/ISSN: Conference contribution/Paper

Published
Publication date: 08/2011
Host publication: MobileHCI '11: Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services
Place of publication: New York
Publisher: ACM
Number of pages: 4
ISBN (Print): 9781450305419
Original language: English
Event: 13th International Conference on Human-Computer Interaction with Mobile Devices and Services - Stockholm, Sweden
Duration: 30/08/2011 - 2/09/2011

Conference

Conference: 13th International Conference on Human-Computer Interaction with Mobile Devices and Services
Country: Sweden
City: Stockholm
Period: 30/08/11 - 2/09/11


Abstract

One of the key challenges of markerless Augmented Reality (AR) systems, where no a priori information about the environment is available, is map and scale initialization. In such systems the scale is unknown, as it cannot be determined from a sequence of images alone. Establishing scale is vital for ensuring that augmented objects are contextually sensitive to the environment onto which they are projected. In this paper we demonstrate a sensor and vision fusion approach for robust and user-friendly initialization of both map and scale. The map is initialized using the device's built-in accelerometers, while the scale is initialized using the camera's auto-focus capability. The latter is made possible by applying the Depth From Focus (DFF) method, which was until now limited to high-precision camera systems. The demonstration illustrates the benefits of such a system running on a commercially available mobile phone, the Nokia N900.
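The Depth From Focus idea in the abstract can be sketched in a few lines: sweep the lens through a set of focus distances, score each frame with a focus measure, and take the distance that maximizes sharpness as the depth of the scene plane; that metric depth then fixes the scale of the otherwise unit-free map. The sketch below is illustrative only, not the authors' implementation: the gradient-variance focus measure, the function names, and the synthetic data are all assumptions.

```python
def sharpness(image_row):
    """Focus measure: variance of first differences along a scanline.

    A simple proxy for the variance-of-Laplacian measure often used in
    focus stacking; in-focus images have stronger local gradients.
    """
    diffs = [b - a for a, b in zip(image_row, image_row[1:])]
    mean = sum(diffs) / len(diffs)
    return sum((d - mean) ** 2 for d in diffs) / len(diffs)


def depth_from_focus(focus_sweep):
    """Pick the focus distance whose frame is sharpest.

    focus_sweep: list of (focus_distance_metres, image_row) pairs captured
    while driving the autofocus through its range. Returns the distance of
    the sharpest frame, taken as the depth of the dominant scene plane.
    """
    return max(focus_sweep, key=lambda fs: sharpness(fs[1]))[0]


def metric_scale(metric_depth, map_depth):
    """Ratio converting the map's arbitrary units into metres."""
    return metric_depth / map_depth


# Synthetic sweep: the frame focused at 0.5 m has strong gradients,
# the out-of-focus frames are flat.
sharp = [0, 10, 0, 10, 0, 10, 0, 10]
blurred = [5, 5, 5, 5, 5, 5, 5, 5]
sweep = [(0.3, blurred), (0.5, sharp), (1.0, blurred)]

depth = depth_from_focus(sweep)   # -> 0.5 (metres)
scale = metric_scale(depth, 2.0)  # map says the plane is 2.0 units away
```

With the plane known to be 0.5 m away but 2.0 units away in the map, every map coordinate is multiplied by 0.25 to obtain metric units, which is what lets augmented objects be rendered at physically plausible sizes.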