
Estimating scale using depth from focus for mobile augmented reality.

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 06/2011
Host publication: Proceedings of the 3rd ACM SIGCHI symposium on Engineering interactive computing systems
Publisher: ACM
Pages: 1-6
Number of pages: 6
ISBN (print): 978-1-4503-0670-6
Original language: English
Event: ACM SIGCHI Symposium on Engineering Interactive Computing Systems


Abstract

Whilst there has been considerable progress in augmented reality in recent years, it has principally related to either marker-based or a priori mapped systems, which limits the opportunity for wide-scale deployment. Recent advances in markerless systems that require no a priori information, using techniques borrowed from robotic vision, are now finding their way into mobile augmented reality and are producing exciting results. However, unlike marker-based and a priori tracking systems, these techniques are independent of scale, which is a vital component in ensuring that augmented objects are contextually sensitive to the environment onto which they are projected. In this paper we address the problem of scale by adapting a Depth From Focus (DFF) technique, previously limited to high-end cameras, to a commercial mobile phone. The results clearly show that the technique is viable and, with the ever-improving quality of camera-phone optics, adds considerably to the enhancement of mobile augmented reality solutions. Further, as it simply requires a platform with an auto-focusing camera, the solution is applicable to other AR platforms.
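The abstract gives no implementation detail, but the core idea of depth from focus can be sketched: sweep the auto-focusing lens through a range of positions, score each captured frame with a focus measure, and map the lens position of the sharpest frame to a metric depth via a per-device calibration. The Python sketch below is a minimal illustration of that idea, not the authors' implementation; the focus measure (variance of a Laplacian response), the function names, and the calibration mapping are all assumptions.

import numpy as np

def sharpness(image):
    """Focus measure: variance of a discrete Laplacian response.
    (A common choice; the paper's actual measure is not given here.)"""
    img = np.asarray(image, dtype=np.float64)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def depth_from_focus(frames, lens_positions, calibration):
    """Return a metric depth for the sharpest frame in a focus sweep.

    frames         -- grayscale images captured at each lens position
    lens_positions -- the lens setting used for each frame
    calibration    -- hypothetical per-device mapping: lens position -> metres
    """
    scores = [sharpness(f) for f in frames]
    best = int(np.argmax(scores))
    return calibration(lens_positions[best])

# Hypothetical usage, with a capture routine and calibration assumed:
# positions = np.linspace(0.0, 1.0, 10)
# frames = [capture_at(p) for p in positions]
# depth_m = depth_from_focus(frames, positions, lambda p: 0.1 + 2.0 * p)

Once a metric depth to a surface is recovered this way, it can in principle fix the arbitrary scale of a markerless tracking map, for example via the ratio of the measured depth to the map's unscaled depth estimate, which is the role scale estimation plays in the paper.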