
Automatic scene recognition for low-resource devices using evolving classifiers

Research output: Contribution in Book/Report/Proceedings - With ISBN/ISSN › Conference contribution/Paper › peer-review

Published
Publication date: 09/2011
Host publication: 2011 IEEE International Conference on Fuzzy Systems (FUZZ)
Publisher: IEEE
Pages: 2779-2785
Number of pages: 7
ISBN (electronic): 978-1-4244-7316-8
ISBN (print): 978-1-4244-7315-1
Original language: English
Event: IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011) - Taipei, Taiwan, Province of China
Duration: 19/11/2011 → …

Conference

Conference: IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011)
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 19/11/11 → …

Abstract

In this paper an original approach is proposed which makes possible autonomous scene recognition performed online by an evolving, self-learning classifier. Existing approaches to scene recognition are offline and are used in intelligent albums for picture categorisation/selection. The emergence of powerful mobile platforms with on-board cameras and of sensor-based autonomous (robotic) systems is pushing forward the requirement for efficient self-learning and adaptive/evolving algorithms. Fast, real-time, online algorithms for categorising the real-world environment from a live video stream are essential for understanding and situation awareness, as well as for localisation and context awareness. In scene analysis the critical problem is the feature extraction mechanism for a quick description of the scene. In this paper we apply a well-known technique called the spatial envelope, or GIST. Visual scenes can be quite different, but very often they can be grouped into similar types/categories. For example, pictures from different cities across the globe, e.g. Tokyo, Vancouver, New York, Moscow, Dusseldorf, etc., bear the similar pattern of an urban scene (high-rise buildings), despite the differences in architectural style. The same applies to the beaches of Miami, the Maldives, Varna, Costa del Sol, etc. One assumption on which such automatic video classifiers can be built is to pre-train them using a large number of such images from different groups. The variety of possible scenes suggests the limitations of such an approach. Therefore, in this paper we use the recently proposed evolving fuzzy rule-based classifier, simpleClass, which is self-learning and thus updates its rules and category descriptions with each new image. In addition, it is fully recursive, computationally efficient and yet linguistically transparent.
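
To make the two-stage pipeline described above concrete, the following is a minimal illustrative sketch in Python, not the authors' implementation: a simplified GIST-like "spatial envelope" descriptor (orientation filter-bank energy pooled over a coarse grid) feeding an online, prototype-based classifier that updates its class descriptions recursively with every new frame. The classifier here is a generic incremental nearest-prototype scheme standing in for the evolving fuzzy rule-based classifier (simpleClass) used in the paper; all function and class names are hypothetical.

import numpy as np
from scipy.signal import fftconvolve


def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    # Real part of a Gabor filter at orientation `theta` (simplified filter bank element).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)


def gist_like_descriptor(gray, n_orientations=4, grid=4):
    # Pool filter-bank energy over a grid x grid layout -> fixed-length, normalised vector.
    h, w = gray.shape
    feats = []
    for k in range(n_orientations):
        response = np.abs(fftconvolve(gray, gabor_kernel(theta=k * np.pi / n_orientations), mode="same"))
        for i in range(grid):
            for j in range(grid):
                cell = response[i * h // grid:(i + 1) * h // grid,
                                j * w // grid:(j + 1) * w // grid]
                feats.append(cell.mean())
    v = np.asarray(feats)
    return v / (np.linalg.norm(v) + 1e-12)


class OnlinePrototypeClassifier:
    # Incremental classifier: one running-mean prototype per scene category,
    # updated recursively so memory stays constant per category.

    def __init__(self):
        self.prototypes = {}   # label -> mean feature vector
        self.counts = {}       # label -> number of samples seen

    def predict(self, x):
        if not self.prototypes:
            return None
        return min(self.prototypes, key=lambda c: np.linalg.norm(x - self.prototypes[c]))

    def update(self, x, label):
        if label not in self.prototypes:
            self.prototypes[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.prototypes[label] += (x - self.prototypes[label]) / self.counts[label]


if __name__ == "__main__":
    # Usage: classify each incoming frame, then fold it into the model.
    clf = OnlinePrototypeClassifier()
    rng = np.random.default_rng(0)
    for label in ("urban", "beach"):
        for _ in range(3):
            frame = rng.random((120, 160))            # stand-in for a grayscale video frame
            x = gist_like_descriptor(frame)
            print(label, "predicted as", clf.predict(x))
            clf.update(x, label)

The per-frame update mirrors the property emphasised in the abstract: each new image refines the category descriptions recursively, without storing past frames, which is what makes the approach suitable for low-resource devices.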