The use of context in mobile devices is receiving increasing attention in mobile and ubiquitous computing research. In this article we consider how to augment mobile devices with awareness of their environment and situation as context. Most work to date has been based on the integration of generic context sensors, in particular for location and visual context. We propose a different approach based on the integration of multiple diverse sensors for awareness of situational context that cannot be inferred from location, targeted at mobile device platforms that typically do not permit processing of visual context. We have investigated multi-sensor context-awareness in a series of projects, and report experience from the development of a number of device prototypes. These include an awareness module for augmentation of a mobile phone, the Mediacup, exemplifying context-enabled everyday artefacts, and the Smart-Its platform for aware mobile devices. The prototypes have been explored in various applications to validate the multi-sensor approach to awareness, and to develop new perspectives on how embedded context-awareness can be applied in mobile and ubiquitous computing.
Computer-augmentation of everyday artefacts is at the core of many visions of ubiquitous computing. This paper presented pioneering work on the augmentation of mobile objects with a combination of sensors and embedded recognition, such that they autonomously model their state "in the real world". The work is widely cited (150 entries in Google Scholar), and the concept of autonomously aware artefacts became the foundation for numerous follow-on research collaborations, e.g. investigating how "cooperative artefacts" can support monitoring of health and safety compliance in work environments (EPSRC WINES project NEMO).

RAE_import_type: Journal article
RAE_uoa_type: Computer Science and Informatics