Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 08/2012
Journal: Personal and Ubiquitous Computing
Issue number: 6
Volume: 16
Number of pages: 11
Pages (from-to): 729-739
Publication status: Published
Early online date: 22/07/2011
Original language: English

Abstract

Steady increases in healthcare costs and obesity have inspired recent studies into cost-effective, assistive systems capable of monitoring dietary habits. Few researchers, though, have investigated the use of video as a means of monitoring dietary activities. Video possesses several inherent qualities, such as passive acquisition, that merit its analysis as an input modality for such an application. To this end, we propose a method to automatically detect chewing events in surveillance video of a subject. First, an Active Appearance Model (AAM) is used to track a subject's face across the video sequence. It is observed that the variations in the AAM parameters across chewing events demonstrate a distinct periodicity. We utilize this property to discriminate between chewing and non-chewing facial actions such as talking. A feature representation is constructed by applying spectral analysis to a temporal window of model parameter values. The estimated power spectra subsequently undergo non-linear dimensionality reduction. The low-dimensional embeddings of the power spectra are employed to train a binary Support Vector Machine classifier to detect chewing events. To emulate the gradual onset and offset of chewing, smoothness is imposed over the class predictions of neighboring video frames in order to deter abrupt changes in the class labels. Experiments are conducted on a dataset of 37 subjects, each performing five actions, namely, open- and closed-mouth chewing, clutter faces, talking, and still face. Experimental results yielded a cross-validated percentage agreement of 93.0%, indicating that the proposed system provides an efficient approach to automated chewing detection. © Springer-Verlag London Limited 2011.
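
To make the processing chain in the abstract concrete, the following Python sketch mirrors its stages on synthetic data: windowed power spectra of an AAM parameter track, non-linear dimensionality reduction, a binary SVM, and smoothing over neighboring predictions. Isomap, the RBF-SVM settings, the median filter, the window sizes, and the synthetic signals are all assumptions standing in for the paper's actual choices, not the authors' implementation.

```python
"""Illustrative sketch of the chewing-detection pipeline described in the
abstract. Every concrete choice below (window length, FFT features, Isomap,
RBF-SVM, median-filter smoothing) is an assumption for illustration."""
import numpy as np
from scipy.ndimage import median_filter
from sklearn.manifold import Isomap
from sklearn.svm import SVC


def window_power_spectra(track, win=64, hop=8):
    """Slide a temporal window over one AAM parameter track and return
    the power spectrum of each window (the abstract's spectral features)."""
    spectra = []
    for start in range(0, len(track) - win + 1, hop):
        seg = track[start:start + win]
        seg = seg - seg.mean()                      # remove the DC offset
        spectra.append(np.abs(np.fft.rfft(seg)) ** 2)
    return np.array(spectra)


# Synthetic stand-ins for per-frame AAM parameter values at 30 fps:
# chewing appears as a quasi-periodic (~1.5 Hz) oscillation, a still
# face as noise only.
rng = np.random.default_rng(0)
t = np.arange(2000)
chewing = np.sin(2 * np.pi * 1.5 * t / 30.0) + 0.1 * rng.standard_normal(t.size)
still = 0.1 * rng.standard_normal(t.size)

X = np.vstack([window_power_spectra(chewing), window_power_spectra(still)])
y = np.concatenate([np.ones(len(X) // 2), np.zeros(len(X) // 2)])

# Non-linear dimensionality reduction of the spectra (Isomap is a generic
# stand-in), then a binary SVM on the low-dimensional embeddings.
X_low = Isomap(n_components=5).fit_transform(X)
clf = SVC(kernel="rbf").fit(X_low, y)

# Smoothness over neighboring predictions: a median filter is a simple
# stand-in for the paper's constraint against abrupt label changes.
smoothed = median_filter(clf.predict(X_low), size=5)
print("training agreement: %.3f" % (smoothed == y).mean())
```

On real data, the input would be the tracked AAM parameter values per video frame rather than the synthetic sine and noise tracks used here, and the classifier would be evaluated by cross-validation as in the paper rather than on its training set.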