
Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Published

Standard

Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines. / Cadavid, S.; Abdel-Mottaleb, M.; Helal, Sumi.
In: Personal and Ubiquitous Computing, Vol. 16, No. 6, 08.2012, p. 729-739.


Vancouver

Cadavid S, Abdel-Mottaleb M, Helal S. Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines. Personal and Ubiquitous Computing. 2012 Aug;16(6):729-739. Epub 2011 Jul 22. doi: 10.1007/s00779-011-0425-x

Author

Cadavid, S. ; Abdel-Mottaleb, M. ; Helal, Sumi. / Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines. In: Personal and Ubiquitous Computing. 2012 ; Vol. 16, No. 6. pp. 729-739.

BibTeX

@article{c932def46715459d8a451280d4df106c,
title = "Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines",
abstract = "Steady increases in healthcare costs and obesity have inspired recent studies into cost-effective, assistive systems capable of monitoring dietary habits. Few researchers, though, have investigated the use of video as a means of monitoring dietary activities. Video possesses several inherent qualities, such as passive acquisition, that merits its analysis as an input modality for such an application. To this end, we propose a method to automatically detect chewing events in surveillance video of a subject. Firstly, an Active Appearance Model (AAM) is used to track a subject's face across the video sequence. It is observed that the variations in the AAM parameters across chewing events demonstrate a distinct periodicity. We utilize this property to discriminate between chewing and non-chewing facial actions such as talking. A feature representation is constructed by applying spectral analysis to a temporal window of model parameter values. The estimated power spectra subsequently undergo non-linear dimensionality reduction. The low-dimensional embedding of the power spectra are employed to train a binary Support Vector Machine classifier to detect chewing events. To emulate the gradual onset and offset of chewing, smoothness is imposed over the class predictions of neighboring video frames in order to deter abrupt changes in the class labels. Experiments are conducted on a dataset consisting of 37 subjects performing each of five actions, namely, open- and closed-mouth chewing, clutter faces, talking, and still face. Experimental results yielded a cross-validated percentage agreement of 93.0%, indicating that the proposed system provides an efficient approach to automated chewing detection. {\textcopyright} Springer-Verlag London Limited 2011.",
keywords = "Active appearance models, Behavior detection, Dietary monitoring, Manifold learning, Support vector machines, Assistive system, Binary support vector machines, Class labels, Class prediction, Data sets, Event detection, Facial action, Feature representation, Health care costs, Model parameters, Nonlinear dimensionality reduction, Power-spectra, Quasi-periodicities, Surveillance video, Temporal windows, Use of video, Video frame, Video sequences, Health care, Image retrieval, Power spectrum, Security systems, Spectrum analysis, Ubiquitous computing",
author = "S. Cadavid and M. Abdel-Mottaleb and Sumi Helal",
year = "2012",
month = aug,
doi = "10.1007/s00779-011-0425-x",
language = "English",
volume = "16",
pages = "729--739",
journal = "Personal and Ubiquitous Computing",
issn = "1617-4909",
publisher = "Springer Verlag London Ltd",
number = "6",
}
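The abstract describes discriminating chewing from other facial actions by exploiting the periodicity of tracked model parameters: spectral analysis of a temporal window yields power spectra whose energy concentration differs between quasi-periodic (chewing) and aperiodic (talking) motion. A minimal sketch of that spectral-feature idea, using a synthetic 1-D parameter trace rather than real AAM outputs (the frame rate, chewing frequency, and `peak_fraction` statistic are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def power_spectrum(signal, fs):
    # Power spectrum of a temporal window of one tracked parameter.
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spec

def peak_fraction(signal, fs):
    # Fraction of spectral energy in the dominant frequency bin:
    # high for quasi-periodic motion, low for aperiodic motion.
    _, spec = power_spectrum(signal, fs)
    return spec.max() / spec.sum()

fs = 30.0                                   # assumed video frame rate (Hz)
t = np.arange(0, 4, 1 / fs)                 # 4-second temporal window
chewing = np.sin(2 * np.pi * 1.5 * t)       # quasi-periodic stand-in, ~1.5 Hz
rng = np.random.default_rng(0)
talking = rng.normal(size=t.size)           # aperiodic stand-in

# Periodic motion concentrates its spectral energy at one frequency.
assert peak_fraction(chewing, fs) > peak_fraction(talking, fs)
```

In the paper's pipeline these spectral features additionally undergo non-linear dimensionality reduction before being fed to a binary SVM; a simple scalar statistic like the one above only illustrates why the frequency domain separates the two classes.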

RIS

TY - JOUR

T1 - Exploiting visual quasi-periodicity for real-time chewing event detection using active appearance models and support vector machines

AU - Cadavid, S.

AU - Abdel-Mottaleb, M.

AU - Helal, Sumi

PY - 2012/8

Y1 - 2012/8

N2 - Steady increases in healthcare costs and obesity have inspired recent studies into cost-effective, assistive systems capable of monitoring dietary habits. Few researchers, though, have investigated the use of video as a means of monitoring dietary activities. Video possesses several inherent qualities, such as passive acquisition, that merits its analysis as an input modality for such an application. To this end, we propose a method to automatically detect chewing events in surveillance video of a subject. Firstly, an Active Appearance Model (AAM) is used to track a subject's face across the video sequence. It is observed that the variations in the AAM parameters across chewing events demonstrate a distinct periodicity. We utilize this property to discriminate between chewing and non-chewing facial actions such as talking. A feature representation is constructed by applying spectral analysis to a temporal window of model parameter values. The estimated power spectra subsequently undergo non-linear dimensionality reduction. The low-dimensional embedding of the power spectra are employed to train a binary Support Vector Machine classifier to detect chewing events. To emulate the gradual onset and offset of chewing, smoothness is imposed over the class predictions of neighboring video frames in order to deter abrupt changes in the class labels. Experiments are conducted on a dataset consisting of 37 subjects performing each of five actions, namely, open- and closed-mouth chewing, clutter faces, talking, and still face. Experimental results yielded a cross-validated percentage agreement of 93.0%, indicating that the proposed system provides an efficient approach to automated chewing detection. © Springer-Verlag London Limited 2011.

AB - Steady increases in healthcare costs and obesity have inspired recent studies into cost-effective, assistive systems capable of monitoring dietary habits. Few researchers, though, have investigated the use of video as a means of monitoring dietary activities. Video possesses several inherent qualities, such as passive acquisition, that merits its analysis as an input modality for such an application. To this end, we propose a method to automatically detect chewing events in surveillance video of a subject. Firstly, an Active Appearance Model (AAM) is used to track a subject's face across the video sequence. It is observed that the variations in the AAM parameters across chewing events demonstrate a distinct periodicity. We utilize this property to discriminate between chewing and non-chewing facial actions such as talking. A feature representation is constructed by applying spectral analysis to a temporal window of model parameter values. The estimated power spectra subsequently undergo non-linear dimensionality reduction. The low-dimensional embedding of the power spectra are employed to train a binary Support Vector Machine classifier to detect chewing events. To emulate the gradual onset and offset of chewing, smoothness is imposed over the class predictions of neighboring video frames in order to deter abrupt changes in the class labels. Experiments are conducted on a dataset consisting of 37 subjects performing each of five actions, namely, open- and closed-mouth chewing, clutter faces, talking, and still face. Experimental results yielded a cross-validated percentage agreement of 93.0%, indicating that the proposed system provides an efficient approach to automated chewing detection. © Springer-Verlag London Limited 2011.

KW - Active appearance models

KW - Behavior detection

KW - Dietary monitoring

KW - Manifold learning

KW - Support vector machines

KW - Assistive system

KW - Binary support vector machines

KW - Class labels

KW - Class prediction

KW - Data sets

KW - Event detection

KW - Facial action

KW - Feature representation

KW - Health care costs

KW - Model parameters

KW - Nonlinear dimensionality reduction

KW - Power-spectra

KW - Quasi-periodicities

KW - Surveillance video

KW - Temporal windows

KW - Use of video

KW - Video frame

KW - Video sequences

KW - Health care

KW - Image retrieval

KW - Power spectrum

KW - Security systems

KW - Spectrum analysis

KW - Ubiquitous computing

U2 - 10.1007/s00779-011-0425-x

DO - 10.1007/s00779-011-0425-x

M3 - Journal article

VL - 16

SP - 729

EP - 739

JO - Personal and Ubiquitous Computing

JF - Personal and Ubiquitous Computing

SN - 1617-4909

IS - 6

ER -