Hi4D-ADSIP 3-D dynamic facial articulation database

Research output: Contribution to Journal/Magazine › Journal article › peer-review

Journal publication date: 10/2012
Journal: Image and Vision Computing
Issue number: 10
Volume: 30
Number of pages: 15
Pages (from-to): 713-727
Publication status: Published
Early online date: 18/02/12
Original language: English

Abstract

The face is an important medium used by humans to communicate, and facial articulation also reflects a person's emotional and awareness states, cognitive activity, personality and wellbeing. With the advances in 3-D imaging technology and ever-increasing computing power, automatic analysis of facial articulation using 3-D sequences is becoming viable. This paper describes Hi4D-ADSIP - a comprehensive 3-D dynamic facial articulation database containing scans with high spatial and temporal resolution. The database is designed not only to facilitate studies on facial expression analysis, but also to aid research into the clinical diagnosis of facial dysfunctions. The database currently contains 3360 facial sequences captured from 80 healthy volunteers (control subjects) of various ages, genders and ethnicities. The database has been validated through psychophysical experiments that formally evaluated the accuracy of the recorded expressions. The results of baseline automatic facial expression recognition methods using Eigen- and Fisher-faces are also presented, alongside some initial results obtained for clinical cases. This database is believed to be one of the most comprehensive repositories of facial 3-D dynamic articulations to date. An extension of this database, aimed at building a comprehensive repository of representative facial dysfunctions exhibited by patients with stroke, Bell's palsy and Parkinson's disease, is currently under construction.
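The Eigenfaces baseline mentioned in the abstract can be sketched as follows. This is a minimal illustration of the general technique (PCA projection followed by nearest-neighbour classification), not the authors' implementation; the data here is random synthetic vectors standing in for flattened face frames, and all dimensions and names are assumptions for illustration only.

```python
import numpy as np

# Synthetic stand-in data: 60 training "faces", 20 test "faces",
# each flattened to a 256-dimensional vector, with 4 expression classes.
rng = np.random.default_rng(0)
n_train, n_test, dim, n_classes = 60, 20, 256, 4
X_train = rng.normal(size=(n_train, dim))
y_train = rng.integers(0, n_classes, size=n_train)
X_test = rng.normal(size=(n_test, dim))

# 1. Centre the training data and compute principal components
#    ("eigenfaces") via SVD of the centred data matrix.
mean_face = X_train.mean(axis=0)
Xc = X_train - mean_face
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10                      # number of eigenfaces retained (assumed)
components = Vt[:k]         # shape (k, dim)

# 2. Project training and test faces into the eigenface subspace.
train_proj = Xc @ components.T
test_proj = (X_test - mean_face) @ components.T

# 3. Classify each test face by its nearest training neighbour
#    in the projected space.
dists = np.linalg.norm(test_proj[:, None, :] - train_proj[None, :, :], axis=2)
pred = y_train[np.argmin(dists, axis=1)]
```

A Fisherfaces baseline would replace step 1 with a linear discriminant analysis projection, which uses the class labels to maximise between-class separation rather than raw variance.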