Dynamics of facial actions for assessing smile genuineness

Citation metadata

Date: Jan. 5, 2021
From: PLoS ONE(Vol. 16, Issue 1)
Publisher: Public Library of Science
Document Type: Report
Length: 10,944 words
Lexile Measure: 1580L

Abstract:

Applying computer vision techniques to distinguish between spontaneous and posed smiles is an active research topic in affective computing. Although many works addressing this problem have been published and several excellent benchmark databases have been created, the existing state-of-the-art approaches do not exploit the action units defined within the Facial Action Coding System, which has become a standard in facial expression analysis. In this work, we explore the possibilities of extracting discriminative features directly from the dynamics of facial action units to differentiate between genuine and posed smiles. We report the results of our experimental study, which shows that the proposed features offer performance competitive with those based on facial landmark analysis and on textural descriptors extracted from spatio-temporal blocks. We make these features publicly available for the UvA-NEMO and BBC databases, which will allow other researchers to further improve the classification scores while preserving the interpretation capabilities attributed to the use of facial action units. Moreover, we have developed a new technique for identifying the smile phases, which is robust to noise and allows for continuous analysis of facial videos.
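The abstract describes deriving discriminative features from the temporal dynamics of action-unit intensities, including the segmentation of a smile into phases (onset, apex, offset). The sketch below is a minimal, hypothetical illustration of that idea, not the authors' method: given a per-frame intensity signal for AU12 (lip corner puller), it splits the signal around its peak and summarizes each phase's duration, amplitude, and speed. The function name, threshold, and feature set are assumptions for illustration only.

```python
import numpy as np

def smile_phase_features(au12, fps=25.0, thresh=0.1):
    """Hypothetical dynamic features from an AU12 intensity signal.

    The signal is split at its peak into an onset part (rise) and an
    offset part (decay); frames above a fraction of the peak intensity
    are treated as 'active'. All names and thresholds are illustrative,
    not taken from the paper.
    """
    au12 = np.asarray(au12, dtype=float)
    dt = 1.0 / fps                     # seconds per frame
    peak = int(np.argmax(au12))        # apex frame index
    active = au12 >= thresh * au12[peak]

    onset = au12[:peak + 1]            # rise toward the apex
    offset = au12[peak:]               # decay after the apex

    return {
        "amplitude": au12[peak] - au12.min(),
        # active frames before / after the apex, converted to seconds
        "onset_duration": np.count_nonzero(active[:peak]) * dt,
        "offset_duration": np.count_nonzero(active[peak + 1:]) * dt,
        # steepest per-second change during rise and decay
        "max_onset_speed": np.max(np.diff(onset)) / dt if onset.size > 1 else 0.0,
        "max_offset_speed": -np.min(np.diff(offset)) / dt if offset.size > 1 else 0.0,
    }

# Example on a synthetic smile-like intensity curve:
feats = smile_phase_features([0.0, 0.2, 0.5, 1.0, 1.0, 0.6, 0.2, 0.0])
```

Feature vectors of this kind, computed per action unit, could then feed any standard classifier (e.g. an SVM) to separate genuine from posed smiles; the interpretability the abstract mentions comes from each feature mapping back to a named facial action.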

Source Citation

Gale Document Number: GALE|A647526316