2011 | Original Paper | Book Chapter
Gabor-Like Image Filtering for Transient Feature Detection and Global Energy Estimation Applied to Multi-expression Classification
Authors: Zakia Hammal, Corentin Massot
Published in: Computer Vision, Imaging and Computer Graphics. Theory and Applications
Publisher: Springer Berlin Heidelberg
An automatic system for facial expression recognition should be able to recognize multiple facial expressions (i.e. “emotional segments”) on-line and without interruption. The current paper proposes a new method for the automatic segmentation of “emotional segments” and the dynamic recognition of the corresponding facial expressions in video sequences. First, a new spatial filtering method based on Log-Normal filters is introduced for the analysis of the whole face, towards the automatic segmentation of the “emotional segments”. Second, a similar filtering-based method is applied to the automatic and precise segmentation of the transient facial features (such as nasal root wrinkles and nasolabial furrows) and the estimation of their orientation. Finally, a dynamic and progressive fusion of the permanent and transient facial feature deformations is performed within each “emotional segment” for a temporal recognition of the corresponding facial expression. When tested on the automatic detection of “emotional segments” in 96 sequences from the MMI and Hammal-Caplier facial expression databases, the proposed method achieved an accuracy of 89%. Tested on 1655 images, the automatic detection of transient features achieved a mean precision of 70% with an error of 2.5 for the estimation of the corresponding orientation. Finally, compared to the original model for static facial expression classification, the introduction of transient features and of temporal information increases the precision of facial expression classification by 12% and compares favorably with human observers’ performance.
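The Log-Normal filtering that underpins both the whole-face energy estimation and the transient-feature orientation analysis can be sketched as a frequency-domain band-pass filter that is Gaussian in log radial frequency and Gaussian in orientation. The sketch below is a minimal NumPy illustration of this filter family; the function names and the parameter values (`sigma_f`, `sigma_theta`) are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def log_normal_filter(size, f0, theta0, sigma_f=0.55, sigma_theta=np.pi / 8):
    """Log-Normal band-pass filter in the frequency domain (illustrative).

    The radial profile is Gaussian in log-frequency, peaking at f0; the
    angular profile is Gaussian around orientation theta0 (180-deg periodic).
    Bandwidth parameters are assumed defaults, not the paper's values.
    """
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    f = np.hypot(fx, fy)                 # radial frequency of each bin
    theta = np.arctan2(fy, fx)           # orientation of each bin
    radial = np.zeros_like(f)
    nz = f > 0                           # avoid log(0) at the DC bin
    radial[nz] = np.exp(-np.log(f[nz] / f0) ** 2 / (2 * np.log(sigma_f) ** 2))
    # orientation difference wrapped to (-pi/2, pi/2]: orientations are pi-periodic
    d = np.angle(np.exp(2j * (theta - theta0))) / 2
    angular = np.exp(-d ** 2 / (2 * sigma_theta ** 2))
    return radial * angular

def band_energy(image, f0, theta0):
    """Global energy of the image in one frequency/orientation band."""
    spectrum = np.fft.fft2(image)
    response = np.fft.ifft2(spectrum * log_normal_filter(image.shape[0], f0, theta0))
    return float(np.sum(np.abs(response) ** 2))
```

Summing `band_energy` over a bank of such filters (several peak frequencies `f0` and orientations `theta0`) gives a global face-energy signal whose temporal evolution can be thresholded to delimit "emotional segments", while the per-orientation responses localize oriented transient features such as nasolabial furrows.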