Facial expression is the most important channel for human nonverbal communication. This paper presents a novel and effective approach to automatic 3D Facial Expression Recognition (FER) based on a Muscular Movement Model (MMM). In contrast to most existing methods, MMM addresses the problem from the viewpoint of anatomy. It first automatically segments the input face by localizing the points corresponding to each muscular region of the reference face using the Iterative Closest Normal Pattern (ICNP) algorithm. A set of shape features based on multiple differential quantities, including coordinates, normals, and shape index values, is then extracted to describe the geometric deformation of each segmented region. MMM thus combines the advantages of both model-based and feature-based techniques. Meanwhile, we analyze the importance of these muscular areas, and a score-level fusion strategy that optimizes their weights using a Genetic Algorithm (GA) is proposed in the learning step. The muscular areas with their optimal weights are finally combined to predict the expression label. Experiments carried out on the BU-3DFE database clearly demonstrate the effectiveness of the proposed method.
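The score-level fusion step described above can be illustrated with a minimal sketch: each muscular region produces a score vector over the expression classes, and the fused prediction is the argmax of a weighted sum of these vectors. The scores and weights below are hypothetical placeholders; in MMM the weights are optimized by a GA on training data, which is not reproduced here.

```python
import numpy as np

# Hypothetical per-region classifier scores for one input face:
# rows = muscular regions, columns = the six prototypical expressions
# (each row sums to 1, like class posteriors).
region_scores = np.array([
    [0.10, 0.60, 0.10, 0.10, 0.05, 0.05],
    [0.20, 0.50, 0.10, 0.10, 0.05, 0.05],
    [0.30, 0.30, 0.20, 0.10, 0.05, 0.05],
])

# Hypothetical region weights (in MMM these are found by a Genetic
# Algorithm during learning; fixed here purely for illustration).
weights = np.array([0.5, 0.3, 0.2])

# Score-level fusion: weighted sum of region score vectors,
# then argmax over expression classes.
fused = weights @ region_scores
predicted = int(np.argmax(fused))
```

Because the weights sum to one and each row is a distribution, the fused vector is again a distribution over the expression classes, so the argmax can be read directly as the predicted expression label.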
- Muscular Movement Model Based Automatic 3D Facial Expression Recognition
- Springer International Publishing