2013 | Original Paper | Book Chapter
3D Facial Expression Synthesis from a Single Image Using a Model Set
Authors: Zhixin Shu, Lei Huang, Changping Liu
Published in: Computer Vision - ACCV 2012 Workshops
Publisher: Springer Berlin Heidelberg
In this paper, we present a system for synthesizing 3D human face models with different expressions from a single facial image. Given a frontal image of the target face with a neutral expression, we first detect key points describing the shape of the face using an Active Shape Model (ASM). We then apply RBF-based scattered-data interpolation to reconstruct a 3D target face, using a neutral-expression 3D face model as the reference. By analyzing a series of 3D expression face models, we automatically segment the 3D reference model into regions, each corresponding to a facial organ. From the expression set we construct a motion model for each facial action with respect to the target face in a locally consistent manner. Finally, the reconstructed neutral-expression 3D target face model and the facial-action motion models are combined to generate 3D target faces with various expressions. Our work makes three contributions: (1) We employ a set of registered 3D facial expression models as input, which enables us to generate more complex and visually realistic expressions than parameter-based approaches and 2D image-based methods. (2) On the basis of a clustering-based segmentation, we develop a localized linear expression model, which makes it possible to generate different facial expressions both locally and globally, thereby enlarging the space of synthesized output and overcoming the limitation imposed by the small scale of the input expression model set. (3) A local space transform procedure is included so that the output expressions can fit distinct facial shapes (fat or thin), even when such shape variation is scarce in the input model set.
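Two of the steps described above, RBF-based scattered-data interpolation for fitting the reference mesh to detected landmarks, and the localized linear combination of per-region expression bases, can be sketched roughly as follows. This is a minimal illustration of the general techniques, not the authors' implementation: the Gaussian kernel, the `sigma` parameter, and the region/basis data layout are assumptions made for the sketch.

```python
import numpy as np

def rbf_warp(src_pts, dst_pts, vertices, sigma=1.0):
    """Deform mesh vertices so that source landmarks map onto target landmarks,
    via Gaussian RBF interpolation of the landmark displacements.
    (Kernel choice and sigma are illustrative assumptions.)"""
    # Kernel matrix over the landmark set
    d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    # Solve for one weight vector per coordinate so the landmarks map exactly
    w = np.linalg.solve(phi, dst_pts - src_pts)
    # Evaluate the interpolated displacement field at every mesh vertex
    dv = np.linalg.norm(vertices[:, None, :] - src_pts[None, :, :], axis=-1)
    return vertices + np.exp(-(dv / sigma) ** 2) @ w

def local_blend(neutral, region_ids, bases, coeffs):
    """Localized linear expression model: add a per-region linear combination
    of displacement bases to the neutral mesh.
    bases:  dict region -> (K, V, 3) displacement basis over all V vertices
    coeffs: dict region -> (K,) blending weights for that region"""
    out = neutral.copy()
    for r, basis in bases.items():
        mask = (region_ids == r)                       # vertices of region r
        out[mask] += np.tensordot(coeffs[r], basis[:, mask, :], axes=1)
    return out
```

Because the RBF system is solved exactly at the landmarks, the warped mesh reproduces the target landmark positions while smoothly interpolating displacements elsewhere; blending coefficients per region rather than globally is what allows expressions outside the span of the input model set.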