ABSTRACT
Modeling a face and rendering it realistically is a hard problem in itself, and remarkable progress toward realistic-looking faces has been made from both a modeling perspective [1, 6, 13, 15, 16, 2] and a rendering perspective [5, 11, 12]; the Digital Face Cloning course at SIGGRAPH 2005 surveyed relevant material to this end. An even harder problem is animating the digital face in a realistic and believable manner that stands up to close scrutiny, where even the slightest flaw in the animated performance becomes glaringly unacceptable.

While good facial animation, whether stylized or realistic, can be attempted with traditional keyframe techniques by skilled animators, it is a complicated and often time-consuming task, especially as the desired results approach realistic imagery. When an exact replica of an actor's performance is desired, many processes today work by tracking features on the actor's face and using information derived from these tracked features to directly drive the digital character. These features range from a few marker samples [3] to curves or contours on the face [15], and even a deforming surface of the entire face [2, 16].

This may seem like a one-step process in which data derived from a captured performance programmatically translates into animation on a digital CG face. On the contrary, given today's capture, retargeting, and animation technologies, it can turn out to be a rather involved process, depending on the quality of the data, the exactness and realism required in the final animation, and facial calibration; it often requires the expertise of both artists (trackers, facial riggers, technical animators) and software technology to make the end product happen. Moreover, setting up a facial pipeline that captures many actors' performances simultaneously to ultimately produce hundreds of shots, while embracing inputs and controls from artists and animators, can be quite a challenge.
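To give a concrete flavor of what "information derived from tracked features" involves, one ubiquitous preprocessing step is stabilization: estimating and removing the rigid head motion from the tracked markers so that only facial deformation remains. Below is a minimal sketch using the Kabsch algorithm; the function names and numpy-based setup are our own illustration, not the actual pipeline described in these notes.

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid transform (R, t) such that Q ~= P @ R.T + t.

    P, Q: (N, 3) arrays of corresponding marker positions, e.g. a
    neutral reference frame and the current frame of markers that
    are assumed to move rigidly with the skull.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - cP @ R.T
    return R, t

def stabilize(frame, R, t):
    """Undo the estimated rigid head motion for one marker frame."""
    return (frame - t) @ R
```

In practice the transform would be estimated per frame from a subset of quasi-rigid markers (e.g. on the forehead) and then applied to all markers, leaving only the facial deformation to be retargeted.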
These course notes attempt to explain some of the processes that we have come to understand through our work on Columbia Pictures' Monster House and other motion-capture-reliant shows at Sony Pictures Imageworks. The document is organized as follows. Section 1 presents general ideas on what constitutes a performance. Section 2 explains how facial performance is captured using motion capture technologies at Imageworks. Section 3 covers the background research that forms the basis of our facial system at Imageworks: FACS, the Facial Action Coding System initially devised by Paul Ekman et al. Although FACS has been used widely in research and the literature [7], at Sony Pictures Imageworks we have applied it to motion-captured facial data to drive character faces. Sections 4, 5, and 6 explain how motion-captured facial data is processed, stabilized, cleaned, and finally retargeted onto a digital face. Finally, we conclude with a motivating discussion of the artistic versus software problems in driving a digital face with a performance.
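As a hedged illustration of the retargeting step, one common formulation solves, per frame, for FACS-style action-unit weights that best explain the observed marker displacements as a combination of calibrated facial shapes. The basis matrix, function name, and clamping below are our own simplifying assumptions for exposition, not the actual Imageworks solver.

```python
import numpy as np

def solve_au_weights(B, d):
    """Least-squares fit of action-unit weights, clamped to [0, 1].

    B: (3M, K) basis -- column k stacks the xyz marker displacements
       produced by fully activating action unit k on the calibrated face.
    d: (3M,) stacked xyz displacements of the M markers in one frame,
       after stabilization has removed rigid head motion.
    """
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return np.clip(w, 0.0, 1.0)
```

A production solver would also impose proper non-negativity constraints and temporal smoothing rather than a crude post-hoc clamp, but the core idea of fitting a calibrated action-unit basis to cleaned marker data carries over.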
REFERENCES
1. V. Blanz, C. Basso, T. Poggio, and T. Vetter. Reanimating faces in images and video. In Proceedings of Eurographics, 2003.
2. George Borshukov, Dan Piponi, Oystein Larsen, J. P. Lewis, and Christina Tempelaar-Lietz. Universal Capture: image-based facial animation for "The Matrix Reloaded". In Proceedings of SIGGRAPH Conference on Sketches & Applications. ACM Press, 2003.
3. E. Chuang and C. Bregler. Performance driven facial animation using blendshape interpolation. Technical Report CSTR-2002-02, Department of Computer Science, Stanford University, 2002.
4. D. P. Cosker, A. D. Marshall, P. L. Rosin, and Y. A. Hicks. Speech-driven facial animation using a hierarchical model. IEE Proceedings - Vision, Image and Signal Processing, 151(4):314-321, August 2004.
5. Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark Sagar. Acquiring the reflectance field of a human face. In SIGGRAPH 2000 Conference Proceedings, pages 35-42. ACM SIGGRAPH, July 2000.
6. P. Eisert and B. Girod. Model-based estimation of facial expression parameters from image sequences. In Proceedings of the IEEE International Conference on Image Processing (ICIP-97), pages 418-421, 1997.
7. P. Ekman and W. V. Friesen. Manual for the Facial Action Coding System. Consulting Psychologists Press, Palo Alto, 1977.
8. I. A. Essa and A. P. Pentland. Facial expression recognition using a dynamic model and motion energy. In Proceedings of the IEEE International Conference on Computer Vision, pages 360-367, 1995.
9. B. J. Theobald, S. M. Kruse, J. A. Bangham, and G. C. Cawley. Towards a low bandwidth talking face using appearance models. Image and Vision Computing, 21(12-13):1117-1124, December 2003.
10. Tim Hawkins, Andreas Wenger, Chris Tchou, Andrew Gardner, Fredrik Goransson, and Paul Debevec. Animatable facial reflectance fields. In Rendering Techniques 2004: 15th Eurographics Workshop on Rendering, pages 309-320, June 2004.
11. H. W. Jensen, S. Marschner, M. Levoy, and P. Hanrahan. A practical model for subsurface light transport. In Proceedings of SIGGRAPH 2001, pages 511-518.
12. H. W. Jensen and J. Buhler. A rapid hierarchical rendering technique for translucent materials. In Proceedings of SIGGRAPH 2002.
13. Jun-yong Noh and Ulrich Neumann. Expression cloning. In Proceedings of ACM SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, pages 277-288, August 2001.
14. Mark Sagar. Reflectance field rendering of human faces for "Spider-Man 2". In SIGGRAPH 2004 Sketches.
15. D. Terzopoulos and K. Waters. Techniques for realistic facial modeling and animation. In Nadia Magnenat Thalmann and Daniel Thalmann, editors, Computer Animation '91, pages 59-74. Springer-Verlag, Tokyo, 1991.
16. Li Zhang, Noah Snavely, Brian Curless, and Steven M. Seitz. Spacetime faces: high resolution capture for modeling and animation. ACM Transactions on Graphics, 23(3):548-558, 2004.