
Sony Pictures Imageworks

Published: 30 July 2006

ABSTRACT

Modeling a face and rendering it so that it appears realistic is a hard problem in itself, and remarkable progress toward realistic-looking faces has been made from both a modeling perspective [1, 6, 13, 15, 16, 2] and a rendering perspective [5, 11, 12]. At last year's SIGGRAPH 2005, the Digital Face Cloning course presented relevant material to this end. An even bigger problem is animating the digital face in a realistic and believable manner that stands up to close scrutiny, where even the slightest error in the animated performance becomes glaringly unacceptable. While skilled animators can produce good facial animation (stylized or realistic) with traditional keyframe techniques, it is a complicated and often time-consuming task, especially as the desired results approach realistic imagery. When an exact replica of an actor's performance is desired, many processes today work by tracking features on the actor's face and using information derived from those tracked features to drive the digital character directly. These features range from a few marker samples [3], to curves or contours on the face [15], and even a deforming surface of the entire face [2, 16]. This may seem like a one-stop process in which data derived from a captured performance translates programmatically into animation on a digital CG face. On the contrary, given today's capture, retargeting, and animation technologies, it can turn out to be a rather involved process, depending on the quality of the data, the exactness and realism required in the final animation, and the facial calibration; it often requires the expertise of both artists (trackers, facial riggers, technical animators) and software technology to make the end product happen. Moreover, setting up a facial pipeline that captures many actors' performances simultaneously to ultimately produce hundreds of shots, while embracing input and control from artists and animators, can be quite a challenge. This course document attempts to explain some of the processes that we have come to understand, and gained experience with, through our work on Columbia's Monster House and other motion capture-reliant shows at Sony Pictures Imageworks.

The document is organized as follows. Section 1 presents general ideas on what constitutes a performance. Section 2 explains how facial performance is captured using motion capture technologies at Imageworks. Section 3 covers the background research that forms the basis of our facial system at Imageworks: FACS, originally devised by Paul Ekman et al. Although FACS has been used widely in research and the literature [7], at Sony Pictures Imageworks we have used it on motion-captured facial data to drive character faces. Sections 4, 5, and 6 explain how motion-captured facial data is processed, stabilized, cleaned, and finally retargeted onto a digital face. We conclude with a motivating discussion of artistic versus software problems in driving a digital face with a performance.
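The stabilize/solve/retarget chain outlined above lends itself to a compact illustration. The Python sketch below is a minimal stand-in, not the actual Imageworks system: it removes rigid head motion from a frame of tracked markers with an orthogonal Procrustes fit, solves for FACS action-unit weights as a least-squares combination of per-AU marker displacements, and drives a digital face by blending sculpted delta shapes with those weights. Every name and data layout here (`au_basis`, `blendshapes`, the choice of rigid markers) is a hypothetical assumption.

```python
import numpy as np

def stabilize(frame, neutral, rigid_idx):
    """Remove rigid head motion: align a frame of 3-D markers (N x 3) to the
    neutral pose using a subset of markers assumed to move only rigidly."""
    A = frame[rigid_idx] - frame[rigid_idx].mean(axis=0)
    B = neutral[rigid_idx] - neutral[rigid_idx].mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)        # orthogonal Procrustes fit
    if np.linalg.det((U @ Vt).T) < 0:        # guard against a reflection
        U[:, -1] *= -1
    R = (U @ Vt).T
    t = neutral[rigid_idx].mean(axis=0) - R @ frame[rigid_idx].mean(axis=0)
    return frame @ R.T + t

def solve_au_weights(frame, neutral, au_basis):
    """Express stabilized marker displacements as a combination of FACS
    action-unit basis displacements via linear least squares."""
    d = (frame - neutral).ravel()                    # observed offsets, (3N,)
    B = au_basis.reshape(au_basis.shape[0], -1).T    # (3N, num_AUs)
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return np.clip(w, 0.0, 1.0)   # crude stand-in for a constrained solve

def retarget(weights, rest_mesh, blendshapes):
    """Drive the digital face: add AU-weighted delta shapes to the rest mesh."""
    return rest_mesh + np.tensordot(weights, blendshapes, axes=1)

if __name__ == "__main__":
    # Synthetic round trip: pose markers with known AU weights, apply an
    # arbitrary head motion, then recover the weights after stabilization.
    rng = np.random.default_rng(0)
    N, K = 40, 8                                     # markers, action units
    neutral = rng.normal(size=(N, 3))
    au_basis = 0.1 * rng.normal(size=(K, N, 3))
    au_basis[:, :10] = 0.0                           # markers 0-9 stay rigid
    true_w = rng.uniform(0.0, 1.0, size=K)
    posed = neutral + np.tensordot(true_w, au_basis, axes=1)
    c, s = np.cos(0.3), np.sin(0.3)
    head = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    moved = posed @ head.T + np.array([0.5, -0.2, 1.0])
    stab = stabilize(moved, neutral, np.arange(10))
    w = solve_au_weights(stab, neutral, au_basis)
    print(np.allclose(w, true_w, atol=1e-6))         # True
```

In a production setting the solve would be properly constrained (non-negative weights, temporal smoothing, per-AU priors) and the rigid marker set chosen from expression-stable regions such as the nose bridge and temples; the clamp above only gestures at that.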

References

  1. V. Blanz, C. Basso, T. Poggio, and T. Vetter. Reanimating faces in images and video. In Proc. of Eurographics, 2003.
  2. George Borshukov, Dan Piponi, Oystein Larsen, J. P. Lewis, and Christina Tempelaar-Lietz. Universal capture: image-based facial animation for "The Matrix Reloaded". In Proceedings of SIGGRAPH Conference on Sketches & Applications. ACM Press, 2003.
  3. E. Chuang and C. Bregler. Performance driven facial animation using blendshape interpolation. CSTR-2002-02, Department of Computer Science, Stanford University, 2002.
  4. D. P. Cosker, A. D. Marshall, P. L. Rosin, and Y. A. Hicks. Speech-driven facial animation using a hierarchical model. VISP(151), No. 4, August 2004, pp. 314--321.
  5. Paul Debevec, Tim Hawkins, Chris Tchou, Haarm-Pieter Duiker, Westley Sarokin, and Mark Sagar. Acquiring the reflectance field of a human face. In SIGGRAPH 2000 Conference Proceedings, pages 35--42. ACM SIGGRAPH, July 2000.
  6. P. Eisert and B. Girod. Model-based facial expression parameters from image sequences. In Proceedings of the IEEE International Conference on Image Processing (ICIP-97), pages 418--421, 1997.
  7. P. Ekman and W. V. Friesen. Manual for the Facial Action Coding System. Consulting Psychologists Press, Palo Alto, 1977.
  8. I. A. Essa and A. P. Pentland. Facial expression recognition using a dynamic model and motion energy. In Proc. IEEE Int'l Conference on Computer Vision, pages 360--367, 1995.
  9. B. J. Theobald, S. M. Kruse, J. A. Bangham, and G. C. Cawley. Towards a low bandwidth talking face using appearance models. IVC(21), No. 12--13, December 2003, pp. 1117--1124.
  10. Tim Hawkins, Andreas Wenger, Chris Tchou, Andrew Gardner, Fredrik Goransson, and Paul Debevec. Animatable facial reflectance fields. In Rendering Techniques 2004: 15th Eurographics Workshop on Rendering, pages 309--320, June 2004.
  11. H. W. Jensen, S. Marschner, M. Levoy, and P. Hanrahan. A practical model for subsurface light transport. In Proceedings of SIGGRAPH 2001, pages 511--518.
  12. H. W. Jensen and J. Buhler. A rapid hierarchical rendering technique for translucent materials. In Proceedings of SIGGRAPH 2002.
  13. Jun-yong Noh and Ulrich Neumann. Expression cloning. In Proceedings of ACM SIGGRAPH 2001, Computer Graphics Proceedings, Annual Conference Series, pages 277--288, August 2001.
  14. Mark Sagar. Reflectance field rendering of human faces for "Spider-Man 2". SIGGRAPH 2004.
  15. D. Terzopoulos and K. Waters. Techniques for realistic facial modeling and animation. In Nadia Magnenat Thalmann and Daniel Thalmann, editors, Computer Animation '91, pages 59--74. Springer-Verlag, Tokyo, 1991.
  16. Li Zhang, Noah Snavely, Brian Curless, and Steven M. Seitz. Spacetime faces: high resolution capture for modeling and animation. ACM Trans. Graph., 23(3):548--558, 2004.

Published in

SIGGRAPH '06: ACM SIGGRAPH 2006 Courses
July 2006, 83 pages
ISBN: 1595933646
DOI: 10.1145/1185657

Copyright © 2006 ACM.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States
