Realtime facial animation with on-the-fly correctives

Abstract
We introduce a real-time, calibration-free facial performance capture framework based on a sensor that provides video and depth input. Within this framework, we develop an adaptive PCA model whose shape correctives adjust on the fly to the actor's expressions through incremental PCA-based learning. Since the fit of the adaptive model progressively improves during the performance, no extra capture or training session is required to build it. As a result, the system is highly deployable and easy to use: it can faithfully track any individual, starting from just a single face scan of the subject in a neutral pose. Like many real-time methods, we use a linear subspace to cope with incomplete input data and fast motion. To supply the training of our tracking model with reliable samples, we run a well-trained 2D facial feature tracker on the input video and use an efficient mesh deformation algorithm to snap the intermediate result to high-frequency details in visible regions of the depth map. We show that the combination of dense depth maps and texture features around the eyes and lips is essential for capturing natural dialogue and nuanced, actor-specific emotions. We demonstrate that using an adaptive PCA model not only improves the fitting accuracy for tracking but also increases the expressiveness of the retargeted character.
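The on-the-fly correctives described above hinge on an incremental PCA update: each reliably tracked frame refines a low-rank corrective basis without re-running a batch factorization. As a rough illustration of the idea only (the paper's exact update is not given here), the following numpy sketch folds one new shape sample into a running mean and a rank-k orthonormal basis; the function name `update_correctives`, the flattened vertex vectors, and the naive append-and-retruncate strategy are all illustrative assumptions.

```python
import numpy as np

def update_correctives(basis, mean, n_seen, sample, k=5):
    """One naive incremental PCA step for a shape-corrective basis.

    basis  : (d, k') current orthonormal basis, or None before any sample
    mean   : (d,) running mean shape
    n_seen : number of samples folded in so far
    sample : (d,) new tracked face shape (flattened vertex positions)
    Returns the updated (basis, mean, n_seen).
    """
    n = n_seen + 1
    mean = mean + (sample - mean) / n           # incremental mean update
    residual = sample - mean                    # deviation from current mean
    if basis is None:
        cols = residual[:, None]
    else:
        cols = np.hstack([basis, residual[:, None]])
    # Re-orthonormalize the augmented basis and keep the top-k directions.
    u, s, _ = np.linalg.svd(cols, full_matrices=False)
    basis = u[:, : min(k, u.shape[1])]
    return basis, mean, n
```

Streaming updates like this are what let the model improve during the performance itself, instead of requiring a separate training session; a production system would additionally weight directions by their singular values (e.g. via a Gu–Eisenstat style SVD update) rather than retruncating from scratch.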
Supplemental Material
Supplemental material is available for download.