Research Article | Open Access | Best Student Paper

FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces

Published: 13 May 2024

Abstract

3D rendering of dynamic face captures is a challenging problem that demands improvements on several fronts: photorealism, efficiency, compatibility, and configurability. We present a novel representation that enables high-quality volumetric rendering of an actor's dynamic facial performances with a minimal compute and memory footprint. It runs natively on commodity graphics software and hardware, and allows for a graceful trade-off between quality and efficiency. Our method builds on recent advances in neural rendering, in particular learned discrete radiance manifolds that sparsely sample the scene to model volumetric effects. We achieve efficient modeling by learning a single set of manifolds for the entire dynamic sequence, while implicitly modeling appearance changes as a temporal canonical texture. We export a single layered mesh and a view-independent RGBA texture video that are compatible with legacy graphics renderers without additional ML integration. We demonstrate our method by rendering dynamic face captures of real actors in a game engine, with photorealism comparable to state-of-the-art neural rendering techniques at previously unseen frame rates.
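
The exported representation is designed so that a conventional rasterizer can display it: for each video frame, the layered mesh is rasterized and the per-layer RGBA values looked up from the texture video are alpha-blended along each pixel's view ray. As a rough, hypothetical illustration of that final compositing step only (not the authors' shader or code; the layer count, array shapes, and function name are assumptions), a back-to-front "over" blend in Python/NumPy might look like this:

import numpy as np

def composite_layers(layer_rgba):
    # layer_rgba: (K, H, W, 4) array of per-pixel RGBA samples for one frame,
    # ordered front-to-back along each camera ray (a hypothetical layout).
    K, H, W, _ = layer_rgba.shape
    out = np.zeros((H, W, 3), dtype=np.float32)
    # Blend back to front so each nearer layer is composited over the result.
    for k in reversed(range(K)):
        rgb = layer_rgba[k, ..., :3]
        alpha = layer_rgba[k, ..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

# Toy usage: three layers over a 2x2 image.
rng = np.random.default_rng(0)
layers = rng.uniform(0.0, 1.0, size=(3, 2, 2, 4)).astype(np.float32)
image = composite_layers(layers)

Because this kind of blending is what standard transparency rendering in a game engine already performs, no machine-learning runtime is required at display time.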


Supplemental Material

3651304-medin.mp4 (supplemental video, MP4, 140.2 MB)

