Neural control variates

Published: 27 November 2020

Abstract

We propose neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration. So far, the core challenge of applying the method of control variates has been finding a good approximation of the integrand that is cheap to integrate. We show that a set of neural networks can face that challenge: a normalizing flow that approximates the shape of the integrand and another neural network that infers the solution of the integral equation. We also propose to leverage a neural importance sampler to estimate the difference between the original integrand and the learned control variate. To optimize the resulting parametric estimator, we derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice. When applied to light transport simulation, neural control variates are capable of matching the state-of-the-art performance of other unbiased approaches, while providing means to develop more performant, practical solutions. Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
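
To make the principle in the abstract concrete, the sketch below shows the generic control-variates estimator that the learned components plug into: a control variate g whose integral G is known is subtracted from the integrand f, and only the residual f - g is estimated by Monte Carlo with a sampling density p. All names here (f, g, G, p) are hypothetical analytic stand-ins chosen for illustration; in the paper, the shape of g is given by a normalizing flow, its integral is inferred by a second network, and p is a neural importance sampler.

```python
import numpy as np

# Minimal 1-D sketch of a control-variates estimator (assumed stand-ins,
# not the paper's learned networks):
#   F = \int_0^1 f(x) dx  is estimated as  G + E_p[(f(X) - g(X)) / p(X)],
# where g is a control variate with known integral G and p is the density
# used to sample the residual f - g.

def f(x):
    """Integrand on [0, 1]; exact integral is 1 - cos(1) ~ 0.4597."""
    return np.sin(x)

def g(x):
    """Hypothetical control variate: a crude approximation of f's shape."""
    return x

G = 0.5  # known integral of g over [0, 1]

def p(x):
    """Density used to sample the residual (uniform stand-in on [0, 1])."""
    return np.ones_like(x)

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0.0, 1.0, n)                 # x ~ p

plain_mc = np.mean(f(x) / p(x))              # ordinary Monte Carlo
cv_mc = G + np.mean((f(x) - g(x)) / p(x))    # control-variates estimator

# Both estimators are unbiased; cv_mc has lower variance because f - g is small.
print(f"plain MC: {plain_mc:.4f}, with control variate: {cv_mc:.4f}")
```

Both estimates converge to roughly 0.4597; the variance of the residual estimator shrinks as g approaches f, which is why a good learned approximation of the integrand pays off even before the residual is importance sampled.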

Supplemental Material

• a243-mueller.mp4 (MP4, 135 MB)
• 3414685.3417804.mp4 (MP4, 196.4 MB)


Index Terms

  1. Neural control variates

Published in

ACM Transactions on Graphics, Volume 39, Issue 6
December 2020, 1605 pages
ISSN: 0730-0301
EISSN: 1557-7368
DOI: 10.1145/3414685

            Copyright © 2020 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States

            Publication History

• Published: 27 November 2020
• Published in TOG Volume 39, Issue 6
