
2022 | Book

Monte Carlo and Quasi-Monte Carlo Methods

MCQMC 2020, Oxford, United Kingdom, August 10–14


About this book

This volume presents the revised papers of the 14th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, MCQMC 2020, which was held online during August 10-14, 2020. Covering the latest developments in Monte Carlo and quasi-Monte Carlo methods and their randomized versions, the book is an excellent reference for theoreticians and practitioners interested in solving high-dimensional computational problems arising, in particular, in statistics, machine learning, finance, and computer graphics.

Table of Contents

Frontmatter

Invited Talks and Tutorials

Frontmatter
Density Estimation by Monte Carlo and Quasi-Monte Carlo
Abstract
Estimating the density of a continuous random variable X has been studied extensively in statistics, in the setting where n independent observations of X are given a priori and one wishes to estimate the density from them. Popular methods include histograms and kernel density estimators. In this review paper, we are interested instead in the situation where the observations are generated by Monte Carlo simulation from a model. Then, one can take advantage of variance reduction methods such as stratification, conditional Monte Carlo, and randomized quasi-Monte Carlo (RQMC), and obtain a more accurate density estimator than with standard Monte Carlo for a given computing budget. We discuss several ways of doing this, proposed in recent papers, with a focus on methods that exploit RQMC. A first idea is to directly combine RQMC with a standard kernel density estimator. Another one is to adapt a simulation-based derivative estimation method such as smoothed perturbation analysis or the likelihood ratio method to obtain a continuous estimator of the cumulative distribution function (CDF), whose derivative is an unbiased estimator of the density. This can then be combined with RQMC. We summarize recent theoretical results with these approaches and give numerical illustrations of how they improve the convergence of the mean integrated squared error.
Pierre L’Ecuyer, Florian Puchhammer
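To make the first idea above concrete, here is a minimal sketch of an RQMC-driven kernel density estimator using scrambled Sobol' points from SciPy; the toy model, bandwidth, and sample size are illustrative choices of this note, not those of the paper.

```python
# Sketch: kernel density estimation driven by randomized QMC (scrambled Sobol').
# Toy model: X = U1 + U2 with U ~ Uniform(0,1)^2; the true density is triangular on [0,2].
import numpy as np
from scipy.stats import qmc

def model(u):
    # Simulation model mapping uniforms to the output whose density we estimate.
    return u[:, 0] + u[:, 1]

def kde(samples, x, h):
    # Gaussian kernel density estimator evaluated on the grid x.
    z = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

n, h = 2**12, 0.05
x = np.linspace(0.0, 2.0, 101)

# One RQMC replicate: scrambled Sobol' points in [0,1)^2 fed through the model.
u = qmc.Sobol(d=2, scramble=True, seed=1).random(n)
f_rqmc = kde(model(u), x, h)

# Plain MC with the same budget, for comparison.
u_mc = np.random.default_rng(1).random((n, 2))
f_mc = kde(model(u_mc), x, h)

true = np.where(x <= 1.0, x, 2.0 - x)  # triangular density
print("RQMC grid MSE:", np.mean((f_rqmc - true)**2))
print("MC   grid MSE:", np.mean((f_mc - true)**2))
```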
Quasi-Monte Carlo Software
Abstract
Practitioners wishing to experience the efficiency gains from using low discrepancy sequences need correct, robust, well-written software. This article, based on our MCQMC 2020 tutorial, describes some of the better quasi-Monte Carlo (QMC) software available. We highlight the key software components required by QMC to approximate multivariate integrals or expectations of functions of vector random variables. We have combined these components in QMCPy, a Python open-source library, which we hope will draw the support of the QMC community. Here we introduce QMCPy.
Sou-Cheng T. Choi, Fred J. Hickernell, Rathinavel Jagadeeswaran, Michael J. McCourt, Aleksei G. Sorokin
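A brief taste of the library's component style, adapted from my reading of QMCPy's documented examples; class names such as Sobol, Keister, and CubQMCSobolG reflect recent releases and may differ in your installed version.

```python
# Sketch of QMCPy's components: a discrete distribution (Sobol' points), an
# integrand (the Keister test function), and a stopping criterion that adds
# points until the requested tolerance is met. Check the QMCPy documentation
# for the exact API of your installed release.
import qmcpy as qp

integrand = qp.Keister(qp.Sobol(dimension=3, seed=7))
solution, data = qp.CubQMCSobolG(integrand, abs_tol=1e-4).integrate()
print(solution)
print(data)  # reports the number of points used, error bound, and timing
```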

Regular Talks

Frontmatter
A Tool for Custom Construction of QMC and RQMC Point Sets
Abstract
We present LatNet Builder, a software tool to find good parameters for lattice rules, polynomial lattice rules, and digital nets in base 2, for quasi-Monte Carlo (QMC) and randomized quasi-Monte Carlo (RQMC) sampling over the s-dimensional unit hypercube. The selection criteria are figures of merit that give different weights to different subsets of coordinates. They are upper bounds on the worst-case error (for QMC) or variance (for RQMC) for integrands rescaled to have a norm of at most one in certain Hilbert spaces of functions. We summarize the various Hilbert spaces, discrepancies, types of weights, figures of merit, types of constructions, and search methods supported by LatNet Builder. We briefly discuss its organization and provide simple illustrations of what it can do.
Pierre L’Ecuyer, Pierre Marion, Maxime Godin, Florian Puchhammer
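LatNet Builder itself is a C++ tool with its own interfaces, so the following is only a toy illustration of the kind of weighted figure of merit it searches over: the \(P_2\) criterion (squared worst-case error in a weighted Korobov space with product weights) for a rank-1 lattice, minimized here by naive random search rather than the tool's smarter construction methods.

```python
# Toy illustration (not LatNet Builder code): the P_alpha figure of merit for
# a rank-1 lattice rule in a weighted Korobov space with alpha = 2 and product
# weights gamma_j, plus a naive random search over generating vectors.
import numpy as np

def p2(z, n, gamma):
    # e^2 = -1 + (1/n) sum_k prod_j (1 + gamma_j * 2*pi^2 * B2({k z_j / n})),
    # where B2(x) = x^2 - x + 1/6 is the second Bernoulli polynomial.
    k = np.arange(n)[:, None]
    x = (k * z[None, :] % n) / n
    b2 = x**2 - x + 1.0 / 6.0
    return -1.0 + np.prod(1.0 + gamma[None, :] * 2.0 * np.pi**2 * b2, axis=1).mean()

n, s = 1021, 5                      # n prime, s-dimensional
gamma = 0.7 ** np.arange(1, s + 1)  # decaying product weights
rng = np.random.default_rng(0)

best_z, best_val = None, np.inf
for _ in range(200):                # random search; CBC construction would be smarter
    z = rng.integers(1, n, size=s)
    val = p2(z, n, gamma)
    if val < best_val:
        best_z, best_val = z, val
print(best_z, best_val)
```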
On Dropping the First Sobol’ Point
Abstract
Quasi-Monte Carlo (QMC) points are a substitute for plain Monte Carlo (MC) points that greatly improve integration accuracy under mild assumptions on the problem. Because QMC can give errors that are o(1/n) as \(n\rightarrow \infty \), and randomized versions can attain root mean squared errors that are o(1/n), changing even one point can change the estimate by an amount much larger than the error would have been and worsen the convergence rate. As a result, certain practices that fit quite naturally and intuitively with MC points can be very detrimental to QMC performance. These include thinning, burn-in, and taking sample sizes such as powers of 10, when the QMC points were designed for different sample sizes. This article looks at the effects of a common practice in which one skips the first point of a Sobol’ sequence. The retained points ordinarily fail to be a digital net, and when scrambling is applied, skipping over the first point can increase the numerical error by a factor proportional to \(\sqrt{n}\), where n is the number of function evaluations used.
Art B. Owen
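The effect is easy to reproduce with off-the-shelf tools. A minimal sketch using SciPy's Sobol' generator follows; the test integrand, dimension, and replication count are illustrative choices, not the paper's.

```python
# Sketch: RMSE of a scrambled Sobol' estimate with and without dropping the
# first point of the sequence.
import numpy as np
from scipy.stats import qmc

def f(x):
    return np.prod(1.0 + (x - 0.5), axis=1)  # smooth test function, integral = 1

d, m, reps = 5, 10, 50
n = 2**m

def rmse(skip_first):
    errs = []
    for seed in range(reps):
        s = qmc.Sobol(d=d, scramble=True, seed=seed)
        if skip_first:
            s.fast_forward(1)    # drop the first point of the sequence
        x = s.random(n)          # n points either way; the skipped set is no longer a net
        errs.append(f(x).mean() - 1.0)
    return np.sqrt(np.mean(np.square(errs)))

print("keep first point :", rmse(False))
print("drop first point :", rmse(True))
```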
On the Distribution of Scrambled Nets Over Unanchored Boxes
Abstract
We introduce a new quality measure to assess randomized low-discrepancy point sets of finite size n. This new quality measure, which we call “pairwise sampling dependence index”, is based on the concept of negative dependence. A negative value for this index implies that the corresponding point set integrates the indicator function of any unanchored box with smaller variance than the Monte Carlo method. We show that scrambled \((0,m,s)\)-nets have a negative pairwise sampling dependence index. We also illustrate through an example that randomizing via a digital shift instead of scrambling may yield a positive pairwise sampling dependence index.
Christiane Lemieux, Jaspar Wiart
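The variance statement can be checked empirically. A small sketch, with an arbitrarily chosen unanchored box, comparing scrambled Sobol' points (a scrambled net) against plain Monte Carlo:

```python
# Sketch: empirical variance of integrating the indicator of an unanchored box
# [a, b) with scrambled Sobol' points versus plain Monte Carlo. The negative
# dependence of scrambled nets shows up as a smaller variance.
import numpy as np
from scipy.stats import qmc

d, m, reps = 2, 8, 200
n = 2**m
a, b = np.array([0.17, 0.33]), np.array([0.58, 0.91])  # arbitrary unanchored box

est_net, est_mc = [], []
rng = np.random.default_rng(0)
for seed in range(reps):
    x = qmc.Sobol(d=d, scramble=True, seed=seed).random(n)
    est_net.append(np.all((x >= a) & (x < b), axis=1).mean())
    y = rng.random((n, d))
    est_mc.append(np.all((y >= a) & (y < b), axis=1).mean())

print("true volume       :", np.prod(b - a))
print("scrambled net var :", np.var(est_net))
print("plain MC var      :", np.var(est_mc))
```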
Lower Bounds for the Number of Random Bits in Monte Carlo Algorithms
Abstract
We continue the study of restricted Monte Carlo algorithms in a general setting. Here we show a lower bound for minimal errors in the setting with finite restriction in terms of deterministic minimal errors. This generalizes a result of Heinrich, Novak, and Pfeiffer (in: Monte Carlo and Quasi-Monte Carlo Methods 2002, H. Niederreiter, ed., Springer-Verlag, Berlin, 2004, pp. 27–49) to the adaptive setting. As a consequence, the lower bounds on the number of random bits from that paper also hold in this setting. We also derive a lower bound on the number of needed bits for integration of Lipschitz functions over the Wiener space, complementing a result of Giles, Hefter, Mayer, and Ritter (J. Complexity 54 (2019), 101395).
Stefan Heinrich
Massively Parallel Path Space Filtering
Abstract
Restricting path tracing to a small number of paths per pixel in order to render images faster rarely achieves a satisfactory image quality for scenes of interest. However, path space filtering may dramatically improve the visual quality by sharing information across vertices of paths classified as proximate. Unlike screen space approaches, these paths neither need to be present on the screen, nor is filtering restricted to the first intersection with the scene. While searching for proximate vertices used to be more expensive than filtering in screen space, we greatly reduce this performance penalty by storing, updating, and looking up the required information in a hash table. The keys are constructed from jittered and quantized information, such that only a single query very likely replaces costly neighborhood searches. A massively parallel implementation of the algorithm is demonstrated on a graphics processing unit (GPU).
Nikolaus Binder, Sascha Fricke, Alexander Keller
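A much-simplified, serial sketch of the hashing idea; the paper's version runs massively parallel on a GPU and uses richer keys (e.g. including normals and level of detail), so cell size and key layout below are illustrative only.

```python
# Sketch: jitter and quantize a path vertex position, use it as a hash key,
# and average contributions that share a key, replacing an explicit
# neighborhood search with a single hash-table query.
from collections import defaultdict
import numpy as np

CELL = 0.25                             # quantization cell size (illustrative)
rng = np.random.default_rng(1)
accum = defaultdict(lambda: [0.0, 0])   # key -> [sum of contributions, count]

def key(position):
    # Jitter within a cell, then quantize: nearby vertices very likely share a key.
    jittered = position + rng.uniform(-0.5, 0.5, size=3) * CELL
    return tuple(np.floor(jittered / CELL).astype(int))

def splat(position, contribution):
    s = accum[key(position)]
    s[0] += contribution
    s[1] += 1

def lookup(position):
    s = accum[key(position)]
    return s[0] / s[1] if s[1] else 0.0  # filtered (averaged) contribution

for _ in range(10000):                   # fake path vertices clustered near a point
    splat(np.array([1.0, 2.0, 3.0]) + rng.normal(0, 0.05, 3), rng.random())
print(lookup(np.array([1.0, 2.0, 3.0])))
```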
A Fresh Take on ‘Barker Dynamics’ for MCMC
Abstract
We study a recently introduced gradient-based Markov chain Monte Carlo method based on ‘Barker dynamics’. We provide a full derivation of the method from first principles, placing it within a wider class of continuous-time Markov jump processes. We then evaluate the Barker approach numerically on a challenging ill-conditioned logistic regression example with imbalanced data, showing in particular that the algorithm is remarkably robust to irregularity (in this case a high degree of skew) in the target distribution.
Max Hird, Samuel Livingstone, Giacomo Zanella
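For orientation, here is a condensed reading of the coordinate-wise Barker proposal inside a Metropolis-Hastings loop, not the paper's own code; the ill-conditioned Gaussian target stands in for the paper's logistic regression example, and the step size is arbitrary.

```python
# Sketch of the coordinate-wise Barker proposal: each increment z_i gets its
# sign flipped with a probability driven by the gradient, then the skewed
# proposal is corrected by a Metropolis-Hastings step.
import numpy as np

rng = np.random.default_rng(0)
prec = np.array([1.0, 100.0])            # diagonal precision: condition number 100

def logpi(x): return -0.5 * np.sum(prec * x**2)
def grad(x):  return -prec * x

def barker_step(x, sigma):
    g = grad(x)
    z = sigma * rng.normal(size=x.shape)
    # Keep the sign of z_i with probability 1 / (1 + exp(-z_i * g_i)).
    b = np.where(rng.random(x.shape) < 1.0 / (1.0 + np.exp(-z * g)), 1.0, -1.0)
    w = b * z
    y = x + w
    # MH correction: q(x,y) has factors 2*mu(w_i)/(1+exp(-w_i g_i(x))).
    log_alpha = (logpi(y) - logpi(x)
                 + np.sum(np.logaddexp(0.0, -w * g))
                 - np.sum(np.logaddexp(0.0,  w * grad(y))))
    return (y, True) if np.log(rng.random()) < log_alpha else (x, False)

x, acc = np.zeros(2), 0
for _ in range(5000):
    x, a = barker_step(x, sigma=0.5)
    acc += a
print("acceptance rate:", acc / 5000)
```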
On the Selection of Random Field Evaluation Points in the p-MLQMC Method
Abstract
Engineering problems are often characterized by significant uncertainty in their material parameters. A typical example coming from geotechnical engineering is the slope stability problem, where the soil’s cohesion is modeled as a random field. An efficient manner to account for this uncertainty is the novel sampling method called p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). The p-MLQMC method uses a hierarchy of p-refined finite element meshes combined with a deterministic Quasi-Monte Carlo sampling rule. This combination yields a significant computational cost reduction with respect to classic Multilevel Monte Carlo. However, in previous work, not enough consideration was given to how the uncertainty, modeled as a random field, should be incorporated in the finite element model with the p-MLQMC method. In the present work, we investigate how this can be adequately achieved by means of the integration point method. We therefore investigate how the evaluation points of the random field are to be selected in order to obtain a variance reduction over the levels. We consider three different approaches, benchmarked on a slope stability problem in terms of computational runtime. We find that, for a given tolerance, the local nested approach yields a speedup of up to a factor of five with respect to the non-nested approach.
Philippe Blondeel, Pieterjan Robbe, Stijn François, Geert Lombaert, Stefan Vandewalle
Scalable Control Variates for Monte Carlo Methods Via Stochastic Optimization
Abstract
Control variates are a well-established tool to reduce the variance of Monte Carlo estimators. However, for large-scale problems including high-dimensional and large-sample settings, their advantages can be outweighed by a substantial computational cost. This paper considers control variates based on Stein operators, presenting a framework that encompasses and generalizes existing approaches that use polynomials, kernels and neural networks. A learning strategy based on minimizing a variational objective through stochastic optimization is proposed, leading to scalable and effective control variates. Novel theoretical results are presented to provide insight into the variance reduction that can be achieved, and an empirical assessment, including applications to Bayesian inference, is provided in support.
Shijing Si, Chris. J. Oates, Andrew B. Duncan, Lawrence Carin, François-Xavier Briol
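A minimal instance of the idea: control variates built from the Langevin Stein operator \(\mathcal{A}\varphi = \varphi' + \varphi \cdot \nabla \log \pi\), which has zero mean under the target. The sketch below uses a polynomial basis and plain least squares on one sample rather than the paper's kernel/neural parameterizations and stochastic optimization; target and integrand are illustrative.

```python
# Sketch: Stein control variates with a polynomial basis. A[phi] = phi' + phi*score
# has zero expectation under the target, so f - A @ beta keeps the mean of f
# while (ideally) shrinking the variance.
import numpy as np

rng = np.random.default_rng(0)
score = lambda x: -x                      # target pi = N(0,1), score = d/dx log pi
f = lambda x: np.sin(x) + x**2            # integrand; E_pi[f] = 0 + 1 = 1

x = rng.normal(size=5000)                 # samples from pi

# Basis phi_k(x) = x^k  =>  A[phi_k](x) = k x^(k-1) + x^k * score(x).
K = 4
A = np.column_stack([k * x**(k - 1) + x**k * score(x) for k in range(1, K + 1)])

# Least-squares fit f(x) ~ c + A beta. (For an unbiased estimate, fit beta on
# a pilot sample and evaluate on fresh draws.)
design = np.column_stack([np.ones_like(x), A])
coef, *_ = np.linalg.lstsq(design, f(x), rcond=None)
beta = coef[1:]

plain = f(x)
cv = f(x) - A @ beta
print("plain MC estimate:", plain.mean(), " var:", plain.var())
print("with Stein CV    :", cv.mean(), " var:", cv.var())
```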
Simulation of Conditional Expectations Under Fast Mean-Reverting Stochastic Volatility Models
Abstract
We study the simulation of a large system of stochastic processes subject to a common driving noise and fast mean-reverting stochastic volatilities. This model may be used to describe the firm values of a large pool of financial entities. We then seek an efficient estimator for the probability of a default, indicated by a firm value below a certain threshold, conditional on common factors. We consider approximations where coefficients containing the fast volatility are replaced by certain ergodic averages (a type of law of large numbers), and study a correction term (of central limit theorem-type). The accuracy of these approximations is assessed by numerical simulation of pathwise losses and the estimation of payoff functions as they appear in basket credit derivatives.
Andrei S. Cozma, Christoph Reisinger
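As a toy illustration of the setting (the model, parameters, and discretization below are this note's simplifications, not the paper's), the sketch simulates a pool of firm values driven by a common noise and fast mean-reverting Ornstein-Uhlenbeck volatility factors, then estimates the loss fraction for one realization of the common factor.

```python
# Toy sketch: N firm-value processes share a common Brownian motion W0; each
# has its own idiosyncratic noise and a fast mean-reverting OU volatility
# factor (reversion rate 1/eps). We report the fraction of firms below a
# barrier at the horizon, conditional on the simulated common path.
import numpy as np

rng = np.random.default_rng(0)
N, steps, T = 2000, 200, 1.0
dt = T / steps
eps, m, nu = 0.01, 0.2, 0.3             # fast mean reversion: rate 1/eps
rho, mu, barrier = 0.5, 0.02, -0.6

X = np.zeros(N)                          # log firm values
Y = m + nu * rng.normal(size=N)          # vol factors, started near stationarity
for _ in range(steps):
    dW0 = np.sqrt(dt) * rng.normal()     # common noise, shared by all firms
    dWi = np.sqrt(dt) * rng.normal(size=N)
    dBi = np.sqrt(dt) * rng.normal(size=N)
    sigma = np.abs(Y)                    # simple positive volatility function
    X += mu * dt + sigma * (rho * dW0 + np.sqrt(1 - rho**2) * dWi)
    Y += (m - Y) / eps * dt + nu / np.sqrt(eps) * dBi

print("conditional loss fraction:", np.mean(X < barrier))
```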
Generating From the Strauss Process Using Stitching
Abstract
The Strauss process is a point process with unnormalized density with respect to a Poisson point process of rate \( \lambda \), where each pair of points within a specified distance r of each other contributes a factor \(\gamma \in [0, 1]\) to the density. Basic acceptance-rejection works spectacularly poorly for this problem, which is why several other perfect simulation methods have been developed. These methods, however, also work poorly for reasonably large values of \(\lambda \). Recursive Acceptance Rejection Stitching is a new method that works much faster, allowing the simulation of point processes with values of \(\lambda \) much larger than ever before.
Mark Huber
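The basic acceptance-rejection baseline mentioned above fits in a few lines, which also makes its poor scaling in \(\lambda\) easy to observe; the parameter values below are illustrative.

```python
# Sketch of basic acceptance-rejection for the Strauss process on the unit
# square: propose a Poisson(lambda) point pattern and accept it with
# probability gamma^(number of pairs closer than r). Acceptance decays fast
# as lambda grows, which is what motivates perfect-simulation alternatives.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
lam, gamma, r = 30.0, 0.5, 0.05

def strauss_ar():
    tries = 0
    while True:
        tries += 1
        pts = rng.random((rng.poisson(lam), 2))   # Poisson process on [0,1]^2
        close_pairs = np.sum(pdist(pts) < r) if len(pts) > 1 else 0
        if rng.random() < gamma**close_pairs:     # each close pair costs a factor gamma
            return pts, tries

pts, tries = strauss_ar()
print(len(pts), "points accepted after", tries, "proposals")
```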
A Note on Transformed Fourier Systems for the Approximation of Non-periodic Signals
Abstract
A variety of techniques have been developed for the approximation of non-periodic functions. In particular, there are approximation techniques based on rank-1 lattices and transformed rank-1 lattices, including methods that use sampling sets consisting of Chebyshev- and tent-transformed nodes. We compare these methods with a parameterized transformed Fourier system that yields similar \(\ell_2\)-approximation errors.
Robert Nasdala, Daniel Potts
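To fix ideas, one classical ingredient mentioned above: tent-transformed nodes combined with a cosine expansion handle non-periodic functions. A one-dimensional sketch follows; the function, node count, and expansion degree are illustrative choices.

```python
# Sketch: fit a non-periodic function on [0,1] in the cosine basis using
# tent-transformed equispaced (1D rank-1 lattice) nodes. The tent transform
# periodizes the sampling without introducing boundary derivative blow-up.
import numpy as np

n, K = 251, 12
f = lambda x: np.exp(x)                   # smooth but non-periodic on [0,1]

x = np.arange(n) / n                      # 1D lattice nodes
x = 1.0 - np.abs(2.0 * x - 1.0)           # tent transform

# Least-squares fit in the cosine basis {cos(pi k x), k = 0..K-1}.
B = np.cos(np.pi * np.outer(x, np.arange(K)))
coef, *_ = np.linalg.lstsq(B, f(x), rcond=None)

t = np.linspace(0, 1, 1001)
approx = np.cos(np.pi * np.outer(t, np.arange(K))) @ coef
print("max error:", np.max(np.abs(approx - f(t))))
```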
Applications of Multivariate Quasi-Random Sampling with Neural Networks
Abstract
Generative moment matching networks (GMMNs) are suggested for modeling the cross-sectional dependence between stochastic processes. The stochastic processes considered are geometric Brownian motions and ARMA–GARCH models. Geometric Brownian motions lead to an application of pricing American basket call options under dependence and ARMA–GARCH models lead to an application of simulating predictive distributions. In both types of applications the benefit of using GMMNs in comparison to parametric dependence models is highlighted and the fact that GMMNs can produce dependent quasi-random samples with no additional effort is exploited to obtain variance reduction.
Marius Hofert, Avinash Prasad, Mu Zhu
Artificial Neural Networks Generated by Low Discrepancy Sequences
Abstract
Artificial neural networks can be represented by paths. When these paths are generated as random walks on a dense network graph, we find that the resulting sparse networks allow for deterministic initialization and even weights with fixed sign. Such networks can be trained sparse from scratch, avoiding the expensive procedure of training a dense network and compressing it afterwards. Although sparse, weights are accessed as contiguous blocks of memory. In addition, enumerating the paths using deterministic low discrepancy sequences, for example variants of the Sobol’ sequence, amounts to connecting the layers of neural units by progressive permutations, which naturally avoids bank conflicts in parallel computer hardware. We demonstrate that the artificial neural networks generated by low discrepancy sequences can achieve an accuracy within reach of their dense counterparts at a much lower computational complexity.
Alexander Keller, Matthijs Van keirsbilck
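A tiny sketch of the path construction as read from the abstract: enumerate paths through the layers with a deterministic low discrepancy sequence (here the base-2 van der Corput radical inverse) and keep only the visited edges as sparse connectivity. Layer widths, path count, and the exact indexing scheme are illustrative, not the paper's.

```python
# Sketch: derive sparse layer-to-layer connectivity from a deterministic
# low discrepancy sequence instead of random walks.
import numpy as np

def van_der_corput(i, base=2):
    # Radical inverse of the integer i in the given base, in [0, 1).
    v, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        v += (i % base) / denom
        i //= base
    return v

widths = [64, 128, 128, 10]              # units per layer
n_paths = 256                            # number of enumerated paths

masks = [np.zeros((a, b), dtype=bool) for a, b in zip(widths, widths[1:])]
for p in range(n_paths):
    # One path: a unit index per layer, derived from consecutive sequence values.
    units = [int(van_der_corput(p * len(widths) + l) * w)
             for l, w in enumerate(widths)]
    for l, (a, b) in enumerate(zip(units, units[1:])):
        masks[l][a, b] = True            # connect consecutive units on the path

print("edges kept per layer:", [int(m.sum()) for m in masks],
      "of", [m.size for m in masks])
```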
Metadata
Title
Monte Carlo and Quasi-Monte Carlo Methods
Editor
Alexander Keller
Copyright Year
2022
Electronic ISBN
978-3-030-98319-2
Print ISBN
978-3-030-98318-5
DOI
https://doi.org/10.1007/978-3-030-98319-2
