
2024 | Book

Monte Carlo and Quasi-Monte Carlo Methods

MCQMC 2022, Linz, Austria, July 17–22


About this book

This book presents the refereed proceedings of the 15th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, held in Linz, Austria, in July 2022, and organized by the Johannes Kepler University Linz and the Austrian Academy of Sciences. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these highly active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, in particular those arising in finance, statistics, and computer graphics.

Table of Contents

Frontmatter

Invited Articles

Frontmatter
Quasi Continuous Level Monte Carlo for Random Elliptic PDEs

This paper provides a framework in which multilevel Monte Carlo and continuous level Monte Carlo can be compared. In continuous level Monte Carlo the level of refinement is determined by an exponentially distributed random variable, which therefore heavily influences the computational complexity. We propose in this paper a variant of the algorithm in which the exponentially distributed random variable is generated from a quasi-Monte Carlo sequence, resulting in a significant variance reduction. In the examples presented, the quasi continuous level Monte Carlo algorithm outperforms multilevel and continuous level Monte Carlo by a clear margin.
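The key mechanism — drawing an exponentially distributed level from a low-discrepancy point set via the inverse CDF — can be sketched as follows. This is a minimal illustration, not the authors' algorithm; the golden-ratio Kronecker sequence and the rate parameter are stand-ins:

```python
import math

def kronecker_sequence(n, alpha=(math.sqrt(5) - 1) / 2):
    """First n points of the 1D golden-ratio Kronecker sequence in (0, 1)."""
    return [((i + 1) * alpha) % 1.0 for i in range(n)]

def qmc_exponential_levels(n, rate=1.0):
    """Map low-discrepancy uniforms to Exp(rate) levels via the inverse CDF
    F^{-1}(u) = -log(1 - u) / rate."""
    return [-math.log(1.0 - u) / rate for u in kronecker_sequence(n)]

levels = qmc_exponential_levels(8, rate=0.7)
```

Because the uniforms are well spread over (0, 1), the resulting exponential draws cover the level distribution more evenly than i.i.d. samples would.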

Cedric Aaron Beschle, Andrea Barth
MLMC Techniques for Discontinuous Functions

The Multilevel Monte Carlo (MLMC) approach usually works well when estimating the expected value of a quantity which is a Lipschitz function of intermediate quantities, but if it is a discontinuous function it can lead to a much slower decay in the variance of the MLMC correction. This article reviews the literature on techniques which can be used to overcome this challenge in a variety of different contexts, and discusses recent developments using either a branching diffusion or adaptive sampling.
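For context, a minimal sketch of a standard coupled MLMC correction estimator — here for geometric Brownian motion with a Lipschitz call payoff, i.e. the benign setting, not the discontinuous one the article addresses; all model parameters are illustrative:

```python
import math
import random

def mlmc_level_estimator(level, n_samples, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Estimate E[P_l - P_{l-1}] with a coupled Euler scheme for GBM,
    payoff P = max(S_T - 1, 0); fine and coarse paths share Brownian increments."""
    rng = random.Random(level * 1000 + 7)
    nf = 2 ** level          # number of fine time steps
    hf = T / nf
    total = 0.0
    for _ in range(n_samples):
        sf = sc = s0
        dw_pair = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(hf))
            sf += mu * sf * hf + sigma * sf * dw
            dw_pair += dw
            if step % 2 == 1 and level > 0:  # coarse step uses the summed increment
                sc += mu * sc * (2 * hf) + sigma * sc * dw_pair
                dw_pair = 0.0
        pf = max(sf - 1.0, 0.0)
        pc = max(sc - 1.0, 0.0) if level > 0 else 0.0
        total += pf - pc
    return total / n_samples

# Telescoping sum: E[P_L] = sum over levels of E[P_l - P_{l-1}]
estimate = sum(mlmc_level_estimator(l, 2000) for l in range(4))
```

With a discontinuous payoff, the variance of the corrections pf - pc decays much more slowly in the level, which is exactly the difficulty the surveyed techniques address.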

Michael B. Giles
Introduction to Gaussian Process Regression in Bayesian Inverse Problems, with New Results on Experimental Design for Weighted Error Measures

Bayesian posterior distributions arising in modern applications are often computationally intractable due to the large computational cost of evaluating the data likelihood. Examples include inverse problems in partial differential equation models arising in climate modeling and in subsurface fluid flow. To alleviate the problem of expensive likelihood evaluation, a natural approach is to use Gaussian process regression to build a surrogate model for the likelihood, resulting in an approximate posterior distribution that is amenable to computations in practice. This paper serves as an introduction to Gaussian process regression, in particular in the context of building surrogate models for inverse problems; we also present new insights into a suitable choice of training points, motivated by the use of Gaussian processes in approximate Bayesian inversion. We show that the error between the true and approximate posterior distribution can be bounded by the error between the true and approximate likelihood, measured in the $$L^2$$-norm weighted by the true posterior; furthermore we show that minimizing the error between the true and approximate likelihood in this norm suggests choosing the training points in the Gaussian process surrogate model based on the true posterior.
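A minimal sketch of the Gaussian process regression underlying such surrogates, assuming a squared-exponential kernel and a cheap stand-in target function; the paper's likelihood surrogate and training-point criteria are not reproduced here:

```python
import numpy as np

def rbf_kernel(x, y, lengthscale=0.3):
    """Squared-exponential kernel k(x, y) = exp(-(x - y)^2 / (2 l^2))."""
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and pointwise variance of a zero-mean GP surrogate."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = rbf_kernel(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Train on a cheap stand-in for an expensive log-likelihood
x_tr = np.linspace(0.0, 1.0, 8)
y_tr = np.sin(2.0 * np.pi * x_tr)
mean, var = gp_posterior(x_tr, y_tr, np.array([x_tr[2], 0.25]))
```

The posterior mean interpolates the training data (up to the small jitter `noise`), and the posterior variance shrinks near training points — the property that motivates posterior-weighted training-point selection.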

Tapio Helin, Andrew M. Stuart, Aretha L. Teckentrup, Konstantinos C. Zygalakis
Lattice-Based Kernel Approximation and Serendipitous Weights for Parametric PDEs in Very High Dimensions

We describe a fast method for solving elliptic partial differential equations (PDEs) with uncertain coefficients using kernel interpolation at a lattice point set. By representing the input random field of the system using the model proposed by Kaarnioja, Kuo, and Sloan (SIAM J. Numer. Anal. 2020), in which a countable number of independent random variables enter the random field as periodic functions, it was shown by Kaarnioja, Kazashi, Kuo, Nobile, and Sloan (Numer. Math. 2022) that the lattice-based kernel interpolant can be constructed for the PDE solution as a function of the stochastic variables in a highly efficient manner using fast Fourier transform (FFT). In this work, we discuss the connection between our model and the popular “affine and uniform model” studied widely in the literature of uncertainty quantification for PDEs with uncertain coefficients. We also propose a new class of product weights entering the construction of the kernel interpolant, which dramatically improve the computational performance of the kernel interpolant for PDE problems with uncertain coefficients, and allow us to tackle function approximation problems up to very high dimensionalities. Numerical experiments are presented to showcase the performance of the new weights.

Vesa Kaarnioja, Frances Y. Kuo, Ian H. Sloan
Optimal Algorithms for Numerical Integration: Recent Results and Open Problems

We present recent results on optimal algorithms for numerical integration and several open problems. The paper has six parts: 1. Introduction; 2. Lower Bounds; 3. Universality; 4. General Domains; 5. IID Information; 6. Concluding Remarks.

Erich Novak
Minimum Kernel Discrepancy Estimators

For two decades, reproducing kernels and their associated discrepancies have facilitated elegant theoretical analyses in the setting of quasi-Monte Carlo. These same tools are now receiving interest in statistics and related fields, as criteria that can be used to select an appropriate statistical model for a given dataset. The focus of this article is on minimum kernel discrepancy estimators, whose use in statistical applications is reviewed, and a general theoretical framework for establishing their asymptotic properties is presented.

Chris J. Oates
Error Estimates and Variance Reduction for Nonequilibrium Stochastic Dynamics

Equilibrium properties in statistical physics are obtained by computing averages with respect to Boltzmann–Gibbs measures, sampled in practice using ergodic dynamics such as the Langevin dynamics. Some quantities however cannot be computed by simply sampling the Boltzmann–Gibbs measure, in particular transport coefficients, which relate the current of some physical quantity of interest to the forcing needed to induce it. For instance, a temperature difference induces an energy current, the proportionality factor between these two quantities being the thermal conductivity. From an abstract point of view, transport coefficients can also be considered as some form of sensitivity analysis with respect to an added forcing to the baseline dynamics. There are various numerical techniques to estimate transport coefficients, which all suffer from large errors, in particular large statistical errors. This contribution reviews the most popular methods, namely the Green–Kubo approach where the transport coefficient is expressed as some time-integrated correlation function, and the approach based on longtime averages of the stochastic dynamics perturbed by an external driving (so-called nonequilibrium molecular dynamics). In each case, the various sources of errors are made precise, in particular the bias related to the time discretization of the underlying continuous dynamics, and the variance of the associated Monte Carlo estimators. Some recent alternative techniques to estimate transport coefficients are also discussed.

Gabriel Stoltz

Contributed Articles

Frontmatter
Heuristics for the Probabilistic Solution of BVPs with Mixed Boundary Conditions

The performance of stochastic numerical methods for bounded diffusions (accuracy, speed and weak rate of convergence) is considered. The backdrop is the pointwise solution, via stochastic representations, of boundary value problems with mixed boundary conditions. Three stochastic solvers are tested on application-inspired problems, and their performance is noticeably improved thanks to simple heuristics introduced here.

Francisco Bernal, Andrés Berridi
Challenges in Developing Great Quasi-Monte Carlo Software

Quasi-Monte Carlo (QMC) methods have developed over several decades. With the explosion in computational science, there is a need for great software that implements QMC algorithms. We summarize the QMC software that has been developed to date, propose some criteria for developing great QMC software, and suggest some steps toward achieving great software. We illustrate these criteria and steps with the Quasi-Monte Carlo Python library (QMCPy), an open-source community software framework, extensible by design with common programming interfaces to an increasing number of existing or emerging QMC libraries developed by the greater community of QMC researchers.

Sou-Cheng T. Choi, Yuhan Ding, Fred J. Hickernell, Jagadeeswaran Rathinavel, Aleksei G. Sorokin
Numerical Computation of Risk Functionals in PDMP Risk Models

We analyze the ruin event in a Markovian insurance risk model. For actual computations of risk functionals, we sketch different numerical approaches and focus on assessing the performance of a quantization algorithm. Since by nature ruin should be a rare event, it is necessary to deploy a variance reduction technique based on a proper change of measure.

Lea Enzi, Stefan Thonhauser
New Bounds for the Extreme and the Star Discrepancy of Double-Infinite Matrices

According to Aistleitner and Weimar, there exist two-dimensional (double) infinite matrices whose star-discrepancy $$D_N^{*s}$$ of the first N rows and s columns, interpreted as N points in $$[0,1]^s$$, satisfies an inequality of the form $$D_N^{*s} \le \sqrt{\alpha } \sqrt{A+B\frac{\ln (\log _2(N))}{s}}\sqrt{\frac{s}{N}}$$ with $$\alpha = \zeta ^{-1}(2) \approx 1.73$$, $$A=1165$$ and $$B=178$$. These matrices are obtained by using i.i.d. sequences, and the parameters s and N refer to the dimension and the sample size, respectively. In this paper, we improve their result in two directions: first, we change the character of the inequality so that the constant A gets replaced by a value $$A_s$$ depending on the dimension s such that $$A_s<A$$ for $$s>1$$. Second, we generalize the result to the case of the (extreme) discrepancy. The paper is complemented by a section where we show numerical results for the dependence of the parameter $$A_s$$ on s.

Jasmin Fiedler, Michael Gnewuch, Christian Weiß
Unbiased Likelihood Estimation of Wright–Fisher Diffusion Processes

In this paper we propose a Monte Carlo maximum likelihood estimation strategy for discretely observed Wright–Fisher diffusions. Our approach provides an unbiased estimator of the likelihood function and is based on exact simulation techniques that are of special interest for diffusion processes defined on a bounded domain, where numerical methods typically fail to remain within the required boundaries. We start by building unbiased likelihood estimators for scalar diffusions and later present an extension to the multidimensional case. Consistency results of our proposed estimator are also presented and the performance of our method is illustrated through numerical examples.

Celia García-Pareja, Fabio Nobile
Theory and Construction of Quasi-Monte Carlo Rules for Asian Option Pricing and Density Estimation

In this paper we propose and analyse a method for estimating three quantities related to an Asian option: the fair price, the cumulative distribution function, and the probability density. The method involves preintegration with respect to one well chosen integration variable to obtain a smooth function of the remaining variables, followed by the application of a tailored lattice Quasi-Monte Carlo rule to integrate over the remaining variables.

Alexander D. Gilbert, Frances Y. Kuo, Ian H. Sloan, Abirami Srikumar
Application of Dimension Truncation Error Analysis to High-Dimensional Function Approximation in Uncertainty Quantification

Parametric mathematical models such as parameterizations of partial differential equations with random coefficients have received a lot of attention within the field of uncertainty quantification. The model uncertainties are often represented via a series expansion in terms of the parametric variables. In practice, this series expansion needs to be truncated to a finite number of terms, introducing a dimension truncation error to the numerical simulation of a parametric mathematical model. There have been several studies of the dimension truncation error corresponding to different models of the input random field in recent years, but many of these analyses have been carried out within the context of numerical integration. In this paper, we study the $$L^2$$ dimension truncation error of the parametric model problem. Estimates of this kind arise in the assessment of the dimension truncation error for function approximation in high dimensions. In addition, we show that the dimension truncation error rate is invariant with respect to certain transformations of the parametric variables. Numerical results are presented which showcase the sharpness of the theoretical results.

Philipp A. Guth, Vesa Kaarnioja
Simple Stratified Sampling for Simulating Multi-dimensional Markov Chains

Monte Carlo (MC) is widely used for the simulation of discrete time Markov chains. We consider the case of a d-dimensional continuous state space and we restrict ourselves to chains where the d components are advanced independently from each other, with d random numbers used at each step. We simulate N copies of the chain in parallel, and we replace pseudorandom numbers on $$I^d := (0,1)^d$$ with stratified random points over $$I^{2d}$$: for each point, the first d components are used to select a state and the last d components are used to advance the chain by one step. We use a simple stratification technique: let p be an integer; then for $$N=p^{2d}$$ samples, the unit hypercube is dissected into N hypercubes of measure 1/N and there is one sample in each of them. The strategy outperforms classical MC if a well-chosen multivariate sort of the states is employed to order the chains at each step. We prove that the variance of the stratified sampling estimator is bounded by $$\mathcal {O}(N^{-(1+1/(2d))})$$, while it is $$\mathcal {O}(N^{-1})$$ for MC. In numerical experiments, we observe empirical rates that satisfy the bounds. We also compare with the Array-RQMC method.
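The stratification described above — one uniform sample in each of the $$N=p^{2d}$$ congruent sub-hypercubes of $$I^{2d}$$ — can be sketched as follows; the multivariate sort of the chain states, which is essential to the method's performance, is omitted:

```python
import itertools
import random

def stratified_points(p, d, seed=0):
    """One uniform point in each of the N = p^(2d) congruent sub-hypercubes
    of (0,1)^(2d): the first d coordinates would select a chain state, the
    last d would advance the chain by one step."""
    rng = random.Random(seed)
    dim = 2 * d
    points = []
    for cell in itertools.product(range(p), repeat=dim):
        # cell indexes a sub-hypercube; place one uniform point inside it
        points.append(tuple((c + rng.random()) / p for c in cell))
    return points

pts = stratified_points(p=2, d=1)  # N = p^(2d) = 4 points in (0,1)^2
```

Each axis of the 2d-dimensional cube is split into p intervals, giving exactly p^(2d) cells of measure 1/N with one sample apiece.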

Rami El Haddad, Christian Lécot, Pierre L’Ecuyer
Infinite-Variate $$L^2$$-Approximation with Nested Subspace Sampling

We consider $$L^2$$-approximation on weighted reproducing kernel Hilbert spaces of functions depending on infinitely many variables. We focus on unrestricted linear information, admitting evaluations of arbitrary continuous linear functionals. We distinguish between ANOVA and non-ANOVA spaces, where, by ANOVA spaces, we refer to function spaces whose norms are induced by an underlying ANOVA function decomposition. In ANOVA spaces, we provide an optimal algorithm to solve the approximation problem using linear information. We determine the upper and lower error bounds on the polynomial convergence rate of n-th minimal worst-case errors, which match if the weights decay regularly. For non-ANOVA spaces, we also establish upper and lower error bounds. Our analysis reveals that for weights with a regular and moderate decay behavior, the convergence rate of n-th minimal errors is strictly higher in ANOVA than in non-ANOVA spaces.

Kumar Harsha, Michael Gnewuch, Marcin Wnuk
Randomized Complexity of Vector-Valued Approximation

We study the randomized nth minimal errors (and hence the complexity) of vector-valued approximation. In a recent paper by the author [Randomized complexity of parametric integration and the role of adaption I: finite dimensional case, J. Complex. 82 (2024), 101821] a long-standing problem of Information-Based Complexity was solved: is there a constant $$c>0$$ such that for all linear problems $$\mathcal {P}$$ the randomized non-adaptive and adaptive nth minimal errors can deviate at most by a factor of c? That is, does $$e_n^\mathrm{ran-non} (\mathcal {P})\le c\, e_n^\textrm{ran} (\mathcal {P})$$ hold for all linear $$\mathcal {P}$$ and $$n\in {\mathbb N}$$? The analysis of vector-valued mean computation showed that the answer is negative. More precisely, there are instances of this problem where the gap between non-adaptive and adaptive randomized minimal errors can be (up to log factors) of order $$n^{1/8}$$. This raises the question of the maximal possible deviation. In this paper we show that for certain instances of vector-valued approximation the gap is $$n^{1/2}$$ (again, up to log factors).

Stefan Heinrich
Quasi-Monte Carlo Algorithms (Not Only) for Graphics Software

Quasi-Monte Carlo methods have become the industry standard in computer graphics. For that purpose, efficient algorithms for low discrepancy sequences are discussed. In addition, numerical pitfalls encountered in practice are revealed. We then take a look at massively parallel quasi-Monte Carlo integro-approximation for image synthesis by light transport simulation. Beyond superior uniformity, low discrepancy points may be optimized with respect to additional criteria, such as noise characteristics at low sampling rates or the quality of low-dimensional projections.
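A standard building block for such low discrepancy sequences is the radical inverse; below is a minimal sketch of the classic Halton construction, not the optimized point sets this article discusses:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse: mirror the base-b digits of i
    about the radix point."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += (i % base) * f
        i //= base
        f /= base
    return result

def halton(i, bases=(2, 3)):
    """i-th Halton point; each coordinate uses a distinct (coprime) base."""
    return tuple(radical_inverse(i, b) for b in bases)

samples = [halton(i) for i in range(1, 9)]
```

In production renderers the digit loop is a well-known source of the numerical pitfalls alluded to above (e.g. finite float precision capping the number of usable digits), which is one reason efficient, carefully implemented variants matter.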

Alexander Keller, Carsten Wächter, Nikolaus Binder
Sequential Estimation Using Hierarchically Stratified Domains with Latin Hypercube Sampling

Quantifying the effect of uncertainties in computationally complex systems where only point evaluations in the stochastic domain but no regularity conditions are available is limited to sampling-based techniques. This work presents an adaptive sequential stratification estimation method that uses Latin Hypercube Sampling within each stratum. The adaptation is achieved through a sequential hierarchical refinement of the stratification, guided by previous estimators using local (i.e., stratum-dependent) variability indicators based on generalized polynomial chaos expansions and Sobol decompositions. For a given total number of samples N, the corresponding hierarchically constructed sequence of Stratified Sampling estimators combined with Latin Hypercube Sampling is adequately averaged to provide a final estimator with reduced variance. Numerical experiments illustrate the procedure’s efficiency, indicating that it can offer a variance decay proportional to $$N^{-2}$$ in some cases.
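A minimal sketch of Latin Hypercube Sampling, the within-stratum sampler used by the method; the hierarchical stratification and variability indicators are not shown:

```python
import random

def latin_hypercube(n, d, seed=0):
    """n points in (0,1)^d such that each of the n equal-width slabs
    per dimension contains exactly one point."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)               # random assignment of slabs to points
        cols.append([(k + rng.random()) / n for k in perm])
    return list(zip(*cols))

pts = latin_hypercube(5, 2)
```

Projecting the points onto any single axis always hits all n slabs, which is what reduces variance for integrands dominated by low-order effects.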

Sebastian Krumscheid, Per Pettersson
Comparison of Two Search Criteria for Lattice-Based Kernel Approximation

The kernel interpolant in a reproducing kernel Hilbert space is optimal in the worst-case sense among all approximations of a function using the same set of function values. In this paper, we compare two search criteria to construct lattice point sets for use in lattice-based kernel approximation. The first candidate, $${\mathcal {P}}_n^*$$, is based on the power function that appears in machine learning literature. The second, $${\mathcal {S}}_n^*$$, is a search criterion used for generating lattices for approximation using truncated Fourier series. We find that the empirical difference in error between the lattices constructed using $${\mathcal {P}}_n^*$$ and $${\mathcal {S}}_n^*$$ is marginal. The criterion $${\mathcal {S}}_n^*$$ is preferred as it is computationally more efficient and has a bound with a superior convergence rate.

Frances Y. Kuo, Weiwen Mo, Dirk Nuyens, Ian H. Sloan, Abirami Srikumar
A-Posteriori QMC-FEM Error Estimation for Bayesian Inversion and Optimal Control with Entropic Risk Measure

We propose a novel a-posteriori error estimation technique where the target quantities of interest are ratios of high-dimensional integrals, as occur e.g. in PDE constrained Bayesian inversion and PDE constrained optimal control subject to an entropic risk measure. We consider in particular parametric, elliptic PDEs with affine-parametric diffusion coefficient, on high-dimensional parameter spaces. We combine our recent a-posteriori Quasi-Monte Carlo (QMC) error analysis, with Finite Element a-posteriori error estimation. The proposed approach yields a computable a-posteriori estimator which is reliable, up to higher order terms. The estimator’s reliability is uniform with respect to the PDE discretization, and robust with respect to the parametric dimension of the uncertain PDE input.

Marcello Longo, Christoph Schwab, Andreas Stein
Reversible Random Number Generation for Adjoint Monte Carlo Simulation of the Heat Equation

In PDE-constrained optimization, one aims to find design parameters that minimize some objective, subject to the satisfaction of a partial differential equation. A major challenge is computing gradients of the objective with respect to the design parameters, as applying the chain rule requires the Jacobian of the PDE's state with respect to the design parameters. The adjoint method avoids this Jacobian by computing partial derivatives of a Lagrangian. Evaluating these derivatives requires the solution of a second PDE with the adjoint of the constraint's differential operator, resulting in a backwards-in-time simulation. Particle-based Monte Carlo solvers are often used to compute the solution to high-dimensional PDEs. However, such solvers have the drawback of introducing noise to the computed results, thus requiring stochastic optimization methods. To guarantee convergence in this setting, both the constraint and adjoint Monte Carlo simulations should simulate the same particle trajectories. For large simulations, storing full paths from the constraint equation for re-use in the adjoint equation becomes infeasible due to memory limitations. In this paper, we provide a reversible extension to the family of permuted congruential pseudorandom number generators (PCG). We then use such a generator to recompute these time-reversed paths for the heat equation, avoiding these memory issues.
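The reversibility hinges on the fact that the linear congruential state transition at PCG's core is invertible modulo $$2^{64}$$; a minimal sketch using the well-known 64-bit multiplier, with PCG's output permutation and the paper's extension omitted:

```python
MULT = 6364136223846793005         # 64-bit LCG multiplier used by the PCG family
INC = 1442695040888963407          # an odd increment (full period with this MULT)
MASK = (1 << 64) - 1
MULT_INV = pow(MULT, -1, 1 << 64)  # modular inverse exists since MULT is odd

def step_forward(state):
    """One LCG state transition, the invertible core of PCG."""
    return (state * MULT + INC) & MASK

def step_backward(state):
    """Undo one transition via the modular inverse of the multiplier."""
    return ((state - INC) * MULT_INV) & MASK
```

Stepping the state backwards regenerates the same random stream in reverse order, so adjoint particle paths can be recomputed instead of stored.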

Emil Løvbak, Frédéric Blondeel, Adam Lee, Lander Vanroye, Andreas Van Barel, Giovanni Samaey
Large Scale Gaussian Processes with Matheron’s Update Rule and Karhunen-Loève Expansion

Gaussian processes have become essential for nonparametric function estimation and are widely used in many fields, like machine learning. In this paper, large scale Gaussian process regression (GPR) is investigated. This problem is related to the simulation of high-dimensional Gaussian vectors truncated on the intersection of a set of hyperplanes. The main idea is to combine both Matheron’s update rule (MUR) and Karhunen-Loève expansion (KLE). First, by the MUR we show that simulating from the posterior distribution can be achieved without computing the posterior covariance matrix and its decomposition. Second, by splitting the input domain into small nonoverlapping subdomains, the KLE coefficients are conditioned in order to guarantee the correlation structure in the entire domain. The parallelization of this technique is developed and the advantages are highlighted. Through this, the computational complexity is drastically reduced. The mean-square global block error is computed. It provides accurate results when using a family of covariance functions with compact support. Some numerical examples to study the performance of the proposed approach are included.

Hassan Maatouk, Didier Rullière, Xavier Bay
The Order Barrier for the $$L^1$$-Approximation of the Log-Heston SDE at a Single Point

We study the $$L^1$$-approximation of the log-Heston SDE at the terminal time point by arbitrary methods that use an equidistant discretization of the driving Brownian motion. We show that such methods can achieve at most order $$\min \{ \nu , \tfrac{1}{2} \}$$, where $$\nu $$ is the Feller index of the underlying CIR process. As a consequence, Euler-type schemes are optimal for $$\nu \ge 1$$, since they have convergence order $$\tfrac{1}{2}-\epsilon $$ for arbitrarily small $$\epsilon >0$$ in this regime.

Annalena Mickel, Andreas Neuenkirch
A Randomised Lattice Rule Algorithm with Pre-determined Generating Vector and Random Number of Points for Korobov Spaces with $$0 < \alpha \le 1/2$$

In previous work [12], we showed that a lattice rule with a pre-determined generating vector but random number of points can achieve the near optimal convergence of $$O(n^{-\alpha -1/2+\epsilon })$$, $$\epsilon > 0$$, for the worst case expected error, commonly referred to as the randomised error, for numerical integration of high-dimensional functions in the Korobov space with smoothness $$\alpha > 1/2$$. Compared to the optimal deterministic rate of $$O(n^{-\alpha +\epsilon })$$, $$\epsilon > 0$$, such a randomised algorithm gains an extra half in the rate of convergence. In this paper, we show that a pre-determined generating vector also exists in the case of $$0 < \alpha \le 1/2$$. Here too we obtain the near optimal convergence of $$O(n^{-\alpha -1/2+\epsilon })$$, $$\epsilon > 0$$; in more detail, we obtain $$O(\sqrt{r} \, n^{-\alpha -1/2+1/(2r)+\epsilon '})$$, which holds for any choices of $$\epsilon ' > 0$$ and $$r \in {\mathbb {N}}$$ with $$r > 1/(2\alpha )$$.
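For context, a minimal sketch of a randomly shifted rank-1 lattice rule; the generating vector below is an arbitrary illustration, not one produced by the paper's construction, and the number of points is fixed here rather than random:

```python
import math
import random

def shifted_lattice_rule(f, n, z, seed=0):
    """Randomly shifted rank-1 lattice rule: average f over the points
    {(i*z/n + shift) mod 1 : i = 0, ..., n-1}."""
    rng = random.Random(seed)
    shift = [rng.random() for _ in z]
    total = 0.0
    for i in range(n):
        total += f([((i * zj) / n + sj) % 1.0 for zj, sj in zip(z, shift)])
    return total / n

# Smooth periodic integrand with exact integral 1 over the unit cube
f = lambda x: math.prod(1.0 + 0.5 * math.sin(2.0 * math.pi * t) for t in x)
approx = shifted_lattice_rule(f, n=1021, z=[1, 306, 388])
```

For this trigonometric-polynomial integrand the rule is exact up to floating-point error, since no nonzero frequency vector of the integrand lies in the dual of this lattice.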

Dirk Nuyens, Laurence Wilkes
Generator Matrices by Solving Integer Linear Programs

In quasi-Monte Carlo methods, generating high-dimensional low discrepancy sequences by generator matrices is a popular and efficient approach. Historically, constructing or finding such generator matrices has been a hard problem. In particular, it is challenging to take advantage of the intrinsic structure of a given numerical problem to design samplers of low discrepancy in certain subsets of dimensions. To address this issue, we devise a greedy algorithm allowing us to translate desired net properties into linear constraints on the generator matrix entries. Solving the resulting integer linear program yields generator matrices that satisfy the desired net properties. We demonstrate that our method finds generator matrices in challenging settings, offering low discrepancy sequences beyond the limitations of classic constructions.

Loïs Paulin, David Coeurjolly, Nicolas Bonneel, Jean-Claude Iehl, Victor Ostromoukhov, Alexander Keller
Multi-fidelity No-U-Turn Sampling

Markov Chain Monte Carlo (MCMC) methods often take many iterations to converge for highly correlated or high-dimensional target density functions. Methods such as Hamiltonian Monte Carlo (HMC) or No-U-Turn Sampling (NUTS) use the first-order derivative of the density function to tackle the aforementioned issues. However, the calculation of the derivative represents a bottleneck for computationally expensive models. We propose to first build a multi-fidelity Gaussian Process (GP) surrogate. The building block of the multi-fidelity surrogate is a hierarchy of models of decreasing approximation error and increasing computational cost. The generated multi-fidelity surrogate is then used to approximate the derivative. The majority of the computation is assigned to the cheap models, thereby reducing the overall computational cost. The derivative of the multi-fidelity surrogate is used to explore the target density function and generate proposals. We accept or reject the proposals using the Metropolis–Hastings criterion evaluated on the highest-fidelity model, which ensures that the proposed method is ergodic with respect to the highest-fidelity density function. We apply the proposed method to three test cases, including some well-known benchmarks, to compare it with existing methods and show that multi-fidelity No-U-Turn sampling outperforms them.

Kislaya Ravi, Tobias Neckel, Hans-Joachim Bungartz
Convergence of the Euler–Maruyama Particle Scheme for a Regularised McKean–Vlasov Equation Arising from the Calibration of Local-Stochastic Volatility Models

In this paper, we study the Euler–Maruyama scheme for a particle method to approximate the McKean–Vlasov dynamics of calibrated local-stochastic volatility (LSV) models. Given the open question of well-posedness of the original problem, we work with regularised coefficients and prove that under certain assumptions on the inputs, the regularised model is well-posed. Using this result, we prove the strong convergence of the Euler–Maruyama scheme to the particle system with rate 1/2 in the step-size and obtain an explicit dependence of the error on the regularisation parameters. Finally, we implement the particle method for the calibration of a Heston-type LSV model to illustrate the convergence in practice and to investigate how the choice of regularisation parameters affects the accuracy of the calibration.

Christoph Reisinger, Maria Olympia Tsianni
On Bounding and Approximating Functions of Multiple Expectations Using Quasi-Monte Carlo

Monte Carlo and Quasi-Monte Carlo methods present a convenient approach for approximating the expected value of a random variable. Algorithms exist to adaptively sample the random variable until a user-defined absolute error tolerance is satisfied with high probability. This work describes an extension of such methods which supports adaptive sampling to satisfy general error criteria for functions of a common array of expectations. Although several functions involving multiple expectations are being evaluated, only one random sequence is required, albeit sometimes of larger dimension than the underlying randomness. These enhanced Monte Carlo and Quasi-Monte Carlo algorithms are implemented in the QMCPy Python package with support for economic and parallel function evaluation. We exemplify these capabilities on problems from machine learning and global sensitivity analysis.

Aleksei G. Sorokin, Jagadeeswaran Rathinavel
Convergence of the Tamed-Euler–Maruyama Method for SDEs with Discontinuous and Polynomially Growing Drift

Numerical methods for SDEs with irregular coefficients are intensively studied in the literature, with different types of irregularities usually being attacked separately. In this paper we combine two different types of irregularities: polynomially growing drift coefficients and discontinuous drift coefficients. For SDEs that suffer from both irregularities we prove strong convergence of order 1/2 of the tamed-Euler–Maruyama scheme from [10].

Kathrin Spendier, Michaela Szölgyenyi
QMC Strength for Some Random Configurations on the Sphere

A sequence $$(X_N)\subset \mathbb {S}^d$$ of N-point sets from the d-dimensional sphere has QMC strength $$s^*>d/2$$ if it has worst-case error of optimal order $$N^{-s/d}$$ for Sobolev spaces of order s for all $$d/2<s<s^*$$, and the order is not optimal for $$s> s^*$$. In [15], conjectured values of the strength are given for some well-known point families in $$\mathbb S^2$$, based on numerical results. We study the average QMC strength for some related random configurations.

Víctor de la Torre, Jordi Marzo
Multilevel MCMC with Level-Dependent Data in a Model Case of Structural Damage Assessment

We discuss a generalisation of the multilevel Markov Chain Monte Carlo algorithm for a Bayesian inverse problem with high-resolution (full-field) data. We extend the method to include a level-dependent treatment of the data, which is useful for very high-resolution data that cannot be represented by the much lower resolution of the forward problem at coarser levels. The approach is illustrated using a model case situated in a context of structural health monitoring, additionally providing a new application domain for the multilevel Markov Chain Monte Carlo methodology.

Pieter Vanmechelen, Geert Lombaert, Giovanni Samaey
A Note on Compact Embeddings of Reproducing Kernel Hilbert Spaces in $$L^2$$ and Infinite-Variate Function Approximation

This note consists of two largely independent parts. In the first part we give conditions on the kernel $$k: \Omega \times \Omega \rightarrow \mathbb {R}$$ of a reproducing kernel Hilbert space H continuously embedded via the identity mapping into $$L^2(\Omega , \mu )$$ which are equivalent to the fact that H is even compactly embedded into $$L^2(\Omega , \mu )$$. In the second part we consider a scenario from infinite-variate $$L^2$$-approximation. Suppose that the embedding of a reproducing kernel Hilbert space of univariate functions with reproducing kernel $$1+k$$ into $$L^2(\Omega , \mu )$$ is compact. We provide a simple criterion for checking compactness of the embedding of a reproducing kernel Hilbert space with the kernel given by $$\sum _{u \in \mathcal {U}} \gamma _u \bigotimes _{j \in u}k$$, where $$\mathcal {U} = \{u \subset \mathbb {N}: |u| < \infty \}$$ and $$\gamma = (\gamma _u)_{u \in \mathcal {U}}$$ is a family of non-negative numbers, into an appropriate $$L^2$$ space.

Marcin Wnuk
Metadata
Title
Monte Carlo and Quasi-Monte Carlo Methods
Editors
Aicke Hinrichs
Peter Kritzer
Friedrich Pillichshammer
Copyright Year
2024
Electronic ISBN
978-3-031-59762-6
Print ISBN
978-3-031-59761-9
DOI
https://doi.org/10.1007/978-3-031-59762-6
