2009 | Book

Monte Carlo and Quasi-Monte Carlo Methods 2008

Edited by: Pierre L'Ecuyer, Art B. Owen

Publisher: Springer Berlin Heidelberg


Table of Contents

Frontmatter

Tutorials

Frontmatter
Monte Carlo and Quasi-Monte Carlo for Statistics

This article reports on the contents of a tutorial session at MCQMC 2008. The tutorial explored various places in statistics where Monte Carlo methods can be used. There was a special emphasis on areas where Quasi-Monte Carlo ideas have been or could be applied, as well as areas that look like they need more research.

Art B. Owen
Monte Carlo Computation in Finance

This advanced tutorial aims at an exposition of problems in finance that are worthy of study by the Monte Carlo research community. It describes problems in valuing and hedging securities, risk management, portfolio optimization, and model calibration. It surveys some areas of active research in efficient procedures for simulation in finance and addresses the impact of the business context on the opportunities for efficiency. There is an emphasis on the many challenging problems in which it is necessary to perform several similar simulations.

Jeremy Staum

Invited Articles

Frontmatter
Particle Markov Chain Monte Carlo for Efficient Numerical Simulation

Markov Chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) methods are the two most popular classes of algorithms used to sample from general high-dimensional probability distributions. The theoretical convergence of MCMC algorithms is ensured under weak assumptions, but their practical performance is notoriously unsatisfactory when the proposal distributions used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to systematically design potentially very efficient high-dimensional proposal distributions for MCMC by using SMC techniques. We demonstrate how this novel approach allows us to design effective MCMC algorithms in complex scenarios. This is illustrated by a problem of Bayesian inference for a stochastic kinetic model.

Christophe Andrieu, Arnaud Doucet, Roman Holenstein
Computational Complexity of Metropolis-Hastings Methods in High Dimensions

This article contains an overview of the literature concerning the computational complexity of Metropolis-Hastings based MCMC methods for sampling probability measures on $\mathbb{R}^d$, when the dimension $d$ is large. The material is structured in three parts addressing, in turn, the following questions: (i) what are sensible assumptions to make on the family of probability measures indexed by $d$? (ii) what is known concerning computational complexity for Metropolis-Hastings methods applied to these families? (iii) what remains open in this area?

Alexandros Beskos, Andrew Stuart
On Quasi-Monte Carlo Rules Achieving Higher Order Convergence

Quasi-Monte Carlo rules which can achieve arbitrarily high order of convergence have been introduced recently. The construction is based on digital nets and the analysis of the integration error uses Walsh functions. Various approaches have been used to show arbitrarily high convergence. In this paper we explain the ideas behind higher order quasi-Monte Carlo rules by leaving out most of the technical details and focusing on the main ideas.

Josef Dick
Sensitivity Estimates for Compound Sums

We derive unbiased derivative estimators for expectations of functions of random sums, where differentiation is taken with respect to a parameter of the number of terms in the sum. As the number of terms is integer-valued, its derivative is zero wherever it exists. Nevertheless, we present two constructions that make the sum continuous even when the number of terms is not. We present a locally continuous construction that preserves continuity across a single change in the number of terms and a globally continuous construction specific to the compound Poisson case. This problem is motivated by two applications in finance: approximating Lévy-driven models of asset prices and exact sampling of a stochastic volatility process.

Paul Glasserman, Kyoung-Kuk Kim
New Perspectives on (0,s)-Sequences

Low-discrepancy sequences that have an optimal value of 0 for their $t$-parameter have always been of interest to both theorists and practitioners. However, in practice the Sobol' sequence often performs better than the original $(0,s)$-sequences in prime bases proposed by Faure in 1982, although the former construction does not have an optimal value of 0 for its $t$-parameter. In this paper, we introduce new ideas that can be used to find improved constructions for $(0,s)$-sequences in prime bases. To do so, we study them within the framework of generalized Niederreiter sequences, which was introduced by Tezuka in 1993. We take a closer look at the structure of the corresponding generating matrices, as this helps us to better understand the differences and analogies between the constructions that we are interested in. This study is then used to guide our search for improved $(0,s)$-sequences, which are shown to perform well on a variety of problems.

Christiane Lemieux, Henri Faure
Variable Subspace Sampling and Multi-level Algorithms

We survey recent results on numerical integration with respect to measures $\mu$ on infinite-dimensional spaces, e.g., Gaussian measures on function spaces or distributions of diffusion processes on the path space. Emphasis is given to the class of multi-level Monte Carlo algorithms and, more generally, to variable subspace sampling and the associated cost model. In particular we investigate integration of Lipschitz functionals. Here we establish a close relation between quadrature by means of randomized algorithms and Kolmogorov widths and quantization numbers of $\mu$. Suitable multi-level algorithms turn out to be almost optimal in the Gaussian case and in the diffusion case.

Thomas Müller-Gronbach, Klaus Ritter
Markov Chain Monte Carlo Algorithms: Theory and Practice

We describe the importance and widespread use of Markov chain Monte Carlo (MCMC) algorithms, with an emphasis on the ways in which theoretical analysis can help with their practical implementation. In particular, we discuss how to achieve rigorous quantitative bounds on convergence to stationarity using the coupling method together with drift and minorisation conditions. We also discuss recent advances in the field of adaptive MCMC, where the computer iteratively selects from among many different MCMC algorithms. Such adaptive MCMC algorithms may fail to converge if implemented naively, but they will converge correctly if certain conditions such as Diminishing Adaptation are satisfied.

Jeffrey S. Rosenthal
MINT – New Features and New Results

$(t,m,s)$-nets are among the best methods for the construction of low-discrepancy point sets in the $s$-dimensional unit cube. Various types of constructions and bounds are known today. Additionally there exist many propagation rules connecting nets to other mathematical objects.

The MinT database developed by the authors is one of the most elaborate and convenient tools for accessing information on many aspects of nets. In this article we provide new information about MinT.

We also develop several new constructions by generalizing methods from coding theory and show how these methods can be used for obtaining new $(t,m,s)$-nets. In many cases the development of these new methods has been guided by MinT.

Rudolf Schürer, Wolfgang Ch. Schmid

Contributed Articles

Frontmatter
Recursive Computation of Value-at-Risk and Conditional Value-at-Risk using MC and QMC

Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) are two widely used measures in risk management. This paper deals with the problem of estimating both VaR and CVaR using stochastic approximation (with decreasing steps): we propose a first Robbins-Monro (RM) procedure based on Rockafellar-Uryasev's identity for the CVaR. The estimator provided by the algorithm satisfies a Gaussian central limit theorem. As a second step, in order to speed up the initial procedure, we propose a recursive and adaptive importance sampling (IS) procedure which induces a significant variance reduction of both VaR and CVaR procedures. This idea, which has been investigated by many authors, follows a new approach introduced in Lemaire and Pagès [20]. Finally, to speed up the initialization phase of the IS algorithm, we replace the original confidence level of the VaR by a deterministic moving risk level. We prove that the weak convergence rate of the resulting procedure is ruled by a central limit theorem with minimal variance, and we illustrate its efficiency by considering typical energy portfolios.

Olivier Bardou, Noufel Frikha, Gilles Pagès
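
To make the recursion above concrete, here is a minimal sketch of the Robbins-Monro procedure built on the Rockafellar-Uryasev identity $\mathrm{CVaR}_\alpha=\min_\xi\{\xi+E[(X-\xi)^+]/(1-\alpha)\}$, without the importance-sampling acceleration the paper adds; the Gaussian loss and the step sizes $\gamma_k=1/k$ are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def var_cvar_rm(sample_loss, alpha=0.95, n_iter=100_000, seed=0):
    """Robbins-Monro recursion for VaR/CVaR from the Rockafellar-Uryasev
    representation; `sample_loss(rng)` draws one copy of the loss X."""
    rng = np.random.default_rng(seed)
    xi, cvar = 0.0, 0.0
    for k in range(1, n_iter + 1):
        x = sample_loss(rng)
        gamma = 1.0 / k                              # decreasing steps
        h = 1.0 - (x >= xi) / (1.0 - alpha)          # gradient of RU objective
        v = xi + max(x - xi, 0.0) / (1.0 - alpha)    # RU objective sample
        xi -= gamma * h                              # VaR recursion
        cvar += gamma * (v - cvar)                   # running CVaR average
    return xi, cvar

# toy run with a standard Gaussian loss (VaR_0.95 ~ 1.645, CVaR_0.95 ~ 2.06)
var_hat, cvar_hat = var_cvar_rm(lambda rng: rng.standard_normal())
```
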
Adaptive Monte Carlo Algorithms Applied to Heterogeneous Transport Problems

We apply three generations of geometrically convergent adaptive Monte Carlo algorithms to solve a model transport problem with severe heterogeneities in energy. In the first generation algorithms an arbitrarily precise solution of the transport equation is sought pointwise. In the second generation algorithms the solution is represented more economically as a vector of regionwise averages over a fixed uniform phase space decomposition. The economy of this representation provides geometric reduction in error to a precision limited by the granularity of the imposed phase space decomposition. With the third generation algorithms we address the question of how the second generation uniform phase space subdivision should be refined in order to achieve additional geometric learning. A refinement strategy is proposed based on an information density function that combines information from the transport equation and its dual.

Katherine Bhan, Rong Kong, Jerome Spanier
Efficient Simulation of Light-Tailed Sums: an Old-Folk Song Sung to a Faster New Tune...

We revisit a classical problem in rare-event simulation, namely, efficient estimation of the probability that the sample mean of $n$ independent identically distributed light-tailed (i.e., with finite moment generating function in a neighborhood of the origin) random variables lies in a sufficiently regular closed convex set that does not contain their mean. It is well known that the optimal exponential tilting (OET), although logarithmically efficient, is not strongly efficient (typically, the squared coefficient of variation of the estimator grows at rate $n^{1/2}$). After discussing some important differences between the optimal change of measure and OET (for instance, in the one-dimensional case the size of the overshoot is bounded for the optimal importance sampler and of order $O(n^{1/2})$ for OET) that indicate why OET is not strongly efficient, we provide a state-dependent importance sampling scheme that can be proved to be strongly efficient. Our procedure is obtained by computing the optimal tilting at each step, which corresponds to the solution of the Isaacs equation studied recently by Dupuis and Wang [8].

Jose H. Blanchet, Kevin Leder, Peter W. Glynn
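
As context for the abstract's claim, the following sketch implements the plain OET baseline (not the paper's state-dependent sampler) for i.i.d. $N(0,1)$ summands, an assumed toy case where the cumulant generating function is $\psi(\theta)=\theta^2/2$ and the optimal tilt solving $\psi'(\theta)=a$ is $\theta^*=a$.

```python
import numpy as np

def oet_estimate(a=1.0, n=100, n_rep=10_000, seed=0):
    """OET estimator of P(S_n / n >= a) for i.i.d. N(0,1) summands:
    sample under the tilted law N(a, 1) and reweight by the
    likelihood ratio exp(-t* S_n + n psi(t*)) with t* = a."""
    rng = np.random.default_rng(seed)
    t_star, psi = a, 0.5 * a * a
    x = rng.normal(loc=a, scale=1.0, size=(n_rep, n))   # tilted samples
    s = x.sum(axis=1)
    est = np.exp(-t_star * s + n * psi) * (s / n >= a)  # reweighted indicator
    return est.mean(), est.std(ddof=1) / np.sqrt(n_rep)

prob, stderr = oet_estimate()
```
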
Distribution of Digital Explicit Inversive Pseudorandom Numbers and Their Binary Threshold Sequence

We study the distribution of

s

-dimensional points of digital explicit inversive pseudorandom numbers with arbitrary lags. We prove a discrepancy bound and derive results on the pseudorandomness of the binary threshold sequence derived from digital explicit inversive pseudorandom numbers in terms of bounds on the correlation measure of order

k

and the linear complexity profile. The proofs are based on bounds on exponential sums and earlier relations of Mauduit, Niederreiter and Sárközy between discrepancy and correlation measure of order

k

and of Brandstätter and the third author between correlation measure of order

k

and linear complexity profile, respectively.

Zhixiong Chen, Domingo Gomez, Arne Winterhof
Extensions of Fibonacci Lattice Rules

We study the trigonometric degree of pairs of embedded cubature rules for the approximation of two-dimensional integrals, where the basic cubature rule is a Fibonacci lattice rule. The embedded cubature rule is constructed by simply doubling the points, which results in adding a shifted version of the basic Fibonacci rule. An explicit expression is derived for the trigonometric degree of this particular extension of the Fibonacci rule based on the index of the Fibonacci number.

Ronald Cools, Dirk Nuyens
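
For readers unfamiliar with the basic rule being extended, this is a short sketch of the two-dimensional Fibonacci lattice rule with generator $(1, F_{k-1})$ modulo $F_k$; the extension by point doubling discussed in the abstract is not reproduced here.

```python
def fibonacci_lattice(k):
    """Two-dimensional Fibonacci lattice rule:
    x_i = (i/F_k, {i F_{k-1}/F_k}) for i = 0, ..., F_k - 1."""
    F = [1, 1]
    while len(F) <= k:
        F.append(F[-1] + F[-2])
    n, g = F[k], F[k - 1]
    return [(i / n, (i * g % n) / n) for i in range(n)]

pts = fibonacci_lattice(10)   # F_10 = 89 points with generator (1, 55)
```
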
Efficient Search for Two-Dimensional Rank-1 Lattices with Applications in Graphics

Selecting rank-1 lattices with respect to maximized mutual minimum distance has been shown to be very useful for image representation and synthesis in computer graphics. While algorithms using rank-1 lattices are very simple and efficient, the selection of their generator vectors often has to resort to exhaustive computer searches, which are prohibitively slow. For the two-dimensional setting, we introduce an efficient approximate search algorithm and transfer the principle to the search for maximum minimum distance rank-1 lattice sequences. We then extend the search for rank-1 lattices to approximate a given spectrum and present new algorithms for anti-aliasing and texture representation in computer graphics.

Sabrina Dammertz, Holger Dammertz, Alexander Keller
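
The brute-force baseline that the paper's approximate algorithm is designed to avoid can be sketched as follows; the group structure of a rank-1 lattice means the minimum toroidal distance can be measured against the origin alone. Parameter names are ours.

```python
import math

def min_toroidal_dist(n, a):
    """Minimum toroidal distance of the rank-1 lattice
    {(i/n, {i a/n})}: the points form a group under addition mod 1,
    so only distances to the origin need to be checked."""
    best = math.inf
    for i in range(1, n):
        dx = min(i / n, 1.0 - i / n)
        y = (i * a % n) / n
        dy = min(y, 1.0 - y)
        best = min(best, math.hypot(dx, dy))
    return best

def exhaustive_generator_search(n):
    """Brute-force O(n^2) reference search over generator vectors (1, a)."""
    return max(range(1, n), key=lambda a: min_toroidal_dist(n, a))

a_best = exhaustive_generator_search(144)
```
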
Parallel Random Number Generators Based on Large Order Multiple Recursive Generators

Classical random number generators like Linear Congruential Generators (LCGs) and Multiple Recursive Generators (MRGs) are popular for large-scale simulation studies. To speed up the simulation process, a systematic method is needed to construct and parallelize the random number generators so that they can run simultaneously on several computers or processors. LCGs and MRGs have served as baseline generators for some parallel random number generators (PRNGs) constructed in the literature. In this paper, we consider the parallelization problem particularly for a general class of efficient and portable large-order MRGs, in which most coefficients of the recurrence equation are nonzero. With many nonzero terms, such MRGs have an advantage over MRGs with just a few nonzero terms: they can recover more quickly from a bad initialization. With the special structure imposed on the nonzero coefficients of the new class of generators, the proposed PRNGs can be implemented efficiently. A method of automatic generation of the corresponding MRGs for parallel computation is presented.

Lih-Yuan Deng, Jyh-Jen Horng Shiau, Gwei-Hung Tsai
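
A generic MRG recurrence $x_n=(a_1x_{n-1}+\cdots+a_kx_{n-k})\bmod p$, the object being parallelized here, can be sketched as below; the tiny order and coefficients are purely illustrative and are not the large-order, many-nonzero-coefficient generators constructed in the paper.

```python
class MRG:
    """Multiple recursive generator (sketch):
    x_n = (a_1 x_{n-1} + ... + a_k x_{n-k}) mod p, output u_n = x_n / p."""
    def __init__(self, coeffs, seed_state, p=2**31 - 1):
        self.a, self.p = list(coeffs), p
        self.state = list(seed_state)          # [x_{n-1}, ..., x_{n-k}]
    def next(self):
        x = sum(a * s for a, s in zip(self.a, self.state)) % self.p
        self.state = [x] + self.state[:-1]     # shift the recurrence window
        return x / self.p

gen = MRG(coeffs=[16807] * 5, seed_state=[1, 2, 3, 4, 5])  # toy order k = 5
u = [gen.next() for _ in range(3)]
```
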
Efficient Numerical Inversion for Financial Simulations

Generating samples from generalized hyperbolic distributions and non-central chi-square distributions by inversion has become an important task for the simulation of recent models in finance in the framework of (quasi-) Monte Carlo. However, their distribution functions are quite expensive to evaluate and thus numerical methods like root finding algorithms are extremely slow. In this paper we demonstrate how our new method based on Newton interpolation and Gauss-Lobatto quadrature can be utilized for financial applications. Its fast marginal generation times make it competitive, even for situations where the parameters are not always constant.

Gerhard Derflinger, Wolfgang Hörmann, Josef Leydold, Halis Sak
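
The trade-off the abstract describes, an expensive one-time setup in exchange for fast marginal generation, can be illustrated with the simplified inversion sketch below. It substitutes a monotone spline on a tabulated quantile grid for the paper's Newton-interpolation/Gauss-Lobatto machinery, so it only mimics the cost profile, not the method; the non-central chi-square parameters are arbitrary.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator
from scipy.optimize import brentq
from scipy.stats import ncx2

def make_inverse_cdf(dist, lo, hi, n_nodes=256):
    """Tabulate the quantile function once by root finding on the CDF,
    then return a cheap monotone interpolant (tails outside [lo, hi]
    are effectively truncated)."""
    u = np.linspace(dist.cdf(lo), dist.cdf(hi), n_nodes)
    q = np.array([brentq(lambda x, ui=ui: dist.cdf(x) - ui, lo, hi)
                  for ui in u])
    return PchipInterpolator(u, q)

inv = make_inverse_cdf(ncx2(df=4.0, nc=2.5), lo=1e-8, hi=60.0)  # slow setup
samples = inv(np.random.default_rng(0).random(100_000))         # fast draws
```
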
Equidistribution Properties of Generalized Nets and Sequences

Generalized digital nets and sequences have been introduced for the numerical integration of smooth functions using quasi-Monte Carlo rules. In this paper we study geometrical properties of such nets and sequences. The definition of these nets and sequences does not depend on linear algebra over finite fields; it only requires that the point set or sequence satisfies certain distributional properties. Generalized digital nets and sequences appear as special cases. We prove some propagation rules and give bounds on the quality parameter $t$.

Josef Dick, Jan Baldeaux
Implementation of a Component-By-Component Algorithm to Generate Small Low-Discrepancy Samples

In [B. Doerr, M. Gnewuch, P. Kritzer, F. Pillichshammer. Monte Carlo Methods Appl., 14:129–149, 2008], a component-by-component (CBC) approach to generate small low-discrepancy samples was proposed and analyzed. The method is based on randomized rounding satisfying hard constraints and its derandomization. In this paper we discuss how to implement the algorithm and present first numerical experiments. We observe that the generated points in many cases have a significantly better star discrepancy than what is guaranteed by the theoretical upper bound. Moreover, we exhibit that the actual discrepancy is mainly caused by the underlying grid structure, whereas the rounding errors have a negligible contribution. Hence to improve the algorithm, we propose and analyze a randomized point placement. We also study a hybrid approach which combines classical low-discrepancy sequences and the CBC algorithm.

Benjamin Doerr, Michael Gnewuch, Magnus Wahlström
Quasi-Monte Carlo Simulation of Diffusion in a Spatially Nonhomogeneous Medium

We propose and test a quasi-Monte Carlo (QMC) method for solving the diffusion equation in the spatially nonhomogeneous case. For a constant diffusion coefficient, the Monte Carlo (MC) method is a valuable tool for simulating the equation: the solution is approximated by using particles and in every time step the displacement of each particle is drawn from a Gaussian distribution with constant variance. But for a spatially dependent diffusion coefficient, the straightforward extension using a spatially variable variance leads to biased results. A correction to the Gaussian steplength was recently proposed and provides satisfactory results. In the present work, we devise a QMC variant of this corrected MC scheme. We present the results of some numerical experiments showing that our QMC algorithm converges better than the corresponding MC method for the same number of particles.

Rami El Haddad, Christian Lécot, Gopalakrishnan Venkiteswaran
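
For orientation, here is the naive particle scheme the paper starts from, with a spatially variable step standard deviation $\sqrt{2D(x)\Delta t}$; as the abstract notes, this straightforward extension is biased, and neither the published correction nor the QMC reordering of particles is reproduced in this sketch. The coefficient $D$ is a made-up example.

```python
import numpy as np

def mc_diffusion_step(x, D, dt, rng):
    """One step of the naive particle scheme: displacement drawn from a
    Gaussian with spatially variable variance 2 D(x) dt.  This is the
    biased scheme the paper corrects; the correction and the QMC variant
    are not reproduced here."""
    return x + np.sqrt(2.0 * D(x) * dt) * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = np.zeros(10_000)                     # all particles start at the origin
D = lambda y: 1.0 + 0.5 * np.sin(y)      # hypothetical diffusion coefficient
for _ in range(100):
    x = mc_diffusion_step(x, D, dt=1e-3, rng=rng)
```
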
L2 Discrepancy of Two-Dimensional Digitally Shifted Hammersley Point Sets in Base b

We give an exact formula for the $L_2$ discrepancy of two-dimensional digitally shifted Hammersley point sets in base $b$. This formula shows that for certain bases $b$ and certain shifts the $L_2$ discrepancy is of the best possible order with respect to the general lower bound due to Roth. Hence, for the first time, it is proved that, for a thin but infinite subsequence of bases $b$ starting with 5, 19, 71, …, a single permutation suffices to achieve this best possible order, unlike previous results of White (1975), who needs $b$ permutations, and Faure & Pillichshammer (2008), who need 2 permutations.

Henri Faure, Friedrich Pillichshammer
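
A minimal sketch of the objects in question: the two-dimensional Hammersley point set in base $b$ with a digital shift applied digit-wise modulo $b$ to the radical-inverse coordinate. The particular base and shift below are illustrative only.

```python
def shifted_hammersley(b, m, shift):
    """Two-dimensional Hammersley points in base b, n = b^m, with the
    digital shift `shift` added digit-wise mod b to the radical inverse."""
    n = b ** m
    pts = []
    for i in range(n):
        k, y = i, 0.0
        for j in range(m):
            k, d = divmod(k, b)                  # next base-b digit of i
            y += ((d + shift[j]) % b) * b ** -(j + 1)
        pts.append((i / n, y))
    return pts

pts = shifted_hammersley(b=5, m=3, shift=[1, 0, 2])   # 125 points, toy shift
```
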
Vibrato Monte Carlo Sensitivities

We show how the benefits of the pathwise sensitivity approach to computing Monte Carlo Greeks can be extended to discontinuous payoff functions through a combination of the pathwise approach and the Likelihood Ratio Method. With a variance reduction modification, this results in an estimator which for timestep $h$ has a variance which is $O(h^{-1/2})$ for discontinuous payoffs and $O(1)$ for continuous payoffs. Numerical results confirm the variance is much lower than the $O(h^{-1})$ variance of the Likelihood Ratio Method, and the approach is also compatible with the use of adjoints to obtain multiple first-order sensitivities at a fixed cost.

Michael B. Giles
The Weighted Variance Minimization in Jump-Diffusion Stochastic Volatility Models

The Monte Carlo method is applied to the estimation of option prices in the case of a stochastic volatility model with jumps. An option contract has a number of parameters, such as a strike, an exercise date, etc. Estimators of option prices with different values of these parameters are constructed on the same trajectories of the underlying asset price process. The problem of minimizing the weighted sum of their variances is considered. Optimal estimators with minimal weighted variance are identified, and their approximations are applied to variance reduction.

Anatoly Gormin, Yuri Kashtanov
(t,m,s)-Nets and Maximized Minimum Distance, Part II

The quality parameter $t$ of $(t,m,s)$-nets controls extensive stratification properties of the generated sample points. However, the definition allows for points that are arbitrarily close across strata boundaries. We continue the investigation of $(t,m,s)$-nets under the constraint of maximizing the mutual distance of the points on the unit torus and present two new constructions along with algorithms. The first approach is based on the fact that reordering $(t,s)$-sequences can result in $(t,m,s+1)$-nets with varying toroidal distance, while the second algorithm generates points by permutations instead of matrices.

Leonhard Grünschloß, Alexander Keller
Automation of Statistical Tests on Randomness to Obtain Clearer Conclusion

Statistical testing of pseudorandom number generators (PRNGs) is indispensable for their evaluation. A common difficulty among statistical tests is how we consider the resulting probability values ($p$-values). When we observe a small $p$-value such as $10^{-3}$, it is unclear whether it is due to a defect of the PRNG, or merely by chance. At the evaluation stage, we apply some hundreds of different statistical tests to a PRNG. Even a good PRNG may produce some suspicious $p$-values in the results of a battery of tests. This may make the conclusions of the test battery unclear. This paper proposes an adaptive modification of statistical tests: once a suspicious $p$-value is observed, the adaptive statistical test procedure automatically increases the sample size, and tests the PRNG again. If the $p$-value is still suspicious, the procedure again increases the size, and re-tests. The procedure stops when the $p$-value falls either in an acceptable range, or in a clearly rejectable range. We implement such adaptive modifications of some statistical tests, in particular some of those in the Crush battery of TestU01. Experiments show that the evaluation of PRNGs becomes clearer and easier, and the sensitivity of the test is increased, at the cost of additional computation time.

Hiroshi Haramoto
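
The adaptive re-testing loop described above can be sketched as follows; the thresholds, growth factor, and round limit are assumptions of ours, not the values used with the Crush battery.

```python
def adaptive_test(run_test, n0, accept=(1e-2, 1.0 - 1e-2), reject=1e-10,
                  max_rounds=5):
    """Adaptive re-testing loop: `run_test(n)` returns the p-value of the
    statistical test on a sample of size n.  Doubling and thresholds are
    illustrative choices."""
    n = n0
    for _ in range(max_rounds):
        p = run_test(n)
        if accept[0] <= p <= accept[1]:
            return "pass", p, n           # p-value in the acceptable range
        if p < reject or p > 1.0 - reject:
            return "fail", p, n           # clearly rejectable
        n *= 2                            # suspicious p-value: enlarge sample
    return "inconclusive", p, n
```
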
On Subsequences of Niederreiter-Halton Sequences

In this paper we investigate the distribution properties of subsequences of Niederreiter-Halton sequences. A Niederreiter-Halton sequence is generated by joining digital $(T,s)$-sequences in different prime bases. Thus Niederreiter-Halton sequences are hybrids of digital $(T,s)$-sequences, as mainly introduced by Niederreiter, and the van der Corput-Halton sequences, which are joint versions of special $(0,1)$-sequences in different prime bases. As hybrids of well-known low-discrepancy sequences, the distribution properties of such sequences are of great interest. In this paper we give an overview of existing results. Furthermore, we investigate the distribution of special types of subsequences, for example subsequences indexed by arithmetic progressions or the subsequence indexed by primes.

Roswitha Hofer
Correcting the Bias in Monte Carlo Estimators of American-style Option Values

Existing Monte Carlo estimators of American option values are consistent but biased. This article presents a general bias reduction technique which corrects the bias due to making suboptimal exercise decisions. The derived asymptotic expression for the bias is independent of dimensionality, holds for very general underlying processes and option payoffs, and is easily evaluated. The bias is subtracted from the estimators at each exercise opportunity in order to produce bias-corrected estimators. We illustrate how to apply this technique to three methods of generating estimators — stochastic tree, stochastic mesh and least-squares Monte Carlo. Numerical results demonstrate that for a fixed sample size this technique significantly reduces the relative error for both high- and low-biased estimators.

K. H. Felix Kan, R. Mark Reesor, Tyson Whitehead, Matt Davison
Fast Principal Components Analysis Method for Finance Problems With Unequal Time Steps

The use of the Principal Components Analysis (PCA) method as a variance reduction technique when evaluating integrals from mathematical finance using quasi-Monte Carlo point sets suffers from a distinct disadvantage in that it requires a dense matrix-vector multiplication with $\mathcal{O}(s^{2})$ computations for an $s$-dimensional problem. It was shown by Scheicher [18] that the cost of this matrix-vector multiplication could be reduced to $\mathcal{O}(s\log s)$ arithmetic operations for problems where the time steps are equally sized. In this paper we show how we may drop this requirement and perform the matrix-vector multiplication in $\mathcal{O}(s\log s\log(1/\varepsilon))$ arithmetic operations for any desired accuracy $\varepsilon>0$.

Jens Keiner, Benjamin J. Waterhouse
Adaptive Monte Carlo Algorithms for General Transport Problems

Recently there has been a concerted effort to develop adaptively modified Monte Carlo algorithms that converge geometrically to solutions of the radiative transport equation. We have concentrated on algorithms that extend to integral equations methods first proposed for matrix equations by Halton in 1962 [Halton, J., Proc. Camb. Phil. Soc., 58, 57–78 (1962)]. Geometric convergence has been rigorously demonstrated [Kong, R., and Spanier, J., J. Comp. Phys., 227(23), 9762–9777 (2008)] for these “first generation” (G1) algorithms but their practical utility is limited by computational complexities resulting from the expansion. Recently, we have developed new adaptive algorithms that overcome most of the computational restrictions of the earlier algorithms and we have also established the geometric convergence of these “second generation” (G2) algorithms [Kong, R. and Spanier, J.: Geometric convergence of second generation adaptive Monte Carlo algorithms for general transport problems based on sequential correlated sampling. In review]. In this paper we outline the main ideas involved and indicate how the resulting G2 algorithm might be optimized using information drawn from simulations of both the RTE and the dual RTE. Simple examples will illustrate these ideas and the gains in computational efficiency that the new methods can achieve.

Jerome Spanier, Rong Kong, Martin Ambrose
On Array-RQMC for Markov Chains: Mapping Alternatives and Convergence Rates

We study the convergence behavior of a randomized quasi-Monte Carlo (RQMC) method for the simulation of discrete-time Markov chains, known as array-RQMC. The goal is to estimate the expectation of a smooth function of the sample path of the chain. The method simulates $n$ copies of the chain in parallel, using highly uniform point sets randomized independently at each step. The copies are sorted after each step, according to some multidimensional order, for the purpose of assigning the RQMC points to the chains. In this paper, we provide some insight on why the method works, explain what would need to be done to bound its convergence rate, discuss and compare different ways of realizing the sort and assignment, and report empirical experiments on the convergence rate of the variance and of the mean square discrepancy between the empirical and theoretical distribution of the states, as a function of $n$, for various types of discrepancies.

Pierre L’Ecuyer, Christian Lécot, Adam L’Archevêque-Gaudet
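
A one-dimensional toy version of the array-RQMC loop, sort the copies, then advance each by inversion from a highly uniform point set, might look like this; the single-random-shift grid stands in for the randomized point sets studied in the paper, and the AR(1) chain is an assumed example.

```python
import numpy as np
from scipy.stats import norm

def array_rqmc(step, x0, n=2**10, n_steps=50, seed=0):
    """Array-RQMC for a one-dimensional chain: sort the n copies by
    state, then advance the rank-i copy with the i-th point of a
    randomly shifted one-dimensional grid."""
    rng = np.random.default_rng(seed)
    x = np.full(n, float(x0))
    for _ in range(n_steps):
        x.sort()                               # order the copies
        u = (np.arange(n) + rng.random()) / n  # fresh random shift each step
        x = step(x, u)                         # inversion-based transition
    return x

step = lambda x, u: 0.9 * x + norm.ppf(u)      # toy AR(1) chain
states = array_rqmc(step, x0=0.0)
```
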
Testing the Tests: Using Random Number Generators to Improve Empirical Tests

The implementer of an empirical test for random number generators is faced with some difficult problems, especially if the test is based on a statistic which is known only approximately: How can the test be tested? How can the approximation be improved? When is it good enough? A number of principles can be applied to these problems. These principles are illustrated using implementations of the overlapping serial “Monkey” tests of Marsaglia and Zaman.

Paul Leopardi
Stochastic Spectral Formulations for Elliptic Problems

We describe new stochastic spectral formulations with very good properties in terms of conditioning. These formulations are built by combining Monte Carlo approximations of the Feynman-Kac formula and standard deterministic approximations on basis functions. We give error bounds on the solutions obtained using these formulations in the case of linear approximations. Some numerical tests are made on an anisotropic diffusion equation using a tensor product Tchebychef polynomial basis and one-random-point schemes, quantized or not.

Sylvain Maire, Etienne Tanré
Adaptive (Quasi-)Monte Carlo Methods for Pricing Path-Dependent Options

We study a recently developed adaptive path-integration technique for pricing financial derivatives. The method is based on the rearrangement and splitting of path-integral variables to apply a combination of bridge sampling, adaptive methods of numerical integration, and the quasi-Monte Carlo method. We study the subregion adaptive Vegas-type method Suave from the Cuba library and propose a new variance reduction method with a multivariate piecewise constant sampling density. Two models of asset pricing are considered: the constant elasticity of variance diffusion model and the variance gamma Lévy model. Numerical tests are done for Asian-type options.

Roman N. Makarov
Monte Carlo Simulation of Stochastic Integrals when the Cost of Function Evaluation Is Dimension Dependent

In mathematical finance, pricing a path-dependent financial derivative, such as a continuously monitored Asian option, requires the computation of $\mathbb{E}[g(B(\cdot))]$, the expectation of a payoff functional, $g$, of a Brownian motion, $B(t)$. The expectation problem is an infinite-dimensional integration which has been studied in [1, 5, 7, 8, 10]. A straightforward way to approximate such an expectation is to take the average of the functional over $n$ sample paths, $B_1,\ldots,B_n$. The Brownian paths may be simulated by the Karhunen-Loève expansion truncated at $d$ terms, $\hat{B}_{d}$. The cost of functional evaluation for each sampled Brownian path is assumed to be $\mathcal{O}(d)$. The whole computational cost of an approximate expectation is then $\mathcal{O}(N)$, where $N=nd$. The (randomized) worst-case error is investigated as a function of both $n$ and $d$ for payoff functionals that arise from Hilbert spaces defined in terms of a kernel and coordinate weights. The optimal relationship between $n$ and $d$ given fixed $N$ is studied and the corresponding worst-case error as a function of $N$ is derived.

Ben Niu, Fred J. Hickernell
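
To fix ideas, a sketch of the truncated Karhunen-Loève simulation of Brownian paths that underlies the cost model above; the grid, truncation level $d$, and the crude Asian-style functional are illustrative assumptions.

```python
import numpy as np

def kl_brownian(d, t, rng):
    """Brownian path on [0,1] from the Karhunen-Loeve expansion truncated
    at d terms: B_d(t) = sum_k Z_k sqrt(2) sin((k-1/2) pi t) / ((k-1/2) pi)."""
    z = rng.standard_normal(d)
    k = np.arange(1, d + 1) - 0.5
    basis = np.sqrt(2.0) * np.sin(np.outer(t, k) * np.pi) / (k * np.pi)
    return basis @ z

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 65)
path = kl_brownian(d=64, t=t, rng=rng)
payoff = max(path.mean(), 0.0)   # crude stand-in for an Asian-type g(B)
```
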
Recent Progress in Improvement of Extreme Discrepancy and Star Discrepancy of One-Dimensional Sequences

In this communication, we report on recent progress in improvement of extreme discrepancy and star discrepancy of one-dimensional sequences. Namely, we present a permutation of "Babylonian" sequences in base 60, which improves the best known results for star discrepancy obtained by Henri Faure in 1981 [Bull. Soc. Math. France, 109, 143–182 (1981)], and a permutation of sequences in base 84, which improves the best known results for extreme discrepancy obtained by Henri Faure in 1992 [J. Numb. Theory, 42, 47–56 (1992)]. Our best result for star discrepancy in base 60 is 32209/(35400 log 60) ≈ 0.222223 (Faure's best result in base 12 is 1919/(3454 log 12) ≈ 0.223585); our best result for extreme discrepancy in base 84 is 130/(83 log 84) ≈ 0.353494 (Faure's best result in base 36 is 23/(35 log 6) ≈ 0.366758).

Victor Ostromoukhov
Discrepancy of Hyperplane Nets and Cyclic Nets

Digital nets are very important representatives in the family of low-discrepancy point sets which are often used as underlying nodes for quasi-Monte Carlo integration rules. Here we consider a special sub-class of digital nets known as cyclic nets and, more generally, hyperplane nets. We show the existence of such digital nets of good quality with respect to the star discrepancy in the classical as well as the weighted case, and we present effective search algorithms based on a component-by-component construction.

Friedrich Pillichshammer, Gottlieb Pirsic
A PRNG Specialized in Double Precision Floating Point Numbers Using an Affine Transition

We propose a pseudorandom number generator specialized to generate double precision floating point numbers. It generates 52-bit pseudorandom patterns supplemented by a constant most significant 12 bits (sign and exponent), so that the concatenated 64 bits represent a floating point number obeying the IEEE 754 format. To keep the constant part, we adopt an affine transition function instead of the usual $\mathbb{F}_{2}$-linear transition, and extend algorithms computing the period and the dimensions of equidistribution to the affine case. The resulting generator generates double precision floating point numbers faster than the Mersenne Twister, whose output numbers only have 32-bit precision.

Mutsuo Saito, Makoto Matsumoto
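
The bit layout described in the abstract, 52 pseudorandom mantissa bits under a constant 12-bit sign/exponent field, can be sketched as follows; the transition function generating the mantissa bits is not reproduced, only the assembly of the IEEE 754 double.

```python
import struct

def bits_to_unit_double(r52):
    """Concatenate a constant sign/exponent field (0x3FF, giving [1, 2))
    with 52 pseudorandom mantissa bits, reinterpret as an IEEE 754
    double, and shift to [0, 1)."""
    bits = (0x3FF << 52) | (r52 & ((1 << 52) - 1))
    return struct.unpack("<d", struct.pack("<Q", bits))[0] - 1.0

u = bits_to_unit_double(0x000F_FFFF_FFFF_FFFF)   # all-ones mantissa -> ~1.0
```
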
On the Behavior of the Weighted Star Discrepancy Bounds for Shifted Lattice Rules

We examine the question of constructing shifted lattice rules of rank one with an arbitrary number of points $n$, an arbitrary shift, and small weighted star discrepancy. An upper bound on the weighted star discrepancy, which depends on the lattice parameters and is easily computable, serves as a figure of merit. It is known that there are lattice rules for which this upper bound converges as $O(n^{-1+\delta})$ for any $\delta>0$, uniformly over the shift, and lattice rules that achieve this convergence rate can be found by a component-by-component (CBC) construction. In this paper, we examine practical aspects of these bounds and results, such as: What is the shape of the probability distribution of the figure of merit for a random lattice with a given $n$? Is the CBC construction doing much better than just picking the best out of a few random lattices, or much better than using a randomized CBC construction that tries only a small number of random values at each step? How does the figure of merit really behave as a function of $n$ for the best lattice, and on average for a random lattice, say for $n$ under a million? Do we observe a convergence rate near $O(n^{-1})$ in that range of values of $n$? Finally, is the figure of merit a tight bound on the true discrepancy, or is there a large gap between the two?

Vasile Sinescu, Pierre L’Ecuyer
Ergodic Estimations of Upscaled Coefficients for Diffusion in Random Velocity Fields

Upscaled coefficients for diffusion in ergodic velocity fields are derived by summing up correlations of increments of the position process, or equivalently of the Lagrangian velocity. Ergodic estimations of the correlations are obtained from time averages over finite paths sampled on a single trajectory of the process and a space average with respect to the initial positions of the paths. The first term in this path decomposition of the diffusion coefficients corresponds to Markovian diffusive behavior and is the only contribution for processes with independent increments. The next terms describe memory effects on diffusion coefficients until they level off to the value of the upscaled coefficients. Since the convergence with respect to the path length is rather fast and no repeated Monte Carlo simulations are required, this method speeds up the computation of the upscaled coefficients over methods based on long-time limit and ensemble averages by four orders of magnitude.

Nicolae Suciu, Călin Vamoş
Green’s Functions by Monte Carlo

We describe a new numerical technique to estimate Green's functions of elliptic differential operators on bounded open sets. The algorithm utilizes SPDE-based function space sampling techniques in conjunction with Metropolis-Hastings MCMC. The key idea is that neither the proposal nor the acceptance probability requires the evaluation of a Dirac measure. The method allows Green's functions to be estimated via ergodic averaging. Numerical examples in both 1D and 2D, with second and fourth order elliptic PDEs, are presented to validate this methodology.

David White, Andrew Stuart
Tractability of Multivariate Integration for Weighted Korobov Spaces: My 15 Year Partnership with Ian Sloan

This paper is intended as a birthday present for Ian Sloan, who celebrated his 70th birthday during MCQMC'08 in Montreal. In the first paper with Ian we studied multivariate integration for the unweighted Korobov spaces of smooth and periodic functions equipped with $L_\infty$-type norms expressed in terms of Fourier coefficients. We proved that this problem is intractable and suffers from the curse of dimensionality. To break intractability, weighted Korobov spaces are studied in this paper. Product weights are mainly considered, and finite-order weights are only briefly mentioned. Necessary and sufficient conditions for strong polynomial tractability, polynomial tractability and weak tractability are presented. The necessary and sufficient conditions coincide only for weak tractability, whereas there is a gap between them for strong polynomial and polynomial tractability. In terms of the exponent of strong polynomial tractability, the lower and upper bounds differ at most by a factor of two. Nevertheless, these bounds prove that the exponent of strong polynomial tractability depends on the decay of weights.

Henryk Woźniakowski
Backmatter
Metadata
Title
Monte Carlo and Quasi-Monte Carlo Methods 2008
Edited by
Pierre L'Ecuyer
Art B. Owen
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-04107-5
Print ISBN
978-3-642-04106-8
DOI
https://doi.org/10.1007/978-3-642-04107-5