
2008 | Book

Monte Carlo and Quasi-Monte Carlo Methods 2006

Edited by: Alexander Keller, Stefan Heinrich, Harald Niederreiter

Publisher: Springer Berlin Heidelberg


Table of Contents

Frontmatter

Invited Articles

A Belgian View on Lattice Rules
Ronald Cools, Dirk Nuyens
MCQMC Algorithms for Solving some Classes of Equations

The Monte Carlo method is known to be applicable to problems of very different nature, and equation solving constitutes one of the most important classes of such problems. A stochastic process that can be simulated effectively on a computer is usually associated with the equation under consideration. Some functional of the process trajectories is then constructed in order to obtain an unbiased estimate of the required value, which can be either the solution of the equation or some functional of the solution. Finally, one of the laws of large numbers or limit theorems is applied. Stochastic methods usually admit a simple software implementation, are easily adapted to parallel computer systems, and can also effectively use a priori information about the exact solution of the problem (i.e., methods of variance reduction). The well-known disadvantage of stochastic methods is the comparatively slow decrease of the error as the number of independent process realizations grows. Many works aim at overcoming this disadvantage, among them those concerning the application of deterministic methods in computational schemes (the quasi-Monte Carlo method, QMC). It is important to note that QMC methods preserve the parallel structure of classical stochastic algorithms. Parallelism of algorithms appears to be one of the most important problems in the modern theory of Monte Carlo methods; another important problem is the comparison of the computational complexities of stochastic algorithms and similar deterministic algorithms. Investigations in these fields of MC theory may be important for understanding the structure of modern computational systems. This article includes a brief review of the investigations of the author and his colleagues in these and related fields. Generalizations of some results and their analysis from the point of view of parallelism are presented for the first time.

Sergej Ermakov
MCQMC Methods for Multivariate Statistical Distributions

A review and comparison are presented of the use of Monte Carlo and quasi-Monte Carlo methods for multivariate normal and multivariate t distribution computation problems. Spherical-radial transformations and separation-of-variables transformations for these problems are considered. The use of various Monte Carlo methods, quasi-Monte Carlo methods and randomized quasi-Monte Carlo methods is discussed for the different problem formulations, and test results are summarized.

Alan Genz
Minimal Errors for Strong and Weak Approximation of Stochastic Differential Equations

We present a survey of results on minimal errors and optimality of algorithms for strong and weak approximation of systems of stochastic differential equations. For strong approximation, emphasis lies on the analysis of algorithms that are based on point evaluations of the driving Brownian motion and on the impact of non-commutativity, if present. Furthermore, we relate strong approximation to weighted integration and reconstruction of Brownian motion, and we demonstrate that the analysis of minimal errors leads to new algorithms that perform asymptotically optimally. In particular, these algorithms use a path-dependent step-size control. For weak approximation we consider the problem of computing the expected value of a functional of the solution, and we concentrate on recent results for a worst-case analysis either with respect to the functional or with respect to the coefficients of the system. Moreover, we relate weak approximation problems to average Kolmogorov widths and quantization numbers as well as to high-dimensional tensor product problems.

Thomas Müller-Gronbach, Klaus Ritter
Nets, (t, s)-Sequences, and Codes

Nets and (t, s)-sequences are standard sources of quasirandom points for quasi-Monte Carlo methods. Connections between nets and error-correcting codes have been noticed for a long time, and these links have become even more pronounced with the development of the duality theory for digital nets. In this paper, we further explore these fascinating connections. We also present a recent construction of digital (t, s)-sequences using global function fields and new general constructions of nets and (t, s)-sequences.

Harald Niederreiter
Quadratic Optimal Functional Quantization of Stochastic Processes and Numerical Applications

In this paper, we present an overview of recent developments in functional quantization of stochastic processes, with an emphasis on the quadratic case. Functional quantization is a way to approximate a process, viewed as a Hilbert-space-valued random variable, using a nearest neighbour projection onto a finite codebook. Special emphasis is placed on the computational aspects and the numerical applications, in particular the pricing of some path-dependent European options.

Gilles Pagès
Random Field Simulation and Applications

In this paper I present some new approaches to random field simulation, and show in four different examples how this simulation technique works. The first example deals with transport in turbulent flows, where the Lagrangian trajectories are described by a stochastic differential equation whose drift term involves the Eulerian velocity as a random field with a given spectral tensor. The second example concerns flows in a porous medium governed by the Darcy equation with random hydraulic conductivity. The elasticity system of elliptic Lamé equations with random loads is considered in the third example. Finally, in the fourth example we solve a nonlinear Smoluchowski equation which is used to model the process of crystal growth.

Karl Sabelfeld
Monte Carlo and Quasi-Monte Carlo Methods for Computer Graphics

Some computer graphics applications, such as architectural design, generate visually realistic images of computer models. This is accomplished by either explicitly or implicitly solving the light transport equations. Accurate solutions involve high-dimensional equations, and Monte Carlo (MC) techniques are used with an emphasis on importance sampling rather than stratification. For many applications, approximate solutions are adequate, and the dimensionality of the problem can be reduced. In these cases, the distribution of samples is important, and quasi-Monte Carlo (QMC) methods are often used. It is still unknown what sampling schemes are best for these lower dimensional graphics problems, or what “best” even means in this case. This paper reviews the work in MC and QMC computer graphics, and poses some open problems in the field.

Peter Shirley, Dave Edwards, Solomon Boulos

Contributed Articles

Random Walk Algorithm for Estimating the Derivatives of Solution to the Elliptic BVP

An elliptic boundary value problem (BVP) for the stationary diffusion equation is considered. Following [BM03], we estimate the solution and its spatial derivatives by solving a system of local integral equations. We propose to use the Poisson-Boltzmann Green function instead of the Laplacian one. This enables us to obtain a convergent Neumann series for a wider class of equations.

Alexander Burmistrov
Free-Knot Spline Approximation of Fractional Brownian Motion

For a fractional Brownian motion B^H on [0,1], we consider approximations of B^H by piecewise polynomial splines. Asymptotics of minimal average error rates are established and found to be of order k^(−H), where k is the number of free knots used in the spline approximation.

Jakob Creutzig, Mikhail Lifshits
Simulation on Rank-1 Lattices

Rank-1 lattices are available in any dimension for any number of lattice points, and because their generation is so efficient, they are often used in quasi-Monte Carlo methods. Applying the Fourier transform to functions sampled on rank-1 lattice points turns out to be simple and efficient if the number of lattice points is a power of two. Considering the Voronoi diagram of a rank-1 lattice as a partition of the simulation domain and its dual, the Delaunay tessellation, as a mesh for display and interpolation, rank-1 lattices are an interesting alternative to tensor product lattices. Instead of classical criteria, we investigate lattices selected by maximized minimum distance, because then the Delaunay tessellation becomes as equilateral as possible. Similar arguments apply for the selection of the wave vectors. We explore the use of rank-1 lattices for the examples of stochastic field synthesis and a simple fluid solver with periodic boundary conditions.

Holger Dammertz, Alexander Keller, Sabrina Dammertz
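The efficient generation the abstract above refers to can be sketched in a few lines, assuming the standard definition of a rank-1 lattice, x_i = ((i·g mod n)/n) for a generator vector g; the generator (1, 5) below is an illustrative choice, not one selected by the maximized-minimum-distance criterion of the paper.

```python
def rank1_lattice(n, g):
    """The n points of a rank-1 lattice in [0, 1)^s:
    x_i = ((i * g_1 mod n) / n, ..., (i * g_s mod n) / n)."""
    return [tuple((i * gj % n) / n for gj in g) for i in range(n)]

# 8-point Fibonacci-type lattice in 2D (generator chosen for illustration only)
pts = rank1_lattice(8, (1, 5))
```

Because every coordinate is a multiple of 1/n, choosing n as a power of two is what makes the fast Fourier transform on these points so convenient.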
Image Synthesis by Rank-1 Lattices

Considering uniform points for sampling, rank-1 lattices provide the simplest generation algorithm. Compared to classical tensor product lattices or random samples, their geometry allows for a higher sampling efficiency. These considerations result in a proof that for periodic Lipschitz continuous functions, rank-1 lattices with maximized minimum distance perform best. This result is then investigated in the context of image synthesis, where we study anti-aliasing by rank-1 lattices and using the geometry of rank-1 lattices for sensor and display layouts.

Sabrina Dammertz, Alexander Keller
Continuous Runge-Kutta Methods for Stratonovich Stochastic Differential Equations

Stochastic differential equations (SDEs) are applied in many disciplines like physics, biology or mathematical finance in order to describe dynamical systems disturbed by random effects. Methods for strong as well as weak time-discrete approximation have been proposed in recent years (see, e.g., [BBOO, DR06, KP99, KMS97, Mil95, MT04, New91, Roe04, Roe06b, TVA02] and the literature therein), converging with some given order at the discretization points. However, there is still a lack of higher order continuous-time approximation methods guaranteeing uniform orders of convergence not only at the discretization points but also at any arbitrary time point within the approximation interval. Classical time-discrete methods are inefficient when the number of output points has to be very large, because this forces the step size to be very small. Therefore, we develop a continuous extension of the class of stochastic Runge-Kutta (SRK) methods introduced in [Roe06c] for the weak approximation, which provides continuous-time approximations of the solution of Stratonovich SDE systems with uniform order two in the weak sense. Such methods are also called dense output formulas [HNW93]. The main advantage of the presented continuous extension of the SRK methods is its negligible additional computational complexity compared to the time-discrete SRK methods. In particular, we are interested in continuous sample trajectories of the applied SRK method. For example, an SRK method with continuous sample trajectories allows the use of an individual discretization for each sample trajectory, which need not contain discretization points common to all trajectories in order to calculate the expectation at common time points.
Further, in future research SRK methods with continuous sample trajectories may be applied for the numerical treatment of stochastic delay differential equations like in the deterministic setting where continuous Runge-Kutta methods are already successfully applied.

Kristian Debrabant, Andreas Rößler
Issues on Computer Search for Large Order Multiple Recursive Generators

Multiple Recursive Generators (MRGs) have recently become the most popular random number generators. They compute the next value iteratively from the previous k values using a k-th order recurrence equation which, in turn, corresponds to a k-th degree primitive polynomial under a prime modulus p. In general, when k and p are large, checking whether a k-th degree polynomial is primitive under a prime modulus p is known to be a hard problem. A common approach is to check the conditions given in Alanen and Knuth [1964] and Knuth [1998]. However, as mentioned in Deng [2004], this approach has two obvious problems: (a) it requires the complete factorization of p^k − 1, which can be difficult; (b) it does not provide any early exit strategy for non-primitive polynomials. To avoid (a), one can consider a prime order k and prime modulus p such that (p^k − 1)/(p − 1) is also a prime number, as considered in L’Ecuyer [1999] and Deng [2004]. To avoid (b), one can use the more efficient iterative irreducibility test proposed in Deng [2004].

In this paper, we survey several leading probabilistic and deterministic methods for the problems of primality testing and irreducibility testing. To test the primality of a large number, probabilistic methods are known to be much faster than deterministic methods. On the other hand, a probabilistic algorithm has a very tiny probability, say 10^(−200), of committing a false positive error in the test result. Moreover, even if such an unlikely event happened, for a specific choice of k and p it can be argued that such an error has a negligible effect on the successful search for a primitive polynomial. We perform a computer search for large-order DX generators proposed in Deng and Xu [2003] and present many such generators in the paper for ready implementation. An extensive empirical study shows that these large-order DX generators have passed the stringent Crush battery of the TestU01 package.

Lih-Yuan Deng
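The k-th order recurrence that defines an MRG can be sketched directly; the tiny order-2 Fibonacci-style parameters below are illustrative toys, not the large-order DX parameters the paper searches for.

```python
def mrg(coeffs, seed_state, p, n):
    """Generate n values of the k-th order recurrence
    x_i = (a_1 * x_{i-1} + ... + a_k * x_{i-k}) mod p.
    coeffs = (a_1, ..., a_k); seed_state = (x_{-1}, ..., x_{-k}),
    most recent value first. Uniform variates would be x_i / p."""
    state = list(seed_state)
    out = []
    for _ in range(n):
        x = sum(a * s for a, s in zip(coeffs, state)) % p
        out.append(x)
        state = [x] + state[:-1]      # shift the k-value state window
    return out

# Toy order-2 recurrence (Fibonacci mod a large prime), purely illustrative
values = mrg((1, 1), (1, 1), 2**31 - 1, 5)   # [2, 3, 5, 8, 13]
```

A full period of p^k − 1 is achieved exactly when the characteristic polynomial of the recurrence is primitive modulo p, which is the hard check the abstract discusses.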
Design and Implementation of Efficient and Portable Multiple Recursive Generators with Few Zero Coefficients

DX-k, proposed by Deng and Xu [2003], is a special class of Multiple Recursive Generators (MRGs) in which all nonzero coefficients of the k-th order recurrence are equal. In particular, a DX-k generator requires only up to four nonzero coefficients in its recurrence equation, hence it is very efficient in computation. However, a random number generator with few nonzero coefficients has a drawback: when the k-dimensional state vector is close to the zero vector, many of the subsequently generated numbers may stay within a neighborhood of zero before they can break away from this near-zero land, a property apparently not desirable in the sense of randomness. Consequently, two sequences generated by the same DX generator from nearly identical initial state vectors may not depart from each other quickly enough. To avoid this potential problem, we consider MRGs with very few zero coefficients. To make such generators efficient and portable, we propose selecting the same nonzero value for all coefficients (with at most one exception) in the recurrence equation. With this feature, the proposed generators can be implemented efficiently via a higher-order recurrence with few zero coefficients. Note that the new class of generators is the opposite of the DX generators in terms of the number of nonzero coefficients. Several such generators with the maximum period have been found via computer search and are presented in the paper for ready implementation.

Lih-Yuan Deng, Huajiang Li, Jyh-Jen Horng Shiau, Gwei-Hung Tsai
Approximation of Functions Using Digital Nets

In analogy to a recent paper by Kuo, Sloan, and Woźniakowski, which studied lattice rule algorithms for approximation in weighted Korobov spaces, we consider the approximation problem in a weighted Hilbert space of Walsh series. Our approximation uses a truncated Walsh series with Walsh coefficients approximated by numerical integration using digital nets. We show that digital nets (or more precisely, polynomial lattices) tailored specially for the approximation problem lead to better error bounds. The error bounds can be independent of the dimension s, or depend only polynomially on s, under certain conditions on the weights defining the function space.

Josef Dick, Peter Kritzer, Frances Y. Kuo
Construction of Low-Discrepancy Point Sets of Small Size by Bracketing Covers and Dependent Randomized Rounding

We provide a deterministic algorithm that constructs small point sets exhibiting a low star discrepancy. The algorithm is based on bracketing and on recent results on randomized roundings respecting hard constraints. It is structurally much simpler than the previous algorithm presented for this problem in [B. Doerr, M. Gnewuch, A. Srivastav. Bounds and constructions for the star discrepancy via δ-covers. J. Complexity, 21:691–709, 2005]. Besides leading to better theoretical run time bounds, our approach can also be implemented with reasonable effort.

Benjamin Doerr, Michael Gnewuch
A Coding Theoretic Approach to Building Nets with Well-Equidistributed Projections

Starting from coding-theoretic constructions, we build digital nets with good figures of merit, where the figure of merit takes into account the equidistribution of a preselected set of low-dimensional projections. This type of figure of merit turns out to be a better predictor than the t-value for the variance of randomized quasi-Monte Carlo (RQMC) estimators based on nets, for certain classes of integrals. Our construction method determines the most significant digits of the points by exploiting the equivalence between the desired equidistribution properties used in our criterion and the property of a related point set to be an orthogonal array, and using existing orthogonal array constructions. The least significant digits are then adjusted to improve the figure of merit. Known results on orthogonal arrays provide bounds on the best possible figure of merit that can be achieved. We present a concrete construction that belongs to the class of cyclic digital nets and we provide numerical illustrations of how it can reduce the variance of an RQMC estimator, compared with more standard constructions.

Yves Edel, Pierre L’Ecuyer
Improvements on Low Discrepancy One-Dimensional Sequences and Two-Dimensional Point Sets

We obtain significant improvements for the star discrepancy D* of generalized van der Corput sequences by means of linear digit scramblings (see Section 5.2 for the definition). We also find a new lower bound for the extreme discrepancy D of these sequences, which shows that linearly scrambled sequences occupy a good place among generalized van der Corput sequences. Finally, we derive the corresponding properties for generalized Hammersley point sets in arbitrary bases and link recent developments in base 2 by Kritzer, Larcher and Pillichshammer to former studies of Béjian and the author.

Henri Faure
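The building block of all these sequences, the base-b radical inverse, can be sketched as follows; the linear digit scramblings studied in the paper would additionally transform each digit before mirroring, which this minimal unscrambled version omits.

```python
def radical_inverse(i, b):
    """Van der Corput radical inverse: write i in base b and mirror its
    digits about the radix point, mapping i to a point in [0, 1)."""
    inv, f = 0.0, 1.0 / b
    while i > 0:
        inv += (i % b) * f            # next base-b digit of i
        i //= b
        f /= b
    return inv

# First 8 points of the base-2 van der Corput sequence
seq = [radical_inverse(i, 2) for i in range(8)]
# [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Pairing radical_inverse(i, b) with the uniform coordinate i/n gives the corresponding two-dimensional Hammersley point set mentioned in the abstract.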
Improved Multilevel Monte Carlo Convergence using the Milstein Scheme

In this paper we show that the Milstein scheme can be used to improve the convergence of the multilevel Monte Carlo method for scalar stochastic differential equations. Numerical results for Asian, lookback, barrier and digital options demonstrate that the computational cost to achieve a root-mean-square error of ε is reduced to O(ε^(−2)). This is achieved through a careful construction of the multilevel estimator which computes the difference in expected payoff when using different numbers of timesteps.

Mike Giles
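The telescoping construction behind the multilevel estimator can be sketched as follows. For brevity this toy uses an Euler-Maruyama discretization of geometric Brownian motion and a plain European call payoff, not the Milstein scheme or the path-dependent payoffs of the paper; all parameter values are illustrative. The essential point is that each level's correction couples a fine and a coarse path through the same Brownian increments.

```python
import math
import random

def coupled_payoffs(level, seed, S0=1.0, r=0.05, sigma=0.2, T=1.0, K=1.0, M=2):
    """Simulate one fine path (M**level Euler steps) and, for level > 0, the
    coarse path (M**(level-1) steps) driven by the same Brownian increments.
    Returns the pair of European call payoffs (P_fine, P_coarse)."""
    rng = random.Random(seed)
    n_fine = M ** level
    h = T / n_fine
    S_fine = S_coarse = S0
    dW_coarse = 0.0
    for n in range(n_fine):
        dW = rng.gauss(0.0, math.sqrt(h))
        S_fine += r * S_fine * h + sigma * S_fine * dW
        dW_coarse += dW
        if level > 0 and (n + 1) % M == 0:   # one coarse step per M fine steps
            S_coarse += r * S_coarse * (M * h) + sigma * S_coarse * dW_coarse
            dW_coarse = 0.0
    P_fine = max(S_fine - K, 0.0)
    P_coarse = max(S_coarse - K, 0.0) if level > 0 else 0.0
    return P_fine, P_coarse

def mlmc_estimate(L, N):
    """Telescoping sum: E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}]."""
    total = 0.0
    for level in range(L + 1):
        s = sum(pf - pc
                for pf, pc in (coupled_payoffs(level, 10_000 * level + i)
                               for i in range(N)))
        total += s / N
    return total
```

In the full method, N would be chosen per level from the estimated variances of the corrections; using Milstein instead of Euler is what shrinks those variances fast enough to reach the O(ε^(−2)) cost.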
Generalized Tractability for Linear Functionals

We study approximation of continuous linear functionals I_d defined over reproducing kernel weighted Hilbert spaces of d-variate functions. Let n(ε, I_d) denote the minimal number of function values needed to solve the problem to within ε. There are many papers studying polynomial tractability, for which n(ε, I_d) is to be bounded by a polynomial in ε^(−1) and d. We study generalized tractability, for which we want to guarantee that either n(ε, I_d) is not exponentially dependent on ε^(−1) and d, which is called weak tractability, or is bounded by a power of T(ε^(−1), d) for (ε^(−1), d) ∈ Ω ⊆ [1, ∞) × ℕ, which is called (T, Ω)-tractability. Here, the tractability function T is non-increasing in both arguments and does not depend exponentially on ε^(−1) and d.

We present necessary conditions on generalized tractability for arbitrary continuous linear functionals I_d defined on weighted Hilbert spaces whose kernel has a decomposable component, and sufficient conditions on generalized tractability for multivariate integration for general reproducing kernel Hilbert spaces. For some weighted Sobolev spaces these necessary and sufficient conditions coincide. They are expressed in terms of necessary and sufficient conditions on the weights of the underlying spaces.

Michael Gnewuch, Henryk Woźniakowski
An Improved Implementation of Stochastic Particle Methods and Applications to Coagulation Equations

We present a sampling method, with applications to Monte Carlo simulations of particle systems, which improves the efficiency of sampling from a table of probabilities with changing values and a variable number of elements. For this purpose an optimized partition of the set of events is constructed. The goal is to minimize the expected number of operations in first choosing an element of the partition and then sampling the event from this set. The approach presented here computes an optimized grouping based only on the current structure of the table of events. It can be used as a universal tool, avoiding the experimental determination of the best group structure, which depends on the problem parameters and on the number of particles. The method is tested numerically by simulations of coagulation processes.

Flavius Guiaş
(t, m, s)-Nets and Maximized Minimum Distance

Many experiments in computer graphics imply that the average quality of quasi-Monte Carlo integro-approximation is improved as the minimal distance of the point set grows. While the definition of (t, m, s)-nets in base b guarantees extensive stratification properties, which are best for t = 0, sampling points can still lie arbitrarily close together. We remove this degree of freedom, report results of two computer searches for (0, m, 2)-nets in base 2 with maximized minimum distance, and present an inferred construction for general m. The findings are especially useful in computer graphics and, unexpectedly, some (0, m, 2)-nets with the best minimum distance properties cannot be generated in the classical way using generator matrices.

Leonhard Grünschloß, Johannes Hanika, Ronnie Schwede, Alexander Keller
Quasi-Monte Carlo Simulation of Discrete-Time Markov Chains on Multidimensional State Spaces

We propose and analyze a quasi-Monte Carlo (QMC) method for simulating a discrete-time Markov chain on a discrete state space of dimension s ≥ 1. Several paths of the chain are simulated in parallel and reordered at each step, using a multidimensional matching between the QMC points and the copies of the chains. This method generalizes a technique proposed previously for the case where s = 1. We provide a convergence result when the number N of simulated paths increases toward infinity. Finally, we present the results of some numerical experiments showing that our QMC algorithm converges faster as a function of N, at least in some situations, than the corresponding Monte Carlo (MC) method.

Rami El Haddad, Christian Lécot, Pierre L’Ecuyer
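The reorder-and-advance idea is easiest to see in the s = 1 case that the paper generalizes. The sketch below is only a caricature of the method: it uses a plain stratified point set with a fresh random shift in place of a full randomized QMC construction, a toy chain on [0, 1), and sorting as the one-dimensional "matching" between points and copies.

```python
import random

def array_rqmc(transition, cost, s0, N, steps, seed=0):
    """Sorted-chains sketch for a chain on the real line (the s = 1 case):
    N copies are advanced in parallel, and before each step the states are
    sorted so that the j-th smallest state is driven by the stratified
    point (j + U) / N with a common random shift U."""
    rng = random.Random(seed)
    states = [s0] * N
    for _ in range(steps):
        U = rng.random()                       # fresh random shift each step
        states.sort()                          # match states to points by order
        states = [transition(s, (j + U) / N) for j, s in enumerate(states)]
    return sum(cost(s) for s in states) / N    # average cost over the copies

# Toy chain s' = (s + u) mod 1; the state distribution stays near-uniform,
# so the average state after several steps should be close to 0.5
est = array_rqmc(lambda s, u: (s + u) % 1.0, lambda s: s, 0.0, N=64, steps=10)
```

The sort keeps nearby chains assigned to nearby points, which is what lets the low-discrepancy structure of the driving points survive across time steps.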
Computational Engine for a Virtual Tissue Simulator

We have developed a computational platform that simulates light transport in tissue in support of biomedical optics research. Although in its initial stage of development, this platform is being used to answer important questions regarding the detection of tissue changes, and the optimal design and positioning of optical probes to ‘interrogate’ the tissue best. We provide answers to such questions by applying perturbation and midway surface Monte Carlo techniques. Derivation of these methods makes rigorous use of the radiative transport equation which is essential if the methods are to provide accurate solutions for highly complex media such as biological tissue.

Carole Hayakawa, Jerome Spanier, Vasan Venugopalan
Randomized Approximation of Sobolev Embeddings

We study approximation of functions belonging to Sobolev spaces W^r_p(Q) by randomized algorithms based on function values. Here 1 ≤ p ≤ ∞, Q = [0, 1]^d, and r, d ∈ ℕ. The error is measured in L_q(Q), with 1 ≤ q < ∞, and we assume r/d > 1/p − 1/q, guaranteeing that W^r_p(Q) is embedded into L_q(Q). The optimal order of convergence for the case that W^r_p(Q) is embedded even into C(Q) is well-known. It is n^(−r/d + max(1/p − 1/q, 0)) (n the number of function evaluations). This rate is already reached by deterministic algorithms, and randomization gives no speedup.

In this paper we are concerned with the case that W^r_p(Q) is not embedded into C(Q) (but, of course, still into L_q(Q)). For this situation approximation based on function values was not studied before. We prove that for randomized algorithms the above rate also holds, while for deterministic algorithms no rate whatsoever is possible. Thus, in the case of low smoothness, Monte Carlo approximation algorithms reach a considerable speedup over deterministic ones (up to n^(−1+ε) for any ε > 0).

We also give some applications to integration of functions and to approximation of solutions of elliptic PDE.

Stefan Heinrich
Tractability of Linear Multivariate Problems in the Average Case Setting

We study the average case setting for linear multivariate problems defined over a separable Banach space of functions f of d variables. The Banach space is equipped with a Gaussian measure. We approximate linear multivariate problems by computing finitely many information evaluations. An information evaluation is defined as an evaluation of a continuous linear functional from a given class Λ. We consider two classes of information evaluations: the first class Λ^all consists of all continuous linear functionals, and the second class Λ^std consists of function evaluations.

We investigate the minimal number n(ε, d, Λ) of information evaluations needed to reduce the initial average case error by a factor ε. The initial average case error is defined as the minimal error that can be achieved without any information evaluations.

We study tractability of linear multivariate problems in the average case setting. Tractability means that n(ε, d, Λ) is bounded by a polynomial in both ε^(−1) and d, and strong tractability means that n(ε, d, Λ) is bounded by a polynomial only in ε^(−1).

For the class Λ^all, we provide necessary and sufficient conditions for tractability and strong tractability in terms of the eigenvalues of the covariance operator of a Gaussian measure on the space of solution elements. These conditions are simplified under additional assumptions on the measure. In particular, we consider measures with finite-order weights and product weights. For finite-order weights, we prove that linear multivariate problems are always tractable.

Fred Hickernell, Greg Wasilkowski, Henryk Woźniakowski
Zinterhof Sequences in GRID-Based Numerical Integration

The suitability of Zinterhof sequences for GRID-based QMC integration is discussed. Theoretical considerations as well as experimental investigations are conducted, comparing and assessing different strategies for efficient and reliable usage. The high robustness and ease of construction exhibited by these sequences qualify them as excellent QMC point set candidates for heterogeneous environments like the GRID.

Heinz Hofbauer, Andreas Uhl, Peter Zinterhof
A Pragmatic View on Numerical Integration of Unbounded Functions

We take a pragmatic approach to numerical integration of unbounded functions. In this context we discuss and evaluate the practical application of a method suited also for non-specialists and application developers. We show that this method can be applied to a rich body of functions, and evaluate its merits in comparison to other methods for the integration of unbounded functions. Furthermore, we give experimental results to illustrate certain issues in the actual application and to confirm theoretical results.

Heinz Hofbauer, Andreas Uhl, Peter Zinterhof
Assessment of Genetic Association using Haplotypes Inferred with Uncertainty via Markov Chain Monte Carlo

In recent years, haplotypic information has become an important subject in the context of molecular genetic studies. Assuming that some genetic mutations take part in the etiology of some diseases, it can be of great interest to compare, among different unrelated individuals, sets of genetic variations inherited in blocks from their parents, in order to conclude whether there is some association between the variations and a disease. The main problem is that, in the absence of family data, obtaining haplotypic information is not straightforward: individuals having more than one polymorphic heterozygous locus have uncertain haplotypes.

We have developed a Markov Chain Monte Carlo method to estimate simultaneously the sample frequency of each possible haplotype and the association between haplotypes and a disease.

Raquel Iniesta, Victor Moreno
The Generalized Gibbs Sampler and the Neighborhood Sampler

The Generalized Gibbs Sampler (GGS) is a recently proposed Markov chain Monte Carlo (MCMC) technique that is particularly useful for sampling from distributions defined on spaces in which the dimension varies from point to point or in which points are not easily defined in terms of co-ordinates. Such spaces arise in problems involving model selection and model averaging and in a number of interesting problems in computational biology. Such problems have hitherto been the domain of the Reversible-jump Sampler, but the method described here, which generalizes the well-known conventional Gibbs Sampler, provides an alternative that is easy to implement and often highly efficient.

Jonathan Keith, George Sofronov, Dirk Kroese
The Weighted Dyadic Diaphony of Digital Sequences

The (weighted) dyadic diaphony is a measure of the irregularity of distribution modulo one of a sequence. Recently it has been shown that the (weighted) dyadic diaphony can be interpreted as the worst-case error for QMC integration in a certain Hilbert space of functions. In this paper we give upper bounds on the weighted dyadic diaphony of digital (t, s)-sequences over ℤ₂.

Peter Kritzer, Friedrich Pillichshammer
A New Criterion for Finiteness of Weight Estimator Variance in Statistical Simulation

It has been found recently that increasing the phase space dimension by including simulated auxiliary random variables among the phase coordinates can be effective for the construction of weight modifications. In this paper the effectiveness of “value” and partial “value” modelling is considered. These types of modelling are related to the construction of a simulated distribution for some auxiliary random variable by multiplying the initial density by a “value” function, which usually corresponds to the solution of an adjoint integral equation of the second kind. It is proved that the weight estimator variance is finite in the case of partial value modelling. On the basis of this fact, a new criterion for finiteness of the weight estimator variance, based on the use of a majorant adjoint equation, is proposed. Using this criterion, the classical “exponential transformation” method is studied for free path simulation in one- and three-dimensional modifications.

Ilya Medvedev, Gennadii Mikhailov
Optimal Pointwise Approximation of a Linear Stochastic Heat Equation with Additive Space-Time White Noise

We consider a linear stochastic heat equation on the spatial domain ]0, 1[ with additive space-time white noise, and we study approximation of the mild solution at a fixed time instance. We show that a drift-implicit Euler scheme with a non-equidistant time discretization achieves the order of convergence N^{-1/2}, where N is the total number of evaluations of one-dimensional components of the driving Wiener process. This order is best possible and cannot be achieved with an equidistant time discretization.

Thomas Müller-Gronbach, Klaus Ritter, Tim Wagner
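
A simplified sketch of the underlying scheme: expanding the solution in the eigenbasis of the Laplacian decouples the equation into scalar Ornstein-Uhlenbeck modes, each of which can be advanced by a drift-implicit Euler step. For brevity this sketch uses an equidistant grid and a fixed mode truncation, whereas the paper's optimal-rate result relies on a non-equidistant discretization.

```python
import math
import random

def implicit_euler_heat_mode(lam, T, n_steps, rng):
    """Drift-implicit Euler for one spectral mode of the stochastic heat
    equation, dX = -lam*X dt + d(beta):  X_{j+1} = (X_j + dW)/(1 + lam*h).
    Equidistant steps are used here only for simplicity."""
    h = T / n_steps
    x = 0.0
    for _ in range(n_steps):
        x = (x + rng.gauss(0.0, math.sqrt(h))) / (1.0 + lam * h)
    return x

def heat_solution_samples(n_modes, T, n_steps, n_rep, seed=3):
    """Samples of the truncated mild solution at the spatial point 1/2 and
    time T, using eigenfunctions sqrt(2)*sin(k*pi*x) of the Laplacian on
    ]0,1[ with eigenvalues (k*pi)**2.  Illustrative truncation only."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_rep):
        u = 0.0
        for k in range(1, n_modes + 1):
            xk = implicit_euler_heat_mode((k * math.pi) ** 2, T, n_steps, rng)
            u += xk * math.sqrt(2.0) * math.sin(k * math.pi * 0.5)
        out.append(u)
    return out
```

For the first mode the exact variance at large T is 1/(2π²); the implicit scheme reproduces it up to an O(h) perturbation, which is one way to sanity-check the discretization.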
Unbiased Global Illumination with Participating Media

We present techniques that enhance global illumination algorithms by incorporating the effects of participating media. Instead of ray marching we use a sophisticated Monte Carlo method for the realization of propagation events and transmittance estimations. The presented techniques lead to unbiased estimators of the light transport equation with participating media.

Matthias Raab, Daniel Seibert, Alexander Keller
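
One standard unbiased alternative to ray marching for sampling propagation events is Woodcock (delta) tracking; the sketch below illustrates that idea and is not necessarily the exact estimator used in the paper.

```python
import random

def delta_tracking_distance(sigma_t, sigma_maj, rng, x0=0.0):
    """Woodcock (delta) tracking: sample an unbiased free-flight distance
    through a medium with spatially varying extinction sigma_t(x), given a
    majorant sigma_maj >= sigma_t(x) everywhere.  Tentative collisions are
    sampled at the constant rate sigma_maj; each is accepted as a real
    collision with probability sigma_t(x)/sigma_maj, otherwise it is a
    'null' collision and tracking continues."""
    x = x0
    while True:
        x += rng.expovariate(sigma_maj)
        if rng.random() < sigma_t(x) / sigma_maj:
            return x
```

Unlike ray marching, no step size is chosen and no discretization bias is introduced; the cost depends on how tight the majorant is.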
SIMD-Oriented Fast Mersenne Twister: a 128-bit Pseudorandom Number Generator

Mersenne Twister (MT) is a widely-used fast pseudorandom number generator (PRNG) with a long period of 2^19937 − 1, designed 10 years ago based on 32-bit operations. In this decade, CPUs for personal computers have acquired new features, such as Single Instruction Multiple Data (SIMD) operations (i.e., 128-bit operations) and multi-stage pipelines. Here we propose a 128-bit based PRNG, named SIMD-oriented Fast Mersenne Twister (SFMT), which is analogous to MT but makes full use of these features. Its recursion fits pipeline processing better than MT, and it is roughly twice as fast as optimised MT using SIMD operations. Moreover, the dimension of equidistribution of SFMT is better than that of MT.

We also introduce a block-generation function, which fills an array of 32-bit integers in one call. It speeds up the generation by a factor of two. A speed comparison with other modern generators, such as multiplicative recursive generators, shows an advantage of SFMT. The implemented C codes are downloadable from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html.

Mutsuo Saito, Makoto Matsumoto
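
The block-generation interface can be illustrated with Python's standard generator, which is itself an MT19937: draw all the bits for an array in one call instead of one word at a time. This mimics only the interface; SFMT's actual speedup comes from its SIMD-friendly 128-bit recursion.

```python
import random

def fill_block_u32(n, seed=1234):
    """Fill a list with n 32-bit integers in a single generator call by
    drawing 32*n bits at once and slicing them into words.  Python's
    stdlib random is a Mersenne Twister (MT19937), so this illustrates
    the block-generation idea, not SFMT itself."""
    rng = random.Random(seed)
    bits = rng.getrandbits(32 * n)
    # Slice the big integer into n little-endian 32-bit words.
    return [(bits >> (32 * i)) & 0xFFFFFFFF for i in range(n)]
```

Amortizing per-call overhead over a whole block is the same effect the paper reports as a factor-of-two speedup for SFMT's array-filling function.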
A New Lower Bound on the t-Parameter of (t, s)-Sequences
Rudolf Schürer
Walk-on-Spheres Algorithm for Solving Boundary-Value Problems with Continuity Flux Conditions

We consider a boundary-value problem for elliptic equations with constant coefficients, coupled through continuity conditions on the boundary between the domains. To take into account conditions involving the solution’s normal derivative, we apply a new mean-value relation written down at a boundary point. This integral relation is exact and eliminates the bias caused by the commonly used finite-difference approximation. Randomization of this mean-value relation makes it possible to continue simulating the walk-on-spheres trajectory after it hits the boundary. We prove the convergence of the algorithm and determine its rate. In conclusion, we present the results of some model computations.

Nikolai Simonov
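
The underlying walk-on-spheres iteration is easy to sketch in its plain Dirichlet form; the interface and flux-continuity handling of the paper are omitted here. The unit disk and the boundary data below are illustrative assumptions.

```python
import math
import random

def walk_on_spheres_disk(p, g, eps=1e-3, n_walks=5000, seed=4):
    """Walk-on-spheres estimate of the solution u(p) of the Laplace
    equation in the unit disk with Dirichlet boundary data g.  Each step
    jumps to a uniform point on the largest circle centred at the current
    position that stays inside the domain; the walk stops once it enters
    an eps-shell of the boundary.  Plain Dirichlet version only: the
    paper's algorithm additionally continues the trajectory after boundary
    hits via an exact mean-value relation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = p
        while True:
            r = 1.0 - math.hypot(x, y)   # distance to the boundary
            if r < eps:
                break
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(theta)
            y += r * math.sin(theta)
        # Project onto the boundary and score the boundary value.
        norm = math.hypot(x, y)
        total += g(x / norm, y / norm)
    return total / n_walks
```

For boundary data g(x, y) = x the harmonic extension is u(x, y) = x, which gives a simple correctness check.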
Good Lattice Rules with a Composite Number of Points Based on the Product Weighted Star Discrepancy

Rank-1 lattice rules based on a weighted star discrepancy with weights of a product form have been previously constructed under the assumption that the number of points is prime. Here, we extend these results to the non-prime case. We show that if the weights are summable, there exist lattice rules whose weighted star discrepancy is O(n^{-1+δ}) for any δ > 0, with the implied constant independent of the dimension and the number of lattice points, but dependent on δ and the weights. Then we show that the generating vector of such a rule can be constructed using a component-by-component (CBC) technique. The cost of the CBC construction is analysed in the final part of the paper.

Vasile Sinescu, Stephen Joe
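
A minimal CBC construction can be sketched for a related quality criterion, the worst-case error in a weighted Korobov space with product weights, rather than the weighted star discrepancy analysed in the paper; the modulus and weights below are illustrative assumptions.

```python
import math

def korobov_e2(z, n, gamma):
    """Squared worst-case error of the rank-1 lattice rule with generating
    vector z in a weighted Korobov space (smoothness alpha = 2, product
    weights gamma), using the Bernoulli-polynomial kernel
    omega(x) = 2*pi**2*(x**2 - x + 1/6)."""
    total = 0.0
    for k in range(n):
        prod = 1.0
        for zj, gj in zip(z, gamma):
            x = (k * zj % n) / n
            prod *= 1.0 + gj * 2.0 * math.pi ** 2 * (x * x - x + 1.0 / 6.0)
        total += prod
    return -1.0 + total / n

def cbc_construct(n, s, gamma):
    """Component-by-component construction: extend z one coordinate at a
    time, each time keeping the component in {1,...,n-1} that minimises
    the current squared worst-case error.  This naive version recomputes
    the full product each time, costing O(s^2 * n^2) overall; fast CBC
    algorithms reduce the cost substantially."""
    z = []
    for _ in range(s):
        best, best_e2 = None, None
        for cand in range(1, n):
            e2 = korobov_e2(z + [cand], n, gamma)
            if best_e2 is None or e2 < best_e2:
                best, best_e2 = cand, e2
        z.append(best)
    return z
```

The striking property, mirrored in the paper's discrepancy setting, is that this greedy one-coordinate-at-a-time search suffices to achieve the optimal rate.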
Ergodic Simulations for Diffusion in Random Velocity Fields

Ergodic simulations aim at estimating ensemble-average characteristics of diffusion in random fields from space averages. The traditional approach, based on large supports of the initial concentration, generally fails to yield ergodic simulations. However, such simulations, using single realizations of the velocity, are shown to be feasible if space averages with respect to the location of the initial concentration support are used to estimate ensemble averages.

Nicolae Suciu, Călin Vamos, Karl Sabelfeld
Efficient Simultaneous Simulation of Markov Chains

Markov chains can be simulated efficiently either by high-dimensional low-discrepancy point sets or by padding low-dimensional point sets. Given an order on the state space, both approaches can be improved by sorting the ensemble of Markov chains. We analyze deterministic approaches resulting in algorithmic simplifications and provide intuition about when and why the sorting works. Then we discuss the efficiency of different sorting strategies for the example of light transport simulation.

Carsten Wächter, Alexander Keller
Backmatter
Metadata
Title
Monte Carlo and Quasi-Monte Carlo Methods 2006
edited by
Alexander Keller
Stefan Heinrich
Harald Niederreiter
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-74496-2
Print ISBN
978-3-540-74495-5
DOI
https://doi.org/10.1007/978-3-540-74496-2
