
About this Book

Scientists and engineers are increasingly making use of simulation methods to solve problems which are insoluble by analytical techniques. Monte Carlo methods which make use of probabilistic simulations are frequently used in areas such as numerical integration, complex scheduling, queueing networks, and large-dimensional simulations. This collection of papers arises from a conference held at the University of Nevada, Las Vegas, in 1994. The conference brought together researchers across a range of disciplines whose interests include the theory and application of these methods. This volume provides a timely survey of the field and the new directions in which it is moving.



Modified Monte Carlo Methods Using Quasi-Random Sequences

Computational experiments have shown that Monte Carlo methods using quasi-random sequences lose some of their effectiveness for integration problems in which the dimension is large or the integrand is not smooth. In this paper, two modified Monte Carlo methods are developed, which regain an enhanced convergence rate. The standard rejection method involves discontinuities, corresponding to the decision to accept or reject. In place of this, a smoothed rejection method is formulated and found to be very effective when used with quasi-random sequences. Monte Carlo evaluation of Feynman-Kac path integrals involves high dimension, one dimension for each discrete time interval. Through an alternative discretization, the effective dimension of the integration domain is drastically reduced, so that quasi-random sequences are effective.
Russel E. Caflisch, Bradley Moskowitz
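
The smoothing idea can be made concrete with a short sketch. All names and the ramp width below are illustrative choices, not the paper's actual construction: the standard rejection weight is the indicator that a uniform variate falls below the acceptance probability, which is discontinuous; replacing it with a narrow linear ramp keeps the expected weight unchanged while removing the discontinuity that degrades quasi-random sequences.

```python
def hard_accept(u, p):
    """Standard rejection weight: 1 if u < p, else 0 (discontinuous in u)."""
    return 1.0 if u < p else 0.0

def smooth_accept(u, p, delta=0.1):
    """Smoothed rejection weight: linear ramp over [p - delta/2, p + delta/2]."""
    t = (p - u) / delta + 0.5
    return min(1.0, max(0.0, t))

# Both weights average to the acceptance probability p over u in [0, 1]
# (whenever delta/2 < p < 1 - delta/2), but only the smoothed one is continuous.
n = 100000
p = 0.3
hard_mean = sum(hard_accept((i + 0.5) / n, p) for i in range(n)) / n
smooth_mean = sum(smooth_accept((i + 0.5) / n, p) for i in range(n)) / n
```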

Simulated Annealing: Folklore, Facts, and Directions

Simulated annealing is a (sometimes-maligned) probabilistic search method used in global optimization. Folklore has it that simulated annealing must use static (generally-large) neighborhoods, that it must blindly propose moves independently of objective-function values, that it must waste a lot of (computer) time rejecting moves, that tabu search and genetic algorithms as well as random restarting and deterministic methods are disjoint from simulated annealing, and that the canonical cooling schedule is worthwhile. All this is false. After exploding these myths, the role of search diversification in eliminating pathologies and in setting the stage for guaranteed linear speedup of parallel search is pointed out. Next we sketch a way to use low-discrepancy point sets in a preprocessor. Finally, we outline methods to handle noisy objective functions and argue that convergence results for simulated annealing have strong practical relevance in that setting.
Bennett L. Fox
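
For readers unfamiliar with the baseline being critiqued, here is a minimal simulated-annealing sketch on a one-dimensional multimodal objective. Every concrete choice below (the objective, the Gaussian neighborhood, the geometric cooling schedule) is an illustrative convention, which is precisely the kind of choice the paper argues is not mandatory.

```python
import math
import random

def objective(x):
    return x * x + 10.0 * math.sin(3.0 * x)   # global minimum near x ≈ -0.51

def anneal(steps=20000, t0=10.0, alpha=0.9995, seed=1):
    rng = random.Random(seed)
    x = rng.uniform(-5.0, 5.0)
    fx = objective(x)
    best_x, best_f = x, fx
    t = t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)           # propose a neighboring point
        fy = objective(y)
        # Metropolis rule: always accept downhill, sometimes accept uphill.
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < best_f:
                best_x, best_f = x, fx
        t *= alpha                            # geometric ("canonical") cooling
    return best_x, best_f

best_x, best_f = anneal()
```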

Two Approaches to the Initial Transient Problem

This paper describes two different approaches to dealing with the initial transient problem. In the first approach, the length of the “warm-up period” is determined by obtaining analytical estimates on the rate of convergence to stationarity. Specifically, we obtain an upper bound on the “second eigenvalue” of the transition matrix of a Markov chain, thereby providing one with a theoretical device that potentially can give estimates of the desired form. The second approach is data-driven, and involves using observed data from the simulation to determine an estimate of the “warm-up period”. For the method we study, we are able to use a coupling argument to establish a number of important theoretical properties of the algorithm.
Peter W. Glynn
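
A simple data-driven truncation rule illustrates the spirit of the second approach. The rule below is the MSER heuristic, shown purely for illustration; it is not the specific coupling-based algorithm analyzed in the paper. Given output y[0], ..., y[n-1] with an initial transient, it chooses the truncation point d minimizing the squared standard error of the mean of the retained observations.

```python
import random

def mser_truncation(y):
    n = len(y)
    best_d, best_score = 0, float("inf")
    for d in range(n // 2):                  # search only the first half
        tail = y[d:]
        m = len(tail)
        mean = sum(tail) / m
        var = sum((v - mean) ** 2 for v in tail) / m
        score = var / m                      # squared standard error of the mean
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Synthetic output: a transient decaying toward a stationary level of 1.0.
rng = random.Random(0)
data = [1.0 + 5.0 * 0.9 ** i + rng.gauss(0.0, 0.1) for i in range(400)]
d = mser_truncation(data)
warm_mean = sum(data[d:]) / len(data[d:])    # mean after deleting the warm-up
```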

Tables of (T, M, S)-Net and (T, S)-Sequence Parameters

We present a survey of the known constructions of (t,m,s)-nets and (t,s)-sequences and tabulate the best parameters arising from these constructions for various bases.
Gary L. Mullen, Arijit Mahalanabis, Harald Niederreiter
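
The one-dimensional prototype of a (t, s)-sequence is the van der Corput sequence in base b, given by the radical inverse: write n in base b and reflect its digits about the radix point. This standard construction (background material, not specific to the tables in the paper) is sketched below.

```python
def radical_inverse(n, b):
    """Reflect the base-b digits of the integer n about the radix point."""
    x, f = 0.0, 1.0 / b
    while n > 0:
        x += (n % b) * f
        n //= b
        f /= b
    return x

# First eight points of the base-2 van der Corput sequence:
points = [radical_inverse(n, 2) for n in range(1, 9)]
# → [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875, 0.0625]
```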

New Developments in Uniform Pseudorandom Number and Vector Generation

A survey of recent and new developments in the areas of uniform pseudorandom number and uniform pseudorandom vector generation is presented. The emphasis is on generators for which a detailed theoretical analysis is available.
Harald Niederreiter
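
One of the classical families covered by such surveys is the linear congruential method, x_{n+1} = a · x_n mod m. As a concrete, theoretically well-analyzed example, here is the well-known "minimal standard" generator of Lewis, Goodman and Miller (multiplier 7^5, prime modulus 2^31 − 1).

```python
M = 2**31 - 1            # prime modulus
A = 16807                # multiplier 7^5, a primitive root mod M

def lcg_ints(seed, count):
    """Generate `count` successive states of the minimal-standard LCG."""
    x = seed
    out = []
    for _ in range(count):
        x = (A * x) % M
        out.append(x)
    return out

xs = lcg_ints(seed=1, count=10000)
uniforms = [x / M for x in xs[:5]]   # map states to (0, 1)
```

Starting from seed 1, the 10000th state is the published check value 1043618065, a standard way to validate an implementation.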

Quasi-Monte Carlo Methods for Particle Transport Problems

Particle transport problems arise in such diverse application areas as the modeling of nuclear reactors and of semiconductor devices, and in the remote sensing of underground geologic features. Conventional Monte Carlo methods solve such problems by using pseudorandom numbers to make decisions at the microscopic level in order to draw conclusions about the macroscopic behavior of the system. Application of quasirandom (low discrepancy) sequences to such problems encounters certain difficulties that must be overcome if predictable gains over the use of pseudorandom Monte Carlo are to be realized. This paper outlines several ideas for achieving this and presents the results of “model” problem analyses and numerical tests of these ideas.
Jerome Spanier

Non-Adaptive Coverings for Optimization of Gaussian Random Fields

Non-adaptive methods for global optimization of a function choose observation points independently of past observed values. We study the average performance of two simple non-adaptive algorithms for optimization of real-valued functions defined on a compact set in R^d. One method chooses observations on a deterministic uniform grid, while the other chooses observations independently and uniformly distributed over the domain. By average performance is meant that we endow the set of objective functions with a probability measure; i.e., we view the function as a sample path of a stochastic process (d = 1) or a random field (d > 1). Under the assumption of a smooth Gaussian field, we identify the limiting distributions of the (normalized) error for both methods. The deterministic grid is asymptotically 6 times as efficient in the one-dimensional case. As the dimension increases, the relative advantage of the deterministic grid decreases, and for dimensions above 6 the random design is superior. The results on relative performance are insensitive to the particular Gaussian field assumed.
James M. Calvin
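
A toy version of the two designs can be written in a few lines. The paper's analysis concerns sample paths of smooth Gaussian random fields; here a fixed smooth function on [0, 1] merely illustrates observation on a deterministic uniform grid versus i.i.d. uniform observation points.

```python
import math
import random

def f(x):
    return math.sin(5.0 * x) * math.exp(-x)   # smooth, one interior maximum

def grid_max(n):
    """Best observed value on a uniform grid of n midpoints."""
    return max(f((i + 0.5) / n) for i in range(n))

def random_max(n, rng):
    """Best observed value at n i.i.d. uniform points."""
    return max(f(rng.random()) for _ in range(n))

rng = random.Random(0)
n = 64
reference = max(f(i / 10**5) for i in range(10**5))   # dense reference value
err_grid = reference - grid_max(n)
err_rand = reference - random_max(n, rng)
```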

The Method of Fundamental Solutions and the Quasi-Monte Carlo Method for Poisson’s Equation

A multivariate integration method based on the quasi-Monte Carlo method, which alleviates the problem of domain discretization, has been implemented for computing particular solutions of Poisson’s equation. Coupled with the method of fundamental solutions, a simple and versatile technique, it yields a computational method that shifts much of the effort from the analyst to the computer. For illustration, we give three numerical examples for a set of standard problems and compare the results of different methods.
C. S. Chen

Implementation of a Distributed Pseudorandom Number Generator

In parallel Monte Carlo simulations, it is highly desirable to have a system of pseudorandom number generators that has good statistical properties and allows reproducibility of the random sequences used by the individual processes. In this work, we discuss a distributed implementation of such a system on computer networks. The system employs linear congruential pseudorandom number generators obtained from the work “Random Number Generators for MIMD Parallel Processors” [1] and is organized as a virtual, complete binary tree. Nodes of the tree are the generators and the processes using them. The relationship between a node and its subtrees reflects that of a process and its children. Any binary tree of up to a million nodes can be maintained. To achieve high utilization of the generators, we implement a dynamically expanding complete binary tree. A server is maintained on a host on our computer network, and application programs obtain generators through the interface provided by the system. The system guarantees that no generator is allocated more than once to an application process and that no extraneous correlations are introduced. The pseudorandom number system uses the client/server model, communicating via Remote Procedure Calls (RPC).
Jian Chen, Paula Whitlock
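
The tree bookkeeping can be sketched in a few lines. The generator family and the seed-derivation rule below are placeholders, not those of the cited work: nodes are numbered as in a heap, so a process holding node i can derive its children's nodes (and hence their generators) without further coordination, and a server need only hand out each node at most once.

```python
M = 2**31 - 1

def node_seed(i):
    """Placeholder rule mapping a node index to a distinct generator seed."""
    return (16807 * i) % M or 1          # distinct for 1 <= i < M

def children(i):
    """Heap numbering: node i has children 2i and 2i + 1."""
    return 2 * i, 2 * i + 1

class Server:
    """Toy server: allocates each node of the expanding tree at most once."""
    def __init__(self):
        self.allocated = set()

    def allocate(self, i):
        assert i not in self.allocated, "node already allocated"
        self.allocated.add(i)
        return node_seed(i)

server = Server()
root_seed = server.allocate(1)
left, right = children(1)
```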

On the lattice test for inversive congruential pseudorandom numbers

An important tool for the analysis of inversive congruential pseudorandom numbers is the lattice test. For a given prime modulus p, the optimal behavior of inversive congruential generators occurs when they pass the (p - 2)-dimensional lattice test. We use the connection with permutation polynomials to establish several criteria for passing the (p - 2)-dimensional lattice test. We also prove that if p is a Mersenne prime, then there exists an inversive congruential generator which has period length p and passes the (p - 2)-dimensional lattice test.
Wun-Seng Chou, Harald Niederreiter
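
For context, an inversive congruential generator with prime modulus p iterates x_{n+1} = a · inv(x_n) + c (mod p), where inv(x) is the multiplicative inverse for x ≠ 0 and inv(0) = 0. The sketch below is runnable; its small parameters are purely illustrative and are not claimed to pass the (p − 2)-dimensional lattice test.

```python
def mod_inverse(x, p):
    """Multiplicative inverse mod prime p via Fermat's little theorem; inv(0) = 0."""
    return pow(x, p - 2, p) if x % p else 0

def icg(p, a, c, seed, count):
    x = seed % p
    out = []
    for _ in range(count):
        x = (a * mod_inverse(x, p) + c) % p
        out.append(x)
    return out

seq = icg(p=31, a=7, c=3, seed=0, count=12)
# seq begins 3, 26, 14, ...
```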

Discrepancy lower bound in two dimensions

In this communication we obtain, for the first time, the exact order of magnitude for the discrepancy of a two-dimensional sequence independently constructed by I.M. Sobol’ and S. Srinivasan; moreover, this sequence has the smallest discrepancy presently known in two dimensions.
Henri Faure

Stable Estimators for Self-Adjusting Simulations

We present two estimators which converge almost surely to the nearest integer to t = ET, with few restrictions on the random variable T. This works even when t is of the form m + 1/2 for an integer m. Applications to simulated annealing and to the design of other Monte Carlo experiments are indicated.
Bennett L. Fox, George W. Heine
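
The stability issue, and one simple stable rounding rule, can be illustrated directly. The estimator below is our illustration of the idea, not necessarily the paper's construction: rounding the sample mean fails to settle when t = E[T] is exactly m + 1/2, since the mean keeps crossing the rounding boundary; adding a bias that shrinks more slowly than the n^(-1/2) sampling fluctuations (here n^(-1/4)) makes the rounded value converge.

```python
import random

def stable_round_estimate(samples):
    n = len(samples)
    mean = sum(samples) / n
    return round(mean + n ** -0.25)      # deterministic tie-break upward

rng = random.Random(42)

# The hard case: T uniform on [2, 3], so t = E[T] = 2.5 exactly.
est_half = stable_round_estimate([rng.uniform(2.0, 3.0) for _ in range(100000)])

# An easy case: T uniform on [1.6, 2.0], so t = 1.8 and the target is 2.
est_easy = stable_round_estimate([rng.uniform(1.6, 2.0) for _ in range(100000)])
```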

A Comparison of Random and Quasirandom Points for Multidimensional Quadrature

An integral over a unit volume may be approximated by the mean of the values of the integrand on some sample of points from the integration domain. A variety of sampling methods for quadrature have been proposed. These include random samples, randomized orthogonal arrays, good lattice point sets and quasirandom sequences. Recently the author has derived quadrature error bounds by using an ANOVA decomposition and reproducing kernel Hilbert spaces. The error bound coefficients are the parts of the error bound depending only on the sample and not on the integrand. This article compares the error bound coefficients for various random and quasirandom samples. The relative advantages of different sampling schemes are explored. More sophisticated samples are found to be significantly better than a simple random sample if and only if the sample size is large or the integrand is approximately a sum of parts depending on only a few variables.
Fred J. Hickernell
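
A miniature version of the comparison, for a smooth one-dimensional integrand: estimate ∫₀¹ x² dx = 1/3 from N points drawn either i.i.d. uniformly or from the base-2 van der Corput sequence (a standard quasirandom sample; the article itself compares a much wider range of designs in higher dimensions).

```python
import random

def radical_inverse(n, b=2):
    """Base-b radical inverse (van der Corput point for index n)."""
    x, f = 0.0, 1.0 / b
    while n > 0:
        x += (n % b) * f
        n //= b
        f /= b
    return x

def estimate(points):
    """Sample-mean quadrature rule for f(x) = x^2."""
    return sum(x * x for x in points) / len(points)

N = 4096
rng = random.Random(0)
mc_est = estimate([rng.random() for _ in range(N)])
qmc_est = estimate([radical_inverse(n) for n in range(N)])

mc_err = abs(mc_est - 1.0 / 3.0)
qmc_err = abs(qmc_est - 1.0 / 3.0)
```

With N a power of two, the first N van der Corput points are exactly the equispaced set {i/N}, so the quasirandom error is deterministically of order 1/N, while the Monte Carlo error fluctuates at order 1/√N.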

A Simulation Study of a Change-Point Poisson Process Based on Two Well-known Test Statistics

Methods for the analysis of trends in a series of events have been extensively studied by many authors, from both parametric and nonparametric points of view. Most of the work on procedures for nonhomogeneous Poisson processes is based on fixed-sample-size tests. In some situations it is also desirable to undertake interim analyses periodically. Suppose, for example, a repairable system is under development. A development program might consist of testing to identify deficiencies, a redesign effort to correct the deficiencies, and further testing to verify these corrections and identify new problem areas. It would be advantageous to track the reliability growth trend of the system, by means of the failure data collected during development testing, so that the program could be revised, if necessary, in order to attain the system reliability objectives. Since the data often occur naturally in a sequential fashion, it is useful to have sequential procedures allowing repeated significance tests on the accumulating data. In this paper, we provide a side-by-side comparison of the following two widely used test statistics in this area: (1) the well-known Laplace test (called the L test), and (2) the most powerful test for the shape parameter in the Poisson process with Weibull intensity (called the Z test).
Major results based on a Monte Carlo simulation study include: (1) the estimated probability of type I error at or before the nth test in sampling from a simple Poisson distribution at a constant nominal level, (2) the existing support from the well-developed sequential clinical trial designs available in the literature, (3) performance assessment for abrupt changes (increases or decreases in the intensity of the process), and (4) a control charting procedure which gives a visual interpretation of the trend and can be translated into tabular or manual form for practical use in modeling the occurrences of stochastic phenomena.
Chih-Hsiang Ho
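
The Laplace (L) statistic referenced above has a simple closed form: for event times t_1, ..., t_n observed on (0, T], L = (mean(t_i) − T/2) / (T / √(12n)) is approximately standard normal under a homogeneous Poisson process, and large |L| signals a trend. The short simulation below is illustrative only; it is not the paper's study.

```python
import math
import random

def laplace_statistic(times, T):
    n = len(times)
    return (sum(times) / n - T / 2.0) / (T / math.sqrt(12.0 * n))

rng = random.Random(7)
T = 100.0

# Null case: conditioned on n events, homogeneous Poisson arrival times are
# i.i.d. uniform on (0, T].
null_times = sorted(rng.uniform(0.0, T) for _ in range(200))
L_null = laplace_statistic(null_times, T)

# An increasing intensity piles events late in (0, T], inflating L.
trend_times = sorted(T * math.sqrt(rng.random()) for _ in range(200))
L_trend = laplace_statistic(trend_times, T)
```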

A Quasi-Monte Carlo Algorithm for the Global Illumination Problem in the Radiosity Setting

One of the main problems in computer graphics is to solve the global illumination problem, which is given by a Fredholm integral equation of the second kind, called the radiance equation (REQ). In order to achieve realistic images, a very complex kernel of the integral equation, modelling all physical effects of light, must be considered. Due to this complexity, Monte Carlo methods seem to be an appropriate approach to solve the REQ approximately. We show that replacing Monte Carlo by quasi-Monte Carlo in some steps of the algorithm results in a faster convergence.
Alexander Keller

Multivariate Walsh series, digital nets and quasi-Monte Carlo integration

Quite recently, in a series of papers ([6], [11], [9], [10], [8]), a theory of the numerical integration of multivariate Walsh series by means of (t, m, s)-nets was developed.
Gerhard Larcher, Wolfgang Ch. Schmid

Parallel Pseudorandom Number Generation Using Additive Lagged-Fibonacci Recursions

We study the suitability of the additive lagged-Fibonacci pseudorandom number generator for parallel computation. This generator has a relatively short period with respect to the size of its seed. However, the short period is more than made up for by the huge number of full-period cycles it contains. We call these different full-period cycles equivalence classes. We show how to enumerate the equivalence classes and how to compute seeds to select a given equivalence class. The use of these equivalence classes gives an explicit parallelization suitable for a fully reproducible asynchronous MIMD implementation. To explore such an implementation we introduce an exponential sum measure of quality for the additive lagged-Fibonacci generators used serially or in parallel. We then prove the first non-trivial results we are aware of on this measure of quality.
Michael Mascagni, M. L. Robinson, Daniel V. Pryor, Steven A. Cuccaro
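
A minimal additive lagged-Fibonacci generator computes x_n = x_{n−R} + x_{n−S} (mod 2^m). The sketch below uses the classic lags (R, S) = (17, 5) and m = 32 as an illustrative choice; which full-period cycle (equivalence class) the generator traverses is determined entirely by its R-word seed, the property the paper exploits to assign distinct streams to parallel processes.

```python
from collections import deque

MOD = 2**32
R, S = 17, 5

def alfg(seed_words, count):
    """Additive lagged-Fibonacci stream; needs at least one odd seed word."""
    assert len(seed_words) == R and any(w % 2 for w in seed_words)
    state = deque(seed_words, maxlen=R)      # holds x_{n-R}, ..., x_{n-1}
    out = []
    for _ in range(count):
        x = (state[0] + state[-S]) % MOD     # x_{n-R} + x_{n-S}
        state.append(x)                      # maxlen evicts x_{n-R}
        out.append(x)
    return out

stream = alfg(list(range(1, R + 1)), 10)
# stream begins 14, 16, 18, ... for this seed
```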

Quasirandom Diffusion Monte Carlo

Diffusion Monte Carlo is a common method for estimating the properties of quantum mechanical systems by computing averages over sets of random walk simulations. We have found that by using quasirandom sequences of points in place of random or pseudorandom points in generating the simulation paths, we are able to obtain improved convergence rates and consequently reduced Monte Carlo errors for Diffusion Monte Carlo. Computational results are presented for a three-dimensional harmonic oscillator and the helium atom.
Bradley Moskowitz

Randomly Permuted (t,m,s)-Nets and (t, s)-Sequences

This article presents a hybrid of Monte Carlo and Quasi-Monte Carlo methods. In this hybrid, certain low discrepancy point sets and sequences due to Faure, Niederreiter and Sobol’ are obtained and their digits are randomly permuted. Since this randomization preserves the equidistribution properties of the points, it also preserves the proven bounds on their quadrature errors. The accuracy of an estimated integral can be assessed by replication, consisting of independent re-randomizations.
Art B. Owen
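
A simplified randomization illustrates the spirit of the construction: draw one random permutation of {0, ..., b−1} per digit position and apply it to the base-b digits of every point. (The paper's scheme permutes digits in a nested, path-dependent way; this position-wise variant is a simpler cousin, shown only to demonstrate that such digit permutations preserve equidistribution.)

```python
import random

def digits(x, b, k):
    """First k base-b digits of x in [0, 1)."""
    d = []
    for _ in range(k):
        x *= b
        d.append(int(x))
        x -= int(x)
    return d

def permute_point(x, b, perms):
    """Apply one fixed permutation of {0, ..., b-1} per digit position."""
    d = digits(x, b, len(perms))
    return sum(p[di] * b ** -(j + 1)
               for j, (p, di) in enumerate(zip(perms, d)))

rng = random.Random(3)
b, k = 2, 20
perms = [rng.sample(range(b), b) for _ in range(k)]

points = [n / 16 for n in range(16)]         # a (0, 4, 1)-net in base 2
scrambled = [permute_point(x, b, perms) for x in points]
```

Because each digit position is permuted bijectively, every dyadic interval of length 1/16 still contains exactly one scrambled point, which is the equidistribution property being preserved.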

Quantum Monte Carlo Simulation: Algorithm and Applications

The algorithm of the diffusion quantum Monte Carlo simulation method and some of its applications are presented. The emphasis is on actual calculations of the electronic structures of small systems, including a hydrogen molecule confined in a spheroidal box and the ionic hydrogen clusters H₂ₙ₊₁⁺ with n = 1, 2, 3, 4. The Monte Carlo calculations of the molecular energies and structures of these systems reported here are the best theoretical calculations to date. Equilibrium geometric structures of some clusters are found by optimizing their molecular energies, in good agreement with previous configuration interaction calculations. The electronic structures of the H₂ₙ₊₁⁺ clusters are analyzed in terms of the bond formation in these systems. An estimate of the dissociation energy of each cluster is made and compared with the available experimental measurements.
Tao Pang

A Coupled Monte Carlo/Explicit Euler Method for the Numerical Simulation of a Forest Fire Spreading Model

This paper presents an application of a general Monte Carlo integration method, coupled with an explicit Euler time discretization, to the numerical solution of an integro-differential forest fire propagation model. In this model, the time variation of the enthalpy of the fuel bed is given in terms of radiative heat transfer and radiative and convective heat loss terms. The radiative heat exchange is given by a local flame model. This approach results in a convolution integral between the heat generated by combustion and an empirical shape function.
Antonini Puppin Macedo, Antonio C. P. Brasil

Microcanonical Monte Carlo

The generic properties of finite equilibrated many-body systems under the action of long-range forces are discussed. The microcanonical thermodynamics of such systems is developed. As a realistic example, the multifragmentation of hot nuclei is investigated in some detail with the help of microcanonical Metropolis Monte Carlo (MMMC). MMMC is an alternative to molecular dynamics, but it has the advantage that one samples only the final states one is interested in and obtains the relevant branching ratios directly. Besides nuclear fragmentation, other important candidates for an MMMC description include the fragmentation of multiply charged metal clusters and astrophysical systems such as planetary systems.
O. Schapiro, D. H. E. Gross, A. Ecker

Computational Investigations of Low-Discrepancy Point Sets II

Point sets and sequences with small L2 discrepancy are useful in the evaluation of multiple integrals. For example, the average error in integration of all continuous functions over the unit cube (with respect to the Wiener measure) is given by the L2 discrepancy of the point set being used [6]. The Koksma-Hlawka inequality and Zaremba’s related inequality also imply the usefulness of low-discrepancy point sets [4, 7].
Tony T. Warnock
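
The L2 star discrepancy studied here has a closed form often credited to Warnock, which is what makes such computational investigations feasible: for points x_1, ..., x_N in [0, 1]^s, D2² = 3^(−s) − (2/N) Σ_n Π_j (1 − x_{n,j}²)/2 + (1/N²) Σ_{n,m} Π_j (1 − max(x_{n,j}, x_{m,j})). A direct O(N²) implementation is sketched below.

```python
from math import prod

def l2_star_discrepancy_sq(points):
    """Warnock-style closed form for the squared L2 star discrepancy."""
    n = len(points)
    s = len(points[0])
    term1 = 3.0 ** -s
    term2 = (2.0 / n) * sum(
        prod((1.0 - xj * xj) / 2.0 for xj in x) for x in points
    )
    term3 = sum(
        prod(1.0 - max(aj, bj) for aj, bj in zip(a, b))
        for a in points for b in points
    ) / (n * n)
    return term1 - term2 + term3

# One-dimensional sanity check: the midpoint set {(2i+1)/(2N)} attains the
# optimal value D2^2 = 1 / (12 N^2).
N = 8
midpoints = [((2 * i + 1) / (2 * N),) for i in range(N)]
d2 = l2_star_discrepancy_sq(midpoints)
```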

Estimates for the Volume of Points of (0, s)-Sequences in Base b ≥ s ≥ 2

In this paper we suggest a notion of the volume of points, and we estimate it for (0, s)-sequences, which are among the best known low-discrepancy sequences (cf. [1], [7], [8] and [6]).
Yi-Jun Xiao

