
1992 | Book

Simulation and Optimization

Proceedings of the International Workshop on Computationally Intensive Methods in Simulation and Optimization held at the International Institute for Applied Systems Analysis (IIASA), Laxenburg, Austria, August 23–25, 1990

Editors: Prof. Dr. Georg Pflug, Prof. Dr. Ulrich Dieter

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Economics and Mathematical Systems


About this book

This volume contains selected papers presented at the "International Workshop on Computationally Intensive Methods in Simulation and Optimization" held from 23rd to 25th August 1990 at the International Institute for Applied Systems Analysis (IIASA) in Laxenburg, Austria. The purpose of this workshop was to evaluate and to compare recently developed methods dealing with optimization in uncertain environments. It is one of IIASA's activities to study optimal decisions for uncertain systems and to make the results usable in economic, financial, ecological and resource planning. Over 40 participants from 12 different countries contributed to the success of the workshop; 12 papers were selected for this volume.

Prof. A. Kurzhanskii, Chairman of the Systems and Decision Sciences Program, IIASA

Preface

Optimization in a random environment has become an important branch of Applied Mathematics and Operations Research. It deals with optimal decisions when only incomplete information about the future is available. Consider the following example: you have to decide on the amount of production although the future demand is unknown. If the size of the demand can be described by a probability distribution, the problem is called a stochastic optimization problem.
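As a rough, hedged illustration of such a stochastic optimization problem (not taken from the book), the sketch below estimates the expected cost of a production decision by Monte Carlo and picks the best production level; the cost coefficients, the normal demand model, and all names are assumptions made only for this example.

import random

def expected_cost(q, demand_samples, c_over=1.0, c_under=4.0):
    # Monte-Carlo estimate of the expected cost of producing q units when
    # demand D is random: surplus costs c_over per unit, shortage costs
    # c_under per unit.  Cost figures are illustrative, not from the book.
    return sum(c_over * max(q - d, 0.0) + c_under * max(d - q, 0.0)
               for d in demand_samples) / len(demand_samples)

rng = random.Random(0)
demand = [rng.gauss(100.0, 20.0) for _ in range(20_000)]      # assumed demand model
best_q = min(range(50, 151), key=lambda q: expected_cost(q, demand))
print(best_q)   # close to the 80% quantile of demand (about 117 here)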

Table of Contents

Frontmatter

Optimization of Simulated Systems

Performance evaluation for the score function method in sensitivity analysis and stochastic optimization
Abstract
Estimating system performance \(\ell \left( \rho \right) = {\mathbb{E}_\rho }L\) and the associated sensitivity (the gradient \(\nabla \ell \left( \rho \right)\)) for several scenarios via simulation generally requires a separate simulation for each scenario. The score function (SF) method handles this problem by using a single simulation run, but little is known about how the estimators perform. Here we discuss the efficiency of the SF estimators in the setting of simple queueing models. In particular we consider heavy traffic (diffusion) approximations for the sensitivity and the variances of the associated simulation estimators, and discuss how to choose a ‘good’ reference system (if any) in order to obtain reasonably good SF estimators.
Søren Asmussen, Reuven Rubinstein
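As a minimal illustration of the score function idea (not the queueing setting analysed in the paper), the sketch below estimates the sensitivity dℓ(ρ)/dρ for an exponential input distribution, where the exact answer is known; the choice L(X) = X and all names are assumptions made here for illustration.

import random

def sf_gradient(rho, n=100_000, seed=1):
    """Score function estimate of d/d(rho) E_rho[L(X)] for X ~ Exp(rate rho)
    with L(X) = X, so the exact value is -1/rho**2.  The score of the
    density f(x; rho) = rho*exp(-rho*x) is d/d(rho) ln f(x; rho) = 1/rho - x."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rho)        # one sample from the input distribution
        total += x * (1.0 / rho - x)    # L(X) times the score
    return total / n

rho = 2.0
print(sf_gradient(rho), "exact:", -1.0 / rho**2)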
Experimental Results for Gradient Estimation and Optimization of a Markov Chain in Steady-State
Abstract
Infinitesimal perturbation analysis (IPA) and the likelihood ratio (LR) method have drawn lots of attention recently as ways of estimating the gradient of a performance measure with respect to continuous parameters in dynamic stochastic systems. In this paper, we experiment with the use of these estimators in stochastic approximation algorithms, to perform so-called “single-run optimizations” of steady-state systems, as suggested in [23]. We also compare them to finite-difference estimators, with and without common random numbers. In most cases, the simulation length must be increased from iteration to iteration, otherwise the algorithm converges to the wrong value. We have performed extensive numerical experiments with a simple M/M/1 queue. We state convergence results, but do not give the proofs. The proofs are given in [14].
Pierre L’Ecuyer, Nataly Giroux, Peter W. Glynn
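A toy sketch of a single-run-style stochastic approximation loop with finite differences and common random numbers, in the spirit of the comparison described above; the quadratic cost, the growing sample sizes n_k, and the gain sequence are illustrative assumptions, not the M/M/1 experiments of the paper.

import random

def noisy_cost(theta, seed, n):
    """Average of n noisy evaluations of (theta - Z)**2 with Z ~ N(1, 1);
    the expected cost is minimized at theta = 1."""
    rng = random.Random(seed)
    return sum((theta - rng.gauss(1.0, 1.0)) ** 2 for _ in range(n)) / n

def sa_fd_crn(theta0=0.0, iters=200, h=0.1):
    """Stochastic approximation with central finite differences and common
    random numbers (the same seed at theta+h and theta-h).  The per-iteration
    sample size n_k grows with k, mimicking the increasing simulation
    lengths discussed in the abstract."""
    theta = theta0
    for k in range(1, iters + 1):
        a_k = 1.0 / k                    # gain sequence
        n_k = 10 + k                     # growing simulation length
        seed = k                         # CRN: identical noise at both design points
        grad = (noisy_cost(theta + h, seed, n_k)
                - noisy_cost(theta - h, seed, n_k)) / (2 * h)
        theta -= a_k * grad
    return theta

print(sa_fd_crn())                       # should be close to 1.0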
Optimization of Stochastic Discrete Event Dynamic Systems: A Survey of Some Recent Results
Abstract
Models of discrete event dynamic systems (DEDS) include finite state machines [24], Petri nets [35], finitely recursive processes [26], communicating sequential processes [18], and queueing models [41], among others. They have become increasingly popular due to important applications in manufacturing systems, communication networks, and computer systems. We consider here a system whose evolution, or sample path, consists of the sequence
$$y\left( {x,\omega } \right) = \left\{ {\left( {{t_0},{z_0}} \right),\left( {{t_1},{z_1}} \right), \ldots ,\left( {{t_s},{z_s}} \right)} \right\},\quad {t_i} = {t_i}\left( {x,\omega } \right),\;{z_i} = {z_i}\left( {x,\omega } \right)$$
where \({z_i}\left( {x,\omega } \right) \in W\) is the state of the system during the time interval \({t_i}\left( {x,\omega } \right) \le t < {t_{i + 1}}\left( {x,\omega } \right)\), \(x \in X \subseteq {\mathbb{R}^n}\) is the vector of control parameters and \(\omega \in \Omega\) is an element of some probability space \(\left( {\Omega ,\mathbb{B},\mathbb{P}} \right)\). The particular rules which govern transitions between states at the time moments \({t_i}\) can be specified in the framework of one of the models mentioned above. For describing the time behavior, generalized semi-Markov processes have proved to be useful [47].
Alexei A. Gaivoronski
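Purely as an illustration of the sample-path notation (t_i, z_i) above, the sketch below simulates one path of a single-server queue and records the state holding intervals; the exponential timing assumptions and the function name are not from the paper.

import random

def single_server_sample_path(arrival_rate, service_rate, horizon, seed=0):
    """One sample path y = [(t_0, z_0), ..., (t_s, z_s)] of a single-server
    queue, where z_i is the number of customers in the system on the
    interval [t_i, t_{i+1}).  Exponential interarrival and service times
    are assumed purely for the illustration."""
    rng = random.Random(seed)
    t, z = 0.0, 0
    path = [(0.0, 0)]
    while t < horizon:
        rate = arrival_rate + (service_rate if z > 0 else 0.0)
        t += rng.expovariate(rate)      # time of the next transition
        if rng.random() < arrival_rate / rate:
            z += 1                      # arrival
        else:
            z -= 1                      # departure
        path.append((t, z))
    return path

print(single_server_sample_path(arrival_rate=1.0, service_rate=2.0, horizon=5.0)[:5])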
Sensitivity Analysis of Simulation Experiments: Regression Analysis and Statistical Design
Abstract
This tutorial gives a survey of strategic issues in the statistical design and analysis of experiments with deterministic and random simulation models. These issues concern what-if analysis, optimization, and so on. The analysis uses regression (meta)models and Least Squares. The design uses classical experimental designs such as \(2^{k-p}\) factorials, which are efficient and effective. If there are very many inputs, then special techniques such as group screening and sequential bifurcation are useful. Applications are discussed.
Jack P. C. Kleijnen
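A small hedged sketch of the kind of design-and-metamodel computation the tutorial surveys: a 2^(3-1) fractional factorial is run on a placeholder "simulation" and a first-order regression metamodel is fitted by least squares (which reduces to signed averages on an orthogonal ±1 design); the response function and its coefficients are invented for illustration.

def fit_first_order_metamodel(design, responses):
    """Least-squares fit of y ~ b0 + b1*x1 + ... + bk*xk on a two-level
    (+1/-1) orthogonal design; with orthogonal columns the LS estimates
    reduce to simple signed averages of the responses."""
    n, k = len(design), len(design[0])
    b0 = sum(responses) / n
    betas = [sum(design[i][j] * responses[i] for i in range(n)) / n
             for j in range(k)]
    return b0, betas

# A 2^(3-1) fractional factorial with defining relation x3 = x1*x2
# (4 runs instead of 8); the estimate of b3 is aliased with the x1*x2
# interaction, the usual price of fractionation.
design = [(-1, -1, +1), (+1, -1, -1), (-1, +1, -1), (+1, +1, +1)]

def simulate(x1, x2, x3):               # placeholder for a simulation run
    return 10.0 + 3.0 * x1 - 2.0 * x2 + 0.5 * x3

responses = [simulate(*run) for run in design]
print(fit_first_order_metamodel(design, responses))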

Optimization and Stochastic Optimization

A Stochastic Optimization Approach for Training the Parameters in Neural Networks
Abstract
Recently, the back-propagation method has often been applied to adapt artificial neural networks to various pattern classification problems. However, an important limitation of this method is that it sometimes fails to find a global minimum of the total error function of the neural network. In this paper, a hybrid algorithm which combines the modified back-propagation method and the random optimization method is proposed in order to find the global minimum of the total error function of the neural network in a small number of steps. It is shown that this hybrid algorithm ensures convergence to a global minimum with probability 1 in a compact region of the weight vector space. Further, several computer simulation results dealing with the problems of forecasting air pollution density, stock prices, etc. are given.
Norio Baba
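A rough sketch of the hybrid idea (gradient steps combined with a random optimization step that keeps only improving perturbations), shown on a one-parameter nonconvex error surface; this is not Baba's algorithm, and the error function, learning rate and perturbation scale are assumptions.

import math
import random

def error(w):
    # A nonconvex stand-in for a network's total error surface.
    return (w - 2.0) ** 2 + 2.0 * math.sin(5.0 * w)

def d_error(w):
    return 2.0 * (w - 2.0) + 10.0 * math.cos(5.0 * w)

def hybrid_minimize(w0=0.0, iters=500, lr=0.01, sigma=0.5, seed=0):
    """Gradient (back-propagation-style) steps combined with a random
    optimization step: a Gaussian perturbation of the weight is kept only
    if it lowers the error."""
    rng = random.Random(seed)
    w = w0
    for _ in range(iters):
        w -= lr * d_error(w)                # deterministic gradient step
        trial = w + rng.gauss(0.0, sigma)   # random search step
        if error(trial) < error(w):         # accept only improvements
            w = trial
    return w, error(w)

print(hybrid_minimize())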
Integrated stochastic approximation program system
Summary
A brief description and the full user manual of the Stochastic Approximation program system are given. This program contains a large variety of one-dimensional stochastic approximation methods, both for root and extremum estimation. It can also be used for estimating the root or the extremum of a function of location parameters, or of the regression quantile function in particular. The estimation of parameters of an unknown distribution, together with the solution of the LD-50 problem, is included as a special part of the program. The program offers basic statistics and information for comparing the different methods chosen by the user. It was developed for IBM PC-compatible computers, and Turbo Pascal v. 4.0 is needed to run it.
Pavel Charamza
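As a generic illustration of the kind of one-dimensional procedure such a program system collects, here is a minimal Robbins-Monro root-finding recursion; the gain sequence a/k and the noisy test function are assumptions, and the sketch is in Python rather than the Turbo Pascal of the actual program.

import random

def robbins_monro(noisy_g, theta0=0.0, iters=5_000, a=1.0, seed=0):
    """One-dimensional Robbins-Monro recursion for solving g(theta) = 0
    from noisy observations: theta_{k+1} = theta_k - (a/k) * Y_k, where
    Y_k is g(theta_k) plus noise."""
    rng = random.Random(seed)
    theta = theta0
    for k in range(1, iters + 1):
        theta -= (a / k) * noisy_g(theta, rng)
    return theta

# Example: g(theta) = theta - 3 observed with N(0, 1) noise; the root is 3.
print(robbins_monro(lambda t, rng: (t - 3.0) + rng.gauss(0.0, 1.0)))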
Lexicographic Duality in Linear Optimization
Abstract
By lexicographic maximization of the system of functions \(\left\{ {{f_i}\left( x \right)} \right\}_0^k\) on a set M corresponding to the permutation \(p = \left( {{i_0}, \ldots ,{i_k}} \right)\) we mean the problem: find \(\tilde x \in M\) such that the vector \(\left[ {{f_{{i_0}}}\left( {\tilde x} \right), \ldots ,{f_{{i_k}}}\left( {\tilde x} \right)} \right]\) is a p-lexicographic maximum on the set
$$Y = \left\{ {\left[ {{y_0}, \ldots ,{y_k}} \right]\;\left| {\;{y_t} = {f_{{i_t}}}\left( x \right),\,x \in M,\;t = 0, \ldots ,k} \right.} \right\}.$$
I. I. Eremin
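A finite-set illustration of lexicographic maximization (the paper itself works with linear problems and their duality): maximize the objectives in the order given by the permutation, at each stage restricting attention to the points that were optimal so far; the candidate points and objective functions are made up for the example.

def lex_argmax(points, objectives, perm, tol=1e-9):
    """Lexicographic maximization over a finite candidate set: maximize the
    objectives in the order given by the permutation perm, at each stage
    keeping only the points that were optimal (up to tol) so far."""
    candidates = list(points)
    for i in perm:
        best = max(objectives[i](x) for x in candidates)
        candidates = [x for x in candidates if objectives[i](x) >= best - tol]
    return candidates

points = [(0, 2), (1, 1), (2, 0), (0, 0)]
objectives = [lambda x: x[0] + x[1], lambda x: x[0]]
print(lex_argmax(points, objectives, perm=[0, 1]))   # -> [(2, 0)]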
Dual Optimization of Dynamic Systems
Abstract
Three pairs of adjoint control and observation problems and a pair of adjoint control and identification problems are investigated. On the basis of the constructive theory of extremal problems created by the authors and their collaborators, a method of constructing guaranteeing optimal program solutions using observation and control procedures is presented.
R. Gabasov, F. Kirillova
Stochastic Approximation Via Averaging: The Polyak’s Approach Revisited
Abstract
Recursive stochastic optimization algorithms are considered in this work. A class of multistage procedures is developed. We analyze essentially the same kind of procedures as proposed in Polyak’s recent work. A quite different approach is taken and correlated noise processes are dealt with. In lieu of evaluating the second moments, the methods of weak convergence are employed and the asymptotic properties are obtained by examining a suitably scaled sequence. Under rather weak conditions, we show that the algorithm via averaging is an efficient approach in that it provides us with the optimal convergence speed. In addition, no prewhitening filters are needed.
G. Yin
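A one-dimensional toy sketch of stochastic approximation with iterate averaging: a Robbins-Monro recursion with a slowly decreasing gain, followed by averaging of the iterates; the gain exponent 2/3 and the quadratic test problem are illustrative assumptions, not the correlated-noise setting analysed in the paper.

import random

def sa_with_averaging(noisy_grad, theta0=0.0, iters=10_000, seed=0):
    """Robbins-Monro iteration with a slowly decreasing gain a_k = k**(-2/3),
    followed by averaging of the iterates (Polyak/Ruppert averaging)."""
    rng = random.Random(seed)
    theta, theta_sum = theta0, 0.0
    for k in range(1, iters + 1):
        theta -= k ** (-2.0 / 3.0) * noisy_grad(theta, rng)
        theta_sum += theta
    return theta, theta_sum / iters       # last iterate vs. averaged iterate

# Minimize E[(theta - Z)**2]/2 with Z ~ N(1, 1); the noisy gradient is theta - Z.
print(sa_with_averaging(lambda t, rng: t - rng.gauss(1.0, 1.0)))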

Random Numbers

Nonuniform Random Numbers: A Sensitivity Analysis for Transformation Methods
Abstract
There are many methods for the transformation of uniform random numbers into nonuniform random numbers. These methods are employed for pseudo-random numbers generated by computer programs. It is shown that the sensitivity to the pseudo-random numbers used can vary considerably between transformation methods. A classification of the sensitivity of several transformation methods is given. Numerical examples are presented for various transformation methods.
Lothar Afflerbach, Wolfgang Hörmann
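To make the notion of sensitivity to the underlying uniform stream concrete, the sketch below feeds the same uniform numbers to two different transformations to the normal distribution (inversion and Box-Muller), which consume and propagate the input quite differently; the input values are arbitrary and the comparison is only indicative, not the classification given in the paper.

import math
import statistics

def normals_by_inversion(uniforms):
    """One uniform per normal variate, via the inverse normal CDF; the map
    is monotone, so small input changes give small output changes."""
    inv = statistics.NormalDist().inv_cdf
    return [inv(u) for u in uniforms]

def normals_by_box_muller(uniforms):
    """Pairs of uniforms enter through log, cos and sin, so perturbations of
    the input stream propagate quite differently than under inversion."""
    out = []
    for u1, u2 in zip(uniforms[0::2], uniforms[1::2]):
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out

us = [0.101, 0.202, 0.303, 0.404]        # the same input stream for both methods
print(normals_by_inversion(us))
print(normals_by_box_muller(us))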
Nonlinear Methods for Pseudorandom Number and Vector Generation
Abstract
The principal aim of pseudorandom number generation is to devise and analyze deterministic algorithms for generating sequences of numbers which simulate a sequence of i.i.d. random variables with a given distribution function. We shall deal here exclusively with pseudorandom numbers for the uniform distribution on the interval [0,1], i.e. with uniform pseudorandom numbers. We refer to Knuth [16], Niederreiter [18], Ripley [25], and to the recent survey by Niederreiter [24] for a general background on uniform pseudorandom number generation.
Harald Niederreiter
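One well-known nonlinear method in this literature is the inversive congruential generator; a minimal sketch follows, with the prime modulus and the parameters chosen only for illustration.

def icg(p, a, c, x0, n):
    """Inversive congruential generator: x_{k+1} = (a * inv(x_k) + c) mod p,
    where inv(x) is the inverse modulo the prime p and inv(0) is taken as 0;
    the outputs are u_k = x_k / p in [0, 1)."""
    x, out = x0, []
    for _ in range(n):
        out.append(x / p)
        inv = pow(x, p - 2, p) if x != 0 else 0   # modular inverse via Fermat
        x = (a * inv + c) % p
    return out

print(icg(p=2_147_483_647, a=1, c=1, x0=0, n=5))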
Sampling from Discrete and Continuous Distributions with C-Rand
Abstract
C-RAND is a system of Turbo-C routines and functions intended for use on microcomputers. It contains up-to-date random number generators for more than thirty univariate distributions. For some important distributions the user has the choice between extremely fast but rather complicated methods and somewhat slower but much simpler procedures. Menu-driven demo programs allow the user to test and analyze the generators with regard to speed and quality of the output.
Ernst Stadlober, Ralf Kremer
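As a hedged sketch of a simple, universally applicable discrete generator of the sort such a library complements with faster specialised methods, here is inversion by sequential search (this is not C-RAND's actual code, and the probability vector is arbitrary):

import random

def discrete_inversion(probabilities, rng=random):
    """Sample one index from a finite discrete distribution by inversion with
    sequential search: draw U ~ U(0, 1) and return the smallest k whose
    cumulative probability reaches U."""
    u = rng.random()
    cumulative = 0.0
    for k, p in enumerate(probabilities):
        cumulative += p
        if u <= cumulative:
            return k
    return len(probabilities) - 1         # guard against round-off

counts = [0, 0, 0]
for _ in range(10_000):
    counts[discrete_inversion([0.2, 0.5, 0.3])] += 1
print(counts)                             # roughly proportional to the probabilities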
Backmatter
Metadata
Title
Simulation and Optimization
Editors
Prof. Dr. Georg Pflug
Prof. Dr. Ulrich Dieter
Copyright Year
1992
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-48914-3
Print ISBN
978-3-540-54980-2
DOI
https://doi.org/10.1007/978-3-642-48914-3