
About this Book

This volume features original contributions and invited review articles on mathematical statistics, statistical simulation and experimental design. The selected peer-reviewed contributions originate from the 8th International Workshop on Simulation held in Vienna in 2015. The book is intended for mathematical statisticians, Ph.D. students and statisticians working in medicine, engineering, pharmacy, psychology, agriculture and other related fields.

The International Workshops on Simulation are devoted to statistical techniques in stochastic simulation, data collection, design of scientific experiments and studies representing broad areas of interest. The first six workshops took place in St. Petersburg, Russia, between 1994 and 2009, and the 7th workshop was held in Rimini, Italy, in 2013.



Invited Papers


Chapter 1. Design and Analysis of Simulation Experiments

This contribution summarizes the design and analysis of experiments with computerized simulation models. It focuses on two metamodel (surrogate, emulator) types, namely first-order or second-order polynomial regression, and Kriging (or Gaussian process). The metamodel type determines the design of the simulation experiment, which determines the input combinations of the simulation model. Before applying these metamodels, the analysts should screen the many inputs of a realistic simulation model; this contribution focuses on sequential bifurcation. Optimization of the simulated system may use either a sequence of first-order and second-order polynomials—so-called response surface methodology (RSM)—or Kriging models fitted through sequential designs—including efficient global optimization (EGO). Robust optimization accounts for uncertainty in some simulation inputs.
Jack P. C. Kleijnen
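As a minimal illustration of the first metamodel type, the sketch below fits a first-order polynomial metamodel on a \(2^2\) factorial design; the `simulate` function is a hypothetical stand-in for an expensive simulation model, and all numbers are illustrative, not from the chapter.

```python
import itertools

# Hypothetical stand-in for an expensive simulation model (illustrative).
def simulate(x1, x2):
    return 3.0 + 2.0 * x1 - 1.0 * x2

# 2^2 full factorial design in coded units (-1, +1): a classical design
# for fitting a first-order polynomial metamodel.
design = list(itertools.product([-1.0, 1.0], repeat=2))
y = [simulate(x1, x2) for x1, x2 in design]

# For an orthogonal two-level design, the least-squares coefficients
# reduce to averages of signed responses.
n = len(design)
b0 = sum(y) / n
b1 = sum(x[0] * yi for x, yi in zip(design, y)) / n
b2 = sum(x[1] * yi for x, yi in zip(design, y)) / n
print(b0, b1, b2)
```

Because the toy response is itself linear, the fitted coefficients recover it exactly; for a real simulation model the metamodel would only approximate the input-output behavior at the design points.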

Chapter 2. A Review of Simulation Usage in the New Zealand Electricity Market

In this chapter, we outline and review the application of simulation to generation offers and consumption bids in the New Zealand electricity market (NZEM). We start by describing the operation of the NZEM with a particular focus on how electricity prices are calculated for each time period. The complexity of this mechanism, in conjunction with uncertainty surrounding factors such as consumption levels, motivates the use of simulation. We then discuss simulation–optimization methods for optimal offer strategies of a generator, for a particular time period, in the NZEM. We conclude by extending our ideas and techniques to consumption bids and interruptible load reserve offers for major consumers of electricity, including large manufacturers such as a steel mill.
Golbon Zakeri, Geoff Pritchard

Chapter 3. Power and Sample Size Considerations in Psychometrics

An overview and discussion of the latest developments regarding power and sample size determination for statistical tests of assumptions of psychometric models are given. Theoretical and computational issues, as well as simulation techniques, are considered. The treatment of the topic includes maximum likelihood and least squares procedures applied in the framework of generalized linear (mixed) models. Numerical examples and comparisons of the introduced procedures are provided.
Clemens Draxler, Klaus D. Kubinger

Chapter 4. Bootstrap Change Point Testing for Dependent Data

Critical values of change point tests in location and regression models are usually based on the limit distribution of the respective test statistics under the null hypothesis. However, the limit distribution is very often a functional of some Gaussian process depending on unknown quantities that cannot be easily estimated. In many situations, convergence to the asymptotic distribution is rather slow, and the asymptotic critical values are not well applicable in small and moderate samples. It has turned out that resampling methods provide reasonable approximations of critical values of test statistics for detecting changes in location and regression models. In this chapter, a dependent wild bootstrap procedure for testing changes in a linear model with weakly dependent regressors and errors is proposed and its validity verified. More specifically, the concept of \(L_p\)-m-approximability is used.
Zuzana Prášková
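A much simplified sketch of the bootstrap idea is given below: an iid wild bootstrap for a CUSUM change-in-mean test. The chapter's dependent wild bootstrap additionally preserves weak dependence, which this toy version deliberately ignores; all sample sizes and the change magnitude are invented.

```python
import math
import random

rng = random.Random(5)

def cusum(x):
    """Normalized CUSUM statistic for a change in mean."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((xi - m) ** 2 for xi in x) / n)
    s, best = 0.0, 0.0
    for xi in x[:-1]:
        s += xi - m
        best = max(best, abs(s))
    return best / (sd * math.sqrt(n))

def wild_bootstrap_pvalue(x, n_boot=500):
    """Bootstrap p-value: residuals times independent N(0,1) multipliers."""
    m = sum(x) / len(x)
    resid = [xi - m for xi in x]
    stat = cusum(x)
    hits = sum(cusum([r * rng.gauss(0, 1) for r in resid]) >= stat
               for _ in range(n_boot))
    return hits / n_boot

no_change = [rng.gauss(0, 1) for _ in range(200)]
with_change = ([rng.gauss(0, 1) for _ in range(100)]
               + [rng.gauss(1.5, 1) for _ in range(100)])
p_null = wild_bootstrap_pvalue(no_change)
p_alt = wild_bootstrap_pvalue(with_change)
```

On the sample with a mean shift, the bootstrap p-value collapses towards zero, while on the homogeneous sample it behaves like an ordinary p-value.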

Simulation for Mathematical Modeling and Analysis


Chapter 5. The Covariation Matrix of Solution of a Linear Algebraic System by the Monte Carlo Method

A linear algebraic system is solved by the Monte Carlo method by generating a vector stochastic series. The expectation of the stochastic series coincides with the Neumann series representing the solution of the linear algebraic system. An analytical form of the covariation matrix of this series is obtained, and this matrix is used to estimate the accuracy of the solution. Sufficient conditions for the boundedness of the covariation matrix are found, from which the stochastic stability of the Monte Carlo algorithm follows. The number of iterations that provides a given accuracy of the solution with sufficiently high probability is found. Numerical examples for systems of order 3 and of order 100 are presented.
Tatiana M. Tovstik
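The underlying von Neumann–Ulam random-walk scheme can be sketched as follows; the matrix, stopping probability, and walk count are illustrative choices, not taken from the chapter.

```python
import random

# Solve x = Hx + f for a small system with spectral radius of H below one,
# so that the Neumann series x = f + Hf + H^2 f + ... converges.
H = [[0.1, 0.2, 0.0],
     [0.0, 0.1, 0.3],
     [0.2, 0.0, 0.1]]
f = [1.0, 2.0, 3.0]
n = len(f)

def estimate_component(i0, n_walks=200_000, p_stop=0.3, seed=1):
    """von Neumann-Ulam estimator: the expectation of the accumulated
    score over random walks equals the Neumann series for component i0."""
    rng = random.Random(seed)
    p_move = (1.0 - p_stop) / n          # uniform transition probability
    total = 0.0
    for _ in range(n_walks):
        i, weight, score = i0, 1.0, f[i0]
        while rng.random() > p_stop:     # continue the walk w.p. 1 - p_stop
            j = rng.randrange(n)
            weight *= H[i][j] / p_move   # importance weight of the step
            i = j
            score += weight * f[i]
        total += score
    return total / n_walks

x0 = estimate_component(0)               # exact first component is about 1.883
print(x0)
```

The per-step weight \(H_{ij}/p_{\text{move}}\) makes each k-step path contribute \((H^k f)_{i_0}\) in expectation, so the averaged score converges to the Neumann-series solution; the spread of the scores is exactly what the chapter's covariation matrix quantifies.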

Chapter 6. Large-Scale Simulation of Acoustic Waves in Random Multiscale Media

The effective coefficients in the problem of acoustic wave propagation have been calculated for a multiscale 3D medium using a subgrid modeling approach. The density and the elastic stiffness have been represented by Kolmogorov multiplicative cascades with a log-normal probability distribution. The wavelength is assumed to be large compared with the scale of the heterogeneities of the medium. We consider the regime in which the waves propagate over distances on the order of the typical wavelength of the source. If the medium is assumed to satisfy the improved Kolmogorov similarity hypothesis, the term for the effective coefficient of the elastic stiffness coincides with the Landau-Lifshitz-Matheron formula. The theoretical results are compared with the results of a direct 3D numerical simulation.
Olga N. Soboleva, Ekaterina P. Kurochkina

Chapter 7. Parameter Inference for Stochastic Differential Equations with Density Tracking by Quadrature

We derive and experimentally test an algorithm for maximum likelihood estimation of parameters in stochastic differential equations (SDEs). Our innovation is to efficiently compute the transition densities that form the log likelihood and its gradient, and to then couple these computations with quasi-Newton optimization methods to obtain maximum likelihood estimates. We compute transition densities by applying quadrature to the Chapman–Kolmogorov equation associated with a time discretization of the original SDE. To study the properties of our algorithm, we run a series of tests involving both linear and nonlinear SDE. We show that our algorithm is capable of accurate inference, and that its performance depends in a logical way on problem and algorithm parameters.
Harish S. Bhat, R. W. M. A. Madushani, Shagun Rawat
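A toy version of the density-tracking-by-quadrature step (quadrature applied to the Chapman–Kolmogorov equation with the Gaussian kernel of an Euler–Maruyama step) is sketched below for an Ornstein–Uhlenbeck SDE; all parameter and grid values are illustrative assumptions, not the authors' implementation.

```python
import math

# Density tracking by quadrature for dX = -theta*X dt + sigma dW:
# repeatedly apply the Chapman-Kolmogorov integral with the Gaussian
# kernel of one Euler-Maruyama step.
theta, sigma = 1.0, 0.5
h, n_steps = 0.1, 10                       # step size, number of steps (T = 1)
dx = 0.04
grid = [i * dx for i in range(-100, 101)]  # spatial grid on [-4, 4]

def em_kernel(x, y):
    """One-step Euler-Maruyama transition density p(x | y)."""
    mean = y - theta * y * h
    var = sigma * sigma * h
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Density after one step from the point X_0 = 1 (handled exactly).
p = [em_kernel(x, 1.0) for x in grid]

# Chapman-Kolmogorov: p_{k+1}(x) = integral p(x | y) p_k(y) dy, by quadrature.
for _ in range(n_steps - 1):
    p = [dx * sum(em_kernel(x, y) * py for y, py in zip(grid, p)) for x in grid]

mass = dx * sum(p)                          # should stay close to 1
mean = dx * sum(x * px for x, px in zip(grid, p))
```

For this linear SDE the discretized chain's mean after ten steps is \((1-\theta h)^{10}\), which the quadrature reproduces; in the chapter this density propagation is what supplies the transition densities entering the log likelihood.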

Chapter 8. New Monte Carlo Algorithm for Evaluation of Outgoing Polarized Radiation

This chapter is devoted to the discussion of a distinctive Monte Carlo method for evaluating the angular distribution of outgoing polarized radiation. The algorithm under consideration is based on a modification of N. N. Chentsov's method for evaluating an unknown probability density via an orthonormal polynomial expansion. Polarization was introduced into a mathematical model of radiation transfer using the four-dimensional vector of Stokes parameters, and a corresponding weighted Monte Carlo algorithm was constructed. Using this method and precise computer simulation, the angular distribution of outgoing radiation was investigated. Special attention was given to the impact of polarization in the mathematical model of radiation. The algorithm under consideration allows us to precisely estimate even a small effect of polarization, as well as the deviation of the calculated angular distribution from the Lambertian one.
Gennady A. Mikhailov, Natalya V. Tracheva, Sergey A. Ukhinov

Simulation for Stochastic Processes and Their Applications


Chapter 9. Simulation of Stochastic Processes with Generation and Transport of Particles

In modeling the evolution of a cell population, the key characteristics are the existence of several sources, where cells can proliferate their copies or die, and the migration of cells over an environment. One aim of the study is to obtain the threshold value of a parameter which separates different types of the cell proliferation process at the sources. Continuous-time branching random walks on multidimensional lattices with a few sources of branching can be used for modeling cell population dynamics. For example, active growth of a cancer cell population may, in the framework of branching random walk models, be explained by exceeding the threshold value. Branching random walks are an appropriate tool to describe such processes in terms of generation and transport of particles. The effect of phase transitions on the asymptotic behavior of a particle population in the framework of branching random walks has been studied analytically in detail by many authors. Simulation of branching random walks is applied for numerical estimation of the threshold value of the parameter on limited time intervals. The results obtained are used to define strategies that may delay the progression of a cell population to some extent. The work may be treated as a first step towards the simulation of branching random walks. We assume that the process is started by an initial particle which walks on the lattice until it reaches one of the sources, where its behavior changes and new copies may appear. All particles behave independently of each other and of their history. We present an approach to simulating the mean number of particles over the lattice and at every point of the lattice. Simulation of the process is based on a well-known queue data structure and the Monte Carlo method.
Ekaterina Ermishkina, Elena Yarovaya
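A drastically simplified, discrete-time analogue of such a simulation is sketched below: one branching source at the origin of the 1D lattice, a queue of particle positions, and Monte Carlo averaging of the final population size. The branching and death probabilities are invented; the chapter's model is continuous-time and multidimensional.

```python
import random
from collections import deque

rng = random.Random(7)

def mean_population(branch_prob, death_prob=0.1, n_steps=20, n_runs=2000):
    """Mean number of particles after n_steps, starting from one particle
    at the source (the origin). At the source a particle splits in two
    with probability branch_prob, dies with probability death_prob, and
    otherwise performs a simple random walk step."""
    total = 0
    for _ in range(n_runs):
        particles = deque([0])
        for _ in range(n_steps):
            nxt = deque()
            for x in particles:
                if x == 0:
                    u = rng.random()
                    if u < branch_prob:                  # split into two
                        nxt.append(0)
                        nxt.append(0)
                        continue
                    if u < branch_prob + death_prob:     # die at the source
                        continue
                nxt.append(x + rng.choice((-1, 1)))      # walk one step
            particles = nxt
        total += len(particles)
    return total / n_runs

subcritical = mean_population(0.05)    # mean offspring at the source below 1
supercritical = mean_population(0.45)  # mean offspring at the source above 1
```

Comparing the two runs illustrates the threshold effect the abstract describes: once the mean offspring number at the source exceeds one, the mean population grows instead of decaying.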

Chapter 10. Stochastic Models for Nonlinear Cross-Diffusion Systems

Under a priori assumptions concerning the existence and uniqueness of the Cauchy problem solution for a system of quasilinear parabolic equations with cross-diffusion, we treat the PDE system as an analogue of a system of forward Kolmogorov equations for some unknown stochastic processes and derive expressions for their generators. This allows us to construct a stochastic representation of the required solution. We prove that, by introducing a stochastic test function, we can check that the stochastic system gives rise to the required generalized solution of the original PDE system. Next, we derive a closed stochastic system which can be treated as a stochastic counterpart of the Cauchy problem for a parabolic system with cross-diffusion.
Yana Belopolskaya

Chapter 11. Benefits and Application of Tree Structures in Gaussian Process Models to Optimize Magnetic Field Shaping Problems

Recent years have witnessed the development of powerful numerical methods to emulate realistic physical systems and their integration into the industrial product development process. Today, finite element simulations have become a standard tool to help with the design of technical products. However, when it comes to multivariate optimization, the computational power requirements of such tools often cannot be met when working with classical algorithms. As a result, a lot of attention is currently given to the design of computer experiments approach. One goal of this work is the development of a sophisticated optimization process for simulation-based models. Among many possible choices, Gaussian process models are the most widely used modeling approach for simulation data. However, these models rest on stationarity assumptions that are often not satisfied in the underlying system. In this work, treed Gaussian process models are investigated for dealing with non-stationarities and compared to the usual modeling approach. The method is developed for and applied to the specific physical problem of optimizing 1D magnetic linear position detection.
Natalie Vollert, Michael Ortner, Jürgen Pilz

Chapter 12. Insurance Models Under Incomplete Information

The aim of the chapter is the optimization of insurance company performance under incomplete information. To this end, we consider the periodic-review model with capital injections and reinsurance studied by the authors in their previous paper for the case of a known claim distribution. We investigate the stability of the one-step and multi-step models in terms of the Kantorovich metric. These results are used to obtain almost optimal policies based on the empirical distributions of the underlying processes.
Ekaterina Bulinskaya, Julia Gusak

Chapter 13. Comparison and Modelling of Pension Systems

The purpose of this work is a comparison of the pension systems of selected countries—the pension systems and reforms of Austria, the Czech Republic, Slovakia, Sweden, Poland, and Chile are our subjects of interest. Firstly, we focus on a short historical overview of the development and classification of pension systems in general. The main part of this chapter then deals with different scenarios intended to show whether the systems would be stable in the future. For these purposes, we developed a utility in Mathematica. We tested the normality of salary samples from Slovakia with robust tests for normality and computed pensions under several scenarios.
Christian Quast, Luboš Střelec, Rastislav Potocký, Jozef KiseǏák, Milan Stehlík

Chapter 14. Markowitz Problem for a Case of Random Environment Existence

The classical Markowitz model considers n assets with random returns \(R_1, R_2,\ldots ,R_n\), corresponding means \(r_1, r_2,\ldots ,r_n\), variances \(\sigma _1^2, \sigma _2^2,\ldots , \sigma _n^2\), and covariances \(\sigma _{\mu , v}\), \(\mu , v=1,\ldots ,n\). The portfolio is built of these assets using weighting coefficients \(\omega _1, \omega _2,\ldots , \omega _n\), where \(\omega _\mu \) is the share of the cost of asset \(\mu \) in the whole portfolio value. The return of such a portfolio is a random value \(F(\omega )=\omega _1R_1+ \omega _2R_2+\ldots + \omega _nR_n\). The risk of the portfolio at a pre-assigned value \(r^*\) of the average return can be measured by the dispersion \(DF(\omega )\). It is necessary to determine the weighting coefficients in such a way that the dispersion \(DF(\omega )\) is minimized for the assigned value \(r^*\). A more general setting is considered in this chapter: it is supposed that a random environment exists, described by a continuous-time irreducible Markov chain with k states and a known matrix of transition intensities \(\lambda = (\lambda _{i,j})_{k \times k}\). The reward rate depends on the state of the random environment. For this case, the parameters of the Markowitz model are derived.
Alexander Andronov, Tatjana Jurkina
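For the classical case (no random environment), the minimum-dispersion weights can be computed directly from the KKT system of the equality-constrained quadratic program; the asset data below are invented for illustration.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

r = [0.05, 0.08, 0.12]              # mean returns (illustrative)
S = [[0.04, 0.01, 0.00],            # covariance matrix (illustrative)
     [0.01, 0.09, 0.02],
     [0.00, 0.02, 0.16]]
r_star = 0.09                       # pre-assigned average return
m = len(r)

# KKT system of: minimize w'Sw  subject to  w'r = r_star and sum(w) = 1.
A = [[2.0 * S[i][j] for j in range(m)] + [r[i], 1.0] for i in range(m)]
A.append(r + [0.0, 0.0])
A.append([1.0] * m + [0.0, 0.0])
b = [0.0] * m + [r_star, 1.0]
w = solve(A, b)[:m]                 # minimum-dispersion weights
```

The last two rows of the KKT matrix enforce the mean-return and budget constraints, so the solved weights satisfy both exactly while minimizing the portfolio dispersion.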

Testing and Classification Problems in Statistics


Chapter 15. Signs of Residuals for Testing Coefficients in Quantile Regression

We introduce a family of tests for regression coefficients based on signs of quantile regression residuals. In our approach, we first fit a quantile regression for the model where the independent variable of interest is not included in the set of model predictors (the null model). Then the signs of the residuals of this null model are tested for association with the predictor of interest. This conditionally exact testing procedure is applicable to randomized studies. Further, we extend this testing procedure to observational data, where collinearity between the variable of interest and other model predictors is possible. In the presence of possible collinearity, tests for conditional association controlling for other model predictors are used. Monte Carlo simulation studies show superior performance of the introduced tests over several other widely available testing procedures. These simulations explore situations where normality of regression coefficients is not met. An illustrative example shows the use of the proposed tests for investigating associations of hypertension with quantiles of hemoglobin A1C change.
Sergey Tarima, Peter Tarassenko, Bonifride Tuyishimire, Rodney Sparapani, Lisa Rein, John Meurer

Chapter 16. Classification of Multivariate Time Series of Arbitrary Nature Based on the \(\epsilon \)-Complexity Theory

The problem of classification of relatively short multivariate time series generated by different mechanisms (stochastic, deterministic or mixed) is considered. We generalize our theory of the \(\epsilon \)-complexity, which was developed for scalar continuous functions, to the case of vector-valued functions from Hölder class. The methodology for classification of multivariate time series based on the \(\epsilon \)-complexity parameters is proposed. The results on classification of simulated data and real data (EEG records of alcoholic and control groups) are provided.
Boris Darkhovsky, Alexandra Piryatinska

Chapter 17. EEG, Nonparametric Multivariate Statistics, and Dementia Classification

We consider the problem of performing statistical inference with functions as independent or dependent variables. Specifically, we work with the spectral density curves of electroencephalographic (EEG) measurements. These represent the distribution of the energy in the brain over different frequencies and therefore provide important information on the electric activity of the brain. We have data from 315 patients with various forms of dementia. For each individual patient, we have one measurement on each of 17 EEG channels. We look at three different methods to reduce the high dimensionality of the observed functions: 1. Modeling the functions as linear combinations of parametric functions, 2. The method of relative power (i.e., integration over prespecified intervals, e.g., the classical frequency bands), and 3. A method using random projections. The quantities that these methods return can then be analyzed using multivariate inference, for example, using the R package npmv (Ellis et al., J Stat Softw 76(1): 1–18, 2017, [4]). We include a simulation study comparing the first two methods with each other and consider the advantages and shortcomings of each method. We conclude with a short summary of when each method may be used.
Patrick Langthaler, Yvonne Höller, Zuzana Hübnerová, Vítězslav Veselý, Arne C. Bathke

Chapter 18. Change Point in Panel Data with Small Fixed Panel Size: Ratio and Non-ratio Test Statistics

The main goal is to develop and subsequently compare stochastic methods for detecting whether a structural change in panel data occurred at some unknown time. Panel data of our interest consist of a moderate or relatively large number of panels, while the panels contain a small number of observations. Testing procedures to detect a possible common change in the means of the panels are established. Ratio and non-ratio type test statistics are considered, and their asymptotic distributions under the no-change null hypothesis are derived. Moreover, we prove the consistency of the tests under the alternative. The advantage of the ratio statistics over the non-ratio ones is that the variance of the observations neither has to be known nor estimated. A simulation study reveals that the proposed ratio statistic outperforms the non-ratio one in keeping the significance level under the null, mainly when stronger dependence within the panel is present. However, the non-ratio statistic incorrectly rejects the null in the simulations more often than it should, which yields higher power compared to the ratio statistic.
Barbora Peštová, Michal Pešta

Chapter 19. How Robust Is the Two-Sample Triangular Sequential T-Test Against Variance Heterogeneity?

Rasch, Kubinger and Moder (2011b, Stat. Pap. 52, 219–231) [4] showed that, in case nothing is known about the two variances, it is better to use the approximate Welch test instead of the two-sample t-test for comparing the means of two continuous distributions whose first two moments exist. An analogous approach for the triangular sequential t-test is not possible because it is based on the first two derivatives of the underlying likelihood functions. Extensive simulations have been done and are reported in this chapter. It is shown that the two-sample triangular sequential t-test in most interesting cases holds the type I and type II risks when variances are unequal.
Dieter Rasch, Takuya Yanagida
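A fixed-sample analogue of this robustness question is easy to check by simulation; the sketch below estimates the type I risk of the pooled two-sample t-test and the Welch test under variance heterogeneity. The sample sizes, standard deviations, and the rough critical value are illustrative assumptions, not the chapter's sequential setting.

```python
import math
import random
import statistics

rng = random.Random(42)
n1, n2 = 10, 40                     # unequal group sizes (illustrative)
s1, s2 = 3.0, 1.0                   # unequal standard deviations
n_sim, crit = 4000, 2.0             # crude two-sided critical value

def one_trial():
    """Both samples share the same mean, so any rejection is a type I error."""
    x = [rng.gauss(0.0, s1) for _ in range(n1)]
    y = [rng.gauss(0.0, s2) for _ in range(n2)]
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    sp2 = ((n1 - 1) * vx + (n2 - 1) * vy) / (n1 + n2 - 2)   # pooled variance
    t_pooled = (mx - my) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    t_welch = (mx - my) / math.sqrt(vx / n1 + vy / n2)
    return abs(t_pooled) > crit, abs(t_welch) > crit

results = [one_trial() for _ in range(n_sim)]
pooled_rate = sum(p for p, _ in results) / n_sim
welch_rate = sum(w for _, w in results) / n_sim
```

With the larger variance in the smaller group, the pooled test's rejection rate inflates well beyond the nominal level while the Welch statistic stays close to it — the non-sequential version of the effect the chapter investigates for the triangular test.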

Clinical Trials and Design of Experiments


Chapter 20. Performances of Poisson–Gamma Model for Patients’ Recruitment in Clinical Trials When There Are Pauses in Recruitment or When the Number of Centres is Small

Predicting the duration of a clinical trial is a question of paramount interest. To date, the most elaborate model is the so-called Poisson–gamma model introduced by Anisimov and Fedorov in 2007. The theoretical performances of this model are asymptotic and have been established under assumptions, in particular that the recruitment rates by centre are constant in time. In order to evaluate the practical use of this model, its ranges of validity have to be assessed. By means of simulation studies, the authors investigate, on the one hand, the impact of the number of centres involved, the average recruitment rate, the duration of recruitment, and the interim time of analysis on the expected duration of the trial and, on the other hand, two strategies for estimating the trial duration accounting for breaks in recruitment (periods during which centres do not recruit), which are compared and discussed. These investigations yield guidelines on the use of Poisson–gamma processes to model recruitment dynamics with regard to these issues.
Nathan Minois, Guillaume Mijoule, Stéphanie Savy, Valérie Lauwers-Cances, Sandrine Andrieu, Nicolas Savy
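The basic Poisson–gamma recruitment mechanism can be sketched as follows: each centre's rate is drawn from a gamma distribution, and pooled arrivals form a Poisson process with the summed rate. All parameter values are invented for illustration, and breaks in recruitment are ignored.

```python
import random

rng = random.Random(3)

def trial_duration(n_centres, target, shape=2.0, scale=1.5):
    """Time until `target` patients are recruited across all centres,
    with centre rates drawn from a gamma distribution (Poisson-gamma)."""
    rates = [rng.gammavariate(shape, scale) for _ in range(n_centres)]
    total_rate = sum(rates)
    # Superposed Poisson processes form one Poisson process with the
    # summed rate, so inter-arrival times are exponential with that rate.
    t = 0.0
    for _ in range(target):
        t += rng.expovariate(total_rate)
    return t

durations = [trial_duration(n_centres=20, target=300) for _ in range(500)]
mean_duration = sum(durations) / len(durations)
```

Repeating the simulation over many hypothetical trials shows how the gamma-distributed rates spread the trial duration around the naive estimate target / (expected total rate) — the kind of variability the chapter's validity assessment quantifies.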

Chapter 21. Simulated Clinical Trials: Principle, Good Practices, and Focus on Virtual Patients Generation

It is a well-known fact that conducting clinical trials is a challenging process, essentially for financial, ethical, and scientific reasons. For twenty years, simulated clinical trials (SCT for short) have been used in drug development. They have become more and more popular, mainly owing to pharmaceutical companies, which aim to optimize their clinical trials (duration and expenses), and to the regulatory agencies, which consider simulations an alternative tool to reduce safety issues. The whole simulation plan is based on the generation of virtual patients. The natural idea for doing so is to perform Monte Carlo simulations from the joint distribution of the covariates. This method is named the Discrete Method. This is trivial when the parameters of the distribution are known, but, in practice, the available data come from historical databases, so a preliminary estimation step is necessary. For the Discrete Method, that step may not be effective, especially when there are many covariates mixing continuous and categorical ones. In this chapter, simulation studies illustrate that the so-called Continuous Method may be a good alternative to the discrete one, especially when marginal distributions are moderately bimodal.
Nicolas Savy, Stéphanie Savy, Sandrine Andrieu, Sébastien Marque

Chapter 22. Determination of the Optimal Size of Subsamples for Testing a Correlation Coefficient by a Sequential Triangular Test

Schneider, Rasch, Kubinger and Yanagida (Stat. Pap. 56, 689–699) [8] suggested a sequential triangular test for testing a correlation coefficient (see also Rasch, Yanagida, Kubinger, and Schneider [6]). In contrast to other sequential (triangular) tests, it is not possible to decide after each additional sampled research unit whether
to accept the null hypothesis,
to reject it, or
to sample further units.
For the calculation of the correlation coefficient and the use of Fisher’s transformation, \(k \ge 4\) units are needed at each step. In the present chapter, we improve the test proposed by Rasch, Yanagida, Kubinger and Schneider (2014) by determining the minimal (optimal) number k of subsampled research units needed to hold the type-I-risk, given a specific type-II-risk and a specific effect size \(\delta =\rho _{1}-\rho _{0}\). Selected results are presented. For parameters not included in the respective tables, the reader may use the R package seqtest for own simulations.
Dieter Rasch, Takuya Yanagida, Klaus D. Kubinger, Berthold Schneider

Chapter 23. Explicit T-optimal Designs for Trigonometric Regression Models

This chapter is devoted to the problem of constructing T-optimal discriminating designs for Fourier regression models which differ by at most three trigonometric functions. We develop the results obtained by Dette, Melas and Shpilev (T-optimal discriminating designs for Fourier regression models, 2015) [11] and give a few generalizations of them. We consider in detail the case of discriminating between two models where the order of the larger one equals two. For this case, we provide explicit solutions and investigate the dependence of the locally T-optimal discriminating designs on the parameters of the larger model. The results obtained in the chapter can also be applied in classical approximation theory.
Viatcheslav B. Melas, Petr V. Shpilev

Chapter 24. Simulations on the Combinatorial Structure of D-Optimal Designs

In this work, we present the results of several simulations on main-effect factorial designs. The goal of such simulations is to investigate the connections between the D-optimality of a design and its geometric structure. By means of a combinatorial object, namely the circuit basis of the model matrix, we show that it is possible to define a simple index that exhibits strong connections with the D-optimality.
Roberto Fontana, Fabio Rapallo
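A minimal version of the D-criterion computation for a two-factor main-effects model is sketched below; the designs being compared are illustrative, not those studied in the chapter.

```python
from itertools import product

def det(M):
    """Determinant via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0.0:
            return 0.0
        if p != k:
            M[k], M[p] = M[p], M[k]
            d = -d
        d *= M[k][k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    return d

def d_criterion(design):
    """det(X'X) for the main-effects model y = b0 + b1*x1 + b2*x2."""
    X = [[1.0, x1, x2] for x1, x2 in design]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
           for i in range(3)]
    return det(XtX)

full = list(product([-1.0, 1.0], repeat=2))                  # 2^2 factorial
lopsided = [(-1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, 1.0)]
d_full, d_lopsided = d_criterion(full), d_criterion(lopsided)
```

The balanced full factorial maximizes det(X'X) among four-run designs for this model, while the lopsided design, which repeats a point, loses information — the kind of geometric difference the chapter's combinatorial index is meant to capture.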

Simulations for Reliability and Queueing Models


Chapter 25. On the Consequences of Model Misspecification for Biased Samples from the Weibull Distribution

Model misspecification is common in practice, especially when the sampling mechanism is not known. A size-biased sample arises in cases where the probability that a unit of the population is chosen in a sample is proportional to some nonnegative weight function w(x) of its size x. In this chapter, we study the consequences of model misspecification when a size-biased sample from the Weibull distribution is treated as a random one, as well as when a random sample is treated as biased. Special attention is paid to the effects of misspecification on parameter estimation and on some of the most important characteristics of the distribution, such as the mean, the median, and the variance. It is proven that when we treat a biased sample as a random one, the parameters are overestimated, and in the opposite case they are underestimated. Simulation results verify the theoretical findings for small as well as for large samples.
George Tzavelas, Polychronis Economou
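The overestimation effect can be reproduced in a few lines: the sketch below draws a length-biased sample (weight w(x) = x) from a Weibull distribution by acceptance–rejection and compares sample means; parameters are illustrative.

```python
import math
import random

rng = random.Random(11)
shape, scale = 2.0, 1.0             # Weibull parameters (illustrative)

def weibull_draw():
    """Inverse-CDF sampling from the Weibull distribution."""
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

def length_biased_draw(x_max=6.0):
    """Acceptance-rejection: accept x with probability proportional to x
    (values above x_max are practically impossible for these parameters)."""
    while True:
        x = weibull_draw()
        if rng.random() < x / x_max:
            return x

n = 20_000
random_mean = sum(weibull_draw() for _ in range(n)) / n
biased_mean = sum(length_biased_draw() for _ in range(n)) / n
```

For these parameters the true mean is \(\Gamma(1.5)\approx 0.886\) while the length-biased mean is \(E[X^2]/E[X]\approx 1.128\), so treating the biased sample as a random one inflates location estimates, as the chapter proves.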

Chapter 26. An Overview on Recent Advances in Statistical Burn-In Modeling for Semiconductor Devices

In semiconductor manufacturing, the early life of the produced devices can be simulated by means of burn-in. In this way, early failures are screened out before delivery. To reduce the efforts associated with burn-in, the failure probability p in the early life of the devices is evaluated using a burn-in study. Classically, this is done by computing the exact Clopper–Pearson upper bound for p. In this chapter, we provide an overview on a series of new statistical models, which are capable of considering further available information (e.g., differently reliable chip areas) within the Clopper–Pearson estimator for p. These models help semiconductor manufacturers to more efficiently evaluate the early life failure probabilities of their products and therefore reduce the efforts associated with burn-in studies of new technologies.
Daniel Kurz, Horst Lewitschnig, Jürgen Pilz
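The classical Clopper–Pearson upper bound mentioned above can be computed without special libraries by bisection on the binomial CDF; the burn-in numbers below (zero failures among 1000 devices, 90% confidence) are illustrative.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1.0 - p) ** (n - i) for i in range(k + 1))

def clopper_pearson_upper(k, n, alpha=0.1):
    """Exact one-sided upper bound: the p whose binomial CDF at k is alpha."""
    if k == n:
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(60):                 # bisection on the monotone CDF
        mid = (lo + hi) / 2.0
        if binom_cdf(k, n, mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Zero failures observed among 1000 devices, 90% confidence:
p_upper = clopper_pearson_upper(0, 1000, alpha=0.1)
```

For the zero-failure case this reduces to the closed form \(1 - \alpha^{1/n}\); the chapter's models refine this classical estimator by folding in further information such as differently reliable chip areas.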

Chapter 27. Simplified Analysis of Queueing Systems with Random Requirements

In this work, a simplification approach to the analysis of queueing systems with random requirements is proposed. The main point of the approach is to keep track of only the total amount of occupied system resources. As a consequence, we cannot know the exact amount of resources released at the departure of a customer, so we assume it to be a random variable with a conditional cumulative distribution function depending only on the number of customers in the system and the total occupied resources at the moment just before the departure. In the chapter, we briefly describe the queueing system with random requirements and the simplification method, and show that, in the case of a Poisson arrival process, the simplified system has exactly the same stationary probability distribution as the original one.
Konstantin E. Samouylov, Yuliya V. Gaidamaka, Eduard S. Sopin

Chapter 28. On Sensitivity of Steady-State Probabilities of a Cold Redundant System to the Shapes of Life and Repair Time Distributions of Its Elements

The problem of the sensitivity of a redundant system's reliability characteristics to the shapes of its input distributions is considered. In Efrosinin and Rykov (Information Technologies and Mathematical Modelling, 2014) [1], an analytical form for the dependence of the reliability characteristics of a two-unit cold standby redundant system on its life and repair time input distributions was obtained and investigated for the case of an exponential distribution of one of the time lengths. In the current chapter, this study is extended, with the help of a simulation method, to the general case where both distributions are non-exponential. A comparison of the analytic and simulation results is carried out.
Vladimir Rykov, Dmitry Kozyrev

Chapter 29. Reliability Analysis of an Aging Unit with a Controllable Repair Facility Activation

The chapter utilizes a continuous-time Markov chain for modeling the process of gradual aging with maintenance on a finite discrete set of intermediate failure states. The transitions occur according to a birth-and-death process, and the unit fails completely after visiting the last available state. Units of single and multiple use are studied. The switching of the repair facility is performed by a hysteresis control policy with two threshold levels for switching the repair server on and off. We provide expressions for the stationary and non-stationary performance and reliability characteristics, the solution of optimization problems, and a sensitivity analysis of the reliability function.
Dmitry Efrosinin, Janos Sztrik, Mais Farkhadov, Natalia Stepanova