
About this Book

Examples of Discrete Event Dynamic Systems (DEDS) are almost endless: military C3I and logistic systems, the emergency ward of a metropolitan hospital, back offices of large insurance and brokerage firms, service and spare part operations of multinational firms . . . the point is the pervasive nature of such systems in the daily life of human beings. Yet DEDS are a relatively new phenomenon in dynamic systems studies. From the days of Galileo and Newton to the quantum mechanics and cosmology of the present, dynamic systems in nature have primarily been differential-equation based and time driven. A large literature and endless success stories have been built up on such Continuous Variable Dynamic Systems (CVDS). It is, however, equally clear that DEDS are fundamentally different from CVDS. They are event driven, asynchronous, mostly man-made, and only became significant during the past generation. Increasingly, however, it can be argued that in the modern world our lives are impacted by and dependent upon the efficient operation of such DEDS. Yet compared to the successful paradigm of differential equations for CVDS, the mathematical modelling of DEDS is in its infancy. Nor are there as many successful and established techniques for their analysis and synthesis. The purpose of this series is to promote the study and understanding of the modelling, analysis, control, and management of DEDS. The idea of the series came from editing a special issue of the Proceedings of the IEEE on DEDS during 1988.

Table of Contents

Frontmatter

Chapter One. Introduction to Discrete Event Dynamic Systems

Abstract
Picture yourself with the mythical Mr. T. S. Mits (The Scientific Man In The Street) and the task of explaining to him the phenomena and workings of (i) the Gulf Stream and (ii) a computer-controlled flexible manufacturing system (FMS). Both phenomena are real and neither is completely understood. However, for task (i), you face an easy assignment since you can draw upon a knowledge of calculus and differential equations to provide a succinct description of ocean currents and fluid dynamics. For task (ii), no such ready-made models are in existence. One is essentially reduced to using an algorithmic description not very different from writing a computer program to simulate the FMS. In fact, modern technology has increasingly created dynamic systems which cannot be easily described by ordinary or partial differential equations. Examples of such systems are production or assembly lines, computer communication networks, traffic systems on both the air and land sides of a large airport, military C3I (Command-Control-Communication-Intelligence) systems, etc. The evolution of these systems in time depends on the complex interactions of the timing of various discrete events, such as the arrival or departure of a job and the initiation or completion of a task or a message.
Yu-Chi Ho, Xi-Ren Cao

Chapter Two. Introduction to Perturbation Analysis

Abstract
To facilitate the discussion of Perturbation Analysis (PA), let us introduce some notation for DEDS. Let

θ = the system parameter(s);

x(t) = a time history of the evolution of the DEDS, i.e., the (state, holding time) sequence illustrated in Fig. 1.1. In more physical terms, this may consist of the contents of all the queues as a function of time, the durations of all service intervals, etc. Since DEDS are often stochastic, x(t) will in general depend on the actual realized values of the various random variables in the system;

ξ = a vector of random variables, defined on the underlying probability space, that represents all the random phenomena of the DEDS, or a particular realization of all the random variables in the system.
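As a concrete illustration of this notation (a minimal sketch, not taken from the book: the single-server FIFO queue, exponential distributions, and all names are illustrative assumptions), the following generates x(t) as a (state, holding time) sequence for a given θ, with a random seed playing the role of ξ:

```python
import random

def sample_path(theta, seed, n_jobs=5):
    """x(t) as a list of (state, holding_time) pairs for a single-server
    FIFO queue, where state = number of jobs in the system.  theta scales
    the mean service time; seed pins down xi, the realization of all the
    random variables on the sample path."""
    rng = random.Random(seed)
    # xi: the realized interarrival and service times
    inter = [rng.expovariate(1.0) for _ in range(n_jobs)]
    serv = [theta * rng.expovariate(1.0) for _ in range(n_jobs)]
    # arrival epochs, then departure epochs via the Lindley-type recursion
    arr, t = [], 0.0
    for a in inter:
        t += a
        arr.append(t)
    dep, d = [], 0.0
    for a, s in zip(arr, serv):
        d = max(d, a) + s
        dep.append(d)
    # merge the events into the (state, holding time) sequence
    events = sorted([(ta, +1) for ta in arr] + [(td, -1) for td in dep])
    path, state, last = [], 0, 0.0
    for time, delta in events:
        path.append((state, time - last))
        state += delta
        last = time
    return path
```

Calling `sample_path` twice with the same seed but different θ yields two realizations x(t; θ, ξ) driven by the same ξ, which is the setting PA exploits.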
 
Yu-Chi Ho, Xi-Ren Cao

Chapter Three. Informal Treatment of Infinitesimal Perturbation Analysis

Abstract
As we sketched out in Appendices A and B, by and large there are two approaches to the performance-oriented study of DEDS, namely, the analytical and the simulation-based approaches. The former has as its centerpiece the product-form formula of Jackson-Newell-Gordon and its various extensions (see Appendix A); the latter, simulation languages such as SIMSCRIPT, GPSS, SLAM, SIMAN, and their mathematical formalization, GSMP (see Appendix B). Perturbation Analysis in some sense attempts to combine the advantages of both the theoretical and the simulation approaches. PA is time-domain based. It views a queueing network as a stochastic dynamical system evolving in time and analyzes the sample realization of its state process. To this extent, it is no different from the viewpoint (and, hence, enjoys the same advantages) of the simulation approach to DEDS. However, by observing a sample path (trajectory) of the network, we use analysis to derive answers to the question: "what" will happen "if" we repeat the sample path exactly except for a small perturbation of the timing of some event at some time t? The efficiency of our approach lies in the fact that we can answer a multitude of such "what if" questions simultaneously while a single sample path is being observed. Thus, compared with a brute-force simulation study, our approach has a computational advantage of N+1 : 1, where N is the number of "what if" questions asked (the brute-force simulation method requires one additional simulation experiment for each question asked).
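The flavor of the idea can be conveyed by a small sketch (an illustration of the IPA principle under stated assumptions, not the book's own algorithm): for a GI/G/1 queue with service times θ·X_i, the system time obeys the Lindley recursion T_i = θX_i + max(0, T_{i−1} − A_i), and the path derivative dT_i/dθ can be accumulated alongside the nominal simulation, so one run answers both "what is T(θ)" and "what if θ changed slightly":

```python
import random

def ipa_mean_system_time(theta, seed, n_jobs=10000):
    """Simulate one GI/G/1 sample path and return (mean system time,
    IPA estimate of its derivative w.r.t. theta).  Service times are
    theta * X_i, so dS_i/dtheta = X_i; the perturbation propagates to
    the next customer only within the same busy period."""
    rng = random.Random(seed)
    T, dT = 0.0, 0.0                 # system time of previous customer, and its derivative
    sumT, sumdT = 0.0, 0.0
    for _ in range(n_jobs):
        A = rng.expovariate(1.0)     # interarrival time
        X = rng.expovariate(1.0)     # unscaled service requirement
        if T > A:                    # same busy period: perturbation propagates
            T, dT = theta * X + (T - A), X + dT
        else:                        # an idle period resets the perturbation
            T, dT = theta * X, X
        sumT += T
        sumdT += dT
    return sumT / n_jobs, sumdT / n_jobs
```

A quick sanity check is to compare the IPA estimate against a finite difference computed from a second run with the same seed and θ + Δθ; the two agree closely, while the finite difference costs an extra experiment per question asked.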
Yu-Chi Ho, Xi-Ren Cao

Chapter Four. Foundations of Perturbation Analysis

Abstract
The infinitesimal perturbation generation and propagation rules were explained in the last chapter. Following these rules, we can obtain a perturbed sample path by analyzing a nominal sample path and then obtain the performance sensitivity. In this chapter, we shall formulate and study the problem in a more formal manner. Our purpose is to provide a mathematical foundation for IPA. To this end, we first choose the closed Jackson network and the GI/G/1 queue as the basic cases for discussion. The same principle is then applied to networks with general service time distributions, for which the so-called product-form solution does not exist. The important point is that the principle illustrated in this chapter applies to many other DEDS. The process of establishing these results will also illustrate the limitations of infinitesimal perturbation analysis; these limitations lead to the extensions of IPA, which will be discussed in the next several chapters. For technical integrity, this chapter contains more details than the other chapters, many of which can be passed over on a first reading. This chapter is not crucial for the understanding of the other chapters. However, we shall, wherever appropriate, provide a "forest view" for readers not interested in the technical details.
Yu-Chi Ho, Xi-Ren Cao

Chapter Five. Extensions of Infinitesimal Perturbation Analysis

Abstract
The IPA algorithm described in Chapters 3 and 4 is very efficient whenever applicable. However, it is also clear that one can easily construct examples of DEDS which will not satisfy the conditions for interchange discussed in Chapter 4. Generally, such cases involve event order changes on the nominal and perturbed sample paths leading to different state sequences of these two sample paths and discontinuities in the sample performance measure. In such circumstances, one can always adopt directly the general techniques to be described in Chapters 6 and 7 which overcome these difficulties at some cost. In this chapter, however, we would like to present a series of examples or classes of examples which on the surface violate the requirements of the simple IPA yet can still be solved by extensions of the IPA approach. Two main ideas are involved. First (in Section 5.1), we show that the model representation of a DEDS can be important. The same system can be represented in more than one way, some of which are hostile to the simple IPA algorithm while others are not. Secondly, discontinuities in sample performance can often be smoothed out. By replacing infrequent but finite perturbations with equivalent frequent infinitesimal perturbations, we can often extend the applicability of IPA (Sections 5.3–4).
Yu-Chi Ho, Xi-Ren Cao

Chapter Six. Finite Perturbation Analysis

Abstract
One can view the basic goal of PA for DEDS as the reconstruction of an arbitrarily perturbed sample path from a nominal path. Under deterministic similarity, the simple infinitesimal perturbation analysis (IPA) rules described in Chapters 3 and 4 and extended in Chapter 5 compute a perturbed path infinitesimally different from the nominal for the purpose of gradient calculation. The computation of the perturbation propagations can be done efficiently since the "critical timing path" or the "future event schedule" remains the same between the nominal and perturbed paths. However, as pointed out before, in the limit of a path of very long duration or an experiment with very large ensembles of runs, deterministic similarity will always be violated. In such cases, the IPA rules, which ignore the order changes of events, have been proved to give unbiased and consistent estimates for performance gradients of only certain classes of DEDS (see Chapters 4 and 5). Although the domain of application of IPA is constantly being expanded, it is nevertheless important to devise methodologies which can overcome the basic problem of "event sequence order change," or "discontinuous performance measure." In other words, the key question is "how can one reconstruct the perturbed path from the nominal path short of essentially running a separate new simulation experiment?" In this and the next chapter we address this question directly and suggest an alternative which we believe is more efficient than brute-force reconstruction. To initiate the discussion, let us first dispel the seemingly intuitive notion that one cannot generate a sample path x(t;θ+Δθ,ξ) from an x(t;θ,ξ) when Δθ≠0. A simple view of this general problem of perturbation analysis of DEDS, and of the efficient construction of multiple sample paths of a DEDS under different values of θ, can be obtained by appealing to some fundamental procedures in discrete event simulation.
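One such fundamental procedure is the reuse of common random numbers: fixing the seed fixes ξ, so the same realized randomness can drive the system at any value of θ. The sketch below (illustrative only; the GI/G/1 queue with service times θ·X_i and the standard Lindley recursion for waiting times are assumptions, not the book's construction) makes the point that x(t; θ+Δθ, ξ) is perfectly well defined and computable from the same ξ as the nominal path:

```python
import random

def lindley_waits(theta, seed, n=6):
    """Customer waiting times on one GI/G/1 sample path driven by xi
    (the seed); service times are theta * X_i, an illustrative
    parameterization."""
    rng = random.Random(seed)
    W, S_prev, waits = 0.0, 0.0, []
    for _ in range(n):
        A = rng.expovariate(1.0)        # the same xi at every theta
        W = max(0.0, W + S_prev - A)    # Lindley recursion
        waits.append(W)
        S_prev = theta * rng.expovariate(1.0)
    return waits

# Reusing the same xi (seed) at two parameter values constructs the
# perturbed path x(t; theta + delta, xi) from the nominal randomness.
nominal = lindley_waits(1.0, 7)
perturbed = lindley_waits(1.2, 7)
```

Of course, rerunning the whole simulation for each Δθ is exactly the brute force that this chapter seeks to avoid; the cut-and-paste idea reconstructs the perturbed path from pieces of the nominal one instead.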
Yu-Chi Ho, Xi-Ren Cao

Chapter Seven. Generalized Sensitivity Analysis

Abstract
Chapters 3, 4, and 5 primarily discussed a straightforward approach to the problem of constructing sensitivity estimates of a DEDS performance from a single sample path. The principal issues dealt with were the IPA algorithm, the consistency of the IPA estimate, and some extensions of the applicability of the IPA algorithms to situations where it does not appear to be applicable at first glance. Chapter 6, by way of the "cut-and-paste" idea, begins to address the broad issue of computing the performance of a general DEDS, J(θ), at different values of the system parameter(s), θ, or the response surface exploration, in the terminology of experimental design. In this chapter, we would like to continue this viewpoint and treat PA not so much as a particular algorithm or a group of algorithms but as an approach, or mind-set, towards the efficient processing and analysis of DEDS performance, and to view PA as the first step in the eventual construction of a theory of DEDS very much in parallel with the control theory for CVDS (see Appendix C). We submit that PA-like techniques complement extant simulation and queueing network methodologies. The first three sections of this chapter are basically independent of each other and can be read in any order. However, they mutually reinforce one another by providing many connections and shared perspectives with each other and with other parts of this book.
Yu-Chi Ho, Xi-Ren Cao

Chapter Eight. Aggregation, Decomposition, and Equivalence

Abstract
The idea of aggregation is a fundamental concept in engineering. Whenever we make a model of a real system for analysis or optimization purposes, we are consciously or unconsciously making aggregation decisions as to what features to include and what details to gloss over in our model. Of course, depending on the problem, the actual aggregation methodology may be intuitive, informal, approximate, formal, precise, or exact. However, there are two universal desirable characteristics for any aggregation technique. We denote the first characteristic as External Independence. By this we mean that whatever aggregated equivalent we synthesize to represent a complex part of a system, it can only depend on the part to be aggregated and should not depend on anything external to that part. From the practical and computational viewpoint, this is a necessary requirement. The idea is that the aggregation effort is similar to a one-time set-up charge. Once carried out, the simpler equivalent model can be used over and over in analysis when other parts of the system change. Only through external independence can one realize any practical computational saving. If, each time the external part of the system changes, the aggregated equivalent, whether exact or approximate, must be re-computed, then no saving results. The second, less important but still desirable, aggregation property is what we shall call Closure. The idea here is that each time we aggregate a system into a simpler equivalent, the resultant system should contain no new constructs that are not present in the original system. For example, a complex queueing network is aggregated to form another, simpler queueing network. Both networks contain only customers and servers; we do not require the creation of any new element. Similarly, a Markov system when aggregated should retain the Markov property. Otherwise, much of the advantage is lost.
This closure property is particularly important when a hierarchical view of aggregating a complex system into many levels is adopted. We shall in this chapter judge various aggregation and PA techniques using the external independence and closure yardsticks and measure the extent to which they exactly or approximately achieve these requirements. After a review of the known results on aggregation of queueing networks in Section 1, we treat in Section 2 the relationship between aggregation and perturbation analysis. The principal idea here is that aggregation properly used actually extends the applicability of IPA. In Sections 3 and 4 we take a rather broad view of aggregation and demonstrate a new approach to the computation of the performance measures and their derivatives of very large arbitrary Markov chains.
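A small sketch (an illustrative example under stated assumptions, not the book's construction) shows both yardsticks at work for a Markov chain: lumping states via the stationary distribution yields an aggregate that is again a Markov chain (Closure), but because the weights come from the global stationary distribution π, the aggregate depends on the whole chain and External Independence is not achieved in general:

```python
import numpy as np

# A small Markov chain (rows sum to 1); the numbers are illustrative.
# States {0, 1} are lumped into one aggregate state, state {2} into another.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
groups = [[0, 1], [2]]

# Stationary distribution pi: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

# Exact aggregation: Q[I, J] = sum_{i in I} (pi_i / pi_I) * sum_{j in J} P[i, j].
pi_agg = np.array([pi[g].sum() for g in groups])
Q = np.array([[sum(pi[i] / pi_agg[I] * P[i, gJ].sum() for i in gI)
               for gJ in groups]
              for I, gI in enumerate(groups)])
```

The rows of Q sum to one (the aggregate is again a stochastic matrix), and the aggregated stationary distribution π_agg is stationary for Q, so steady-state quantities of the lumped chain are reproduced exactly.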
Yu-Chi Ho, Xi-Ren Cao

Backmatter
