
About this book

This IMA Volume in Mathematics and its Applications, DISCRETE EVENT SYSTEMS, MANUFACTURING SYSTEMS AND COMMUNICATION NETWORKS, is based on the proceedings of a workshop that was an integral part of the 1992-93 IMA program on "Control Theory." The study of discrete event dynamical systems (DEDS) has rapidly gained popularity among researchers in systems and control, in communication networks, in manufacturing, and in distributed computing. This development has created problems for researchers and potential "consumers" of the research. The first problem is the veritable Babel of languages, formalisms, and approaches, which makes it very difficult to determine the commonalities and distinctions among the competing schools of thought. The second, related, problem arises from the different traditions, paradigms, values, and experience that scholars bring to their study of DEDS, depending on whether they come from control, communication, computer science, or mathematical logic. As a result, intellectual exchange among scholars becomes compromised by unexplicated assumptions.

The purpose of the workshop was to promote exchange among scholars representing some of the major "schools" of thought in DEDS, in the hope that (1) greater clarity would be achieved thereby, and (2) cross-fertilization would lead to more fruitful questions. We thank P. R. Kumar and P. P. Varaiya for organizing the workshop and editing the proceedings. We also take this opportunity to thank the National Science Foundation and the Army Research Office, whose financial support made the workshop possible.

Avner Friedman
Willard Miller, Jr.

Table of contents

Frontmatter

Markovian Fragments of COCOLOG Theories

Abstract
The COCOLOG (Conditional Observer and Controller Logic) system is a partially ordered family of first order logical theories, expressed in the typed first order languages \(\{L_k; k \ge 0\}\), describing the controlled evolution of the state of a given partially observed finite machine M. The initial theory of the system, denoted \(Th_0\), gives the theory of M without data being given on the initial state. Later theories, \(\{ Th(o_1^k);k \ge 1\}\), depend upon the (partially ordered lists of) observed input-output trajectories, where new data is accepted in the form of the new axioms \(AXM^{obs}(L_k)\), k ≥ 1. A feedback control input U(k) is determined via the solution of control problems posed in the form of a set of conditional control rules, denoted \(CCR(L_k)\), which is paired with the theory \(Th(o_1^k)\). The disadvantage of this formulation is that the accumulation of observation axioms may handicap the speed of reasoning. In this paper, by use of a restricted subset, \(L_k^m\), of each language \(L_k\), k ≥ 1, we introduce a restricted version of COCOLOG; this is called a system of Markovian fragments of COCOLOG and it is designed so that a smaller amount of information than in the full COCOLOG system is communicated from one theory to the next. Systems of Markovian fragments are associated with a restricted set of candidate control problems, denoted \(CCR(L_k^m)\), k ≥ 1. It is shown that, under certain conditions, a Markovian fragment theory \(MTh(o_1^k)\) contains a large subset of \(Th(o_1^k)\) which includes, in particular, the state estimation theorems of the corresponding full COCOLOG system, and, for the set of control rules \(CCR(L_k^m)\), possesses what may be informally termed the same control reasoning power. In formal terms, this means that \(MTh(o_1^k)\) agrees with \(Th(o_1^k)\) with respect to the well formed formulas in \(L_k^m\). Hence a theoretical basis is supplied for the increased theorem proving efficiency of the fragment systems versus the full COCOLOG systems.
Finally some computer generated examples are given illustrating these results.
P. E. Caines, Y. J. Wei
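The state estimation carried out inside a COCOLOG theory can be illustrated concretely: for a partially observed finite machine, each new input-output pair prunes the set of states consistent with the observed trajectory. A minimal sketch in Python — the toy machine, its transition and output maps, and all state names below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical 3-state machine; states observed through a Moore output map.
def observer_update(candidates, u, y, delta, out):
    """One step of state estimation: keep the successors of the current
    candidate set under input u whose output matches the observation y."""
    return {delta[(x, u)] for x in candidates if out[delta[(x, u)]] == y}

delta = {('s0', 'a'): 's1', ('s1', 'a'): 's2', ('s2', 'a'): 's0',
         ('s0', 'b'): 's0', ('s1', 'b'): 's0', ('s2', 'b'): 's2'}
out = {'s0': 0, 's1': 1, 's2': 0}

candidates = {'s0', 's1', 's2'}              # no data on the initial state
for u, y in [('a', 1), ('a', 0), ('b', 0)]:  # observed input-output trajectory
    candidates = observer_update(candidates, u, y, delta, out)
print(candidates)
```

Each observation can only shrink the candidate set, which is the set-theoretic counterpart of the accumulating observation axioms in the full theories.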

On-Line Optimization of Queues Using Infinitesimal Perturbation Analysis

Abstract
Infinitesimal perturbation analysis (IPA) is a method for estimating the gradient of a performance measure in a discrete event system by observing a single sample path of the system. The method lends itself naturally to recursive optimization using gradient-based algorithms. Such algorithms can be used in on-line optimization applications, or in single-run optimization of simulation models. We describe the use of such algorithms for optimization of single server queues. We give sufficient conditions that guarantee convergence of the algorithm to the optimizing point. The convergence proof is simple, and provides insight into how the algorithm behaves under different update times.
Edwin K. P. Chong
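As a concrete illustration of such a recursive gradient algorithm, the sketch below tunes the service-time scale θ of a single server queue, trading mean waiting time against a cost c/θ. The Lindley recursion and its IPA derivative follow the standard single-server analysis; the cost form, all numeric parameters, and the projection bounds are illustrative assumptions, not the paper's setup:

```python
import random

def ipa_sgd(theta, lam=1.0, c=2.0, iters=200, batch=100, step=0.01, seed=0):
    """Single-run optimization of a single-server queue: service time is
    theta * X with X ~ exp(1), cost is (mean waiting time) + c/theta.
    IPA propagates dW/dtheta along one sample path via the Lindley recursion.
    All numeric parameters and the cost form are illustrative assumptions."""
    rng = random.Random(seed)
    W = dW = 0.0
    for _ in range(iters):
        sum_dW = 0.0
        for _ in range(batch):                        # observe one batch
            S = theta * rng.expovariate(1.0)          # service time
            A = rng.expovariate(lam)                  # interarrival time
            W = max(0.0, W + S - A)                   # Lindley recursion
            dW = dW + S / theta if W > 0 else 0.0     # IPA derivative
            sum_dW += dW
        grad = sum_dW / batch - c / theta ** 2        # d/dtheta[E W + c/theta]
        theta = min(0.95, max(0.05, theta - step * grad))  # projected step
    return theta
```

Here θ is updated once per batch of observed customers; the sufficient conditions in the chapter concern precisely how such update times may be chosen while preserving convergence.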

A New Paradigm for Stochastic Optimization and Parallel Simulation

Abstract
This paper advocates some mind-set changes concerning the problem of optimization of the performance of general discrete event dynamic systems under uncertainty via simulation. We present arguments and evidence that orders of magnitude improvement in computational efficiency are possible and summarize a set of works by the Harvard DEDS group in this area over the past decade.
Y. C. Ho

Dynamic Set-Up Scheduling of Flexible Manufacturing Systems: Design and Stability of Near Optimal General Round Robin Policies

Abstract
Dynamic set-up scheduling is an important issue when a Flexible Manufacturing System (FMS) recovers from a failure or other disruption-induced excursion from its economic-lot-size steady state cycle. In fact, FMSs usually spend more time recovering from disruptions than being in steady state. We present recent results on the structure of optimal and near optimal policies for dynamic set-up scheduling that minimize backlog costs for a given vector of demands for each of many part types. When different part types are nonhomogeneous or differ significantly in importance or value, non-Round Robin (general Round Robin) set-up change policies may be needed. Theoretical considerations and numerical solution results are used to deduce structural properties of the optimal policies (not necessarily Round Robin) and to design near optimal policies that are tractable for high dimensional systems. The near optimal policies are based on primary and secondary set-up change hyperplane surfaces (thresholds). Primary switching surfaces (PSSs) are of dimension n − 1 (for n part-type systems) and, when encountered, indicate that the current set-up must be changed. Secondary switching surfaces (SSSs) are of lower dimension and indicate what set-up to change to. Stability issues as well as issues of convergence to a limit cycle are raised and analyzed, and sufficient conditions are obtained. A procedure for designing primary and secondary hyperplane set-up switching surfaces is proposed and implemented.
Junjie Hu, Michael Caramanis
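A minimal illustration of threshold-based set-up switching for the simplest case, n = 2 part types, where a primary switching surface reduces to a scalar backlog threshold ("switch when the other part's backlog crosses it"). The fluid dynamics, rates, set-up time, and threshold below are illustrative assumptions and do not reproduce the paper's design procedure:

```python
def simulate_setups(T=200.0, dt=0.01, mu=3.0, d=(1.0, 1.0), tau=1.0, thresh=4.0):
    """Two part types, one machine, fluid approximation.  The machine serves
    the current set-up at rate mu; a set-up change takes tau time units and
    is triggered when the other part's backlog exceeds `thresh` (a
    1-dimensional primary switching surface).  Returns the peak backlog."""
    x = [0.0, 0.0]                  # backlogs of the two part types
    setup, remaining = 0, 0.0       # current set-up; time left in changeover
    peak, t = 0.0, 0.0
    while t < T:
        other = 1 - setup
        if remaining > 0.0:         # mid-changeover: no production
            remaining -= dt
        elif x[other] > thresh:     # crossed the switching surface
            setup, remaining = other, tau
        for i in (0, 1):            # fluid dynamics: demand minus production
            rate = d[i] - (mu if (i == setup and remaining <= 0.0) else 0.0)
            x[i] = max(0.0, x[i] + rate * dt)
        peak = max(peak, x[0], x[1])
        t += dt
    return peak
```

With these numbers the capacity condition holds (total load 2/3 plus set-up overhead), and the trajectory settles into a bounded limit cycle, the kind of behavior whose sufficient conditions the chapter analyzes.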

Homomorphic Reduction of Coordination Analysis

Abstract
Automata-theoretic verification is based upon the language containment test
$${\cal L}({P_0} \otimes {P_1} \otimes \ldots \otimes {P_k}) \subset {\cal L}(T)$$
where the \(P_i\)'s are automata which together model a system with its fairness constraints, ⊗ is a parallel composition for automata and T defines a specification. The complexity of that test typically grows exponentially with k. This growth, often called "state explosion", has been a major impediment to computer-aided verification, and many heuristics, successful in special cases, have been developed to combat it. We describe here a general reduction methodology which subsumes many of these heuristics. It is based upon two transformations: decomposition of T into "local" properties \(T_1, \ldots, T_n\) such that \({\cal L}(T_1) \cap \ldots \cap {\cal L}(T_n) \subset {\cal L}(T)\), and localization of each test \({\cal L}(\otimes P_i) \subset {\cal L}(T_j)\) to a small (tractable) test.
R. P. Kurshan
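For automata without fairness constraints, the basic containment test can be sketched as a reachability search on a product automaton: for complete deterministic automata, \({\cal L}(A) \subset {\cal L}(B)\) fails exactly when the product reaches a state accepting in A but not in B. A minimal sketch, with toy automata as assumptions (the paper's ω-automata with fairness constraints require more machinery):

```python
from collections import deque

def contained(A, B):
    """L(A) ⊆ L(B) for complete DFAs over the same alphabet.
    Each automaton is (initial, delta, accepting) with delta[(state, sym)].
    Contained iff no reachable product state is accepting in A but not B."""
    init_a, delta_a, acc_a = A
    init_b, delta_b, acc_b = B
    alphabet = {sym for (_, sym) in delta_a}
    seen = {(init_a, init_b)}
    queue = deque(seen)
    while queue:
        qa, qb = queue.popleft()
        if qa in acc_a and qb not in acc_b:
            return False                  # counterexample state reached
        for sym in alphabet:
            nxt = (delta_a[(qa, sym)], delta_b[(qb, sym)])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# A accepts strings with an even number of 1s; B accepts everything.
A = ('e', {('e', '0'): 'e', ('e', '1'): 'o',
           ('o', '0'): 'o', ('o', '1'): 'e'}, {'e'})
B = ('t', {('t', '0'): 't', ('t', '1'): 't'}, {'t'})
```

The state explosion is visible here: the product state space is the Cartesian product of the components', which is what the decomposition and localization transformations are designed to tame.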

Discrete-Time Markov-Reward Models of Production Systems

Abstract
In this paper we consider the discrete-time version of performability modeling. The discrete-time approach is well-suited for performance studies of Automated Manufacturing Systems (AMS) in the presence of failures, repairs and reconfigurations. An AMS exists in various configuration states, and this transitional behavior is modeled using discrete-time Markov chains. In addition, the performance in each configuration state is modeled by a Markov reward structure similar to that of the continuous-time versions. In this paper, we derive recursive expressions for the conditional densities and moments of the cumulative reward function, when the underlying Markov chain describing the evolution of the configuration states is homogeneous. Examples are used to illustrate the methods obtained in the paper.
Ranga Mallubhatla, Krishna R. Pattipati, N. Viswanadham
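The flavor of such recursions can be seen in the simplest case, the first moment of the cumulative reward for a homogeneous chain: \(V_{k+1} = r + P V_k\) with \(V_0 = 0\). The two-state up/down availability model below is an illustrative assumption, not an example from the paper:

```python
def expected_cumulative_reward(P, r, n):
    """First moment of the n-step cumulative reward of a homogeneous
    discrete-time Markov chain: V_{k+1}(i) = r(i) + sum_j P[i][j] V_k(j),
    starting from V_0 = 0.  (The paper derives recursions for densities
    and higher moments as well; only the first moment is sketched here.)"""
    V = [0.0] * len(r)
    for _ in range(n):
        V = [r[i] + sum(P[i][j] * V[j] for j in range(len(r)))
             for i in range(len(r))]
    return V

# Two configuration states: 'up' earns reward 1 per step, 'down' earns 0;
# failure probability 0.1, repair probability 0.5 (numbers illustrative).
P = [[0.9, 0.1],
     [0.5, 0.5]]
r = [1.0, 0.0]
```

Each entry of the result is the expected n-step throughput conditioned on the starting configuration state.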

Modeling Real-Time Systems using Rate Automata

Abstract
Recently, real-time system models have been proposed which fall into the general category of timed automata. A timed automaton consists of a discrete event component represented as a finite automaton, coupled with a temporal component represented as a finite collection of clocks marking time between event occurrences. For timed automata it is possible to reduce certain verification problems to those of checking language containment or language emptiness. We extend this model to allow the clocks to run at rates other than one. The extension is well suited to the analysis of scheduling problems, and we demonstrate this by modeling a simple scheduling policy. We give conditions for reduction of the rate automaton to a finite state automaton.
Jennifer McManis, Pravin Varaiya

Symbolic Discrete-Event Simulation

Abstract
The event-scheduling view of the discrete-event simulation technique is widely used to model and simulate dynamic systems. This paper presents DMOD, a formalization of this technique. DMOD offers two major advantages. First, it retains the powerful intuitions behind this technique, yet makes it easier to specify them. Second, it permits reasoning about its models. This is done by means of symbolic simulation, i.e. simulation in which input parameters can be constrained variables. DMOD can be used to model hybrid systems, including those with non-linearities. It has been implemented in Prolog and applied to the analysis of a variety of industrial systems.
Sanjai Narain, Ritu Chadha
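The event-scheduling view that DMOD formalizes can be sketched concretely: a time-ordered event list is repeatedly popped, and executing an event updates state and may schedule further events. The single-server example below is an illustrative concrete analogue (DMOD itself simulates symbolically, in Prolog, with constrained-variable inputs):

```python
import heapq

def simulate_events(arrivals, service=1.0):
    """Event-scheduling simulation of one server with deterministic
    service times: arrivals either start service (scheduling a departure)
    or join the queue; departures pull the next job if one is waiting.
    Returns the list of departure times."""
    events = [(t, 'arrival') for t in arrivals]
    heapq.heapify(events)                 # the time-ordered event list
    queue_len, busy, departures = 0, False, []
    while events:
        t, kind = heapq.heappop(events)   # execute the earliest event
        if kind == 'arrival':
            if busy:
                queue_len += 1
            else:
                busy = True
                heapq.heappush(events, (t + service, 'departure'))
        else:                             # departure
            departures.append(t)
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (t + service, 'departure'))
            else:
                busy = False
    return departures
```

In the symbolic setting, the arrival times would be constrained variables rather than numbers, and the simulation would propagate the constraints instead of concrete values.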

Decentralized Discrete-Event Systems and Computational Complexity

Abstract
A summary is given of computational complexity results for decentralized discrete-event control problems. These results generalize the earlier work of Tsitsiklis, who showed that for a special class of centralized supervisory control problems under partial observation, there is an algorithm for determining in polynomial time whether or not a solution exists. The negative complexity results associated with Tsitsiklis' work also carry over to the decentralized case, so that solution existence for the more general class cannot be decided in polynomial time. Moreover, even when it can be shown that supervisor solutions exist for problems in either the special or the general class, there is no polynomial-time algorithm for producing such solutions.
Karen Rudie, Jan C. Willems

Starvation-Based Instability of Distributed Scheduling Policies in Non-Acyclic Fluid and Queuing Networks

Abstract
We review the stability condition of a class of guided distributed scheduling policies in queuing systems and show that the same stability condition also holds for fluid systems. This condition is interpreted as requiring a contraction of machine starvation delays in the cycles of material flow in the system. Machine starvation is the cause of instability and arises naturally in queuing systems due to the discrete nature of operations (it can also be caused by the scheduling policy). In fluid systems, however, machine starvation can only be caused by the scheduling policy. Although the cause of machine starvation can be different in queuing and fluid systems, our results show that this is inconsequential to the condition for stability.
Noting that instability is caused by machine starvation, which leads to the slowing down (or stoppage) of machines, we show that the condition for stability of a system studied by Kumar and Seidman can be expressed as an ordinary capacity condition for the corresponding slowed-down system.
Ali Sharifnia
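The ordinary capacity condition referred to above can be sketched directly: each machine's utilization, summed over the part types that visit it, must be strictly less than one. In the paper's setting the processing times would further be inflated by the starvation-induced slow-down factors; the routing data below is an illustrative assumption in the style of the Kumar-Seidman example, not the paper's data:

```python
def machine_utilizations(routes, rates):
    """Ordinary capacity condition: for each machine m, the sum over part
    types visiting m of (arrival rate x processing time) must be < 1.
    `routes[p]` lists the (machine, processing_time) steps of part type p;
    `rates[p]` is its arrival rate."""
    util = {}
    for p, steps in routes.items():
        for machine, tau in steps:
            util[machine] = util.get(machine, 0.0) + rates[p] * tau
    return util

# Two machines, two part types flowing in opposite directions (a cycle of
# material flow, as in the Kumar-Seidman example; numbers illustrative).
routes = {'A': [('M1', 0.2), ('M2', 0.6)],
          'B': [('M2', 0.2), ('M1', 0.6)]}
rates = {'A': 1.0, 'B': 1.0}
```

Both machines are loaded below capacity here, yet the paper's point is that a distributed scheduling policy can still induce starvation-driven instability unless the same condition holds for the slowed-down system.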