2003 | Book

Modeling, Control and Optimization of Complex Systems

In Honor of Professor Yu-Chi Ho

Editors: Weibo Gong, Leyuan Shi

Publisher: Springer US

Book Series: The International Series on Discrete Event Dynamic Systems


About this book

Modeling, Control and Optimization of Complex Systems is a collection of contributions from leading international researchers in the fields of dynamic systems, control theory, and modeling. These papers were presented at the Symposium on Modeling and Optimization of Complex Systems, held in honor of Larry Yu-Chi Ho in June 2001. They cover research topics such as:

- modeling of complex systems,
- power control in ad hoc wireless networks,
- adaptive control using multiple models,
- constrained control,
- linear quadratic control,
- discrete events,
- Markov decision processes and reinforcement learning,
- optimal control for discrete event and hybrid systems,
- optimal representation and visualization of multivariate data and functions in low-dimensional spaces.

Table of Contents

Frontmatter
Chapter 1. Optimal Representation and Visualization of Multivariate Data and Functions in Low-Dimensional Spaces
Abstract
The analysis and processing of massive amounts of multivariate data and high-dimensional functions have become a basic need in many areas of science and engineering. Reducing dimensionality for compact representation and visualization of high-dimensional information appears imperative in exploratory research and engineering modeling. Since D. Hilbert posed his 13th problem in 1900, the study of whether high-dimensional functions can be expressed as compositions of lower-dimensional functions has achieved considerable success [1, 2]. Nonetheless, no methods of realization have been indicated, and not even all integrable functions can be treated this way, a fortiori functions in L2(Ω). The common practice is to expand a high-dimensional function into a convergent series in terms of a chosen orthonormal basis of lower-dimensional functions. However, the length and rapidity of convergence of the expansion depend heavily on the choice of basis. In this paper we briefly report some new results on seeking an optimal basis for a given function, one with the fewest terms and the most rapid convergence. All elements of the optimal basis turn out to be products of single-variable functions taken from the unit balls of the ingredient spaces. The proposed theorems and schemes may find wide application in data processing, visualization, computing, engineering simulation, and the decoupling of nonlinear control systems. The facts established in the theorems may also be of theoretical interest in their own right.
Jian Song
Chapter 2. Modeling of Complex Systems
Abstract
The role of modeling is becoming increasingly important in the design and operation of complex natural and man-made systems. Models are also appearing as components of control systems because of the increased use of model-based control strategies such as Kalman filters and model predictive control. Mechanized tools are necessary to fully exploit modeling. The development of modeling is therefore closely associated with the development of computational tools for simulation. There has been very dynamic development since the beginning in the 1920s. At that time, the technology was available only to a handful of university groups who had access to mechanical differential analyzers. Today modeling and simulation are available at low cost on the desk of everyone who needs them. This paper presents a perspective on modeling of complex systems leading up to the recent development of Modelica, which draws on object-oriented methodology in computer science, differential algebraic equations in numerical mathematics, and control theory.
K. J. Åström
Chapter 3. Power Control in Ad Hoc Wireless Networks: An Architectural Solution for a Melee of Multiple Agents, Cost Criteria, and Information Patterns
Abstract
Wireless networking is a fertile source of problematic situations with multiple agents, multiple cost criteria, and multiple information patterns. It offers an arena for imaginative solutions to a range of problems.
One example is the power control problem. Choosing the optimal transmit power level of a node in a system as complex as a wireless ad hoc network is an interesting optimization problem. There are several objectives, including maximizing network capacity and minimizing battery usage. Also, the chosen power level affects the functioning of various other network protocols as well. Not only are there theoretical issues, but there are also many issues about harmoniously integrating any power control scheme in the existing network hierarchy.
The COMPOW (for common power) protocol we detail simultaneously increases the network's traffic-carrying capacity, provides power-aware routes, and reduces contention for the shared medium. The nodes of the network can run this algorithm in a distributed and asynchronous manner. The software has been implemented in the Linux kernel. To our knowledge it is the first power control protocol to be implemented on a real wireless ad hoc network.
Swetha Narayanaswamy, Vikas Kawadia, R. S. Sreenivas, P. R. Kumar
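The central idea behind COMPOW, choosing the smallest common power level at which the network remains connected, can be illustrated with a centralized toy sketch. The real protocol discovers this level in a distributed, asynchronous fashion by running a routing agent per power level; the node positions, the power-to-range mapping, and the candidate levels below are hypothetical.

```python
import math
from itertools import combinations

def connected(nodes, rng):
    # Build the adjacency under a common transmission range and BFS from node 0.
    n = len(nodes)
    adj = {i: [] for i in range(n)}
    for i, j in combinations(range(n), 2):
        if math.dist(nodes[i], nodes[j]) <= rng:
            adj[i].append(j)
            adj[j].append(i)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def min_common_range(nodes, levels):
    # COMPOW-style choice: the lowest common level that keeps the network connected.
    for rng in sorted(levels):
        if connected(nodes, rng):
            return rng
    return None  # even the highest level does not connect the network

nodes = [(0, 0), (1, 0), (2, 0.5), (3.5, 0.5)]  # hypothetical node positions
levels = [0.5, 1.0, 1.6, 3.0]                   # hypothetical range per power level
print(min_common_range(nodes, levels))          # → 1.6
```

Picking the lowest such level is what yields the capacity and battery benefits: a smaller common range means less contention per transmission, while connectivity guarantees that power-aware multi-hop routes still exist.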
Chapter 4. Some Examples of Optimal Control
Abstract
As a long-time friend and colleague of Larry's, it is a great pleasure to be here to help celebrate his 67th birthday and retirement from teaching. He has been and still is an outstanding contributor to our field of automatic control and systems.
A. E. Bryson
Chapter 5. Adaptive Control Using Multiple Models: A Methodology
Abstract
A procedure based on multiple models, switching, and tuning for adaptively controlling dynamical systems in time-varying environments was introduced in the early 1990s. During the past decade, this procedure has evolved into a general methodology for adaptive control. Some of the recent advances in this new area are discussed in the paper.
Kumpati S. Narendra, Osvaldo A. Driollet, Koshy George
Chapter 6. Constrained Control: Polytopic Techniques
Abstract
There is a resurgence of interest in the control of dynamic systems with hard constraints on states and controls, and many significant advances have been made. A major reason for the success of model predictive control, which, with over 2000 applications, is the most widely used modern control technique, is precisely its ability to handle hard constraints effectively. But there are also other important developments. The solution of the constrained linear quadratic regulator problem has been characterized, permitting, at least in principle, explicit determination of the value function and the optimal state feedback controller. Maximal output admissible sets have been effectively harnessed to provide easily implementable regulation and control of constrained linear systems. The solution of the robust, constrained time-optimal control problem has also been characterized. A common feature of all these advances is their reliance on polytopic techniques. Knowledge of robust controllability sets is required in model predictive control of constrained dynamic systems; these sets are polytopes when the system being controlled is linear and the constraints polytopic. In other problems, such as robust time-optimal control and (unconstrained) ℓ1 optimal control, the value function itself is polytopic. Partitioning of the state space into polytopes is required for the characterization of the solution of the constrained linear quadratic regulator problem, for which the value function is piecewise quadratic and the optimal control piecewise affine. It is possible that polytopic computation may become as useful a tool for the …
D. Q. Mayne
Chapter 7. An Introduction to Constrained Control
Abstract
A ubiquitous problem in control system design is that the system must operate subject to various constraints. Although the topic of constrained control has a long history in practice, there have been significant recent advances in the supporting theory. In this chapter, we give an introduction to constrained control. In particular, we describe contemporary work which shows that the constrained optimal control problem for discrete-time systems has an interesting geometric structure and a simple local solution. We also discuss issues associated with the output feedback solution to this class of problems, and the implications of these results for the closely related problem of anti-windup. As an application, we address the problem of rudder roll stabilization for ships.
Graham C. Goodwin, Tristan Pérez, José A. De Doná
Chapter 8. On Feasibility of Interplanetary Travel: the Flight from Earth to Mars and Back
Abstract
This paper deals with the major issues of round-trip Mars missions, namely: flight time, characteristic velocity, and mass ratio. Two classes of trajectories are considered: (i) slow transfer trajectories, for which the round-trip angular travel of Earth exceeds that of the spacecraft by 360 degrees; (ii) fast transfer trajectories, for which the round-trip angular travels of Earth and spacecraft are the same.
For robotic spacecraft, the best trajectory is the minimum energy trajectory, which is of type (i). For manned spacecraft, the comfort of the crew might require a trajectory of type (ii), in which a substantial decrease in flight time is achieved at the expense of a large increase in characteristic velocity and an even larger increase in mass ratio.
At this time, it appears that the best policy is to continue the exploration of Mars via robotic spacecraft, since the present state of the art is not consistent with the safe and economic exploration of Mars via manned spacecraft. Major advances have yet to be achieved in three areas: spacecraft structural factors, engine specific impulses, and life support systems.
Angelo Miele
Chapter 9. Linear Quadratic Control Revisited: A View Through Semidefinite Programming
Abstract
We present a unified approach to both deterministic and stochastic linear-quadratic (LQ) control via the duality theory of semidefinite programming (SDP). This new framework allows the control cost matrix to be singular or even indefinite (in the stochastic setting), a useful feature in applications such as the optimal portfolio selection of financial assets. We show that the complementary duality condition of the SDP is necessary and sufficient for the existence of an optimal LQ control under certain stability conditions. When complementary duality does hold, an optimal state feedback control is constructed explicitly in terms of the solution to the primal SDP. Furthermore, if strict complementarity holds, then a new optimal feedback control, which is always stabilizing, is generated via the dual SDP. On the other hand, for cases where complementary duality fails and the LQ problem has no attainable optimal solution, we develop an ε-approximation scheme that achieves asymptotic optimality.
David D. Yao, Shuzhong Zhang, Xun Yu Zhou
Chapter 10. Discrete Events: Timetables, Capacity Questions, and Planning Issues for Railway Systems
Abstract
The theory of Discrete Event Systems (DESs) is a research area of current vitality. The development of this theory is largely stimulated by discovering general principles which are (or are hoped to be) useful to a wide range of application domains. In particular, technological and/or ‘man-made’ manufacturing systems, communication networks, transportation systems, and logistic systems all fall within the class of DESs. One of the key features that characterize these systems is that their dynamics are event-driven as opposed to time-driven, i.e., the behavior of a DES is governed only by occurrences of different types of events over time rather than by ticks of a clock.
Geert Jan Olsder, Antoine F. de Kort
Chapter 11. A Sensitivity View of Markov Decision Processes and Reinforcement Learning
Abstract
The goals of perturbation analysis (PA), Markov decision processes (MDPs), and reinforcement learning (RL) are common: to make decisions that improve system performance based on information obtained by analyzing the current system behavior. In this paper, we study the relations among these closely related fields. We show that MDP solutions can be derived naturally from the performance sensitivity analysis provided by PA. The performance potential plays an important role in both PA and MDPs; it also offers a clear intuitive interpretation for many results. Reinforcement learning, TD(λ), neuro-dynamic programming, etc., are efficient ways of estimating the performance potentials and related quantities from sample paths. This new view of PA, MDPs, and RL leads to the gradient-based policy iteration method, which can be applied to some nonstandard optimization problems such as those with correlated actions. Sample-path-based approaches are also discussed.
Xi-Ren Cao
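The performance potential at the heart of this view can be computed directly for a small ergodic Markov chain from the Poisson equation (I - P)g = f - ηe. A minimal sketch follows, assuming a hypothetical three-state chain with transition matrix P, reward vector f, and the common normalization (I - P + e pi^T)g = f.

```python
import numpy as np

# Hypothetical ergodic chain: transition matrix P and per-state reward f.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.3, 0.3, 0.4]])
f = np.array([1.0, 2.0, 0.5])

# Stationary distribution pi: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

eta = pi @ f  # long-run average reward

# Performance potentials g under the normalization (I - P + e pi^T) g = f.
e = np.ones(3)
g = np.linalg.solve(np.eye(3) - P + np.outer(e, pi), f)

# g then satisfies the Poisson equation (I - P) g = f - eta * e.
print(np.allclose((np.eye(3) - P) @ g, f - eta * e))  # → True
```

In policy iteration, comparing actions through these potentials rather than through a full value-function recomputation is what connects PA's gradient estimates to MDP improvement steps; TD-style learning estimates g from a single sample path instead of from an explicit P.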
Chapter 12. Optimal Control for Discrete Event and Hybrid Systems
Abstract
As time-driven and event-driven systems are rapidly coming together, the field of optimal control is presented with an opportunity to expand its horizons to these new “hybrid” dynamic systems. In this paper, we consider a general optimal control problem formulation for such systems and describe a modeling framework allowing us to decompose the problem into lower and higher-level components. We then show how to apply this setting to a class of switched linear systems with a simple event-driven switching process, in which case explicit solutions may often be obtained. For a different class of problems, where the complexity lies in the nondifferentiable nature of event-driven dynamics, we show that a different type of decomposition still allows us to obtain explicit solutions for a class of such problems. These two classes of problems illustrate the differences between various sources of complexity that one needs to confront in tackling optimal control problems for discrete-event and hybrid systems.
Christos G. Cassandras, Kagan Gokbayrak
Metadata

Title: Modeling, Control and Optimization of Complex Systems
Editors: Weibo Gong, Leyuan Shi
Copyright Year: 2003
Publisher: Springer US
Electronic ISBN: 978-1-4615-1139-7
Print ISBN: 978-1-4613-5411-6
DOI: https://doi.org/10.1007/978-1-4615-1139-7