
1997 | Book

Nondifferentiable and Two-Level Mathematical Programming

Authors: Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard

Publisher: Springer US


About this book

The analysis and design of engineering and industrial systems have come to rely heavily on the use of optimization techniques. The theory developed over the last 40 years, coupled with an increasing number of powerful computational procedures, has made it possible to routinely solve problems arising in such diverse fields as aircraft design, material flow, curve fitting, capital expansion, and oil refining, to name just a few. Mathematical programming plays a central role in each of these areas and can be considered the primary tool for systems optimization. The types of problems that can be solved, however, have been limited by the difficulty of handling functions that are not everywhere differentiable. To deal with real applications, it is often necessary to be able to optimize functions that, while continuous, are not differentiable in the classical sense.

As the title of the book indicates, our chief concern is with (i) nondifferentiable mathematical programs, and (ii) two-level optimization problems. In the first half of the book, we study basic theory for general smooth and nonsmooth functions of many variables. After providing some background, we extend traditional (differentiable) nonlinear programming to the nondifferentiable case. The term used for the resultant problem is nondifferentiable mathematical programming. The major focus is on the derivation of optimality conditions for general nondifferentiable nonlinear programs. We introduce the concept of the generalized gradient and derive Kuhn-Tucker-type optimality conditions for the corresponding formulations.

Table of contents

Frontmatter
1. Introduction
Abstract
When attempting to optimize the performance or design of a system, it is often possible to formulate the underlying problem as the minimization or maximization of a specific objective function subject to a variety of physical, technical, and operational constraints. Typical examples include the minimization of transportation costs subject to capacity restrictions on delivery vehicles and time restrictions on customer visits, the minimization of the sum of the absolute errors between a given data set and an approximating function, and the maximization of expected returns from an investment portfolio subject to acceptable risk levels and cash flow requirements. In each of these instances, mathematical programming provides a general framework for modeling the problem and organizing the data. With the growth of theory and the expansion of applications it is natural to partition the field according to the types of functions used in the models and the nature of the decision variables — discrete, continuous, or parametric. The discipline known as nonlinear programming plays a fundamental role in almost all areas of optimization and has proven its worth in the design, planning, and control of many different types of systems.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
2. Mathematical Preliminaries
Abstract
This subsection is devoted to introducing notation and definitions that are used throughout the book. Constants, variables and functions are boldfaced to show that they are vectors or vector-valued. Vectors are column oriented unless otherwise specified. For two vectors
$$ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \qquad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} $$
the inner product, \( \sum_{i=1}^{n} x_i y_i \), of x and y is denoted by \( x^T y \). The superscript T stands for transposition. When x is a row vector, we write the inner product as xy.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
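As a small illustration of the conventions above, the following NumPy snippet (an editorial sketch with made-up numbers, not an example from the book) computes the inner product of two column vectors.

```python
import numpy as np

# Hypothetical example vectors (not from the book).
x = np.array([1.0, 2.0, 3.0])   # x = (x_1, ..., x_n)^T, column oriented by convention
y = np.array([4.0, 0.0, -1.0])  # y = (y_1, ..., y_n)^T

# Inner product x^T y = sum_{i=1}^n x_i * y_i
inner = x @ y
print(inner)  # 1*4 + 2*0 + 3*(-1) = 1.0
```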
3. Differentiable Nonlinear Programming
Abstract
In this chapter we deal with the following basic optimization problem: among the n-tuples \( (x_1, \ldots, x_n) \) that satisfy the m inequalities \( g_i(x_1, \ldots, x_n) \le 0 \), i = 1,…, m, and the ℓ equalities \( h_i(x_1, \ldots, x_n) = 0 \), i = 1,…, ℓ, find a solution \( (x_1^*, \ldots, x_n^*) \) that minimizes the function \( f(x_1, \ldots, x_n) \).
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
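For readers who want to experiment with this problem class numerically, here is a minimal sketch using SciPy; the objective, constraints, and data are invented for illustration and are not an example from the book. Note that SciPy's "ineq" convention is fun(x) >= 0, so a constraint of the form g(x) <= 0 is passed in negated.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (not from the book): minimize f(x) = (x1 - 1)^2 + (x2 - 2)^2
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

constraints = [
    # Book convention: g(x) = x1 + x2 - 2 <= 0.  SciPy expects g(x) >= 0, so negate.
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2.0)},
    # Equality constraint h(x) = x1 - x2 = 0.
    {"type": "eq", "fun": lambda x: x[0] - x[1]},
]

result = minimize(f, x0=np.zeros(2), constraints=constraints, method="SLSQP")
print(result.x)  # approximately [1.0, 1.0]
```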
4. Nondifferentiable Nonlinear Programming
Abstract
In this chapter we shall discuss optimality conditions and solutions for nondifferentiable optimization problems — optimization problems with nondifferentiable objective and constraint functions. The Kuhn-Tucker conditions [K6] are well-known optimality conditions for nonlinear programming problems consisting of differentiable functions. Here, we shall derive Kuhn-Tucker-like conditions for two classes of nondifferentiable optimization problems: those consisting of locally Lipschitz functions and those consisting of quasidifferentiable functions.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
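To hint at how such problems are handled computationally, the sketch below applies a basic subgradient step to a simple convex but nondifferentiable function. The function, step sizes, and iteration count are my own illustrative choices; the book's formal treatment via generalized gradients is far more general.

```python
import numpy as np

# Illustrative nondifferentiable (locally Lipschitz) function: f(x) = |x1| + |x2 - 1|
def f(x):
    return abs(x[0]) + abs(x[1] - 1.0)

def subgradient(x):
    # One valid subgradient of f at x; np.sign returns 0 at a kink,
    # which is an admissible choice here since 0 lies in [-1, 1].
    return np.array([np.sign(x[0]), np.sign(x[1] - 1.0)])

x = np.array([3.0, -2.0])
for k in range(1, 201):
    x = x - (1.0 / k) * subgradient(x)   # diminishing step size 1/k

print(x)  # approaches the minimizer (0, 1)
```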
5. Linear Programming
Abstract
The simplest type of constrained optimization problem is realized when the functions f(x), g(x), and h(x) in (3.1.1) are all linear in x. The resulting formulation is known as a linear programming (LP) problem and plays a central role in virtually every branch of optimization. Many real situations can be formulated or approximated as LPs, optimal solutions are relatively easy to calculate, and computer codes for solving very large instances consisting of millions of variables and hundreds of thousands of constraints are commercially available. Another attractive feature of linear programs is that various subsidiary questions related, for example, to the sensitivity of the optimal solution to changes in the data and the inclusion of additional variables and constraints can be analyzed with little effort.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
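A tiny, made-up LP solved with SciPy's linprog (the data are illustrative only) shows the standard ingredients mentioned above: a linear objective, linear inequality constraints, and bounds on the variables.

```python
from scipy.optimize import linprog

# Illustrative LP (not from the book):
#   minimize   -x1 - 2*x2
#   subject to  x1 +   x2 <= 4
#               x1 + 3*x2 <= 6
#               x1, x2 >= 0
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0], [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # approximately [3.0, 1.0] with objective value -5.0
```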
6. Optimal-Value Functions
Abstract
In the following nine chapters we study optimization problems whose formulations contain minimization and maximization operations in their description — optimization problems with a two-level structure. In many instances, these problems include optimal-value functions that are not necessarily differentiable and hence difficult to work with. In this chapter we highlight a number of important properties of optimal-value functions that derive from results by Clarke [C9], Gauvin-Dubeau [G4], Fiacco [F3], and Hogan [H12] on differential stability analysis for nonlinear programs. These results provide the basis for several computational techniques discussed presently.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
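The phenomenon at the heart of this chapter is easy to reproduce: even a smooth objective can produce a nondifferentiable optimal-value function. The toy example below (my own construction, not from the book) tabulates \( v(t) = \min_{x \in [-1,1]} t\,x \), which equals \( -|t| \) and has a kink at t = 0.

```python
import numpy as np

# Optimal-value function v(t) = min_{x in [-1, 1]} t * x.
# The objective t*x is smooth in (x, t), yet v(t) = -|t| has a kink at t = 0.
def v(t):
    xs = np.linspace(-1.0, 1.0, 2001)   # fine grid over the feasible set
    return np.min(t * xs)

for t in (-0.5, -0.1, 0.0, 0.1, 0.5):
    print(t, v(t))   # matches -abs(t); nondifferentiable at t = 0
```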
7. Two-Level Mathematical Programming Problem
Abstract
In the latter half of this book we are concerned with theory and solution methods for various two-level optimization problems, generally called two-level mathematical programs. The purpose of this chapter is to present a unified treatment of these problems in the context of general two-level nonlinear programming. In so doing, we show how several variants fit the general model, providing a common framework for the individual problems discussed in the following chapters.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
8. Large-Scale Nonlinear Programming: Decomposition Methods
Abstract
The use of primal and dual methods is at the heart of finding solutions to large-scale nonlinear programming problems. Both are algorithms of a two-level type in which the lower-level decision makers work independently to solve the individual subproblems generated by decomposing the original (overall) problem. At the same time, the upper-level decision maker solves his coordinating problem using the results of the lower-level optimizations. These algorithms perform optimization calculations successively by an iterative exchange of information between the two levels.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
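The iterative exchange described above can be made concrete with a small dual-decomposition sketch. The coupled problem, closed-form subproblem solutions, and price-update step below are my own illustrative choices rather than an algorithm taken from this chapter: two subproblems share a single resource, and the upper level coordinates them by adjusting a price (dual variable).

```python
import numpy as np

# Illustrative coupled problem (not from the book):
#   minimize (x1 - 3)^2 + (x2 - 2)^2   subject to  x1 + x2 <= 4,  x1, x2 >= 0
# Dual decomposition: relax the coupling constraint with a price lam >= 0.
targets = np.array([3.0, 2.0])
capacity = 4.0

def solve_subproblem(a, lam):
    # Lower level: min_{x >= 0} (x - a)^2 + lam * x  has the closed form x = max(0, a - lam/2).
    return max(0.0, a - lam / 2.0)

lam = 0.0
for _ in range(200):
    x = np.array([solve_subproblem(a, lam) for a in targets])   # independent subproblems
    lam = max(0.0, lam + 0.05 * (x.sum() - capacity))           # upper-level price update

print(x, lam)   # approaches the optimum x = (2.5, 1.5) with multiplier lam = 1
```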
9. Min-Max Problem
Abstract
The min-max problem is a model for decision making under uncertainty. The aim is to minimize the function f(x, y), but the decision maker only has control of the vector \( x \in R^n \). After he selects a value for x, an “opponent” chooses a value for \( y \in R^m \), which can alternatively be viewed as a vector of disturbances. When the decision maker is risk averse and has no information about how y will be chosen, it is natural for him to assume the worst. In other words, the second decision maker is completely antagonistic and will try to maximize f(x, y) once x is fixed. The corresponding solution is called the min-max solution and is one of several conservative approaches to decision making under uncertainty. When stochastic information is available for y, other approaches might be more appropriate (e.g., see [S4, E3]).
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
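A minimal sketch of the worst-case viewpoint, with an invented cost function and a finite scenario set standing in for the opponent's possible choices of y (both are illustrative assumptions, not an example from the book):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative cost (not from the book): f(x, y) = (x - y)^2, with the opponent
# choosing y from a finite scenario set after seeing x.
scenarios = np.array([-1.0, 0.0, 2.0])

def worst_case(x):
    # max over y of f(x, y) for the fixed decision x
    return np.max((x[0] - scenarios) ** 2)

# Nelder-Mead is derivative-free, which suits the nonsmooth worst-case function.
res = minimize(worst_case, x0=np.array([0.0]), method="Nelder-Mead")
print(res.x)  # min-max solution: x = 0.5, midway between the extreme scenarios
```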
10. Satisfaction Optimization Problem
Abstract
This chapter deals with an optimization problem involving unknown parameters (uncertainty). We consider a decision problem whose objective function is minimized under the condition that a certain performance function should always be less than or equal to a prescribed permissible level (for every value of the unknown parameters). In the case that the set in which the unknown parameters must lie contains an infinite number of elements, we say that the corresponding optimization problem has an infinite number of inequality constraints and call it an infinitely constrained optimization problem.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
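A common computational device for such infinitely constrained problems is to discretize the parameter set. The toy problem below (my own construction, not from the book) enforces a performance constraint on a finite grid of parameter values and solves the resulting finitely constrained program with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

# Toy satisfaction problem (not from the book): choose x to minimize (x - 3)^2
# while the performance p(x, t) = x * t stays <= 1 for EVERY t in [0, 1].
# Discretize the unknown parameter t on a grid to obtain finitely many constraints.
t_grid = np.linspace(0.0, 1.0, 101)

constraints = [{"type": "ineq", "fun": lambda x, t=t: 1.0 - x[0] * t}  # 1 - x*t >= 0
               for t in t_grid]

res = minimize(lambda x: (x[0] - 3.0) ** 2, x0=np.array([0.0]),
               constraints=constraints, method="SLSQP")
print(res.x)  # approximately 1.0: the binding constraint occurs at t = 1
```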
11. Two-Level Design Problem (Mathematical Programming with Optimal-Value Functions)
Abstract
This chapter is concerned with a two-level design problem [S23, T1, I4] in which the central system makes a decision on parameter values to be assigned to the subsystems so as to optimize its objective, taking the values of the subsystem performance into account. In the second stage, the subsystems optimize their individual objective functions under the given parameters from the center. Such a problem is mathematically formulated as an optimization problem whose objective and constraint functions include the optimal-value functions obtained from the optimization of the subsystem performance indices.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
12. General Resource Allocation Problem for Decentralized Systems
Abstract
In a decentralized system, overall optimization is achieved from a central point of view but each subsystem is invested with considerable power. Such a system is characterized by multiple decision makers in the subsystems, each of whom can exercise strong initiatives at the local level in pursuit of its own goal.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
13. Min-Max Type Multi-Objective Programming Problem
Abstract
Let \( f_j \), j = 1, …, N, be the criterion functions that depend on the decision maker’s variable \( x \in R^n \) and the opponent’s (or disturbance) variables \( y_j \in R^{m_j} \), j = 1, …, N. In the absence of information about the opponents’ strategies, a conservative decision maker would assume that he will experience the worst possible outcome at the hands of his opponents. In such a case, the objective functions to be minimized by the decision maker are defined as follows:
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
14. Best Approximation Problem by the Chebyshev Norm
Abstract
In this chapter, we study a min-max approximation problem with satisfactory conditions as constraints. The underlying formulation contains several functions to be approximated and several error functions to be minimized. Specifically, we are concerned with the problem of realizing (or approximating) the desired characteristics of a device or system being designed. The functions to be approximated measure the desired characteristics, while the error functions are viewed as performance indices and represent the difference between the desired and attainable characteristics of the device or system.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
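Min-max (Chebyshev-norm) approximation problems can often be recast as linear programs by introducing an explicit bound on the maximum error. The sketch below fits a line to made-up data this way; the data, the linear model, and the LP reformulation are illustrative assumptions, not an example taken from the book.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data points (t_i, y_i) to be approximated by the line a + b*t.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.2, 2.9])

# Variables z = (a, b, e); minimize e subject to |a + b*t_i - y_i| <= e for every i.
c = np.array([0.0, 0.0, 1.0])
A_ub = np.vstack([
    np.column_stack([np.ones_like(t), t, -np.ones_like(t)]),     #  a + b*t_i - e <= y_i
    np.column_stack([-np.ones_like(t), -t, -np.ones_like(t)]),   # -a - b*t_i - e <= -y_i
])
b_ub = np.concatenate([y, -y])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
print(res.x)  # (a, b, maximum absolute error e)
```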
15. The Stackelberg Problem: General Case
Abstract
The Stackelberg problem is the most challenging two-level structure that we examine in this book. It has numerous interpretations, but originally it was proposed as a model for a leader-follower game in which two players try to minimize their individual objective functions F(x, y) and f(x, y), respectively, subject to a series of interdependent constraints [S28, S27]. Play is defined as sequential and the mood as noncooperative. The decision variables are partitioned between the players in such a way that neither can dominate the other. The leader goes first and, through his choice of \( x \in R^n \), is able to influence but not control the actions of the follower. This is achieved by reducing the set of feasible choices available to the latter. Subsequently, the follower reacts to the leader’s decision by choosing a \( y \in R^m \) in an effort to minimize his costs. In so doing, he indirectly affects the leader’s solution space and outcome.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
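The sequential structure just described can be illustrated by brute force: for each candidate leader decision, solve the follower's problem to optimality and then evaluate the leader's objective at the induced response. The objective functions and the grid below are invented for illustration; practical algorithms for the Stackelberg problem are the subject of this chapter and the next.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative leader/follower objectives (not from the book).
F = lambda x, y: (x - 1.0) ** 2 + (y - 1.0) ** 2   # leader's objective
f = lambda x, y: (y - x) ** 2 + y                   # follower's objective

def follower_response(x):
    # The follower reacts to the announced x by minimizing f(x, .) over [0, 2].
    return minimize_scalar(lambda y: f(x, y), bounds=(0.0, 2.0), method="bounded").x

# Brute force over a grid of leader decisions: announce x, observe the reaction,
# then pick the x whose induced outcome is best for the leader.
xs = np.linspace(0.0, 2.0, 401)
best_x = min(xs, key=lambda x: F(x, follower_response(x)))
print(best_x, follower_response(best_x))  # roughly x = 1.25, y = 0.75
```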
16. The Stackelberg Problem: Linear and Convex Case
Abstract
The vast majority of research on two-level programming has centered on the linear Stackelberg game, alternatively known as the linear bilevel programming problem (BLPP). In this chapter we present several of the most successful algorithms that have been developed for this case, and when possible, compare their performance. We begin with some basic notation and a discussion of the theoretical character of the problem.
Kiyotaka Shimizu, Yo Ishizuka, Jonathan F. Bard
Backmatter
Metadata
Title
Nondifferentiable and Two-Level Mathematical Programming
Authors
Kiyotaka Shimizu
Yo Ishizuka
Jonathan F. Bard
Copyright year
1997
Publisher
Springer US
Electronic ISBN
978-1-4615-6305-1
Print ISBN
978-1-4613-7895-2
DOI
https://doi.org/10.1007/978-1-4615-6305-1