2000 | Book

Practical Optimization Methods

With Mathematica® Applications

Written by: M. Asghar Bhatti

Publisher: Springer New York

About this book

The goal of this book is to present basic optimization theory and modern computational algorithms in a concise manner. The book is suitable for undergraduate and graduate students in all branches of engineering, operations research, and management information systems. It should also be useful for practitioners who are interested in learning optimization and using these techniques on their own.

Most available books in the field tend to be either too theoretical or to present computational algorithms in a cookbook style. An approach that falls somewhere in between these two extremes is adopted in this book. Theory is presented in an informal style to make sense to most undergraduate and graduate students in engineering and business. Computational algorithms are also developed in an informal style by appealing to readers' intuition rather than mathematical rigor.

The available, computationally oriented books generally present algorithms alone and expect readers to perform computations by hand or to implement these algorithms by themselves. This obviously is unrealistic for a usual introductory optimization course in which a wide variety of optimization algorithms is discussed. There are some books that present programs written in traditional computer languages such as Basic, FORTRAN, or Pascal. These programs help with computations but are of limited value in developing an understanding of the algorithms because very little information about the intermediate steps is presented.

Table of contents

Frontmatter
Chapter One. Optimization Problem Formulation
Abstract
Optimization problems arise naturally in many different disciplines. A structural engineer designing a multistory building must choose materials and proportions for different structural components in the building in order to have a safe structure that is as economical as possible. A portfolio manager for a large mutual fund company must choose investments that generate the largest possible rate of return for its investors while keeping the risk of major losses to acceptably low levels. A plant manager in a manufacturing facility must schedule the plant operations such that the plant produces products that maximize the company's revenues while meeting customer demands for different products and staying within the available resource limitations. A scientist in a research laboratory may be interested in finding a mathematical function that best describes an observed physical phenomenon.
M. Asghar Bhatti
Chapter Two. Graphical Optimization
Abstract
Graphical optimization is a simple method for solving optimization problems involving one or two variables. For problems involving only one optimization variable, the minimum (or maximum) can be read simply from a graph of the objective function. For problems with two optimization variables, it is possible to obtain a solution by drawing contours of constraint functions and the objective function. The procedure is discussed in detail in the first section. After developing the procedure in detail with hand calculations, a Mathematica function called GraphicalSolution is presented that automates the process of generating the complete contour plots. The second section presents solutions of several optimization problems using this function.
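The one-variable case described above amounts to reading the lowest point off a plot of the objective. A minimal illustrative sketch in Python (my own example and function names, not the book's Mathematica GraphicalSolution) approximates this by sampling the objective on a fine grid:

```python
# Hypothetical sketch: approximate "reading the minimum off the graph"
# for a one-variable problem by sampling the objective on a dense grid.
def graphical_min(f, lo, hi, samples=10001):
    """Approximate the minimizer of f on [lo, hi] by grid sampling."""
    step = (hi - lo) / (samples - 1)
    xs = [lo + i * step for i in range(samples)]
    return min(xs, key=f)

# Example: f(x) = (x - 2)^2 + 1 has its minimum at x = 2.
x_star = graphical_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_star, 3))  # → 2.0
```

The grid resolution plays the same role as the drawing resolution of a plot: the answer is only as accurate as the spacing between sampled points.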
M. Asghar Bhatti
Chapter Three. Mathematical Preliminaries
Abstract
This chapter presents some of the basic mathematical concepts that are used frequently in the later chapters. A review of matrix notation and some of the basic concepts from linear algebra are presented in the first section. Taylor series approximation, which plays an important role in developing various optimization techniques, is presented in section 2. Definitions of the gradient vector and the Hessian matrix of functions of n variables are introduced in this section also. As an application of the Taylor series, and also since nonlinear equations are encountered frequently in solving optimization problems, the Newton-Raphson method for the numerical solution of nonlinear equations is presented in the third section. A discussion of quadratic functions and convex functions and sets is included in the last two sections.
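The Newton-Raphson method mentioned above follows directly from the first-order Taylor approximation: each step linearizes f at the current point and solves the linearized equation for the next iterate. A small illustrative Python sketch for a single equation f(x) = 0 (the example problem is mine, not the book's):

```python
# Newton-Raphson iteration for a single nonlinear equation f(x) = 0,
# derived from the first-order Taylor expansion of f about x_k:
#   x_{k+1} = x_k - f(x_k) / f'(x_k)
def newton_raphson(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # converged: residual is essentially zero
            break
        x -= fx / fprime(x)
    return x

# Example: solve x^2 - 2 = 0 from x0 = 1; the root is sqrt(2).
root = newton_raphson(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(round(root, 6))  # → 1.414214
```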
M. Asghar Bhatti
Chapter Four. Optimality Conditions
Abstract
This chapter deals with mathematical conditions that must be satisfied by the solution of an optimization problem. A thorough understanding of the concepts presented in this chapter is a prerequisite for understanding the material presented in later chapters. The subject matter of this chapter is quite theoretical and requires use of sophisticated mathematical tools for rigorous treatment. Since the book is not intended to be a theoretical text on mathematical optimization, this chapter is kept simple by appealing to intuition and avoiding precise mathematical statements. Thus, it is implicit in the presentation that all functions are well behaved and have the necessary continuity and differentiability properties. The book by Peressini, Sullivan and Uhl, Jr. [1988] is recommended for those interested in a mathematically rigorous, yet very readable, treatment of most of the concepts presented in this chapter. More details can also be found in Beveridge, Gordon, and Schechter [1970], Jahn [1996], Jeter [1986], McAloon and Tretkoff [1996], Pierre and Lowe [1975], and Shor [1985].
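For the unconstrained case, the conditions this chapter develops can be checked numerically in the same intuitive spirit: at a minimum the gradient vanishes and the Hessian is positive definite. A hedged Python illustration (my example function, using finite differences rather than any routine from the book):

```python
# Numerically verify the first- and second-order optimality conditions
# for f(x, y) = x^2 + 2y^2 at the candidate point (0, 0).
def f(x, y):
    return x * x + 2.0 * y * y

def grad(f, x, y, h=1e-5):
    """Central-difference gradient."""
    return [(f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h)]

def hessian(f, x, y, h=1e-4):
    """Central-difference second derivatives of a 2-variable function."""
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h ** 2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h ** 2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h ** 2)
    return fxx, fxy, fyy

g = grad(f, 0.0, 0.0)
fxx, fxy, fyy = hessian(f, 0.0, 0.0)
stationary = all(abs(gi) < 1e-6 for gi in g)      # first-order condition
pos_def = fxx > 0 and fxx * fyy - fxy ** 2 > 0    # leading principal minors
print(stationary and pos_def)  # → True
```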
M. Asghar Bhatti
Chapter Five. Unconstrained Problems
Abstract
As seen in Chapter 4, it is possible to solve optimization problems by directly using the optimality conditions. However, setting up and solving the resulting nonlinear system of equations becomes very difficult as the problem size increases. Furthermore, at least for constrained problems, checking the second-order sufficient conditions is usually very difficult. In such cases, one must find all stationary points in order to be absolutely sure that a minimum has been found. Otherwise instead of a minimum point, one could actually end up with a maximum point. Clearly, finding all possible solutions for large systems of nonlinear equations is a daunting task, if not impossible.
M. Asghar Bhatti
Chapter Six. Linear Programming
Abstract
When the objective function and all constraints are linear functions of optimization variables, the problem is known as a linear programming (LP) problem. A large number of engineering and business applications have been successfully formulated and solved as LP problems. LP problems also arise during the solution of nonlinear problems as a result of linearizing functions around a given point.
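A property that LP algorithms exploit is that an optimum, when one exists, occurs at a vertex of the feasible region. For a tiny two-variable LP this can be illustrated by brute-force vertex enumeration; the example problem below is my own, not from the book, and the approach is purely didactic (real solvers never enumerate vertices):

```python
# Maximize 3x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0,
# by intersecting constraint boundaries and keeping feasible vertices.
from itertools import combinations

# Each constraint written as a*x + b*y <= c.
cons = [(1, 1, 4), (1, 0, 3), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two constraint boundary lines, or None."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel boundaries
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # → (3.0, 1.0)
```

The simplex method of this chapter visits vertices far more selectively, moving only to adjacent vertices that improve the objective.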
M. Asghar Bhatti
Chapter Seven. Interior Point Methods
Abstract
The simplex method starts from a basic feasible solution and moves along the boundary of the feasible region until an optimum is reached. At each step, the algorithm brings only one new variable into the basic set, regardless of the total number of variables. Thus, for problems with a large number of variables, the method may take many steps before terminating. In fact, relatively simple examples exist in which the simplex method visits all vertices of the feasible region before finding the optimum.
M. Asghar Bhatti
Chapter Eight. Quadratic Programming
Abstract
When the objective function is a quadratic function of the optimization variables and all constraints are linear, the problem is called a quadratic programming (QP) problem. Several important practical problems may directly be formulated as QP. The portfolio management problem, presented in Chapter 1, is such an example. QP problems also arise as a result of approximating more general nonlinear problems by linear and quadratic functions.
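One reason QP problems are tractable is that, with a quadratic objective and linear equality constraints, the Lagrangian optimality conditions form a linear system. A small illustrative Python sketch (my own example problem, solved with a hand-rolled 3x3 Gaussian elimination rather than any routine from the book):

```python
# Minimize x^2 + y^2 subject to x + y = 1.
# Lagrangian conditions: 2x + lam = 0, 2y + lam = 0, x + y = 1,
# a linear system in (x, y, lam).
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            M[i] = [mij - f * mkj for mij, mkj in zip(M[i], M[k])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

A = [[2.0, 0.0, 1.0],   # stationarity in x: 2x + lam = 0
     [0.0, 2.0, 1.0],   # stationarity in y: 2y + lam = 0
     [1.0, 1.0, 0.0]]   # feasibility: x + y = 1
x, y, lam = solve3(A, [0.0, 0.0, 1.0])
print(round(x, 6), round(y, 6))  # → 0.5 0.5
```

Inequality-constrained QPs add the complication of deciding which constraints are active, but each candidate active set again leads to a linear system of this form.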
M. Asghar Bhatti
Chapter Nine. Constrained Nonlinear Problems
Abstract
Numerical methods for solving general nonlinear constrained optimization problems are discussed in this chapter. A large number of methods and their variations are available in the literature for solving these problems. As is frequently the case with nonlinear problems, there is no single method that is clearly better than the others. Each method has its own strengths and weaknesses. The quest for a general method that works effectively for all types of problems continues. Most current journals and conferences on optimization contain new methods or refinements of existing methods for solving constrained nonlinear problems. A thorough review of all these developments is beyond the scope of this chapter. Instead, the main purpose of this chapter is to present the development of two methods that are generally considered among the best in their class, the ALPF (Augmented Lagrangian Penalty Function) method and the SQP (Sequential Quadratic Programming) method. For additional details refer to Fiacco [1983], Fiacco and McCormick [1968], Fletcher [1987], Gill, Murray, and Wright [1991], Gomez and Hennart [1994], McCormick [1983], Scales [1985], and Shapiro [1979].
M. Asghar Bhatti
Backmatter
Metadata
Title
Practical Optimization Methods
Written by
M. Asghar Bhatti
Copyright year
2000
Publisher
Springer New York
Electronic ISBN
978-1-4612-0501-2
Print ISBN
978-1-4612-6791-1
DOI
https://doi.org/10.1007/978-1-4612-0501-2