
2019 | Book

Nonlinear Optimization

Authors: Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez

Publisher: Springer International Publishing

Book series: Springer Undergraduate Texts in Mathematics and Technology


About this book

This textbook on nonlinear optimization focuses on model building, real-world problems, and applications of optimization models to the natural and social sciences. Organized into two parts, the book may be used as a primary text for courses on convex optimization and non-convex optimization. Definitions, proofs, and numerical methods are well illustrated, and all chapters contain compelling exercises. The exercises emphasize fundamental theoretical results on optimality and duality theorems, numerical methods with or without constraints, and derivative-free optimization. Selected solutions are given. Applications of theoretical results and numerical methods are highlighted to help students comprehend the methods and techniques.

Table of Contents

Frontmatter
Chapter 1. Preliminaries
Abstract
This chapter starts with a brief historical overview of optimization, followed by four subsections devoted to the formulation and reformulation of optimization problems. The subsequent two sections revisit basic tools of mathematical analysis and matrix analysis that will be used frequently throughout the book. Next, we revisit the local optimality conditions for functions of several variables (also called multivariate functions) at interior points of their domains, which are usually studied in courses on differential calculus. Then we introduce the concept of a coercive function, which is used to prove the existence of a global minimum of continuous functions on unbounded domains.
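The existence argument via coercivity can be illustrated numerically. A minimal sketch, with an assumed function and search box that are not taken from the book:

```python
# Illustration with assumed data (not from the book): a coercive function,
# one with f(x) -> +infinity as ||x|| -> infinity, attains a global minimum
# on any nonempty closed, possibly unbounded, domain. For the coercive
# f(x, y) = (x - 1)**2 + y**2 - 1 on R^2, a coarse grid search over a large
# enough box already locates the global minimizer.

def f(x, y):
    return (x - 1.0) ** 2 + y ** 2 - 1.0

# Every value of f outside the box [-3, 3]^2 exceeds the minimum found
# inside it, so restricting the search to the box loses nothing.
step = 0.01
best = min(
    ((f(i * step, j * step), i * step, j * step)
     for i in range(-300, 301)
     for j in range(-300, 301)),
    key=lambda v: v[0],
)
print(best)  # (-1.0, 1.0, 0.0): minimum value -1 attained at (1, 0)
```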
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez
Correction to: Nonlinear Optimization
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez

Analytical Optimization

Frontmatter
Chapter 2. Convexity
Abstract
This chapter starts with the study of basic properties of convex sets, including the separation and supporting hyperplane theorems. Next, we introduce a class of functions that is very useful in optimization, the convex functions, for which every local minimum is also a global minimum. We begin with functions of one variable and continue with functions of several variables (or multivariate functions). The chapter finishes with the study of two particular classes of convex functions, the strictly convex and the strongly convex functions, which are introduced in order to ensure the existence and uniqueness of a global minimum. Taking into account the instrumental character of this chapter, we have preferred to sacrifice generality for simplicity in the presentation of some results; compare, e.g., Propositions 2.33, 2.37, and 2.60 below with [80, Theorem 23.4], [80, Theorem 24.7], and [25, Theorem 2.8], respectively.
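The defining inequality of convexity lends itself to a cheap numerical sanity check. The helper below, including its name and parameters, is a hypothetical illustration rather than anything from the book:

```python
import random

# Hypothetical sanity check (an illustration, not from the book): f is convex
# iff f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) for all x, y and t in [0, 1].
# Randomly sampling the inequality cannot prove convexity, but it quickly
# exposes non-convex functions before results such as "every local minimum
# of a convex function is global" are invoked.

def is_probably_convex(f, lo=-10.0, hi=10.0, trials=10_000, tol=1e-9):
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(trials):
        x, y = rng.uniform(lo, hi), rng.uniform(lo, hi)
        t = rng.uniform(0.0, 1.0)
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
            return False  # the convexity inequality fails at (x, y, t)
    return True

print(is_probably_convex(lambda x: x * x))   # True: x**2 is convex
print(is_probably_convex(lambda x: x ** 3))  # False: x**3 is not convex on R
```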
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez
Chapter 3. Unconstrained Optimization
Abstract
This chapter studies a collection of optimization problems without functional constraints, that is, problems of the form
$$\begin{array}{lll} P: & \text{Min} & f\left( x\right) \\ & \text{s.t.} & x\in C, \end{array}$$
where \(\emptyset \ne C\subset \mathbb {R}^{n}\) represents a given constraint set (typically \(\mathbb {R}^{n}\) or \(\mathbb {R}_{++}^{n}\)) and \(f:C\rightarrow \mathbb {R}\) is the objective function. Unconstrained optimization problems have been widely used in astronomy and engineering. On the one hand, data fitting by least squares was a method developed by astronomers such as Laplace and Gauss, in the second half of the eighteenth century, using unconstrained quadratic optimization to obtain an accurate description of the behavior of celestial bodies and thereby facilitate navigating the Earth’s oceans. On the other hand, at the beginning of the twentieth century, the head of the technical department of the Danish branch of the International Bell Telephone Company, an engineer and amateur mathematician called Johan Jensen, proved an inequality covering several classical ones as special cases, which makes it possible to rigorously solve isoperimetric and design problems even though the constraint set is open. Jensen’s paper “Sur les fonctions convexes et les inégalités entre les valeurs moyennes” [Acta Mathematica 30 (1906) 175–193] is generally considered the inception of convex analysis.
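The least-squares idea mentioned above can be sketched in a few lines; the data and the small normal-equations solve are illustrative assumptions, not material from the book:

```python
# Least-squares sketch (assumed data, not the book's material): fit a line
# y ≈ a*x + b by solving the 2x2 normal equations, i.e. the unconstrained
# quadratic optimization problem that Gauss and Laplace pioneered in astronomy.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]  # noisy samples of y = 2*x + 1

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations for min_{a,b} sum((a*x + b - y)**2):
#   [sxx sx] [a]   [sxy]
#   [sx   n] [b] = [sy ]
det = sxx * n - sx * sx
a = (sxy * n - sx * sy) / det
b = (sxx * sy - sx * sxy) / det
print(a, b)  # a ≈ 1.99, b ≈ 1.04, close to the true slope 2 and intercept 1
```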
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez
Chapter 4. Convex Optimization
Abstract
Convex optimization deals with problems of the form
$$\begin{array}{lll} P: & \text{Min} & f\left( x\right) \\ & \text{s.t.} & x\in F, \end{array}$$
where \(\emptyset \ne F\subset \mathbb {R}^{n}\) is a convex set and \(f:F\rightarrow \mathbb {R}\) is a convex function. In this chapter, we analyze four particular cases of problem P in (4.1): (a) Unconstrained convex optimization, where the constraint set F represents a given convex subset of \(\mathbb {R}^{n}\) (such as \(\mathbb {R}_{++}^{n}\)). (b) Convex optimization with linear constraints, where
$$\begin{aligned} F=\left\{ x\in \mathbb {R}^{n}:g_{i}\left( x\right) \le 0,i\in I\right\} , \end{aligned}$$
with \(I=\left\{ 1,\ldots , m\right\} \), \(m\ge 1\), where the \(g_{i}\) are affine functions for all \(i\in I\). In this case, F is a polyhedral convex set (an affine manifold in the particular case where F is the solution set of a system of linear equations, since each equation can be replaced by two inequalities).
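Convex optimization over a polyhedral set can be sketched under assumed data: for the box F = [0, 1]², the Euclidean projection is a coordinatewise clip, so a projected-gradient iteration (used here purely as an illustration, with a hypothetical objective) reaches the constrained global minimizer:

```python
# Projected-gradient sketch over a polyhedral set (illustrative assumptions):
# each iterate x_{k+1} = proj_F(x_k - t * grad f(x_k)) stays feasible because
# projection onto the box F = [0, 1]^2 is a coordinatewise clip. Minimizing
# the convex f(x, y) = (x - 2)**2 + (y + 1)**2 over F yields the constrained
# global minimizer (1, 0), since the unconstrained minimizer (2, -1) lies
# outside F.

def grad(x, y):
    return 2.0 * (x - 2.0), 2.0 * (y + 1.0)

def project(x, y):  # Euclidean projection onto [0, 1]^2
    clip = lambda t: min(max(t, 0.0), 1.0)
    return clip(x), clip(y)

x, y, step = 0.5, 0.5, 0.1
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = project(x - step * gx, y - step * gy)
print(x, y)  # converges to (1.0, 0.0)
```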
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez

Numerical Optimization

Frontmatter
Chapter 5. Unconstrained Optimization Algorithms
Abstract
In this chapter, we will present the most representative algorithms for solving an optimization problem without functional constraints, that is, the problem
$$ \begin{array}{lll} P: & \text{Min} & f(x) \\ & \text{s.t.} & x\in C, \end{array} $$
where \(\emptyset \ne C\subset \mathop {\mathrm {dom}}f\subset \mathbb {R}^{n}\). We will usually assume that f is smooth on C, i.e., that \(f\in \mathcal {C}^{1}\left( V\right)\), where V is an open set such that \(C\subset V\subset \mathop {\mathrm {dom}}f\). In fact, in many cases, to simplify, we will assume that \(C=\mathbb {R}^{n}\), which means that P is the problem (1.1) with neither functional constraints nor a constraint set. We will present conceptual algorithms that generate infinite sequences, which can be converted into implementable algorithms by adding stopping criteria, such as those based on the errors introduced in Subsection 5.2.1 or on approximate optimality conditions.
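The passage from a conceptual to an implementable algorithm can be sketched as follows; the test function, step size, and tolerance are illustrative assumptions, not the book's choices:

```python
import math

# Implementable version of a conceptual algorithm (illustrative assumptions):
# gradient descent with the stopping criterion ||grad f(x)|| <= tol turns the
# infinite sequence x_{k+1} = x_k - t * grad f(x_k) into a finite one.

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
    x = list(x0)
    for k in range(max_iter):
        g = grad(x)
        if math.sqrt(sum(gi * gi for gi in g)) <= tol:  # approximate optimality
            return x, k
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x, max_iter

# Assumed test function f(x, y) = x**2 + 2*y**2, with global minimizer (0, 0)
sol, iters = gradient_descent(lambda v: [2.0 * v[0], 4.0 * v[1]], [3.0, -2.0])
print(sol, iters)  # sol lies within 1e-8 of the origin after ~90 iterations
```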
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez
Chapter 6. Constrained Optimization
Abstract
This chapter is devoted to the numerical methods for solving the problem
$$\begin{array}{lll} P: & \text{Min} & f(x) \\ & \text{s.t.} & h_{j}\!\left( x\right) =0,\ j=1,\ldots , m, \\ & & g_{i}(x)\le 0, \ i=1,\ldots , p, \end{array}$$
and their theoretical foundations, where the constraint set C is the whole space \(\mathbb {R}^{n}\). First, in Section 6.1, the so-called penalty and barrier methods are presented. These methods are based on the idea of approximating constrained optimization problems by unconstrained ones, which can be solved by any of the methods studied in Chapter 5. Both types of methods are driven by a parameter that determines the weight assigned in each iteration to constraint satisfaction relative to minimization of the objective function. In Subsection 6.1.4, a logarithmic barrier approach to linear programming is described as an illustration of the methodology of barrier methods. The subsequent Sections 6.2–6.4, which have a more theoretical flavor, focus on the formulation of necessary and sufficient optimality conditions, of first and second order, for each of the three possible types of problems: those with equality constraints, with inequality constraints, and with both types of constraints. The conditions of Lagrange, Karush–Kuhn–Tucker, and Fritz John are, respectively, derived through a deep study of the so-called constraint qualifications.
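The penalty idea can be sketched under assumed data; the objective, constraint, and parameter schedule below are illustrative, not the book's implementation:

```python
# Quadratic-penalty sketch (illustrative assumptions, not the book's code):
# approximate min f(x) s.t. h(x) = 0 by minimizing f(x) + mu * h(x)**2 for an
# increasing penalty parameter mu, solving each unconstrained subproblem by
# plain gradient descent with a warm start from the previous solution.

def penalty_method(x, mus=(1.0, 10.0, 100.0, 1_000.0, 10_000.0)):
    # Assumed data: f(x) = x[0]**2 + x[1]**2 with constraint
    # h(x) = x[0] + x[1] - 1 = 0; the exact solution is (0.5, 0.5).
    for mu in mus:
        step = 1.0 / (2.0 + 4.0 * mu)  # stable step for this quadratic
        for _ in range(1_000):
            h = x[0] + x[1] - 1.0
            g = [2.0 * x[0] + 2.0 * mu * h, 2.0 * x[1] + 2.0 * mu * h]
            x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

sol = penalty_method([0.0, 0.0])
print(sol)  # close to [0.5, 0.5]; the constraint violation shrinks as mu grows
```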
Francisco J. Aragón, Miguel A. Goberna, Marco A. López, Margarita M. L. Rodríguez
Backmatter
Metadata
Title
Nonlinear Optimization
Authors
Francisco J. Aragón
Miguel A. Goberna
Marco A. López
Margarita M. L. Rodríguez
Copyright year
2019
Electronic ISBN
978-3-030-11184-7
Print ISBN
978-3-030-11183-0
DOI
https://doi.org/10.1007/978-3-030-11184-7
