
2014 | Book

Newton-Type Methods for Optimization and Variational Problems


About this book

This book presents a comprehensive, state-of-the-art theoretical analysis of the fundamental Newtonian and Newton-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems; in particular, the book examines the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences, as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will be useful to researchers and graduate students in the fields of optimization and variational analysis.

Table of Contents

Frontmatter
Chapter 1. Elements of Optimization Theory and Variational Analysis
Abstract
In this chapter we state the problem settings investigated in the book and discuss the theoretical background needed for the subsequent analysis of numerical methods. We emphasize that the scope of this chapter is deliberately minimalistic: we provide only the material that is directly used later on, and we prove only those facts that cannot be regarded as "standard" (i.e., whose proofs cannot be found in well-established sources). For the material that we regard as standard, we limit our presentation to statements, references, and comments.
Alexey F. Izmailov, Mikhail V. Solodov
Chapter 2. Equations and Unconstrained Optimization
Abstract
In this chapter, we start our discussion of Newton-type methods, which are based on the fundamental principle of linear/quadratic approximation of the problem data (or of some part of the problem data). The underlying idea is extremely important, as it serves as a foundation for numerous computationally efficient algorithms for optimization and variational problems. We also discuss linesearch and trust-region globalizations of local algorithms, as well as the case of semismooth problems.
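As a concrete illustration of this principle (our own minimal sketch, not taken from the book), the basic Newton method for a smooth system of equations F(x) = 0 solves the linearized equation at each iterate:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Basic Newton method for F(x) = 0: at each iterate, solve the
    linearization F(x_k) + J(x_k) d = 0 and set x_{k+1} = x_k + d."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Example: intersect the unit circle with the line x0 = x1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton(F, J, [1.0, 0.5])   # converges to (1/sqrt(2), 1/sqrt(2))
```

Near a solution with nonsingular Jacobian, this iteration converges quadratically, which is the local behavior the chapter's perturbation analysis builds on.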
Alexey F. Izmailov, Mikhail V. Solodov
Chapter 3. Variational Problems: Local Methods
Abstract
In this chapter, we present local analysis of Newton-type algorithms for variational problems, starting with the fundamental Josephy–Newton method for generalized equations. This method is an important extension of classical Newtonian techniques to more general variational problems. For example, as a specific application, the Josephy–Newton method provides a convenient tool for analyzing the sequential quadratic programming algorithm for optimization. We also discuss semismooth Newton methods for complementarity problems, and active-set methods with identifications based on error bounds.
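To illustrate the semismooth case (a minimal sketch on our own toy example, not the book's), a scalar complementarity problem 0 ≤ x ⊥ F(x) ≥ 0 can be recast as a nonsmooth equation via the standard Fischer–Burmeister function and solved by a semismooth Newton iteration:

```python
import math

def fb(a, b):
    """Fischer-Burmeister function: fb(a, b) = 0 iff a >= 0, b >= 0, a*b = 0."""
    return math.hypot(a, b) - a - b

def semismooth_newton_ncp(F, dF, x, tol=1e-12, max_iter=50):
    """Semismooth Newton method for the scalar complementarity problem
    0 <= x  perp  F(x) >= 0, via the equation Phi(x) = fb(x, F(x)) = 0."""
    for _ in range(max_iter):
        phi = fb(x, F(x))
        if abs(phi) < tol:
            break
        r = math.hypot(x, F(x))
        if r == 0.0:
            # nondifferentiable point (0, 0): pick one element
            # of the generalized derivative
            pa = pb = 1.0 / math.sqrt(2.0) - 1.0
        else:
            pa, pb = x / r - 1.0, F(x) / r - 1.0
        x -= phi / (pa + pb * dF(x))   # generalized Newton step
    return x

# Example (ours): F(x) = x - 2, whose unique NCP solution is x = 2.
x_star = semismooth_newton_ncp(lambda x: x - 2.0, lambda x: 1.0, 5.0)
```

Despite the nonsmoothness of Phi at the origin, the iteration retains fast local convergence, which is the point of the semismooth Newton theory.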
Alexey F. Izmailov, Mikhail V. Solodov
Chapter 4. Constrained Optimization: Local Methods
Abstract
This chapter is the central part of the book; it is devoted to the local convergence analysis of Newton-type methods for constrained optimization. The perturbed sequential quadratic programming (SQP) framework is introduced, which makes it possible to treat, in a unified manner, not only various modifications of SQP itself (truncated and augmented-Lagrangian-based versions, and second-order corrections) but also a number of other algorithms whose iterative subproblems are very different from those of SQP (the linearly constrained Lagrangian algorithm, sequential quadratically constrained quadratic programming, inexact restoration, and a certain interior feasible directions method).
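As a minimal sketch of the basic method being perturbed (the example problem and function names are ours, not the book's), the local SQP step for an equality-constrained problem coincides with a Newton step on the Lagrange optimality system:

```python
import numpy as np

def sqp_equality(grad_f, hess_L, h, jac_h, x, lam, iters=15):
    """Local SQP for min f(x) s.t. h(x) = 0: each iteration solves the KKT
    system of the quadratic subproblem, which is exactly a Newton step on
    the Lagrange optimality system."""
    x = np.atleast_1d(np.asarray(x, float))
    lam = np.atleast_1d(np.asarray(lam, float))
    n, m = x.size, lam.size
    for _ in range(iters):
        A = np.atleast_2d(jac_h(x))                     # m x n Jacobian of h
        K = np.block([[hess_L(x, lam), A.T],
                      [A, np.zeros((m, m))]])
        rhs = -np.concatenate([grad_f(x) + A.T @ lam,   # Lagrangian gradient
                               np.atleast_1d(h(x))])    # constraint residual
        d = np.linalg.solve(K, rhs)
        x, lam = x + d[:n], lam + d[n:]
    return x, lam

# Example (ours): min x0^2 + x1^2  s.t.  x0*x1 = 1; solution (1, 1), lam = -2.
x, lam = sqp_equality(
    grad_f=lambda x: 2 * x,
    hess_L=lambda x, lam: 2 * np.eye(2) + lam[0] * np.array([[0.0, 1.0],
                                                             [1.0, 0.0]]),
    h=lambda x: x[0] * x[1] - 1.0,
    jac_h=lambda x: np.array([x[1], x[0]]),
    x=[1.2, 0.8], lam=[-1.5],
)
```

The perturbed-SQP viewpoint of the chapter treats other algorithms as inexact versions of this iteration, with the perturbation small enough to preserve its fast local convergence.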
Alexey F. Izmailov, Mikhail V. Solodov
Chapter 5. Variational Problems: Globalization of Convergence
Abstract
Newtonian methods for variational problems discussed in Chap. 3 have attractive local convergence and rate-of-convergence properties, but they are local by nature: for guaranteed convergence, they all require a starting point close enough to a solution. Therefore, to arrive at practical algorithms based on these methods, the process of computing and automatically accepting an appropriate "starting point" must be incorporated into the overall iterative scheme. Such extensions of locally convergent methods are referred to as globalization of convergence. This chapter discusses some strategies for globalizing the convergence of Newtonian methods for variational problems.
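A minimal sketch of one such strategy in the simplest setting of a scalar equation (our own example; the book's treatment is far more general): the Newton direction is damped by Armijo backtracking on the residual merit function theta(x) = 0.5*F(x)**2:

```python
import math

def globalized_newton(F, dF, x, sigma=1e-4, beta=0.5, tol=1e-12, max_iter=200):
    """Newton's method for F(x) = 0, globalized by Armijo backtracking on
    the merit function theta(x) = 0.5*F(x)**2. Since d = -F/F' satisfies
    theta'(x)*d = -2*theta(x), the Armijo test takes the form below."""
    for _ in range(max_iter):
        Fx = F(x)
        if abs(Fx) < tol:
            break
        d = -Fx / dF(x)            # Newton direction
        theta = 0.5 * Fx * Fx
        t = 1.0
        # backtrack until sufficient decrease of the merit function
        while 0.5 * F(x + t * d) ** 2 > (1.0 - 2.0 * sigma * t) * theta:
            t *= beta
        x += t * d
    return x

# Pure Newton on atan diverges from x0 = 5; the damped version converges to 0.
root = globalized_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), 5.0)
```

Once the iterates enter a neighborhood of the solution, the unit step t = 1 is accepted and the fast local rate of the underlying Newton method is recovered.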
Alexey F. Izmailov, Mikhail V. Solodov
Chapter 6. Constrained Optimization: Globalization of Convergence
Abstract
In this chapter we discuss approaches to globalizing the fundamental sequential quadratic programming (SQP) algorithm. This can be done in a number of ways. First, candidate points can be produced either by linesearch in the computed SQP direction or by solving SQP subproblems with an additional trust-region constraint. Second, the generated candidate points can be evaluated either by computing the value of a suitable merit function or by a filter dominance criterion. The so-called elastic mode modification is also considered, in the context of the sequential quadratically constrained quadratic programming method.
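The first of these combinations, linesearch in the SQP direction evaluated by a penalty-type merit function, can be sketched as follows (our own toy example with a single equality constraint; the penalty and linesearch parameters are illustrative, not the book's):

```python
import numpy as np

def sqp_l1_linesearch(f, grad_f, hess_L, h, jac_h, x, lam,
                      c=10.0, sigma=1e-4, beta=0.5, iters=30):
    """Linesearch SQP sketch for min f(x) s.t. h(x) = 0 (scalar constraint):
    the SQP direction comes from the KKT system of the subproblem and is
    evaluated with the l1 penalty merit function phi(x) = f(x) + c*|h(x)|."""
    x, lam = np.asarray(x, float), float(lam)
    for _ in range(iters):
        g, A, hx = grad_f(x), jac_h(x), h(x)
        if np.linalg.norm(g + lam * A) + abs(hx) < 1e-12:
            break
        K = np.block([[hess_L(x, lam), A.reshape(-1, 1)],
                      [A.reshape(1, -1), np.zeros((1, 1))]])
        sol = np.linalg.solve(K, -np.concatenate([g, [hx]]))
        d, lam = sol[:-1], sol[-1]          # primal step, new multiplier
        # Armijo backtracking on the merit function; D bounds its
        # directional derivative (negative when the Hessian is positive
        # definite and c exceeds the multiplier estimate).
        phi0, D, t = f(x) + c * abs(hx), g @ d - c * abs(hx), 1.0
        while f(x + t * d) + c * abs(h(x + t * d)) > phi0 + sigma * t * D:
            t *= beta
        x = x + t * d
    return x, lam

# Example (ours): project (2, 1) onto the unit circle.
x, lam = sqp_l1_linesearch(
    f=lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2,
    grad_f=lambda x: 2 * (x - np.array([2.0, 1.0])),
    hess_L=lambda x, lam: 2 * (1 + lam) * np.eye(2),
    h=lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,
    jac_h=lambda x: 2 * x,
    x=[2.0, 0.0], lam=1.0,
)   # x -> (2, 1)/sqrt(5), lam -> sqrt(5) - 1
```

The choice c = 10 makes the penalty parameter safely larger than the multiplier here; in general it must be managed adaptively, one of the issues this chapter addresses.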
Alexey F. Izmailov, Mikhail V. Solodov
Chapter 7. Degenerate Problems with Nonisolated Solutions
Abstract
The common feature of the Newtonian methods discussed in the previous chapters is their local superlinear convergence under reasonable assumptions, which, however, always presumed that the solution in question is isolated. Sometimes, however, one has to deal with problems (or even problem classes) whose solutions are not isolated and are thus degenerate in some sense. One important example of degeneracy, to which we pay special attention, is the case of nonunique Lagrange multipliers associated with a solution of a given problem. We first demonstrate that in such cases Newton-type methods for optimization are attracted to certain special Lagrange multipliers, called critical, which violate the second-order sufficient optimality conditions; this, in fact, is the reason for the slow convergence typically observed in degenerate cases. To deal with degeneracy, we then consider special modifications of Newtonian methods, in particular stabilized methods for variational problems and stabilized sequential quadratic programming for optimization. Finally, mathematical programs with complementarity constraints, an important class of inherently degenerate problems, are discussed.
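The stabilization idea can be sketched on a toy degenerate problem (our own example; the book's parameter rules and level of generality differ): minimize x^2 subject to x^2 = 0. At the solution x* = 0 the constraint gradient vanishes, so the multiplier is nonunique and the plain SQP KKT matrix is singular there; a dual regularization tied to the KKT residual restores a nonsingular iteration:

```python
import numpy as np

def stabilized_sqp(x, lam, iters=25):
    """Stabilized SQP sketch for min x^2 s.t. x^2 = 0 (scalar, degenerate).
    The dual stabilization parameter sigma is set to the natural KKT
    residual, which keeps the iteration matrix nonsingular and avoids the
    slow convergence of the unstabilized method."""
    for _ in range(iters):
        g, A, h = 2 * x, 2 * x, x * x      # grad f, Jacobian of h, h
        H = 2 + 2 * lam                    # Hessian of the Lagrangian
        res = np.hypot(g + A * lam, h)     # natural KKT residual
        if res < 1e-14:
            break
        sigma = res
        # stabilized subproblem:  H*dx + A*lam_new             = -g
        #                         A*dx - sigma*(lam_new - lam) = -h
        K = np.array([[H, A], [A, -sigma]])
        dx, lam = np.linalg.solve(K, [-g, -h - sigma * lam])
        x += dx
    return x, lam

x_star, lam_star = stabilized_sqp(1.0, 0.0)   # x_star -> 0
```

In this sketch the primal iterates converge rapidly to 0 while the multiplier settles at a noncritical value, illustrating the behavior that the stabilized methods of this chapter are designed to guarantee.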
Alexey F. Izmailov, Mikhail V. Solodov
Backmatter
Metadata
Title
Newton-Type Methods for Optimization and Variational Problems
Written by
Alexey F. Izmailov
Mikhail V. Solodov
Copyright Year
2014
Electronic ISBN
978-3-319-04247-3
Print ISBN
978-3-319-04246-6
DOI
https://doi.org/10.1007/978-3-319-04247-3