
2020 | Book

Nonlinear Conjugate Gradient Methods for Unconstrained Optimization


About this book

Two approaches are known for solving large-scale unconstrained optimization problems: the limited-memory quasi-Newton (and truncated Newton) methods and the conjugate gradient methods. This is the first book to detail conjugate gradient methods, showing their properties and convergence characteristics as well as their performance in solving large-scale unconstrained optimization problems and applications. Comparisons to the limited-memory and truncated Newton methods are also discussed. Topics studied in detail include: linear conjugate gradient methods, standard conjugate gradient methods, acceleration of conjugate gradient methods, hybrid and parameterized methods, modifications of the standard schemes, memoryless BFGS preconditioned methods, and three-term methods. Other conjugate gradient methods, based on clustering the eigenvalues or on minimizing the condition number of the iteration matrix, are also treated. For each method, the convergence analysis, the computational performance, and comparisons with other conjugate gradient methods are given.

The theory behind the conjugate gradient algorithms presented as a methodology is developed with a clear, rigorous, and friendly exposition; readers will gain an understanding of their properties and convergence characteristics and will learn to develop and prove the convergence of their own methods. Numerous numerical studies are supplied, with comparisons and comments on the behavior of conjugate gradient algorithms for solving a collection of 800 unconstrained optimization problems of different structures and complexities, with the number of variables in the range [1000, 10000]. The book is addressed to all those interested in developing and using new advanced techniques for solving complex unconstrained optimization problems. Mathematical programming researchers, theoreticians and practitioners in operations research, practitioners in engineering and industry, as well as graduate students in mathematics and Ph.D. and master's students in mathematical programming, will find plenty of information and practical applications for solving large-scale unconstrained optimization problems and applications by conjugate gradient methods.

Table of Contents

Frontmatter
Chapter 1. Introduction: Overview of Unconstrained Optimization
Abstract
Unconstrained optimization consists of minimizing a function which depends on a number of real variables without any restrictions on the values of these variables. When the number of variables is large, this problem becomes quite challenging. The most important gradient methods for solving unconstrained optimization problems are described in this chapter. These methods are iterative.
Neculai Andrei
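As background for the gradient methods surveyed in this chapter, the generic iteration they all share can be written as follows (the notation is the standard one used throughout the book):

\[ x_{k+1} = x_k + \alpha_k d_k, \qquad g_k = \nabla f(x_k), \qquad g_k^T d_k < 0, \]

where \( d_k \) is a descent direction (\( d_k = -g_k \) for steepest descent) and the stepsize \( \alpha_k \) is computed by a line search.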
Chapter 2. Linear Conjugate Gradient Algorithm
Abstract
The linear conjugate gradient algorithm is dedicated to minimizing convex quadratic functions (or solving linear algebraic systems of equations with positive definite matrices). This algorithm was introduced by Hestenes and Stiefel (1952).
Neculai Andrei
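A minimal sketch of the Hestenes and Stiefel (1952) linear conjugate gradient algorithm described in this chapter, written in Python/NumPy for a symmetric positive definite system Ax = b; the function name, tolerance, and iteration cap below are illustrative choices, not taken from the book.

import numpy as np

def linear_cg(A, b, x0=None, tol=1e-8, max_iter=None):
    # Solve A x = b, with A symmetric positive definite, by the
    # linear conjugate gradient algorithm of Hestenes and Stiefel.
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                  # residual r_k = b - A x_k
    d = r.copy()                   # first direction is the residual
    rs = r @ r
    max_iter = n if max_iter is None else max_iter
    for _ in range(max_iter):
        if np.sqrt(rs) <= tol:     # stop when the residual is small
            break
        Ad = A @ d
        alpha = rs / (d @ Ad)      # exact minimizer of the quadratic along d
        x += alpha * d
        r -= alpha * Ad
        rs_new = r @ r
        d = r + (rs_new / rs) * d  # new direction, A-conjugate to the old ones
        rs = rs_new
    return x

For example, linear_cg(np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([1.0, 2.0])) finishes within two iterations, reflecting the fact that in exact arithmetic the method terminates in at most n steps.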
Chapter 3. General Convergence Results for Nonlinear Conjugate Gradient Methods
Abstract
This chapter presents the general convergence results for nonlinear conjugate gradient methods.
Neculai Andrei
Chapter 4. Standard Conjugate Gradient Methods
Abstract
The purpose of this chapter is to present the standard conjugate gradient algorithms, as well as their convergence properties, for solving unconstrained optimization problems.
Neculai Andrei
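For reference, the classical choices of the conjugate gradient parameter discussed in this chapter are listed below, with \( g_k = \nabla f(x_k) \), \( y_k = g_{k+1} - g_k \), and the search direction updated as \( d_{k+1} = -g_{k+1} + \beta_k d_k \), \( d_0 = -g_0 \):

\[ \beta_k^{HS} = \frac{g_{k+1}^T y_k}{d_k^T y_k}, \quad \beta_k^{FR} = \frac{\|g_{k+1}\|^2}{\|g_k\|^2}, \quad \beta_k^{PRP} = \frac{g_{k+1}^T y_k}{\|g_k\|^2}, \quad \beta_k^{CD} = -\frac{\|g_{k+1}\|^2}{d_k^T g_k}, \quad \beta_k^{LS} = -\frac{g_{k+1}^T y_k}{d_k^T g_k}, \quad \beta_k^{DY} = \frac{\|g_{k+1}\|^2}{d_k^T y_k}. \]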
Chapter 5. Acceleration of Conjugate Gradient Algorithms
Abstract
It is common knowledge that in conjugate gradient algorithms, the search directions tend to be poorly scaled and consequently the line search must perform more function evaluations in order to obtain a suitable stepsize \( \alpha_{k} \).
Neculai Andrei
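One way to compensate for poorly scaled directions, sketched here only as an illustration of the acceleration idea and not necessarily the specific scheme developed in this chapter, is to rescale the stepsize returned by the line search using one extra gradient evaluation:

\[ z = x_k + \alpha_k d_k, \qquad \eta_k = -\frac{g_k^T d_k}{(\nabla f(z) - g_k)^T d_k}, \qquad x_{k+1} = x_k + \eta_k \alpha_k d_k, \]

which is a secant estimate of the one-dimensional minimizer along \( d_k \), applied when the denominator is positive; otherwise the ordinary update \( x_{k+1} = x_k + \alpha_k d_k \) is used.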
Chapter 6. Hybrid and Parameterized Conjugate Gradient Methods
Abstract
Numerical experiments with standard conjugate gradient methods showed that the methods FR, DY, and CD have modest numerical performances, being affected by jamming, although they have strong convergence properties. On the other hand, the computational performances of HS, PRP, and LS methods are better, even if their convergence properties are weaker.
Neculai Andrei
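A representative hybrid rule, commonly attributed to Dai and Yuan and given here only to illustrate the hybridization idea (the chapter covers many more variants), combines the strong convergence of DY with the better practical behavior of HS:

\[ \beta_k^{hDY} = \max\{0, \min\{\beta_k^{HS}, \beta_k^{DY}\}\}, \]

so that the parameter is truncated at zero (a restart along the negative gradient) whenever the HS value becomes negative, and otherwise takes the smaller of the HS and DY values.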
Chapter 7. Conjugate Gradient Methods as Modifications of the Standard Schemes
Abstract
Due to their simplicity and low memory requirements, conjugate gradient methods represent an important contribution to the class of methods for solving unconstrained optimization problems. These methods have good convergence properties and their iterations do not involve any matrices, making them extremely attractive for solving large-scale problems.
Neculai Andrei
Chapter 8. Conjugate Gradient Methods Memoryless BFGS Preconditioned
Abstract
Conjugate gradient methods are widely acknowledged to be among the most efficient and robust methods for solving large-scale unconstrained nonlinear optimization problems.
Neculai Andrei
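A minimal sketch of the memoryless BFGS preconditioning idea, assuming that one BFGS update of the identity matrix is applied to the negative gradient; the Python function below is an illustrative sketch of that idea, not the specific algorithm of this chapter.

import numpy as np

def memoryless_bfgs_direction(g_new, s, y):
    # Direction d = -H g_new, where H is the BFGS update of the identity
    # matrix built from s = x_{k+1} - x_k and y = g_{k+1} - g_k; H is
    # never formed explicitly, only inner products with s and y are used.
    ys = y @ s
    if ys <= 1e-12:                       # safeguard: curvature condition fails
        return -g_new                     # fall back to steepest descent
    sg = s @ g_new
    yg = y @ g_new
    return (-g_new
            + (sg * y + yg * s) / ys
            - (1.0 + (y @ y) / ys) * (sg / ys) * s)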
Chapter 9. Three-Term Conjugate Gradient Methods
Abstract
This chapter is dedicated to presenting three-term conjugate gradient methods for solving nonlinear unconstrained optimization problems.
Neculai Andrei
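As one well-known illustration of the three-term idea (not necessarily one of the specific variants developed in this chapter), the direction proposed by Zhang, Zhou, and Li adds a correction term to the PRP direction:

\[ d_{k+1} = -g_{k+1} + \frac{g_{k+1}^T y_k}{\|g_k\|^2}\, d_k - \frac{g_{k+1}^T d_k}{\|g_k\|^2}\, y_k, \qquad y_k = g_{k+1} - g_k, \]

for which \( g_{k+1}^T d_{k+1} = -\|g_{k+1}\|^2 \) holds independently of the line search, so every direction is a sufficient descent direction.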
Chapter 10. Preconditioning of the Nonlinear Conjugate Gradient Algorithms
Abstract
Preconditioning is a technique to accelerate the conjugate gradient algorithms. In Chapter 2, the preconditioning of the linear conjugate gradient algorithm has been presented. For linear systems \( Ax = b, \) preconditioning modifies the system of equations in order to improve the eigenvalue distribution of A.
Neculai Andrei
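A minimal sketch of the preconditioned linear conjugate gradient iteration for \( Ax = b \), assuming a simple Jacobi (diagonal) preconditioner; the function name and tolerance are illustrative, and the chapter itself concerns preconditioning of the nonlinear algorithms.

import numpy as np

def preconditioned_cg(A, b, tol=1e-8, max_iter=None):
    # Preconditioned CG for A x = b with A symmetric positive definite.
    # Jacobi preconditioner M = diag(A), applied as z = M^{-1} r, which
    # aims to improve the eigenvalue distribution seen by the iteration.
    n = b.shape[0]
    m_inv = 1.0 / np.diag(A)
    x = np.zeros(n)
    r = b - A @ x
    z = m_inv * r
    d = z.copy()
    rz = r @ z
    max_iter = n if max_iter is None else max_iter
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ad = A @ d
        alpha = rz / (d @ Ad)
        x += alpha * d
        r -= alpha * Ad
        z = m_inv * r
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return x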
Chapter 11. Other Conjugate Gradient Methods
Abstract
As already seen, the conjugate gradient algorithms presented so far rely on a few principles: hybridization or modification of the standard schemes, memoryless or scaled memoryless BFGS preconditioning, or the three-term concept. The corresponding algorithms are defined by the descent condition, by the “pure” or the Dai–Liao conjugacy conditions, or by minimizing a one- or two-parameter quadratic approximation of the objective function. A number of convergence results are available, mainly based on the Zoutendijk and Nocedal conditions under the Wolfe line search (Dai, 2011). These algorithms perform well numerically and are able to solve large-scale unconstrained optimization problems and applications. However, conjugate gradient methods remain a very active area of research, and other computational schemes have been introduced to improve their numerical performance. They are too numerous to be presented in full in this study; a short description of some of them follows.
Neculai Andrei
Chapter 12. Discussions, Conclusions, and Large-Scale Optimization
Abstract
With a very simple computational scheme, a well-elaborated convergence theory, and modest computational requirements for their implementation in computer codes, conjugate gradient methods are of prime importance for solving large-scale unconstrained optimization problems and real applications.
Neculai Andrei
Backmatter
Metadata
Title
Nonlinear Conjugate Gradient Methods for Unconstrained Optimization
Author
Neculai Andrei
Copyright Year
2020
Electronic ISBN
978-3-030-42950-8
Print ISBN
978-3-030-42949-2
DOI
https://doi.org/10.1007/978-3-030-42950-8