
2016 | Book

Numerical Optimization with Computational Errors


About this Book

This book studies approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author shows that the algorithms generate a good approximate solution if the computational errors are bounded from above by a small positive constant. For a known level of computational error, the author determines what kind of approximate solution can be obtained and how many iterations are needed for it. Researchers and students interested in optimization theory and its applications will find this book instructive and informative.

The monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods, and Newton's method.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
In this book we study the behavior of algorithms for constrained convex minimization problems in a Hilbert space. Our goal is to obtain a good approximate solution of the problem in the presence of computational errors. We show that an algorithm generates a good approximate solution if its sequence of computational errors is bounded from above by a constant. In this chapter we discuss several algorithms which are studied in the book.
Alexander J. Zaslavski
Chapter 2. Subgradient Projection Algorithm
Abstract
In this chapter we study the subgradient projection algorithm for minimization of convex and nonsmooth functions and for computing the saddle points of convex–concave functions, in the presence of computational errors. We show that our algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what kind of approximate solution can be obtained and how many iterates are needed for this.
Alexander J. Zaslavski
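To make the setting of Chapter 2 concrete, the following is a minimal sketch (not taken from the book) of a projected subgradient iteration in which the computed subgradient is perturbed by an error of norm at most delta. The l1 objective, the ball constraint, the step sizes, and the error model are all illustrative assumptions.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius}
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def subgradient_projection(f, subgrad, x0, steps, n_iter, delta, rng):
    # x_{k+1} = P_C(x_k - alpha_k * (g_k + e_k)), with ||e_k|| <= delta
    x = x0
    best_x, best_val = x0, f(x0)
    for k in range(n_iter):
        g = subgrad(x)
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)   # simulated computational error of norm <= delta
        x = project_ball(x - steps[k] * (g + e))
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Illustrative nonsmooth convex objective: f(x) = ||x - a||_1
rng = np.random.default_rng(0)
a = np.array([0.3, -0.5, 0.2])
f = lambda x: np.sum(np.abs(x - a))
subgrad = lambda x: np.sign(x - a)                   # a subgradient of the l1 distance
steps = [1.0 / np.sqrt(k + 1) for k in range(200)]
x_best, f_best = subgradient_projection(f, subgrad, np.zeros(3), steps, 200, delta=1e-3, rng=rng)
print(x_best, f_best)
```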
Chapter 3. The Mirror Descent Algorithm
Abstract
In this chapter we analyze the convergence of the mirror descent algorithm in the presence of computational errors. We show that the algorithm generates a good approximate solution if computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what kind of approximate solution can be obtained and how many iterates are needed for this.
Alexander J. Zaslavski
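The mirror descent analysis of Chapter 3 can be illustrated with the classical entropic setup on the probability simplex, sketched below under purely illustrative assumptions (a linear objective, a fixed step size, and a uniformly bounded error added to each gradient evaluation).

```python
import numpy as np

def mirror_descent_simplex(grad, x0, step, n_iter, delta, rng):
    # Entropic mirror descent on the probability simplex:
    #   x_{k+1,i} proportional to x_{k,i} * exp(-step * (g_{k,i} + e_{k,i})), with |e_{k,i}| <= delta
    x = x0.copy()
    avg = np.zeros_like(x0)
    for _ in range(n_iter):
        g = grad(x)
        e = rng.uniform(-delta, delta, size=x.shape)  # bounded computational error
        w = x * np.exp(-step * (g + e))
        x = w / w.sum()                               # Bregman (entropic) projection onto the simplex
        avg += x
    return avg / n_iter                               # averaged iterate

# Illustrative convex objective on the simplex: f(x) = <c, x> with a fixed linear cost c
rng = np.random.default_rng(0)
c = np.array([0.9, 0.1, 0.5, 0.3])
grad = lambda x: c
x0 = np.full(4, 0.25)
x_hat = mirror_descent_simplex(grad, x0, step=0.5, n_iter=300, delta=1e-3, rng=rng)
print(x_hat)
```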
Chapter 4. Gradient Algorithm with a Smooth Objective Function
Abstract
In this chapter we analyze the convergence of a projected gradient algorithm with a smooth objective function in the presence of computational errors. We show that the algorithm generates a good approximate solution if computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what kind of approximate solution can be obtained and how many iterates are needed for this.
Alexander J. Zaslavski
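A minimal sketch of a projected gradient iteration with a smooth objective and a bounded gradient error, in the spirit of Chapter 4; the least-squares objective, the box constraint, the step size, and the error level are illustrative choices, not the book's examples.

```python
import numpy as np

def projected_gradient(grad, project, x0, step, n_iter, delta, rng):
    # x_{k+1} = P_C(x_k - step * (grad f(x_k) + e_k)), with ||e_k|| <= delta
    x = x0
    for _ in range(n_iter):
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)   # bounded gradient error
        x = project(x - step * (grad(x) + e))
    return x

# Illustrative smooth objective f(x) = 0.5 * ||A x - b||^2 on the box [-1, 1]^3
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
b = rng.normal(size=5)
grad = lambda x: A.T @ (A @ x - b)
project = lambda x: np.clip(x, -1.0, 1.0)            # projection onto the box
L = np.linalg.norm(A, 2) ** 2                        # Lipschitz constant of the gradient
x_approx = projected_gradient(grad, project, np.zeros(3), step=1.0 / L, n_iter=500, delta=1e-4, rng=rng)
print(x_approx)
```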
Chapter 5. An Extension of the Gradient Algorithm
Abstract
In this chapter we analyze the convergence, in the presence of computational errors, of a gradient-type algorithm introduced by Beck and Teboulle [20] for solving linear inverse problems arising in signal/image processing. We show that the algorithm generates a good approximate solution if computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what kind of approximate solution can be obtained and how many iterates are needed for this.
Alexander J. Zaslavski
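The Beck–Teboulle scheme referenced in Chapter 5 is, in its basic form, a proximal gradient (ISTA-type) iteration for l1-regularized least squares. The sketch below shows that basic form with a bounded perturbation of the gradient; the problem data and error model are illustrative assumptions rather than the book's exact scheme.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_with_errors(A, b, lam, x0, n_iter, delta, rng):
    # x_{k+1} = prox_{(lam/L) ||.||_1}( x_k - (1/L) * (A^T (A x_k - b) + e_k) ), with ||e_k|| <= delta
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the smooth part's gradient
    x = x0
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)   # bounded computational error
        x = soft_threshold(x - (g + e) / L, lam / L)
    return x

# Illustrative sparse recovery instance
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
x_true = np.zeros(10)
x_true[[1, 6]] = [1.0, -2.0]
b = A @ x_true
x_hat = ista_with_errors(A, b, lam=0.1, x0=np.zeros(10), n_iter=300, delta=1e-4, rng=rng)
print(np.round(x_hat, 2))
```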
Chapter 6. Weiszfeld’s Method
Abstract
In this chapter we analyze the behavior of Weiszfeld's method for solving the Fermat–Weber location problem. We show that the algorithm generates a good approximate solution if computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what kind of approximate solution can be obtained and how many iterates are needed for this.
Alexander J. Zaslavski
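A compact sketch of the Weiszfeld iteration for the Fermat–Weber problem of Chapter 6, with a simulated bounded error added to each update; the anchor points, iteration count, and error level are illustrative assumptions.

```python
import numpy as np

def weiszfeld(anchors, x0, n_iter, delta, rng, eps=1e-12):
    # Weiszfeld iteration for min_x sum_i ||x - a_i||, with a bounded error added to each update
    x = x0
    for _ in range(n_iter):
        d = np.linalg.norm(anchors - x, axis=1)
        d = np.maximum(d, eps)                       # guard against division by zero at an anchor
        w = 1.0 / d
        x_new = (w[:, None] * anchors).sum(axis=0) / w.sum()
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)   # computational error of norm <= delta
        x = x_new + e
    return x

# Illustrative Fermat–Weber instance with four anchor points in the plane
rng = np.random.default_rng(0)
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0], [5.0, 5.0]])
x_hat = weiszfeld(anchors, x0=anchors.mean(axis=0), n_iter=100, delta=1e-5, rng=rng)
print(x_hat, sum(np.linalg.norm(x_hat - a) for a in anchors))
```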
Chapter 7. The Extragradient Method for Convex Optimization
Abstract
In this chapter we study convergence of the extragradient method for constrained convex minimization problems in a Hilbert space. Our goal is to obtain an ε-approximate solution of the problem in the presence of computational errors, where ε is a given positive number. We show that the extragradient method generates a good approximate solution, if the sequence of computational errors is bounded from above by a constant.
Alexander J. Zaslavski
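The two-step structure of the extragradient method studied in Chapter 7 can be sketched as follows, assuming for illustration a quadratic objective, the unit ball as the feasible set, and bounded perturbations of the gradient evaluations.

```python
import numpy as np

def extragradient(grad, project, x0, step, n_iter, delta, rng):
    # Extragradient step with perturbed gradients:
    #   y_k     = P_C(x_k - step * (grad f(x_k) + e_k))
    #   x_{k+1} = P_C(x_k - step * (grad f(y_k) + e'_k)),  with ||e_k||, ||e'_k|| <= delta
    def noisy_grad(z):
        e = rng.normal(size=z.shape)
        return grad(z) + e * (delta / max(np.linalg.norm(e), 1e-12))
    x = x0
    for _ in range(n_iter):
        y = project(x - step * noisy_grad(x))
        x = project(x - step * noisy_grad(y))
    return x

# Illustrative smooth convex quadratic over the unit ball
rng = np.random.default_rng(0)
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 2.0])
grad = lambda x: Q @ x + c
project = lambda x: x / max(np.linalg.norm(x), 1.0)   # projection onto the unit ball
x_hat = extragradient(grad, project, np.zeros(2), step=0.2, n_iter=400, delta=1e-4, rng=rng)
print(x_hat)
```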
Chapter 8. A Projected Subgradient Method for Nonsmooth Problems
Abstract
In this chapter we study the convergence of the projected subgradient method for a class of constrained optimization problems in a Hilbert space. For this class of problems, an objective function is assumed to be convex but a set of admissible points is not necessarily convex. Our goal is to obtain an ε-approximate solution in the presence of computational errors, where ε is a given positive number.
Alexander J. Zaslavski
Chapter 9. Proximal Point Method in Hilbert Spaces
Abstract
In this chapter we study the convergence of a proximal point method in the presence of computational errors. Most results known in the literature establish convergence of proximal point methods when the computational errors are summable. In this chapter the convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by some constant.
Alexander J. Zaslavski
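A minimal sketch of an inexact proximal point iteration of the kind analyzed in Chapter 9: the errors added to each proximal step are bounded but not summable. The quadratic objective, whose proximal map has a closed form, is an illustrative assumption.

```python
import numpy as np

def proximal_point(prox, x0, lam, n_iter, delta, rng):
    # x_{k+1} = prox_{lam * f}(x_k) + e_k, with ||e_k|| <= delta (bounded, not summable)
    x = x0
    for _ in range(n_iter):
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)
        x = prox(x, lam) + e
    return x

# Illustrative convex quadratic f(x) = 0.5 x^T Q x + c^T x; its proximal map is linear:
#   prox_{lam f}(v) solves (Q + I/lam) y = v/lam - c
rng = np.random.default_rng(0)
Q = np.array([[2.0, 0.3], [0.3, 1.0]])
c = np.array([1.0, -2.0])
prox = lambda v, lam: np.linalg.solve(Q + np.eye(2) / lam, v / lam - c)
x_hat = proximal_point(prox, np.zeros(2), lam=1.0, n_iter=200, delta=1e-4, rng=rng)
print(x_hat, np.linalg.solve(Q, -c))   # compare with the exact minimizer -Q^{-1} c
```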
Chapter 10. Proximal Point Methods in Metric Spaces
Abstract
In this chapter we study the local convergence of a proximal point method in a metric space in the presence of computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by some constant. The principal assumption is a local error bound condition, introduced by Hager and Zhang [55], which relates the growth of the objective function to the distance to the set of minimizers.
Alexander J. Zaslavski
Chapter 11. Maximal Monotone Operators and the Proximal Point Algorithm
Abstract
In a finite-dimensional Euclidean space, we study the convergence of a proximal point method to a solution of the inclusion induced by a maximal monotone operator, in the presence of computational errors. The convergence of the method is established for nonsummable computational errors. We show that the proximal point method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
Alexander J. Zaslavski
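Chapter 11's setting replaces the objective function by a maximal monotone operator T and iterates the resolvent (I + λT)^(-1). The sketch below uses an affine monotone operator on R^2 so that the resolvent reduces to a linear solve; the operator, the parameter λ, and the error model are illustrative assumptions.

```python
import numpy as np

def proximal_point_operator(resolvent, x0, n_iter, delta, rng):
    # x_{k+1} = J_{lam T}(x_k) + e_k, where J_{lam T} = (I + lam T)^{-1} is the resolvent
    # of a maximal monotone operator T and ||e_k|| <= delta
    x = x0
    for _ in range(n_iter):
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)
        x = resolvent(x) + e
    return x

# Illustrative maximal monotone operator on R^2: T(x) = A x - b, where the symmetric
# part of A is positive definite. The inclusion 0 in T(x) reads A x = b, and the
# resolvent is x -> (I + lam A)^{-1}(x + lam b).
rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [-1.0, 2.0]])   # monotone: symmetric part is positive definite
b = np.array([1.0, 0.0])
lam = 1.0
resolvent = lambda x: np.linalg.solve(np.eye(2) + lam * A, x + lam * b)
x_hat = proximal_point_operator(resolvent, np.zeros(2), n_iter=200, delta=1e-4, rng=rng)
print(x_hat, np.linalg.solve(A, b))       # compare with the exact solution of A x = b
```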
Chapter 12. The Extragradient Method for Solving Variational Inequalities
Abstract
In a Hilbert space, we study the convergence of the extragradient method to a solution of a variational inequality in the presence of computational errors. The convergence of the extragradient method for solving variational inequalities is established for nonsummable computational errors. We show that the extragradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
Alexander J. Zaslavski
Chapter 13. A Common Solution of a Family of Variational Inequalities
Abstract
In a Hilbert space, we study the convergence of the subgradient method to a common solution of a finite family of variational inequalities and of a finite family of fixed point problems in the presence of computational errors. The convergence of the subgradient method is established for nonsummable computational errors. We show that the subgradient method generates a good approximate solution if the sequence of computational errors is bounded from above by a constant.
Alexander J. Zaslavski
Chapter 14. Continuous Subgradient Method
Abstract
In this chapter we study the continuous subgradient algorithm for minimization of convex functions in the presence of computational errors. We show that the algorithm generates a good approximate solution if computational errors are bounded from above by a small positive constant. Moreover, for a known computational error, we determine what kind of approximate solution can be obtained and how much time is needed for this.
Alexander J. Zaslavski
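The continuous subgradient method of Chapter 14 follows a trajectory of the subgradient flow x'(t) ∈ -∂f(x(t)). A simple forward-Euler discretization with a bounded perturbation, sketched below under illustrative assumptions (the objective, step size, and time horizon are not from the book), conveys the idea.

```python
import numpy as np

def continuous_subgradient(subgrad, x0, t_end, dt, delta, rng):
    # Forward-Euler discretization of the perturbed subgradient flow
    #   x'(t) = -(g(x(t)) + e(t)),  with ||e(t)|| <= delta standing in for computational errors
    x = x0
    for _ in range(int(t_end / dt)):
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)
        x = x - dt * (subgrad(x) + e)
    return x

# Illustrative nonsmooth convex function f(x) = ||x - a||_1
rng = np.random.default_rng(0)
a = np.array([0.5, -1.0, 2.0])
subgrad = lambda x: np.sign(x - a)        # a subgradient of the l1 distance
x_hat = continuous_subgradient(subgrad, np.zeros(3), t_end=5.0, dt=0.01, delta=1e-3, rng=rng)
print(x_hat)
```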
Chapter 15. Penalty Methods
Abstract
In this chapter we use the penalty approach in order to study constrained minimization problems in infinite-dimensional spaces. A penalty function is said to have the exact penalty property if there is a penalty coefficient for which a solution of the unconstrained penalized problem is a solution of the corresponding constrained problem. Since we consider optimization problems in general Banach spaces, not necessarily finite-dimensional, the existence of solutions of the original constrained problems and of the corresponding penalized unconstrained problems is not guaranteed. For this reason we deal with approximate solutions and with an approximate exact penalty property, which contains the classical exact penalty property as a particular case. In our recent research we established the approximate exact penalty property for a large class of inequality-constrained minimization problems. In this chapter we improve this result and obtain an estimate of the exact penalty.
Alexander J. Zaslavski
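A small finite-dimensional illustration of the exact penalty property discussed in Chapter 15 (the book works in general Banach spaces): once the penalty coefficient exceeds a threshold, minimizing the penalized function yields a feasible, approximately optimal point. The problem data, the penalty values, and the use of scipy's Nelder–Mead solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inequality-constrained problem:
#   minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2   subject to  g(x) = x1 + x2 - 1 <= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: x[0] + x[1] - 1.0

def penalized(x, gamma):
    # Nonsmooth exact penalty function f(x) + gamma * max(g(x), 0)
    return f(x) + gamma * max(g(x), 0.0)

# For a sufficiently large penalty coefficient, an (approximate) minimizer of the
# penalized problem is an (approximate) solution of the constrained problem.
x0 = np.zeros(2)
for gamma in [1.0, 3.0, 10.0]:
    res = minimize(lambda x: penalized(x, gamma), x0, method="Nelder-Mead")
    print(gamma, np.round(res.x, 3), "violation:", max(g(res.x), 0.0))
```

For this instance the constrained minimizer is (1, 0) with Lagrange multiplier 2, so penalty coefficients above 2 should return nearly feasible, nearly optimal points, while gamma = 1 does not.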
Chapter 16. Newton’s Method
Abstract
In this chapter we study the convergence of Newton’s method for nonlinear equations and nonlinear inclusions in a Banach space. Nonlinear mappings, which appear in the right-hand side of the equations, are not necessarily differentiable. Our goal is to obtain an approximate solution in the presence of computational errors. In order to meet this goal, in the case of inclusions, we study the behavior of iterates of nonexpansive set-valued mappings in the presence of computational errors.
Alexander J. Zaslavski
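A minimal sketch of a Newton iteration for a nonlinear equation F(x) = 0 in which the computed residual carries a bounded error. The book also treats nondifferentiable mappings and inclusions, which this smooth two-dimensional example (an illustrative assumption) does not capture.

```python
import numpy as np

def newton(F, J, x0, n_iter, delta, rng):
    # Newton step x_{k+1} = x_k - J(x_k)^{-1} (F(x_k) + e_k), with a bounded residual error ||e_k|| <= delta
    x = x0
    for _ in range(n_iter):
        e = rng.normal(size=x.shape)
        e *= delta / max(np.linalg.norm(e), 1e-12)
        x = x - np.linalg.solve(J(x), F(x) + e)
    return x

# Illustrative smooth nonlinear system F(x) = 0 in R^2
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
rng = np.random.default_rng(0)
x_hat = newton(F, J, np.array([1.0, 0.5]), n_iter=20, delta=1e-6, rng=rng)
print(x_hat, F(x_hat))   # converges near (sqrt(2), sqrt(2)) up to the error level
```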
Backmatter
Metadata
Title
Numerical Optimization with Computational Errors
Author
Alexander J. Zaslavski
Copyright Year
2016
Electronic ISBN
978-3-319-30921-7
Print ISBN
978-3-319-30920-0
DOI
https://doi.org/10.1007/978-3-319-30921-7
