
About this book

As suggested by the title of this book, Numerical Toolbox for Verified Computing, we present an extensive set of sophisticated tools to solve basic numerical problems with a verification of the results. We use the features of the scientific computer language PASCAL-XSC to offer modules that readers can combine to suit their individual needs. Our overriding concern is reliability: the automatic verification of the result a computer returns for a given problem. All algorithms we present are influenced by this central concern. We must point out that there is no relationship between our methods of numerical result verification and the methods of program verification used to prove the correctness of an implementation of a given algorithm. This book is the first to offer a general discussion on

• arithmetic and computational reliability,
• analytical mathematics and verification techniques,
• algorithms, and
• (most importantly) actual implementations in the form of working computer routines.

Our task has been to find the right balance among these ingredients for each topic. For some topics, we have placed a little more emphasis on the algorithms. For other topics, where the mathematical prerequisites are universally held, we have tended towards a more in-depth discussion of the nature of the computational algorithms, or towards practical questions of implementation. For all topics, we present examples, exercises, and numerical results demonstrating the application of the routines presented.

Table of contents

Frontmatter

Introduction

Chapter 1. Introduction

Abstract
This is a reference book for numerical methods with automatic result verification. The methods presented here are practical, reliable, and elegant. We provide theory, algorithmic descriptions, and implementations for methods to solve some basic numerical problems in a reliable way. Also, this book can help you learn how to develop such methods and how to proceed for other problems beyond the scope of the book.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks

Preliminaries

Frontmatter

Chapter 2. The Features of PASCAL-XSC

Abstract
In this chapter, we give a short overview of the new concepts of the programming language PASCAL-XSC, a universal PASCAL eXtension for Scientific Computation with extensive predefined modules for scientific computation. For a complete language reference and examples, we refer to [44] and [45].
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks

Chapter 3. Mathematical Preliminaries

Abstract
Interval arithmetic is the basic mathematical tool in verified numerical computing. This chapter summarizes the mathematical terms used in this book. After an introduction to real, complex, and extended interval arithmetic, we make some comments on the realization of arithmetics on a digital computer. We also touch on the problem of data conversion. Finally, we point out how to use fixed-point theorems to derive algorithms for verified numerical computing.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
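The interval operations introduced in this chapter can be sketched in a few lines. The following is a minimal Python illustration (not the book's PASCAL-XSC modules): an interval is a pair (lo, hi), and each operation returns an enclosure of all possible results. A rigorous machine implementation would additionally round each lower bound downward and each upper bound upward; that step is omitted here.

```python
# Minimal real interval arithmetic: an interval is a pair (lo, hi), lo <= hi.
# A rigorous implementation (as in PASCAL-XSC) would round lo down and hi up
# after every operation; exact arithmetic is assumed in this sketch.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def idiv(a, b):
    if b[0] <= 0.0 <= b[1]:
        raise ZeroDivisionError("divisor interval contains 0")
    return imul(a, (1.0 / b[1], 1.0 / b[0]))
```

Every result encloses all values x ∘ y with x in the first interval and y in the second; for example, [-1, 2] · [3, 4] = [-4, 8].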

One-Dimensional Problems

Frontmatter

Chapter 4. Evaluation of Polynomials

Abstract
In this chapter, we consider the evaluation of a polynomial function of a single variable. We usually compute the value of an arithmetic function by replacing each arithmetic operation by its corresponding floating-point machine operation (see Section 3.5). Roundoff errors and cancellations sometimes cause the calculated result to be drastically wrong. For similar reasons, a naive interval evaluation of a polynomial may lead to intervals so large as to be practically useless. Roundoff and cancellation errors are especially dangerous if we are evaluating a function close to a root, as we will see in Chapter 9 when we compute verified enclosures of zeros of polynomials.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
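The overestimation of a naive interval evaluation can be seen with plain interval arithmetic. In this toy Python sketch (not the refined algorithm of the chapter), the polynomial p(x) = x² − 2x is evaluated over [1, 2] term by term and in Horner form x·(x − 2); both enclose the true range [−1, 0], but the term-by-term version is much wider because each occurrence of x varies independently.

```python
def isub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

X = (1.0, 2.0)

# Naive, term-by-term: x*x - 2*x ignores the dependency between the terms.
naive = isub(imul(X, X), imul((2.0, 2.0), X))   # gives (-3.0, 2.0)

# Horner form x*(x - 2): a noticeably tighter enclosure.
horner = imul(X, isub(X, (2.0, 2.0)))           # gives (-2.0, 0.0)
```

The true range of p over [1, 2] is [−1, 0]; the Horner enclosure (−2, 0) is strictly contained in the naive one (−3, 2).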

Chapter 5. Automatic Differentiation

Abstract
In many applications of numerical and scientific computing, it is necessary to compute derivatives of functions. Simple examples are methods for finding the zeros, maxima, or minima of nonlinear functions. There are three different methods to get the values of the derivatives: numerical differentiation, symbolic differentiation, and automatic differentiation.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
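The core idea of automatic differentiation in one variable can be sketched with dual numbers: each value carries its derivative, and the arithmetic operators propagate both by the usual differentiation rules. The following is a simplified Python illustration (the chapter's PASCAL-XSC module works analogously via operator overloading); the class and function names are ours, not the book's.

```python
class Dual:
    """Value together with its first derivative; operators apply the product
    and sum rules, so derivatives are computed exactly, not by differencing."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def derivative(f, x):
    y = f(Dual(x, 1.0))     # seed with dx/dx = 1
    return y.val, y.dot

# f(x) = x^3 + 2x, so f(2) = 12 and f'(2) = 3*4 + 2 = 14
val, dot = derivative(lambda x: x*x*x + 2*x, 2.0)
```

Only the formula for f is needed; no expression for f′ is ever derived or coded.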

Chapter 6. Nonlinear Equations in One Variable

Abstract
One of the most important tasks in scientific computing is the problem of finding zeros (or roots) of nonlinear functions. In classical numerical analysis, root-finding methods for nonlinear functions begin with an approximation and apply an iterative method (such as Newton’s or Halley’s methods), which hopefully improves the approximation. It is a myth that no numerical algorithm is able to compute all zeros of a nonlinear equation with guaranteed error bounds, or even more, that no method is able to give concrete information about the existence and uniqueness of solutions of such a problem.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
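One verification technique of this kind can be sketched with the interval Newton step N([x]) = m − f(m)/f′([x]) intersected with [x], which retains every zero of f in [x] and usually shrinks the enclosure. Below is a toy Python sketch for f(x) = x² − 2 on [1, 2] (exact arithmetic, no outward rounding, and f′([x]) assumed not to contain 0); it is an illustration of the principle, not the chapter's algorithm.

```python
def isub(a, b): return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def idiv(a, b):
    assert not (b[0] <= 0.0 <= b[1]), "f'([x]) must not contain 0 here"
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def newton_step(X):
    """One interval Newton step for f(x) = x^2 - 2, f'(x) = 2x."""
    m = 0.5 * (X[0] + X[1])
    fm = (m*m - 2.0, m*m - 2.0)          # point value as a thin interval
    dfX = imul((2.0, 2.0), X)            # f'([x]) = 2 * [x]
    q = idiv(fm, dfX)
    N = (m - q[1], m - q[0])
    return (max(X[0], N[0]), min(X[1], N[1]))   # N([x]) intersected with [x]

X = (1.0, 2.0)
for _ in range(4):
    X = newton_step(X)    # each step still encloses sqrt(2)
```

If the intersection is empty, [x] provably contains no zero; if N([x]) lands in the interior of [x], a zero exists and is unique in [x]. This is exactly the kind of existence and uniqueness statement the chapter shows is not a myth.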

Chapter 7. Global Optimization

Abstract
We want to find the global minimum in an interval [x] of a function f that may have many local minima. We want to compute the minimum value of f and the point(s) at which the minimum value is attained. This is a very difficult problem for classical methods because narrow, deep valleys may escape detection. In contrast, the interval method presented here evaluates f on a continuum of points, including those points that are not finitely representable, so valleys, no matter how narrow, are recognized with certainty. Further, interval techniques often can reject large regions in which the optimum can be guaranteed not to lie, so they can be faster overall than classical methods for many problems.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
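The branch-and-bound principle behind such interval methods can be sketched for a toy objective f(t) = (t − 1)². This Python sketch (exact arithmetic; the chapter's algorithm adds further tests and rigorous rounding) bisects subintervals, discards those whose interval lower bound exceeds the best point value found so far, and returns an enclosure [lo, ubound] of the global minimum together with a hull of the candidate minimizer boxes.

```python
def isub(a, b): return (a[0] - b[1], a[1] - b[0])

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def f(t):                        # objective f(t) = (t - 1)^2
    return (t - 1.0)**2

def F(x):                        # its natural interval extension
    d = isub(x, (1.0, 1.0))
    return imul(d, d)

def minimize(box, tol=1e-4):
    """Enclose the global minimum of f over box by bisection with a cutoff."""
    work, done = [box], []
    ubound = f(0.5 * (box[0] + box[1]))   # any point value bounds the minimum
    while work:
        x = work.pop()
        if F(x)[0] > ubound:              # x cannot contain a global minimizer
            continue
        m = 0.5 * (x[0] + x[1])
        ubound = min(ubound, f(m))        # improve the cutoff
        if x[1] - x[0] < tol:
            done.append(x)
        else:
            work += [(x[0], m), (m, x[1])]
    done = [x for x in done if F(x)[0] <= ubound]
    lo = min(F(x)[0] for x in done)       # [lo, ubound] encloses min f
    hull = (min(x[0] for x in done), max(x[1] for x in done))
    return lo, ubound, hull
```

Because F(x) bounds f on the whole subinterval x, a discarded box is guaranteed not to contain the minimizer; no valley, however narrow, can be skipped.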

Chapter 8. Evaluation of Arithmetic Expressions

Abstract
The evaluation of arithmetic expressions using floating-point arithmetic may lead to unpredictable results due to an accumulation of roundoff errors. As an example, the evaluation of x + 1 − x for x > 10^20 using the standard floating-point format on almost every digital computer yields the wrong result 0. Since the evaluation of arithmetic expressions is a basic task in digital computing, we should have a method to evaluate a given expression to an arbitrary accuracy. We will develop such a method for real expressions composed of the operations +, −, ·, /, and ↑, where ↑ denotes exponentiation by an integer. The method is based on the principle of iterative refinement as discussed in Section 3.8.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
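The failure described above can be reproduced directly in IEEE double precision: adding 1 to 10^20 rounds straight back to 10^20, so the subtraction cancels to 0, while exact rational arithmetic gives the true value 1. A quick Python illustration:

```python
from fractions import Fraction

x = 1.0e20
floating = x + 1.0 - x     # the 1 is lost when added to 1e20
exact = Fraction(10)**20 + 1 - Fraction(10)**20

print(floating)   # 0.0 (wrong)
print(exact)      # 1   (correct)
```

Methods like the one in this chapter detect such cancellation and refine the evaluation until the requested accuracy is reached.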

Chapter 9. Zeros of Complex Polynomials

Abstract
We consider the complex polynomial p: ℂ → ℂ defined by
$$p(z)=\sum\limits_{i=0}^n {{p_i}{z^i}}, \quad {p_i}\in\mathbb{C}{\text{, }}i=0,\ldots,n, \quad {p_n}\ne 0.$$
(9.1)
The Fundamental Theorem of Algebra asserts that this polynomial has n zeros, counted by multiplicity. Finding these roots is a nontrivial problem in numerical mathematics. Most algorithms deliver only approximations of the exact zeros, with weak statements or none at all concerning their accuracy.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks

Multi-Dimensional Problems

Frontmatter

Chapter 10. Linear Systems of Equations

Abstract
Finding the solution of a linear system of equations is one of the basic problems in numerical algebra. We will develop a verification algorithm for square systems with a full matrix, based on a Newton-like method for an equivalent fixed-point problem.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
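The fixed-point idea can be sketched without full interval arithmetic: with an approximate inverse R of A, the Banach fixed-point theorem yields, whenever ‖I − RA‖∞ < 1, the rigorous error bound ‖x − x̃‖∞ ≤ ‖R(b − Ax̃)‖∞ / (1 − ‖I − RA‖∞) for the true solution x. The following Python sketch computes this bound for a small system; it assumes exact norm evaluation, whereas the chapter's algorithm encloses all defect terms with directed rounding. All names here are ours.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

def inf_norm_mat(A):
    return max(sum(abs(v) for v in row) for row in A)

def inf_norm_vec(x):
    return max(abs(v) for v in x)

def verified_error_bound(A, R, b, xt):
    """If ||I - R A|| < 1, return a bound on ||x - xt|| for the true solution x."""
    E = matmul(R, A)
    for i in range(len(A)):
        E[i][i] -= 1.0                  # E = R A - I (same norm as I - R A)
    beta = inf_norm_mat(E)
    assert beta < 1.0, "R is not a good enough approximate inverse"
    r = [b[i] - v for i, v in enumerate(matvec(A, xt))]   # residual b - A xt
    return inf_norm_vec(matvec(R, r)) / (1.0 - beta)

A = [[2.0, 1.0], [1.0, 3.0]]    # true solution of A x = [3, 4] is x = (1, 1)
R = [[0.6, -0.2], [-0.2, 0.4]]  # an approximate inverse of A
xt = [0.9, 1.1]                 # an approximate solution
bound = verified_error_bound(A, R, [3.0, 4.0], xt)
```

The returned bound encloses the unknown error of the approximation x̃; the chapter refines this into componentwise interval enclosures of the solution.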

Chapter 11. Linear Optimization

Abstract
A linear programming problem consists of a linear function to be maximized (or minimized) subject to linear equality and inequality constraints. Well-known transformations put any linear program (LP) into the standard form
$$({\text{LP}})\quad z = {c^{\text{T}}}x = \max ! \quad {\text{subject to}}\quad Ax = b,\quad x \geqslant 0,$$
or equivalently,
$$\max \{ {c^{\text{T}}}x\,|\,x \in X\} ,\quad X := \{ x \in {\mathbb{R}^n}\,|\,Ax = b,\,x \geqslant 0\} ,$$
(11.1)
where A is a real m × n matrix, b ∈ ℝ^m, c ∈ ℝ^n, and m < n. The input data of (11.1) are given by the triple P = (A, b, c) ∈ ℝ^{m·n+m+n}.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
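One of the standard-form transformations is mechanical: for the common case max c^T x subject to Ax ≤ b, x ≥ 0, appending one slack variable per inequality row turns the inequalities into equalities. A small Python sketch of this single transformation (dense lists, illustrative names only):

```python
def to_standard_form(A, b, c):
    """Turn  max c^T x  s.t.  A x <= b, x >= 0
    into    max c'^T x'  s.t.  A' x' = b', x' >= 0
    by appending one slack variable per inequality row."""
    m = len(A)
    A_std = [row + [1.0 if i == j else 0.0 for j in range(m)]
             for i, row in enumerate(A)]
    return A_std, list(b), c + [0.0] * m
```

The slack columns form an identity block, so a feasible point of the original problem extends to a feasible point of the standard form by setting each slack to b − Ax.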

Chapter 12. Automatic Differentiation for Gradients, Jacobians, and Hessians

Abstract
In Chapter 5, we considered automatic differentiation in one variable, but there are also many applications of numerical and scientific computing where it is necessary to compute derivatives of multi-dimensional functions. In this chapter, we extend the concept of automatic differentiation to the multi-dimensional case as given by Rall [66] and many others. We apply the well-known differentiation rules for gradients, Jacobians, and Hessians during the computation of numerical values, combining the advantages of symbolic and numerical differentiation. Only the algorithm or formula for the function is required. No explicit formulas for the gradient, Jacobian, or Hessian have to be derived and coded.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks
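The one-variable dual numbers of Chapter 5 extend directly: each value carries a full gradient vector, and the arithmetic rules propagate it. The following is a simplified Python sketch of the forward mode for sums and products only (the chapter's modules also cover Jacobians and Hessians); all names are illustrative.

```python
class Grad:
    """Value together with its gradient; operators apply the sum and product
    rules componentwise, so the gradient is computed exactly alongside f."""
    def __init__(self, val, grad):
        self.val, self.grad = val, grad

    @staticmethod
    def variable(x, i, n):
        # i-th of n independent variables: unit vector as its gradient
        return Grad(x, [1.0 if j == i else 0.0 for j in range(n)])

    def __add__(self, o):
        return Grad(self.val + o.val,
                    [a + b for a, b in zip(self.grad, o.grad)])

    def __mul__(self, o):
        return Grad(self.val * o.val,
                    [a * o.val + self.val * b for a, b in zip(self.grad, o.grad)])

# f(x, y) = x^2 y + y, so grad f = (2xy, x^2 + 1); at (2, 3): f = 15, grad = (12, 5)
x = Grad.variable(2.0, 0, 2)
y = Grad.variable(3.0, 1, 2)
f = x * x * y + y
```

As in the one-dimensional case, only the formula for f is coded; the gradient falls out of the arithmetic itself.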

Chapter 13. Nonlinear Systems of Equations

Abstract
In Chapter 6, we considered the problem of finding zeros (or roots) of nonlinear functions of a single variable. Now, we consider its generalization, the problem of finding the solution vectors of a system of nonlinear equations. We give a method for finding all solutions of a nonlinear system of equations f(x) = 0 for a continuously differentiable function f : ℝ^n → ℝ^n in a given interval vector (box). Our method computes close bounds on the solution vectors, and it delivers information about existence and uniqueness of the computed solutions. The method we present is a variant of the interval Gauss-Seidel method based on the method of Hansen and Sengupta [3], [29], and a modification of Ratz [73]. Our method makes use of the extended interval operations defined in Section 3.3.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks

Chapter 14. Global Optimization

Abstract
In Chapter 7, we considered the problem of finding the global minimizers of one-dimensional nonlinear functions. Now, we consider its generalization, the problem of finding the global minimizers of multi-dimensional nonlinear functions. Our algorithm is based on the method of Hansen [26]. The algorithm computes enclosures for all global minimizers and for the global minimum value of a twice continuously differentiable function in the interior of a given interval vector.
Ulrich Kulisch, Rolf Hammer, Dietmar Ratz, Matthias Hocks

Backmatter
