
About this Book

On the occasion of this new edition, the text was enlarged by several new sections. Two sections on B-splines and their computation were added to the chapter on spline functions: due to their special properties, their flexibility, and the availability of well-tested programs for their computation, B-splines play an important role in many applications. The authors also followed suggestions by many readers to supplement the chapter on elimination methods with a section on the solution of large sparse systems of linear equations. Even though such systems are usually solved by iterative methods, the realm of elimination methods has been widely extended by powerful techniques for handling sparse matrices; some of these techniques are explained in connection with the Cholesky algorithm for solving positive definite linear systems. The chapter on eigenvalue problems was enlarged by a section on the Lanczos algorithm; the sections on the LR and QR algorithms were rewritten and now contain a description of implicit shift techniques. To take into account, at least to some extent, the progress in the area of ordinary differential equations, a new section on implicit differential equations and differential-algebraic systems was added, and the section on stiff differential equations was updated with further methods for solving such equations.



1. Error Analysis

Assessing the accuracy of the results of calculations is a paramount goal in numerical analysis. One distinguishes several kinds of errors which may limit this accuracy:
errors in the input data,
roundoff errors,
approximation errors.
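A one-line illustration of the second kind of error: in binary floating-point arithmetic even a simple decimal constant like 0.1 has no exact representation, so roundoff error appears already in elementary sums. (A Python sketch for illustration; it is not part of the original text.)

```python
# Roundoff error: 0.1 has no finite binary representation, so ten
# additions of it do not yield exactly 1.0 in IEEE-754 arithmetic.
total = sum([0.1] * 10)
print(total == 1.0)       # False
print(abs(total - 1.0))   # tiny, but nonzero
```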
J. Stoer, R. Bulirsch

2. Interpolation

Consider a family of functions of a single variable x,
$$ \Phi \left( {x;{a_0}, \cdots ,{a_n}} \right), $$
having n + 1 parameters a0, ..., an whose values characterize the individual functions in this family. The interpolation problem for Φ consists of determining these parameters ai so that for n + 1 given real or complex pairs of numbers (xi, fi), i = 0, ..., n, with xi ≠ xk for i ≠ k,
$$ \Phi \left( {{x_i};{a_0}, \cdots ,{a_n}} \right) = {f_i},\quad i = 0, \ldots ,n, $$
holds. We will call the pairs (xi, fi) support points, the locations xi support abscissas, and the values fi support ordinates. Occasionally, the values of derivatives of Φ are also prescribed.
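For the most common choice of the family Φ, namely polynomials a0 + a1x + ··· + anxⁿ, the interpolation problem has a unique solution, which the Lagrange formula evaluates directly from the support points. A minimal sketch (illustrative; the function name and example data are ours, not the book's):

```python
def lagrange_interpolate(points, x):
    """Evaluate at x the unique polynomial of degree <= n that passes
    through the n+1 support points (x_i, f_i) with distinct x_i."""
    result = 0.0
    for i, (xi, fi) in enumerate(points):
        term = fi
        for k, (xk, _) in enumerate(points):
            if k != i:
                term *= (x - xk) / (xi - xk)   # Lagrange basis factor
        result += term
    return result

# The points below lie on p(x) = 1 + x**2, so p is recovered exactly.
pts = [(0.0, 1.0), (1.0, 2.0), (2.0, 5.0)]
value = lagrange_interpolate(pts, 3.0)   # p(3) = 10
```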

3. Topics in Integration

Calculating the definite integral of a given real function f(x),
$$\int_a^b f(x)\,dx,$$
is a classic problem. For some simple integrands f(x), the indefinite integral
$$\int_a^x f(t)\,dt = F(x), \qquad F'(x) = f(x),$$
can be obtained in closed form as an algebraic expression in x and well-known transcendental functions of x. Then
$$\int_a^b f(x)\,dx = F(b) - F(a).$$
See Gröbner and Hofreiter (1961) for a comprehensive collection of formulas describing such indefinite integrals and many important definite integrals.
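When no closed-form antiderivative F is available, the definite integral must be approximated numerically. As a minimal illustration (a sketch of the composite trapezoidal rule, not code from the text), applied to f(x) = x², whose antiderivative F(x) = x³/3 gives the exact value F(1) − F(0) = 1/3:

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over [a, b]
    using n subintervals of equal width h = (b - a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Approximates the exact value 1/3 with O(h**2) error.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```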

4. Systems of Linear Equations

In this chapter direct methods for solving systems of linear equations
$$Ax = b, \qquad A = \left[ {\begin{array}{*{20}{c}} {{a_{11}}}& \cdots &{{a_{1n}}} \\ \vdots &{}& \vdots \\ {{a_{n1}}}& \cdots &{{a_{nn}}} \end{array}} \right], \qquad b = \left[ {\begin{array}{*{20}{c}} {{b_1}} \\ \vdots \\ {{b_n}} \end{array}} \right]$$
will be presented. Here A is a given n × n matrix, and b is a given vector. We assume in addition that A and b are real, although this restriction is inessential in most of the methods. In contrast to the iterative methods (Chapter 8), the direct methods discussed here produce the solution in finitely many steps, assuming computations without roundoff errors.
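The prototype of such a direct method is Gaussian elimination: the system is reduced to triangular form in finitely many steps and then solved by back substitution. A compact, unoptimized sketch with partial pivoting (illustrative; not code from the book):

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.
    A is a list of n rows; b is a list of n numbers."""
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not destroyed.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: move the largest entry in this column
        # (at or below the diagonal) onto the diagonal.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x
```

For example, `solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0])` returns the solution of 2x + y = 3, x + 3y = 4, namely x = y = 1.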

5. Finding Zeros and Minimum Points by Iterative Methods

Finding the zeros of a given function f, that is, arguments ξ for which f(ξ) = 0, is a classical problem. In particular, determining the zeros of a polynomial (the zeros of a polynomial are also known as its roots)
$$ p\left( x \right) = {a_0} + {a_1}x + \cdots + {a_n}{x^n} $$
has captured the attention of pure and applied mathematicians for centuries. However, much more general problems can be formulated in terms of finding zeros, depending upon the definition of the function f: E → F, its domain E, and its range F.
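A classical iterative method for this problem is Newton's method, which replaces f at the current iterate by its tangent and takes the tangent's zero as the next iterate. A minimal sketch (illustrative; the tolerance and iteration limit are arbitrary choices of ours):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method for f(x) = 0: iterate x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

# A zero of p(x) = x**2 - 2, i.e. an approximation to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```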

6. Eigenvalue Problems

Many practical problems in engineering and physics lead to eigenvalue problems. Typically, in all these problems, an overdetermined system of equations is given, say n + 1 equations for n unknowns ξ1, ..., ξn of the form
$$F(x;\lambda ) := \left[ {\begin{array}{*{20}{c}} {{f_1}({\xi _1}, \ldots ,{\xi _n};\lambda )} \\ \vdots \\ {{f_{n + 1}}({\xi _1}, \ldots ,{\xi _n};\lambda )} \end{array}} \right] = 0, \tag{6.0.1}$$
in which the functions fi also depend on an additional parameter λ. Usually, (6.0.1) has a solution x = [ξ1, ..., ξn]T only for specific values λ = λi, i = 1, 2, ..., of this parameter. These values λi are called eigenvalues of the eigenvalue problem (6.0.1), and a corresponding solution x = x(λi) of (6.0.1) is called an eigensolution belonging to the eigenvalue λi.
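The standard matrix eigenvalue problem Ax = λx is the special case F(x; λ) = (A − λI)x, supplemented by a normalization of x. As a minimal illustration of an iterative approach, here is power iteration, one of the simplest methods for the dominant eigenvalue (the sketch is ours, not the book's):

```python
def power_iteration(A, iterations=200):
    """Power iteration for the dominant eigenvalue of a square matrix A:
    repeated multiplication by A aligns the iterate with the eigenvector
    belonging to the eigenvalue of largest absolute value."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iterations):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)        # current eigenvalue estimate
        x = [yi / lam for yi in y]   # normalize to avoid overflow
    return lam, x

# The symmetric matrix [[2, 1], [1, 2]] has eigenvalues 3 and 1.
lam, vec = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```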

7. Ordinary Differential Equations

Many problems in applied mathematics lead to ordinary differential equations. In the simplest case one seeks a differentiable function y = y(x) of one real variable x, whose derivative y'(x) is to satisfy an equation of the form y'(x) = f(x, y(x)), or more briefly,
$$y' = f\left( {x,y} \right); \tag{7.0.1}$$
one then speaks of an ordinary differential equation. In general there are infinitely many different functions y which are solutions of (7.0.1). Through additional requirements one can single out certain solutions from the set of all solutions. Thus, in an initial-value problem, one seeks a solution y of (7.0.1) which for given x0, y0 satisfies an initial condition of the form
$$ y\left( {{x_0}} \right) = {y_0}. $$
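The simplest numerical scheme for such an initial-value problem is the explicit Euler method, which advances the solution in small steps along the tangent direction given by f. An illustrative sketch (not from the book; step size and step count are arbitrary choices):

```python
def euler(f, x0, y0, h, steps):
    """Explicit Euler method for y' = f(x, y), y(x0) = y0:
    take `steps` steps of size h and return the final y value."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)   # follow the tangent direction y' = f(x, y)
        x += h
    return y

# y' = y with y(0) = 1 has the exact solution y(x) = e**x, so the
# value below approximates e = y(1).
approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)
```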

8. Iterative Methods for the Solution of Large Systems of Linear Equations. Some Further Methods

Many problems in practice require the solution of very large systems of linear equations Ax = b in which the matrix A, fortunately, is sparse, i.e., has relatively few nonvanishing elements. Systems of this type arise, e.g., in the application of difference methods or finite-element methods to the approximate solution of boundary-value problems in partial differential equations. The usual elimination methods (see Chapter 4) cannot normally be applied here: without special precautions they tend to produce more or less dense intermediate matrices, so that the number of arithmetic operations required becomes much too large even for present-day computers, and the intermediate matrices no longer fit into the usually available computer memory.
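A classical alternative is an iterative method such as Gauss–Seidel, which repeatedly improves an approximate solution component by component and never forms intermediate matrices. A minimal dense-matrix sketch for readability (illustrative, not the book's code; a practical implementation would store A in a sparse format so that the inner sum runs only over the nonvanishing entries of row i):

```python
def gauss_seidel(A, b, iterations=100):
    """Gauss-Seidel iteration for Ax = b: in each sweep, solve the i-th
    equation for x[i] using the latest values of the other components.
    Converges, e.g., for strictly diagonally dominant A."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant example: 4x + y = 1, x + 3y = 2
x = gauss_seidel([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```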

