
About this Book

The present book is an edition of the manuscripts to the courses "Numerical Methods I" and "Numerical Mathematics I and II" which Professor H. Rutishauser held at the E.T.H. in Zurich. The first-named course was newly conceived in the spring semester of 1970, and intended for beginners, while the two others were given repeatedly as elective courses in the sixties. For an understanding of most chapters the fundamentals of linear algebra and calculus suffice. In some places a little complex variable theory is used in addition. However, the reader can get by without any knowledge of functional analysis. The first seven chapters discuss the direct solution of systems of linear equations, the solution of nonlinear systems, least squares problems, interpolation by polynomials, numerical quadrature, and approximation by Chebyshev series and by Remez' algorithm. The remaining chapters include the treatment of ordinary and partial differential equations, the iterative solution of linear equations, and a discussion of eigenvalue problems. In addition, there is an appendix dealing with the qd-algorithm and with an axiomatic treatment of computer arithmetic.

Table of Contents

Frontmatter

Chapter 1. An Outline of the Problems

Abstract
The object of numerical mathematics is to devise a numerical approach for solving mathematically defined problems, i.e., to exhibit a detailed description of the computational process which eventually produces the solution of the problem in numerical form (for example, a numerical table). In so doing, one must, of course, be cognizant of the fact that a numerical computation almost never is entirely exact, but is more or less perturbed by the so-called rounding errors. The computing process, indeed, is executed in finite arithmetic, for example in floating-point arithmetic (number representation: $z = a \times 10^b$), where only a finite number of digits are at one's disposal both for the mantissa a and for the exponent b.
Heinz Rutishauser
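The rounding-error phenomenon described in the abstract can be made concrete with a small sketch (illustrative only, not from the book): arithmetic with a 4-digit decimal mantissa, mimicking the representation $z = a \times 10^b$ with a finite number of mantissa digits.

```python
# Illustrative only: decimal arithmetic restricted to a 4-digit
# mantissa, mimicking a finite floating-point number system.
from decimal import Decimal, getcontext

getcontext().prec = 4            # only 4 significant digits available

a = Decimal(1) / Decimal(3)      # stored as 0.3333 -- already rounded
b = a * 3                        # gives 0.9999, not 1: the rounding
                                 # error of a propagates into the result
```

Even this one-line computation shows that results of finite arithmetic are "more or less perturbed": the exact answer 1 is unreachable once 1/3 has been rounded.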

Chapter 2. Linear Equations and Inequalities

Abstract
The solution of systems of linear equations (briefly called equations) is probably the most important type of numerical computer application, because countless problems in applied mathematics ultimately — if only approximately — can be reduced to linear equations. Not surprisingly, therefore, interest in this problem has grown enormously in the computer age; what previously was viewed as tedious work has since become a legitimate and actively pursued area of mathematical research.(1)
Heinz Rutishauser
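As a minimal illustration of the direct solution methods this chapter is concerned with, here is a sketch of Gaussian elimination with partial pivoting (a generic textbook formulation, not the book's own code):

```python
# Sketch: Gaussian elimination with partial pivoting for a small
# dense linear system A x = b, given as lists of floats.
def solve(A, b):
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    # forward elimination with row pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

x = solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])   # solution (0.8, 1.4)
```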

Chapter 3. Systems of Equations With Positive Definite Symmetric Coefficient Matrix

Abstract
We have seen that in the general case the solution of a linear system of equations may present difficulties because of pivot selection. These difficulties disappear when the coefficient matrix A of the system is symmetric and positive definite. We therefore wish to examine this class of matrices in more detail.
Heinz Rutishauser
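For a symmetric positive definite matrix, elimination can proceed without any pivot selection; a standard way to exploit this is the Cholesky decomposition A = L Lᵀ. A minimal sketch (not taken from the book's text):

```python
# Sketch: Cholesky decomposition A = L * L^T of a symmetric positive
# definite matrix. No pivoting is needed; positive definiteness
# guarantees that every argument of the square root below is > 0.
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

L = cholesky([[4.0, 2.0], [2.0, 5.0]])   # L = [[2, 0], [1, 2]]
```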

Chapter 4. Nonlinear Equations

Abstract
To introduce the subject, we consider a few examples of nonlinear equations:
$${x^3} + x + 1 = 0$$
is an algebraic equation; there is only one unknown, but it occurs in the third power. There are three solutions, of which two are complex conjugates.
$$2x - \tan x = 0$$
is a transcendental equation. Again, only one unknown is present, but now in a transcendental function. There are denumerably many solutions.
$$\sin x + 3 \cos x = 2$$
is a transcendental equation only in an unessential way, since it can be transformed at once into a quadratic equation for $e^{ix}$. While there are infinitely many solutions, they can all be derived from two solutions through addition of multiples of 2π.
$${x^3} + {y^2} + 5 = 0$$
$$2x + {y^3} + 5y = 0$$
is a system of two nonlinear algebraic equations in two unknowns x and y. It can be reduced to one algebraic equation of degree 9 in only one unknown. This latter equation has nine solutions which generate nine pairs of numbers $(x_i, y_i)$, $i = 1, \dots, 9$, satisfying the given system. (There are fewer if only real x, y are admitted.)
Heinz Rutishauser
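One standard tool for such equations is Newton's method; as an illustration (a generic sketch, not the book's code), applied to the transcendental example 2x - tan x = 0 from the abstract:

```python
# Sketch: Newton's method x <- x - f(x)/f'(x), applied to the
# transcendental equation 2x - tan(x) = 0.
import math

def newton(f, df, x, tol=1e-12, maxit=50):
    for _ in range(maxit):
        dx = f(x) / df(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

f  = lambda x: 2 * x - math.tan(x)
df = lambda x: 2 - 1 / math.cos(x) ** 2    # derivative of f

root = newton(f, df, 1.2)   # smallest positive solution, near 1.1656
```

The starting value must be chosen with some care: each positive solution lies in its own branch of tan x, and Newton's method only converges to the root whose basin contains the initial guess.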

Chapter 5. Least Squares Problems

Abstract
We consider once again a system of nonlinear equations
$$\begin{array}{c} f_1(x_1, x_2, \cdots, x_p) = 0 \\ f_2(x_1, x_2, \cdots, x_p) = 0 \\ \vdots \\ f_n(x_1, x_2, \cdots, x_p) = 0, \end{array}$$
but now assume that the number n of equations is larger than the number p of unknowns.
Heinz Rutishauser
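In the linear case the least squares problem with n > p can be solved via the normal equations AᵀAx = Aᵀb. A minimal sketch (fitting a straight line to four points, purely illustrative and not from the book):

```python
# Sketch: least-squares fit of a line y = m*x + c to n > 2 points,
# obtained by solving the 2x2 normal equations explicitly.
def lstsq_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx          # determinant of the normal equations
    m = (n * sxy - sx * sy) / det    # slope
    c = (sxx * sy - sx * sxy) / det  # intercept
    return m, c

# four points lying exactly on y = 2x + 1, so the residual is zero
m, c = lstsq_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```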

Chapter 6. Interpolation

Abstract
Interpolation is the art of reading between the lines of a mathematical table. It can be used to express nonelementary functions approximately in terms of the four basic arithmetic operations, thus making them accessible to computer evaluation.
Heinz Rutishauser
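"Reading between the lines of a table" can be sketched with Lagrange's interpolation formula, which indeed uses only the four basic arithmetic operations (a generic sketch, not taken from the book):

```python
# Sketch: Lagrange interpolation -- value at x of the unique
# polynomial passing through the tabulated points (xs[i], ys[i]).
def interpolate(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis polynomial factor
        total += term
    return total

# a small table of squares: the interpolating polynomial is x**2 itself,
# so reading "between the lines" at x = 1.5 gives 2.25
v = interpolate([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
```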

Chapter 7. Approximation

Abstract
While interpolation attempts to approximate a function piecewise by polynomials which pass exactly through prescribed support points, we shall now try to approximate a given function f(x) on a (relatively large) interval I by one polynomial. Such an approximation polynomial, naturally, must be of a higher degree than in the case where f(x) is approximated by polynomial pieces.
Heinz Rutishauser

Chapter 8. Initial Value Problems for Ordinary Differential Equations

Abstract
It is a well-known fact that differential equations occurring in science and engineering can generally not be solved exactly, that is, by means of analytical methods. Even when this is possible, it may not necessarily be useful. For example, the second-order differential equation with two initial conditions,
$$y'' + 5y' + 4y = 1 - e^{-x}, \qquad y(0) = y'(0) = 0,$$
(1)
has the exact solution
$$y = \frac{1}{4} - \frac{1}{3}x{e^{ - x}} - \frac{2}{9}{e^{ - x}} - \frac{1}{{36}}{e^{ - 4x}},$$
(2)
but when this formula is evaluated, say at the point x = .01, one obtains with 8-digit computation
$$y = .25 - .00330017 - .22001107 - .02668860 = .00000016,$$
which is no longer very accurate.
Heinz Rutishauser
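The cancellation in the abstract's example can be reproduced directly: rounding the four terms of the exact solution at x = .01 to 8 decimal digits before summing leaves barely one significant figure (a sketch, not from the book):

```python
# Reproducing the cancellation in the abstract: the four terms of the
# exact solution y(x) at x = 0.01 are rounded to 8 decimal digits
# before summing, and nearly all significant figures cancel.
import math

x = 0.01
terms = [0.25,
         -(1 / 3) * x * math.exp(-x),
         -(2 / 9) * math.exp(-x),
         -(1 / 36) * math.exp(-4 * x)]

rounded = [round(t, 8) for t in terms]
y8 = sum(rounded)    # 8-digit computation: about 1.6e-7
y  = sum(terms)      # full double precision: about 1.64e-7
```

Each individual term carries 8 correct digits, yet the sum retains only about one: this loss of accuracy through subtraction of nearly equal numbers is exactly why evaluating the exact formula "may not necessarily be useful".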

Chapter 9. Boundary Value Problems For Ordinary Differential Equations

Abstract
For a differential equation of order n, or a system of differential equations whose orders add up to n, one needs n conditions in order to single out one solution from among a family of $\infty^n$ solutions. If these n conditions refer to a single point x0, one speaks of an initial value problem, since, apart from singular cases, one has enough information to integrate away from x0.
Heinz Rutishauser

Chapter 10. Elliptic Partial Differential Equations, Relaxation Methods

Abstract
The classical model examples of partial differential equations are:
a)
Dirichletproblem (elliptic case):
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = f(x, y) \quad \text{in the domain } B \text{ of the } (x, y)\text{-plane},$$
(1)
u (or ∂u/∂n in the so-called Neumann problem) given on the boundary of B.
 
b)
Heat equation (parabolic case):
$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \quad \text{for } a \leqslant x \leqslant b,\ t > 0,$$
(2)
$$u(x, t) \text{ given at } t = 0 \text{ for all } x,$$
$$u \text{ or } \partial u/\partial x \text{ given at } x = a,\ x = b \text{ for all } t.$$
 
c)
Wave equation (hyperbolic case):
$$\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} \quad \text{for } a \leqslant x \leqslant b,\ t > 0,$$
(3)
$$u \text{ and } \partial u/\partial t \text{ given at } t = 0 \text{ for all } x,$$
$$u \text{ or } \partial u/\partial x \text{ given at } x = a,\ x = b \text{ for all } t.$$
 
Heinz Rutishauser
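The relaxation idea of this chapter can be sketched for the discretized Dirichlet problem with f = 0 (Laplace's equation): each interior grid value is repeatedly replaced by the average of its four neighbors, here in Gauss-Seidel fashion (a minimal illustration, not the book's formulation):

```python
# Sketch: Gauss-Seidel relaxation sweeps for the discrete Laplace
# equation on a square grid. Boundary values stay fixed; each interior
# point is replaced in place by the average of its four neighbors.
def relax(u, sweeps):
    n = len(u)
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                u[i][j] = 0.25 * (u[i-1][j] + u[i+1][j]
                                  + u[i][j-1] + u[i][j+1])
    return u

# 4x4 grid: boundary value 1 along the top row, 0 elsewhere
u = [[1.0] * 4] + [[0.0] * 4 for _ in range(3)]
relax(u, 200)   # interior converges to 3/8 (upper) and 1/8 (lower)
```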

Chapter 11. Parabolic and Hyperbolic Partial Differential Equations

Abstract
We consider the temperature distribution y(x,t) along a homogeneous rod of length L, which at one end (x=L) is held at temperature 0, while at the other end (x=0) the temperature is prescribed as a function b(t) of time. Let the thermal conductivity of the rod be f(x), the initial temperature be given as a(x), and let there be interior heat generation g(x,t) (cf. Fig. 11.1).
Heinz Rutishauser
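A drastically simplified version of the rod problem (constant unit conductivity, no interior heat source, both ends held fixed; these simplifications are mine, not the abstract's) can be advanced in time by the explicit finite-difference scheme, with mesh ratio r = Δt/Δx² at most 1/2 for stability:

```python
# Sketch: one explicit finite-difference time step for the heat
# equation u_t = u_xx on a rod with fixed end temperatures.
# r = dt/dx**2 is the mesh ratio; the scheme is stable for r <= 0.5.
def heat_step(u, r):
    return ([u[0]]
            + [u[i] + r * (u[i-1] - 2 * u[i] + u[i+1])
               for i in range(1, len(u) - 1)]
            + [u[-1]])

# rod held at 0 at both ends, initial temperature spike in the middle
u = [0.0, 0.0, 1.0, 0.0, 0.0]
for _ in range(3):
    u = heat_step(u, 0.5)   # the spike spreads and decays
```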

Chapter 12. The Eigenvalue Problem For Symmetric Matrices

Abstract
Matrix eigenvalue problems arise, for example, from Hamilton’s principle; the latter states: A mechanical system whose kinetic and potential energy are given by
$$T = \sum\limits_{i = 1}^n \sum\limits_{j = 1}^n P_{ij}(q_1, \dots, q_n)\, \dot q_i \dot q_j, \qquad U = U(q_1, \dots, q_n)$$
(1)
evolves between the time instants t0 and t1 in such a way that the functions $q_i(t)$ describing the motion make the action integral
$$J = \int_{t_0}^{t_1} (T - U)\, dt$$
stationary, the values $q_i(t_0)$ and $q_i(t_1)$ being held fixed.
Heinz Rutishauser
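The simplest numerical entry point to the symmetric eigenvalue problem is power iteration for the dominant eigenvalue (a minimal illustration only; the book's chapter develops more refined methods):

```python
# Sketch: power iteration for the dominant eigenvalue of a symmetric
# matrix. The iterate is repeatedly multiplied by A and normalized by
# its largest component, which converges to the dominant eigenvalue.
def power_iteration(A, iters=100):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)          # eigenvalue estimate
        v = [wi / lam for wi in w]     # normalized eigenvector estimate
    return lam, v

# eigenvalues of this matrix are 3 and 1; the dominant one is found
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```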

Chapter 13. The Eigenvalue Problem For Arbitrary Matrices

Abstract
The determination of the eigenvalues of nonsymmetric matrices is much more difficult, if for no other reason than the fact that for such matrices a concept analogous to the quadratic form is missing, and consequently, there are no extremal properties either. In accordance with these facts, the statement that eigenvalues are changed only a little by small perturbations in the matrix elements is also no longer valid.
Heinz Rutishauser
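The sensitivity claim can be seen in the classical example of a defective matrix (my illustration, not from the book): the matrix [[0, 1], [0, 0]] has both eigenvalues equal to 0, but perturbing its lower-left entry by eps yields the characteristic equation λ² = eps, hence eigenvalues ±√eps, a change far larger than the perturbation itself.

```python
# Sketch of eigenvalue sensitivity for a nonsymmetric (defective)
# matrix: perturbing [[0, 1], [0, 0]] to [[0, 1], [eps, 0]] changes
# the eigenvalues from 0 to +-sqrt(eps).
import math

eps = 1e-8
shift = math.sqrt(eps)          # eigenvalue displacement, about 1e-4
amplification = shift / eps     # perturbation magnified 10000-fold
```

For a symmetric matrix, by contrast, a perturbation of size eps can move the eigenvalues by at most about eps; the missing quadratic form and extremal properties are precisely what allow this amplification in the nonsymmetric case.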

Backmatter

Further Information