
2000 | Book

Matrix Iterative Analysis

Author: Richard S. Varga

Publisher: Springer Berlin Heidelberg

Book series: Springer Series in Computational Mathematics


About this book

This is the softcover reprint of a very popular hardcover edition, a revised version of the first edition, originally published by Prentice Hall in 1962 and regarded as a classic in its field. In some places, newer research results, e.g. results on weak regular splittings, have been incorporated in the revision, and in other places, new material has been added in the chapters, as well as at the end of chapters, in the form of additional up-to-date references and some recent theorems to give the reader some newer directions to pursue. The material in the new chapters is basically self-contained and more exercises have been provided for the readers. While the original version was more linear algebra oriented, the revision attempts to emphasize tools from other areas, such as approximation theory and conformal mapping theory, to access newer results of interest. The book should be of great interest to researchers and graduate students in the field of numerical analysis.

Table of Contents

Frontmatter
1. Matrix Properties and Concepts
Abstract
The title of this book, Matrix Iterative Analysis, suggests that we might consider here all matrix numerical methods which are iterative in nature. However, such an ambitious goal is in fact replaced by the more practical one where we seek to consider in some detail that smaller branch of numerical analysis concerned with the efficient solution, by means of iteration, of matrix equations arising from discrete approximations to partial differential equations. These matrix equations are generally characterized by the property that the associated square matrices are sparse, i.e., a large percentage of the entries of these matrices are zero. Furthermore, the nonzero entries of these matrices occur in some natural pattern, which, relative to a digital computer, permits even very large-order matrices to be efficiently stored. Cyclic iterative methods are ideally suited for such matrix equations, since each step requires relatively little digital computer storage or arithmetic computation. As an example of the magnitude of problems that have been successfully solved on digital computers by cyclic iterative methods, the Bettis Atomic Power Laboratory of the Westinghouse Electric Corporation had in daily use in 1960 a two-dimensional program which would treat, as a special case, Laplacian-type matrix equations of order 20,000.
Richard S. Varga
2. Nonnegative Matrices
Abstract
In investigations of the rapidity of convergence of various iterative methods, which is governed by the spectral radius ρ(A), we have thus far only the upper bounds for ρ(A) of Sect. 1.4, provided by extensions of Gerschgorin's Theorem 1.11. In this chapter, we shall look closely into the Perron-Frobenius theory of square matrices having nonnegative real numbers as entries. Not only will this theory provide us with both nontrivial upper and lower bounds for the spectral radius of this class of matrices, but the structure of this theory will be decidedly germane to our subsequent development of iterative methods.
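For a nonnegative matrix, the Perron-Frobenius theory yields in particular the elementary bracketing min_i Σ_j a_{i,j} ≤ ρ(A) ≤ max_i Σ_j a_{i,j}. A minimal numerical sketch of this bracketing (the matrix below is an arbitrary illustrative example, not one from the book):

import numpy as np

# Arbitrary nonnegative, irreducible example matrix (hypothetical data).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

row_sums = A.sum(axis=1)
rho = max(abs(np.linalg.eigvals(A)))   # spectral radius rho(A)

# Row sums of a nonnegative matrix bracket its spectral radius.
print(row_sums.min(), "<=", rho, "<=", row_sums.max())   # 3.0 <= 4.0 <= 5.0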
Richard S. Varga
3. Basic Iterative Methods and Comparison Theorems
Abstract
Let A = [a_{i,j}] be an n × n complex matrix, and let us seek the solution of the system of linear equations
$$ \sum_{j=1}^{n} a_{i,j}\, x_j = k_i, \qquad 1 \le i \le n, $$
(3.1)
which we write in matrix notation as
$$ A{\rm x} = {\rm k}, $$
(3.2)
where k is a given column vector in C^n. The solution vector x exists in C^n and is unique if and only if A is nonsingular, and this solution vector is given explicitly by
$$ {\rm x} = A^{ - 1} {\rm k}. $$
(3.3)
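The basic iterative methods developed in this chapter (point Jacobi, point Gauss-Seidel, and their variants) generate successive approximations to this solution without forming A^{-1} explicitly. Purely as an illustrative sketch (not the book's code or notation), a point Jacobi iteration in Python, assuming A has nonzero diagonal entries:

import numpy as np

def jacobi(A, k, x, iters=50):
    # Point Jacobi: x^(m+1) = D^(-1) ((D - A) x^(m) + k), with D = diag(A).
    D = np.diag(np.diag(A))
    for _ in range(iters):
        x = np.linalg.solve(D, (D - A) @ x + k)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # diagonally dominant example data
k = np.array([1.0, 2.0])
x = jacobi(A, k, np.zeros(2))            # approaches A^(-1) k as iters grows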
Richard S. Varga
4. Successive Overrelaxation Iterative Methods
Abstract
The point successive overrelaxation iterative method of Chap. 3 was simultaneously introduced by Frankel (1950) and Young (1950). Whereas Frankel considered the special case of the numerical solution of the Dirichlet problem for a rectangle and showed for this case that the point successive overrelaxation iterative method, with suitably chosen relaxation factor, gave substantially larger (by an order of magnitude) asymptotic rates of convergence than those for the point Jacobi and point Gauss-Seidel iterative methods, Young (1950) and Young (1954a) showed that these conclusions held more generally for matrices satisfying his definition of property A, and that these results could be rigorously applied to the iterative solution of matrix equations arising from discrete approximations to a large class of elliptic partial differential equations for general regions. Then, Arms, Gates, and Zondek (1956), with their definition of property A^π, generalized Young's results. In so doing, they enlarged the class of matrix equations to which the basic results of Young and Frankel on the successive overrelaxation iterative method could be rigorously applied.
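For orientation, one sweep of the point successive overrelaxation method with relaxation factor ω updates each component by overcorrecting the Gauss-Seidel value. The following Python fragment is a minimal sketch under that description, not the chapter's own notation:

import numpy as np

def sor_sweep(A, k, x, omega):
    # One point-SOR sweep; omega = 1 reduces to Gauss-Seidel.
    for i in range(len(k)):
        sigma = A[i, :] @ x - A[i, i] * x[i]      # uses already-updated entries
        gauss_seidel_value = (k[i] - sigma) / A[i, i]
        x[i] += omega * (gauss_seidel_value - x[i])
    return x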
Richard S. Varga
5. Semi-Iterative Methods
Abstract
In our previous investigations concerning the acceleration of basic iterative methods, we would, in the final analysis, consider the convergence properties and rates of convergence of an iterative procedure of the form
$$ {\rm x}^{(m+1)} = M{\rm x}^{(m)} + {\rm g}, \qquad m \ge 0, $$
(5.1)
where M is a fixed n × n matrix with ρ(M) < 1, and x^(0) is some given vector. Such an iterative procedure converges to the unique solution vector x of
$$ (I - M){\rm x} = {\rm g}. $$
(5.2)
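A minimal numerical check of this statement (M and g below are arbitrary illustrative data with ρ(M) < 1, not taken from the book):

import numpy as np

M = np.array([[0.5, 0.2],
              [0.1, 0.4]])                    # rho(M) = 0.6 < 1
g = np.array([1.0, 2.0])

x = np.zeros(2)
for m in range(100):
    x = M @ x + g                             # the iteration (5.1)

exact = np.linalg.solve(np.eye(2) - M, g)     # solution of (5.2)
print(np.allclose(x, exact))                  # True: the iterates converge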
Richard S. Varga
6. Derivation and Solution of Elliptic Difference Equations
Abstract
The basic aims in this chapter are to derive finite difference approximations for certain elliptic differential equations, to study the properties of the associated matrix equations, and to describe rigorous iterative methods for the solution of such matrix equations. In certain cases, the derived properties of the associated matrix equations are sufficiently powerful to allow us to prove rather easily (Theorem 6.2) that these finite difference approximations become more accurate with finer mesh spacings, but results in this area, especially in higher dimensions, require detailed developments which we omit. Rather, in this chapter we concentrate on useful methods for deriving finite difference approximations of boundary-value problems which apply to internal interfaces, which are of interest in reactor physics.
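As one concrete instance, the familiar five-point difference approximation to -Δu = f on a uniform square mesh with Dirichlet boundary values leads to a sparse matrix equation. A minimal Python sketch of assembling that matrix (an illustration only, with dense storage for brevity; it is not the chapter's derivation):

import numpy as np

def five_point_laplacian(n, h):
    # Matrix of the five-point approximation to -Laplace(u) on an
    # n-by-n grid of interior mesh points with spacing h (Dirichlet boundary).
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            p = i * n + j
            A[p, p] = 4.0 / h**2
            if i > 0:     A[p, p - n] = -1.0 / h**2
            if i < n - 1: A[p, p + n] = -1.0 / h**2
            if j > 0:     A[p, p - 1] = -1.0 / h**2
            if j < n - 1: A[p, p + 1] = -1.0 / h**2
    return A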
Richard S. Varga
7. Alternating-Direction Implicit Iterative Methods
Abstract
In the previous chapter, we considered various block iterative methods, such as the line and 2-line successive overrelaxation iterative methods, which as practical methods involved the direct (or implicit) solution of particular lower-order matrix equations. Also, as a standard for comparison of the asymptotic rates of convergence of these methods, we considered the numerical solution of the Dirichlet problem for the unit square with uniform mesh spacings, calling this the model problem. This knowledge of block methods, as well as representative asymptotic rates of convergence for the model problem for the iterative methods considered thus far, serves as the basis for the introduction of the alternating-direction implicit iterative methods, due to Peaceman and Rachford (1955) and Douglas and Rachford (1956).
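In the commonly quoted Peaceman-Rachford form (the splitting A = H_1 + V_1 and the equations below are only a sketch, not the chapter's own numbering), one double sweep with acceleration parameter r_{m+1} reads
$$ (H_1 + r_{m+1} I)\,{\rm x}^{(m+1/2)} = {\rm k} - (V_1 - r_{m+1} I)\,{\rm x}^{(m)}, $$
$$ (V_1 + r_{m+1} I)\,{\rm x}^{(m+1)} = {\rm k} - (H_1 - r_{m+1} I)\,{\rm x}^{(m+1/2)}, $$
so that each half-step involves only the implicit solution of lower-order (tridiagonal, for the model problem) systems.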
Richard S. Varga
8. Matrix Methods for Parabolic Partial Differential Equations
Abstract
Many of the problems of physics and engineering that require numerical approximations are special cases of the following second-order linear parabolic differential equation:
$$ \phi({\rm x})\,u_t({\rm x};t) = \sum_{i=1}^{n} \left( K_i({\rm x})\,u_{x_i} \right)_{x_i} + \sum_{i=1}^{n} G_i({\rm x})\,u_{x_i} - \sigma({\rm x})\,u({\rm x};t) + S({\rm x};t), \qquad {\rm x} \in R,\; t > 0, $$
(8.1)
where R is a given finite (connected) region in Euclidean n-dimensional space, with (external) boundary conditions
$$ \alpha({\rm x})\,u({\rm x};t) + \beta({\rm x})\,\frac{\partial u({\rm x};t)}{\partial n} = \gamma({\rm x}), \qquad {\rm x} \in \Gamma,\; t > 0, $$
(8.2)
where Γ is the external boundary of R. Characteristic of such problems is the additional initial condition
$$ u({\rm x};0) = g({\rm x}), \qquad {\rm x} \in R. $$
(8.3)
Richard S. Varga
9. Estimation of Acceleration Parameters
Abstract
Our investigations into solving systems of linear equations thus far have centered on the theoretical aspects of variants of the successive overrelaxation iterative method and the alternating-direction implicit methods. In all cases, the theoretical selection of associated optimum acceleration parameters was based on knowledge of certain key eigenvalues. Specifically, only the spectral radius ρ(B) of the block Jacobi matrix B is needed (Theorem 4.7) to determine the optimum relaxation factor
$$ \omega _b = \frac{2}{{1 + \sqrt {1 - \rho ^2 \left( B \right)} }} $$
(9.1)
when the successive overrelaxation iterative method is applied in cases where B is a consistently ordered weakly cyclic matrix (of index 2) with real eigenvalues. Similarly, when the block Jacobi matrix has the special form
$$ B = \left[ \begin{array}{cc} O & F \\ F^{*} & O \end{array} \right], $$
(9.2)
the cyclic Chebyshev semi-iterative method of Sect. 5.3 makes use of acceleration parameters ω_m defined in (5.24) by
$$ \omega_m = \frac{2\,C_{m-1}(1/\rho(B))}{\rho(B)\,C_m(1/\rho(B))}, \qquad m > 1, \qquad \omega_1 = 1, $$
(9.3)
where C_m(x) is the Chebyshev polynomial of degree m. Again, the determination of optimum parameters depends only on the knowledge of ρ(B). The situation regarding variants of the alternating-direction implicit methods is not essentially different. In the commutative case H_1V_1 = V_1H_1 of Sect. 7.2, only the knowledge of the bounds α and β, in (7.37),
$$ 0 < \alpha \le \sigma_j, \tau_j \le \beta, \qquad 1 \le j \le n, $$
of the eigenvalues σ_j and τ_j of H_1 and V_1 was essential to the theoretical problem of selecting optimum acceleration parameters \( \tilde r_j \). However, to dismiss the practical problem of estimating key parameters, because in actual computations one needs but one or two eigenvalue estimates, would be overly optimistic. Those who have had even limited experience in solving such matrix problems by iterative methods have learned (often painfully) that small changes in the estimates of these key eigenvalues can sometimes drastically affect rates of convergence. In this chapter, we shall show that further use can be made of the Perron-Frobenius theory of nonnegative matrices in estimating these key eigenvalues.
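As a tiny numerical illustration of (9.1), and of how sensitive ω_b is to the estimate of ρ(B) (the values of ρ(B) below are arbitrary, chosen only for illustration):

import numpy as np

def optimal_omega(rho_B):
    # Optimum relaxation factor (9.1) for a consistently ordered weakly
    # cyclic Jacobi matrix B (of index 2) with real eigenvalues, rho(B) < 1.
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_B**2))

print(optimal_omega(0.99), optimal_omega(0.999))   # approx 1.753 and 1.914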
Richard S. Varga
Backmatter
Metadata
Title
Matrix Iterative Analysis
Author
Richard S. Varga
Copyright Year
2000
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-05156-2
Print ISBN
978-3-642-05154-8
DOI
https://doi.org/10.1007/978-3-642-05156-2