
Numerical linear algebra is one of the most important subjects in the field of statistical computing. Statistical methods in many areas of application require computations with vectors and matrices. This book describes accurate and efficient computer algorithms for factoring matrices, solving linear systems of equations, and extracting eigenvalues and eigenvectors. Although the book is not tied to any particular software system, it describes and gives examples of the use of modern computer software for numerical linear algebra.

An understanding of numerical linear algebra requires basic knowledge both of linear algebra and of how numerical data are stored and manipulated in the computer. The book begins with a discussion of the basics of numerical computations, and then describes the relevant properties of matrix inverses, matrix factorizations, matrix and vector norms, and other topics in linear algebra; hence, the book is essentially self-contained. The topics addressed in this book constitute the most important material for an introductory course in statistical computing, and should be covered in every such course. The book includes exercises and can be used as a text for a first course in statistical computing or as a supplementary text for various courses that emphasize computations.

James Gentle is University Professor of Computational Statistics at George Mason University. During a thirteen-year hiatus from academic work before joining George Mason, he was director of research and design at the world's largest independent producer of Fortran and C general-purpose scientific software libraries. These libraries implement many algorithms for numerical linear algebra. He is a Fellow of the American Statistical Association and member of the International Statistical Institute. He has held several national

### Chapter 1. Computer Storage and Manipulation of Data

Abstract
The computer is a tool for a variety of applications. The statistical applications include storage, manipulation, and presentation of data. The data may be numbers, text, or images. For each type of data, there are several ways of coding that can be used to store the data, and specific ways the data may be manipulated.
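The abstract's point about coding of data can be made concrete for floating-point numbers. The following is a minimal sketch (my illustration, not the book's, using only Python's standard library) showing that the decimal value 0.1 is stored as a 64-bit IEEE 754 approximation, not exactly:

```python
import struct

# Pack the decimal value 0.1 into its 64-bit IEEE 754 representation
# and display the stored bit pattern.  Because 0.1 has no finite
# binary expansion, the stored value is only an approximation.
bits = struct.unpack("<Q", struct.pack("<d", 0.1))[0]
print(f"{bits:064b}")   # sign bit | 11-bit exponent | 52-bit fraction
print(f"{0.1:.20f}")    # reveals the rounding: 0.10000000000000000555
```

This kind of representation error is the starting point for the error analyses that run through the rest of the book.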
James E. Gentle

### Chapter 2. Basic Vector/Matrix Computations

Abstract
Vectors and matrices are useful in representing multivariate data, and they occur naturally in working with linear equations or when expressing linear relationships among objects. Numerical algorithms for a variety of tasks involve matrix and vector arithmetic. An optimization algorithm to find the minimum of a function, for example, may use a vector of approximate first derivatives and a matrix of second derivatives; and a method to solve a differential equation may use a matrix with a few diagonals for computing differences. There are various precise ways of defining vectors and matrices, but we will think of them merely as arrays of numbers, or scalars, on which an algebra is defined.
James E. Gentle

### Chapter 3. Solution of Linear Systems

Abstract
One of the most common problems in numerical computing is to solve the linear system $Ax = b$, that is, for given $A$ and $b$, to find $x$ such that the equation holds. The system is said to be consistent if there exists such an $x$, and in that case a solution $x$ may be written as $A^{-}b$, where $A^{-}$ is some generalized inverse of $A$. If $A$ is square and of full rank, we can write the solution as $A^{-1}b$.
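For the square, full-rank case, a minimal sketch of one direct method (Gaussian elimination with partial pivoting, written here in plain Python as my illustration, with no error handling) looks like this:

```python
def solve(A, b):
    """Solve Ax = b for square full-rank A by Gaussian elimination
    with partial pivoting (a minimal sketch, no singularity checks)."""
    n = len(A)
    # work on an augmented copy [A | b] so the inputs are not modified
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring up the row with the largest pivot
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    # back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))  # x = [0.8, 1.4]
```

The pivoting step is what keeps the elimination numerically stable; without it, a small pivot element can amplify rounding error catastrophically.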
James E. Gentle

### Chapter 4. Computation of Eigenvectors and Eigenvalues and the Singular Value Decomposition

Abstract
Before we discuss methods for computing eigenvalues, we mention an interesting observation. Consider the polynomial $f(\lambda)$,
$$\lambda^{p} + a_{p-1}\lambda^{p-1} + \cdots + a_{1}\lambda + a_{0}.$$
Now form the matrix, A,
$$\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_{0} & -a_{1} & -a_{2} & \cdots & -a_{p-1} \end{bmatrix}$$
The matrix A is called the companion matrix of the polynomial f. It is easy to see that the characteristic equation of A, equation (2.11) on page 68, is the polynomial f(λ):
$$\det (A - \lambda I) = f(\lambda )$$
Thus, given a general polynomial f, we can form a matrix A whose eigenvalues are the roots of the polynomial. It is a well-known fact in the theory of equations that there is no general formula for the roots of a polynomial of degree greater than 4. This means that we cannot expect to have a direct method for calculating eigenvalues; rather, we will have to use an iterative method.
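A quick numerical check of the companion-matrix observation (my sketch in plain Python, not code from the book): the roots of $f(\lambda) = \lambda^2 - 4\lambda + 3$ are 1 and 3, and power iteration, one of the simplest iterative eigenvalue methods, recovers the dominant eigenvalue 3 from the companion matrix.

```python
def companion(coeffs):
    """Companion matrix of lambda^p + a_{p-1} lambda^{p-1} + ... + a_0,
    where coeffs = [a_0, ..., a_{p-1}]."""
    p = len(coeffs)
    A = [[1.0 if j == i + 1 else 0.0 for j in range(p)] for i in range(p - 1)]
    A.append([-a for a in coeffs])  # last row holds the negated coefficients
    return A

def power_iteration(A, iters=50):
    """Rayleigh-quotient estimate of the dominant eigenvalue."""
    x = [1.0] + [0.0] * (len(A) - 1)
    for _ in range(iters):
        y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
    y = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return sum(yi * xi for yi, xi in zip(y, x)) / sum(xi * xi for xi in x)

A = companion([3.0, -4.0])    # f(lambda) = lambda^2 - 4*lambda + 3
print(power_iteration(A))     # converges to 3.0, the largest root
```

This is also how some numerical libraries compute polynomial roots in practice: they form the companion matrix and apply an iterative eigenvalue algorithm to it.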
James E. Gentle

### Chapter 5. Software for Numerical Linear Algebra

Abstract
Because of the importance of linear algebraic computations, there is a wide range of software for these computations. The Guide to Available Mathematical Software (GAMS) (see the bibliography) is a good source of information about software.
James E. Gentle

### Chapter 6. Applications in Statistics

Abstract
One of the most common structures for statistical datasets is a two-dimensional array. A matrix is often a convenient object for representing numeric data structured this way; the variables in the dataset generally correspond to the columns, and the observations correspond to the rows. If the data are in the matrix $X$, a useful statistic is the sums of squares and cross-products matrix, $X^{T}X$, or the "adjusted" sums of squares and cross-products matrix, $X_{a}^{T}X_{a}$, where $X_{a}$ is the matrix formed by subtracting from each element of $X$ the mean of the column containing that element. The matrix $\frac{1}{n-1}X_{a}^{T}X_{a}$, where $n$ is the number of observations (the number of rows in $X$), is the sample variance-covariance matrix. This matrix is nonnegative definite (see Exercise 6.1a, page 176). Estimates of the variance-covariance matrix or the correlation matrix of the underlying distribution may not be positive definite, however, and in Exercise 6.1d we describe a possible way of adjusting a matrix to be positive definite.
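The computation the abstract describes can be sketched directly (my illustration in plain Python, assuming the conventional $n-1$ divisor for the sample variance-covariance matrix):

```python
def var_cov(X):
    """Sample variance-covariance matrix (1/(n-1)) Xa^T Xa, where Xa
    is X with each column centered about its mean."""
    n, m = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(m)]
    Xa = [[row[j] - means[j] for j in range(m)] for row in X]
    # (j, k) entry is the inner product of centered columns j and k
    return [[sum(Xa[i][j] * Xa[i][k] for i in range(n)) / (n - 1)
             for k in range(m)] for j in range(m)]

X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]   # second column = 2 * first
print(var_cov(X))                          # [[1.0, 2.0], [2.0, 4.0]]
```

Because the second column is an exact multiple of the first, the resulting matrix here is singular, and hence nonnegative definite but not positive definite: exactly the situation the abstract warns about for estimated variance-covariance matrices.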
James E. Gentle