
2008 | Book

Numerical Linear Algebra

Authors: Grégoire Allaire, Sidi Mahmoud Kaber

Publisher: Springer New York

Book series: Texts in Applied Mathematics


Table of Contents

Frontmatter
1. Introduction
As we said in the preface, linear algebra is everywhere in numerical simulations, often well hidden from the average user, but always crucial in terms of performance and efficiency. Almost all numerical computations in physics, mechanics, chemistry, engineering, economics, finance, etc., involve numerical linear algebra, i.e., computations involving matrices. The purpose of this introduction is to give a few examples of applications of the two main types of matrix computations: solving linear systems of equations on the one hand, and computing eigenvalues and eigenvectors on the other hand. The following examples serve as motivation for the main notions, methods, and algorithms discussed in this book.
2. Definition and Properties of Matrices
Throughout this book we consider matrices with real or complex entries. Most of the results are valid for both real and complex matrices (but not all of them!). In order to avoid tedious repetition, we denote by 𝕂 a field that is either the field of all real numbers ℝ or the field of all complex numbers ℂ, i.e., 𝕂 = ℝ or ℂ.
3. Matrix Norms, Sequences, and Series
We recall the definition of a norm on the vector space 𝕂ⁿ (with 𝕂 = ℝ or ℂ).
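A small illustrative example (assumed for this summary, not taken from the book) of the usual norms on 𝕂ⁿ recalled in this chapter: the 1-, 2-, and ∞-norms of a real vector.

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])
print(np.linalg.norm(x, 1))       # ‖x‖₁ = |3| + |−4| + |0| = 7
print(np.linalg.norm(x, 2))       # ‖x‖₂ = sqrt(9 + 16) = 5
print(np.linalg.norm(x, np.inf))  # ‖x‖∞ = max|xᵢ| = 4
```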
4. Introduction to Algorithmics
This chapter is somewhat unusual in comparison to the other chapters of this course. Indeed, it contains no theorems, but rather notions that are at the crossroads of mathematics and computer science. However, the reader should note that this chapter is essential from the standpoint of applications, and for the understanding of the methods introduced in this course. For more details on the fundamental notions of algorithmics, the reader can consult [1].
5. Linear Systems
We call a linear system the problem of finding the (possibly nonunique) solution x ∈ 𝕂ᵖ, if any, of the algebraic equation Ax = b.
6. Direct Methods for Linear Systems
This chapter is devoted to the solution of systems of linear equations of the form Ax = b (6.1), where A is a nonsingular square matrix with real entries, b is a vector called the “right-hand side,” and x is the unknown vector. For simplicity, we invariably assume that A ∈ ℳn(ℝ) and b ∈ ℝⁿ. We call a method that allows for computing the solution x within a finite number of operations (in exact arithmetic) a direct method for solving the linear system Ax = b. In this chapter, we shall study some direct methods that are much more efficient than the Cramer formulas in Chapter 5. The first method is the celebrated Gaussian elimination method, which reduces any linear system to a triangular one. The other methods rely on the factorization of the matrix A as a product of two matrices A = BC. The solution of the system Ax = b is then replaced by the solution of two easily invertible systems (the matrices B and C are triangular or orthogonal) By = b, and Cx = y.
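A hedged sketch (not the book's own programs) of the factorization idea described above: factor A = BC with easily invertible factors, then solve the two triangular systems By = b and Cx = y. Here B = L and C = U come from an LU factorization with partial pivoting via SciPy; the matrix and right-hand side are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 3.0, 0.0],
              [3.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([24.0, 30.0, -24.0])

P, L, U = lu(A)                                # A = P @ L @ U
y = solve_triangular(L, P.T @ b, lower=True)   # forward substitution: L y = Pᵀ b
x = solve_triangular(U, y, lower=False)        # back substitution:    U x = y

print(x, np.allclose(A @ x, b))                # the computed x solves A x = b
```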
7. Least Squares Problems
The origin of the least squares data-fitting problem is the need for a notion of “generalized solution” for a linear system Ax = b that has no solution in the classical sense (that is, b does not belong to the range of A). The idea is then to look for a vector x such that Ax is “the closest possible” to b. Several norms are at hand to measure the distance between Ax and b, but the simplest choice (which corresponds to the denomination “least squares”) is the Euclidean vector norm. In other words, a least squares problem amounts to finding a (possibly nonunique) vector x ∈ ℝᵖ that minimizes the Euclidean norm ‖Ax − b‖₂.
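A minimal illustration of this minimization problem, with assumed example data (not taken from the book): fitting a line through four points gives an overdetermined system whose right-hand side is not in the range of A, and NumPy's least squares solver returns the minimizer of ‖Ax − b‖₂.

```python
import numpy as np

# Overdetermined system: fit y ≈ c0 + c1 * t through four data points.
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2, 3.9])
A = np.column_stack([np.ones_like(t), t])    # b is not in the range of A

x, residual, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)           # least squares coefficients c0, c1
print(residual)    # squared residual norm ‖Ax − b‖₂²
```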
8. Simple Iterative Methods
This chapter is devoted to solving the linear system Ax = b by simple iterative methods.
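A hedged sketch of one simple iterative method of the kind studied in this chapter, the Jacobi iteration; the matrix, tolerance, and iteration cap below are illustrative assumptions, not the book's program.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, maxiter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k), with D = diag(A)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    D = np.diag(A)
    R = A - np.diagflat(D)               # off-diagonal part of A
    for _ in range(maxiter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])         # strictly diagonally dominant, so Jacobi converges
b = np.array([15.0, 10.0, 10.0])
print(jacobi(A, b), np.linalg.solve(A, b))
```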
9. Conjugate Gradient Method
From a practical viewpoint, all iterative methods considered in the previous chapter have been supplanted by the conjugate gradient method, which is actually a direct method used as an iterative one. For simplicity, we restrict ourselves, throughout this chapter, to real symmetric matrices. The case of complex self-adjoint matrices is hardly more difficult; that of non-self-adjoint matrices, however, is more delicate to handle.
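A compact conjugate gradient sketch for a real symmetric positive definite matrix, in the spirit of this chapter; this is an illustrative implementation under those assumptions, not the book's own code.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Conjugate gradient for symmetric positive definite A."""
    n = len(b)
    maxiter = n if maxiter is None else maxiter
    x = np.zeros(n)
    r = b - A @ x                  # residual
    p = r.copy()                   # first search direction
    rs_old = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b), np.linalg.solve(A, b))
```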
10. Methods for Computing Eigenvalues
For a square matrix A ∈ ℳn(ℂ), we consider the problem of finding the solutions λ ∈ ℂ and nonzero x ∈ ℂⁿ of the algebraic equation Ax = λx.
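A hedged sketch of the power method, one eigenvalue algorithm of the kind treated in this chapter; the test matrix and stopping criterion are illustrative assumptions rather than the book's program.

```python
import numpy as np

def power_method(A, tol=1e-10, maxiter=1000):
    """Approximate the dominant eigenvalue/eigenvector pair of A."""
    n = A.shape[0]
    x = np.random.default_rng(0).standard_normal(n)
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(maxiter):
        y = A @ x
        lam_new = x @ y               # Rayleigh quotient estimate (‖x‖₂ = 1)
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam, np.linalg.eigvalsh(A))     # lam should match the largest eigenvalue
```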
11. Solutions and Programs
Backmatter
Metadata
Title
Numerical Linear Algebra
Authors
Grégoire Allaire
Sidi Mahmoud Kaber
Copyright Year
2008
Publisher
Springer New York
Electronic ISBN
978-0-387-68918-0
Print ISBN
978-0-387-34159-0
DOI
https://doi.org/10.1007/978-0-387-68918-0
