
1997 | Book

Matrix Analysis

Author: Rajendra Bhatia

Publisher: Springer New York

Book Series: Graduate Texts in Mathematics


About this book

A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis.

A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to acquire hard tools and then learn how to use them delicately.

The reader is expected to be very thoroughly familiar with basic linear algebra. The standard texts Finite-Dimensional Vector Spaces by P.R.

Table of Contents

Frontmatter
I. A Review of Linear Algebra
Abstract
In this chapter we review, at a brisk pace, the basic concepts of linear and multilinear algebra. Most of the material will be familiar to a reader who has had a standard Linear Algebra course, so it is presented quickly with no proofs. Some topics, like tensor products, might be less familiar. These are treated here in somewhat greater detail. A few of the topics are quite advanced and their presentation is new.
Rajendra Bhatia
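One of the less familiar topics mentioned above, the tensor product, has a concrete computational face: the eigenvalues of A ⊗ B are exactly the products of the eigenvalues of A and B. A minimal NumPy sketch checking this on random Hermitian matrices (all names and data here are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two small random Hermitian (real symmetric) matrices.
A = rng.standard_normal((3, 3)); A = (A + A.T) / 2
B = rng.standard_normal((2, 2)); B = (B + B.T) / 2

# The eigenvalues of the tensor (Kronecker) product A (x) B are all
# products lambda_i(A) * mu_j(B).
lam = np.linalg.eigvalsh(A)
mu = np.linalg.eigvalsh(B)
products = np.sort(np.outer(lam, mu).ravel())
kron_eigs = np.sort(np.linalg.eigvalsh(np.kron(A, B)))

assert np.allclose(products, kron_eigs)
```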
II. Majorisation and Doubly Stochastic Matrices
Abstract
Comparison of two vector quantities often leads to interesting inequalities that can be expressed succinctly as “majorisation” relations. There is an intimate relation between majorisation and doubly stochastic matrices. These topics are studied in detail here. We place special emphasis on majorisation relations between the eigenvalue n -tuples of two matrices. This will be a recurrent theme in the book.
Rajendra Bhatia
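The intimate relation mentioned in the abstract includes the basic fact that for any doubly stochastic matrix D, the vector Dx is majorised by x. A small NumPy sketch of this check, building D as a convex combination of permutation matrices (random data; the helper `majorises` is a hypothetical name):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A doubly stochastic matrix as a convex combination of permutation
# matrices (by Birkhoff's theorem, every doubly stochastic matrix
# arises this way).
perms = [np.eye(n)[rng.permutation(n)] for _ in range(5)]
w = rng.random(5); w /= w.sum()
D = sum(wi * P for wi, P in zip(w, perms))

def majorises(x, y, tol=1e-12):
    """True if x majorises y: equal totals, and the decreasing
    partial sums of x dominate those of y."""
    xs = np.sort(x)[::-1]; ys = np.sort(y)[::-1]
    return (abs(xs.sum() - ys.sum()) < tol
            and np.all(np.cumsum(ys) <= np.cumsum(xs) + tol))

x = rng.standard_normal(n)
assert majorises(x, D @ x)   # D x is majorised by x
```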
III. Variational Principles for Eigenvalues
Abstract
In this chapter we will study inequalities that are used for localising the spectrum of a Hermitian operator. Such results are motivated by several interrelated considerations. It is not always easy to calculate the eigenvalues of an operator. However, in many scientific problems it is enough to know that the eigenvalues lie in some specified intervals. Such information is provided by the inequalities derived here. While the functional dependence of the eigenvalues on an operator is quite complicated, several interesting relationships between the eigenvalues of two operators A, B and those of their sum A + B are known. These relations are consequences of variational principles. When the operator B is small in comparison to A, then A + B is considered as a perturbation of A or an approximation to A. The inequalities of this chapter then lead to perturbation bounds or error bounds.
Rajendra Bhatia
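One concrete consequence of the variational principles described above is that each eigenvalue of A + B is squeezed between λ_j(A) + λ_min(B) and λ_j(A) + λ_max(B). A minimal NumPy sketch verifying this on random Hermitian matrices (hypothetical example data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

# eigvalsh returns eigenvalues in increasing order; flip to decreasing.
lam = lambda M: np.linalg.eigvalsh(M)[::-1]
a, b, s = lam(A), lam(B), lam(A + B)

# Consequences of the minimax principle: each eigenvalue of A + B lies
# between lambda_j(A) + lambda_min(B) and lambda_j(A) + lambda_max(B).
tol = 1e-10
assert np.all(s <= a + b[0] + tol)    # b[0]  = largest eigenvalue of B
assert np.all(s >= a + b[-1] - tol)   # b[-1] = smallest eigenvalue of B
```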
IV. Symmetric Norms
Abstract
In this chapter we study norms on the space of matrices that are invariant under multiplication by unitaries. Their properties are closely linked to those of symmetric gauge functions on ℝn. We also study norms that are invariant under unitary conjugations. Some of the inequalities proved in earlier chapters lead to inequalities involving these norms.
Rajendra Bhatia
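The unitary invariance described above is easy to observe numerically: a norm that depends only on singular values (such as a Schatten p-norm) is unchanged under A → UAV for unitary U, V. A small NumPy sketch, using random orthogonal matrices from a QR factorisation (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))

# Random orthogonal (hence unitary) matrices via QR factorisation.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))

def schatten(M, p):
    """Schatten p-norm: the l_p norm of the singular values of M."""
    return np.linalg.norm(np.linalg.svd(M, compute_uv=False), p)

# Invariance under multiplication by unitaries, for several p.
for p in (1, 2, np.inf):
    assert np.isclose(schatten(U @ A @ V, p), schatten(A, p))
```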
V. Operator Monotone and Operator Convex Functions
Abstract
In this chapter we study an important and useful class of functions called operator monotone functions. These are real functions whose extensions to Hermitian matrices preserve order. Such functions have several special properties, some of which are studied in this chapter. They are closely related to properties of operator convex functions. We shall study both of these together.
Rajendra Bhatia
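A standard example of an operator monotone function is t → √t: if A ≤ B in the positive semidefinite (Loewner) order, then √A ≤ √B. A minimal NumPy sketch checking this on random positive semidefinite matrices (the helpers `psd` and `sqrtm_psd` are hypothetical names, not from the book):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4

def psd(rng, n):
    """A random positive semidefinite matrix."""
    X = rng.standard_normal((n, n))
    return X @ X.T

def sqrtm_psd(M):
    """Matrix square root of a positive semidefinite matrix via its
    spectral decomposition."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(np.clip(w, 0, None))) @ Q.T

A = psd(rng, n)
B = A + psd(rng, n)   # A <= B in the Loewner order

# t -> sqrt(t) is operator monotone: A <= B implies sqrt(A) <= sqrt(B),
# i.e. sqrt(B) - sqrt(A) is positive semidefinite.
gap = sqrtm_psd(B) - sqrtm_psd(A)
assert np.linalg.eigvalsh(gap).min() >= -1e-10
```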
VI. Spectral Variation of Normal Matrices
Abstract
Let A be an n × n Hermitian matrix, and let λ↓1(A) ≥ λ↓2(A) ≥ … ≥ λ↓n(A) be the eigenvalues of A arranged in decreasing order. In Chapter III we saw that λ↓j(A), 1 ≤ j ≤ n, are continuous functions on the space of Hermitian matrices. This is a very special consequence of Weyl’s Perturbation Theorem: if A, B are two Hermitian matrices, then

max_{1 ≤ j ≤ n} |λ↓j(A) − λ↓j(B)| ≤ ‖A − B‖.

In turn, this inequality is a special case of the inequality (IV.62), which says that if Eig↓(A) denotes the diagonal matrix with entries λ↓j(A) down its diagonal, then we have for all Hermitian matrices A, B and for all unitarily invariant norms

||| Eig↓(A) − Eig↓(B) ||| ≤ ||| A − B |||.
Rajendra Bhatia
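Weyl's Perturbation Theorem mentioned above is easy to verify numerically: matching the eigenvalues of two Hermitian matrices in decreasing order, the largest displacement is bounded by the operator norm of the difference. A minimal NumPy sketch (random Hermitian matrices, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
E = rng.standard_normal((n, n)); E = (E + E.T) / 2
B = A + 0.1 * E   # a Hermitian perturbation of A

# Eigenvalues in decreasing order (eigvalsh returns increasing order).
lamA = np.linalg.eigvalsh(A)[::-1]
lamB = np.linalg.eigvalsh(B)[::-1]

# Weyl's Perturbation Theorem: max_j |lam_j(A) - lam_j(B)| <= ||A - B||,
# where ||.|| is the operator (spectral) norm.
assert np.max(np.abs(lamA - lamB)) <= np.linalg.norm(A - B, 2) + 1e-10
```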
VII. Perturbation of Spectral Subspaces of Normal Matrices
Abstract
In Chapter VI we saw that the eigenvalues of a (normal) matrix change continuously with the matrix. The behaviour of eigenvectors is more complicated. The following simple example is instructive. Let

A(ε) = [[1+ε, 0], [0, 1−ε]],   B(ε) = [[1, ε], [ε, 1]].

The eigenvalues of A are 1+ε and 1−ε. The same is true for B. The corresponding normalised eigenvectors are (1,0) and (0,1) for A, and (1,1)/√2 and (1,−1)/√2 for B. As ε → 0, B and A approach each other, but their eigenvectors remain stubbornly apart. Note, however, that the eigenspaces that these two eigenvectors of A and B span are identical. In this chapter we will see that interesting and useful perturbation bounds may be obtained for eigenspaces corresponding to closely bunched eigenvalues of normal matrices.
Rajendra Bhatia
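An example of the kind described above can be reproduced in a few lines: two nearby Hermitian matrices with identical spectra whose eigenvectors stay far apart as ε → 0. A minimal NumPy sketch, with A = diag(1+ε, 1−ε) and B = [[1, ε], [ε, 1]] (a standard illustrative pair, stated here as an assumption):

```python
import numpy as np

eps = 1e-3
A = np.array([[1 + eps, 0.0], [0.0, 1 - eps]])
B = np.array([[1.0, eps], [eps, 1.0]])

# Both matrices have eigenvalues 1 + eps and 1 - eps ...
assert np.allclose(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B))

# ... but the eigenvectors differ sharply: e1, e2 for A versus
# (1,1)/sqrt(2), (1,-1)/sqrt(2) for B, no matter how small eps is.
_, QB = np.linalg.eigh(B)
v = QB[:, 1] * np.sign(QB[0, 1])   # eigenvector for the larger eigenvalue
assert np.allclose(v, np.ones(2) / np.sqrt(2))
```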
VIII. Spectral Variation of Nonnormal Matrices
Abstract
In Chapter VI we saw that if A and B are both Hermitian or both unitary, then the optimal matching distance d(σ(A), σ(B)) is bounded by ||A − B||. We also saw that for arbitrary normal matrices A, B this need not always be true (Example VI.3.13). However, in this case, we do have a slightly weaker inequality d(σ(A), σ(B)) ≤ 3||A − B|| (Theorem VII.4.1). If one of the matrices A, B is Hermitian and the other is arbitrary, then we can only have an inequality of the form d(σ(A), σ(B)) ≤ c(n)||A − B||, where c(n) is a constant that grows like log n (Problems VI.8.8 and VI.8.9).
Rajendra Bhatia
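The optimal matching distance d(σ(A), σ(B)) is the smallest, over all pairings of the two spectra, of the largest distance within a pair. For small n it can be computed by brute force over permutations; a NumPy sketch (this brute-force helper is my own illustration, not a method from the book) checking the Hermitian bound d(σ(A), σ(B)) ≤ ||A − B||:

```python
import itertools
import numpy as np

def matching_distance(sa, sb):
    """Optimal matching distance between two spectra, by brute force
    over all permutations (fine for small n)."""
    sa, sb = np.asarray(sa), np.asarray(sb)
    return min(np.max(np.abs(sa - sb[list(p)]))
               for p in itertools.permutations(range(len(sb))))

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
E = rng.standard_normal((n, n)); E = (E + E.T) / 2
B = A + 0.1 * E   # a Hermitian perturbation of A

# For Hermitian A, B the optimal matching distance is bounded by ||A - B||.
d = matching_distance(np.linalg.eigvalsh(A), np.linalg.eigvalsh(B))
assert d <= np.linalg.norm(A - B, 2) + 1e-10
```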
IX. A Selection of Matrix Inequalities
Abstract
In this chapter we will prove several inequalities for matrices. From the vast collection of such inequalities, we have selected a few that are simple and widely useful. Though they are of different kinds, their proofs have common ingredients already familiar to us from earlier chapters.
Rajendra Bhatia
X. Perturbation of Matrix Functions
Abstract
In earlier chapters we derived several inequalities that describe the variation of eigenvalues, eigenvectors, determinants, permanents, and tensor powers of a matrix. Similar problems for some other matrix functions are studied in this chapter.
Rajendra Bhatia
Backmatter
Metadata
Title
Matrix Analysis
Author
Rajendra Bhatia
Copyright Year
1997
Publisher
Springer New York
Electronic ISBN
978-1-4612-0653-8
Print ISBN
978-1-4612-6857-4
DOI
https://doi.org/10.1007/978-1-4612-0653-8