
1987 | Book | 3rd edition

Linear Algebra

Author: Serge Lang

Publisher: Springer New York

Book series: Undergraduate Texts in Mathematics

About this book

Linear Algebra is intended for a one-term course at the junior or senior level. It begins with an exposition of the basic theory of vector spaces and proceeds to explain the fundamental structure theorems for linear maps, including eigenvectors and eigenvalues, quadratic and hermitian forms, diagonalization of symmetric, hermitian, and unitary linear maps and matrices, triangulation, and Jordan canonical form. The book also includes a useful chapter on convex sets and the finite-dimensional Krein-Milman theorem. The presentation is aimed at the student who has already had some exposure to the elementary theory of matrices, determinants, and linear maps. However, the book is logically self-contained. In this new edition, many parts of the book have been rewritten and reorganized, and new exercises have been added.

Table of contents

Frontmatter
Chapter I. Vector Spaces
Abstract
As usual, a collection of objects will be called a set. A member of the collection is also called an element of the set. It is useful in practice to use short symbols to denote certain sets. For instance, we denote by $\mathbf{R}$ the set of all real numbers, and by $\mathbf{C}$ the set of all complex numbers. To say that “x is a real number” or that “x is an element of $\mathbf{R}$” amounts to the same thing. The set of all n-tuples of real numbers will be denoted by $\mathbf{R}^n$. Thus “X is an element of $\mathbf{R}^n$” and “X is an n-tuple of real numbers” mean the same thing. A review of the definition of $\mathbf{C}$ and its properties is given in an Appendix.
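For instance, the triple $(1, -2, 5)$ is an element of $\mathbf{R}^3$, i.e. a 3-tuple of real numbers, while $\sqrt{2}$ is an element of $\mathbf{R}$.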
Serge Lang
Chapter II. Matrices
Abstract
We consider a new kind of object, matrices. Let K be a field. Let n, m be two integers ≧ 1. An array of numbers in K
$$\left( \begin{array}{ccccc} a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn} \end{array} \right)$$
is called a matrix in K. We can abbreviate the notation for this matrix by writing it $(a_{ij})$, $i = 1, \ldots, m$ and $j = 1, \ldots, n$. We say that it is an m by n matrix, or an $m \times n$ matrix. The matrix has m rows and n columns.
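To illustrate the notation with a concrete instance:
$$A = \left( \begin{array}{ccc} 1 & -1 & 2 \\ 0 & 3 & 5 \end{array} \right)$$
is a $2 \times 3$ matrix, with 2 rows and 3 columns; here, for example, $a_{23} = 5$.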
Serge Lang
Chapter III. Linear Mappings
Abstract
We shall define the general notion of a mapping, which generalizes the notion of a function. Among mappings, the linear mappings are the most important. A good deal of mathematics is devoted to reducing questions concerning arbitrary mappings to linear mappings. For one thing, they are interesting in themselves, and many mappings are linear. On the other hand, it is often possible to approximate an arbitrary mapping by a linear one, whose study is much easier than the study of the original mapping. This is done in the calculus of several variables.
Serge Lang
Chapter IV. Linear Maps and Matrices
Abstract
Let
$$A = \left( \begin{array}{ccc} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{array} \right)$$
be an m × n matrix. We can then associate with A a map
$${L_A}:{K^n} \to {K^m}$$
by letting
$${L_A}(X) = AX$$
for every column vector X in $K^n$. Thus $L_A$ is defined by the association $X \mapsto AX$, the product being the product of matrices. That $L_A$ is linear is simply a special case of Theorem 3.1, Chapter II, namely the theorem concerning properties of multiplication of matrices. Indeed, we have
$$A(X + Y) = AX + AY \quad \text{and} \quad A(cX) = cAX$$
for all vectors X, Y in $K^n$ and all numbers c. We call $L_A$ the linear map associated with the matrix A.
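A small worked instance of this association: take
$$A = \left( \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array} \right), \qquad X = \left( \begin{array}{c} 1 \\ 1 \end{array} \right).$$
Then
$$L_A(X) = AX = \left( \begin{array}{c} 3 \\ 7 \end{array} \right),$$
so $L_A$ is the map of $K^2$ into itself sending $(x, y)$ to $(x + 2y,\; 3x + 4y)$.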
Serge Lang
Chapter V. Scalar Products and Orthogonality
Abstract
Let V be a vector space over a field K. A scalar product on V is an association which to any pair of elements v, w of V associates a scalar, denoted by $\langle v, w \rangle$, or also $v \cdot w$, satisfying the following properties:

SP 1. We have $\langle v, w \rangle = \langle w, v \rangle$ for all $v, w \in V$.

SP 2. If u, v, w are elements of V, then
$$\langle u, v + w \rangle = \langle u, v \rangle + \langle u, w \rangle.$$

SP 3. If $x \in K$, then
$$\langle xu, v \rangle = x\langle u, v \rangle \quad \text{and} \quad \langle u, xv \rangle = x\langle u, v \rangle.$$
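The standard example is the dot product on $\mathbf{R}^n$: for $X = (x_1, \ldots, x_n)$ and $Y = (y_1, \ldots, y_n)$,
$$\langle X, Y \rangle = x_1 y_1 + \cdots + x_n y_n,$$
which visibly satisfies SP 1 through SP 3.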
Serge Lang
Chapter VI. Determinants
Abstract
We have worked with vectors for some time, and we have often felt the need of a method to determine when vectors are linearly independent. Up to now, the only method available to us was to solve a system of linear equations by the elimination method. In this chapter, we shall exhibit a very efficient computational method to solve linear equations, and determine when vectors are linearly independent.
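In the simplest case the criterion reads: the vectors $(a, b)$ and $(c, d)$ in $\mathbf{R}^2$ are linearly independent if and only if
$$\det \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) = ad - bc \neq 0.$$
Thus $(1, 2)$ and $(3, 6)$ are linearly dependent, since $1 \cdot 6 - 2 \cdot 3 = 0$.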
Serge Lang
Chapter VII. Symmetric, Hermitian, and Unitary Operators
Abstract
Let V be a finite dimensional vector space over the real or complex numbers, with a positive definite scalar product. Let
$$A:V \to V$$
be a linear map. We shall study three important special cases of such maps, named in the title of this chapter. Such maps are also represented by matrices bearing the same names when a basis of V has been chosen.
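In matrix terms (a standard characterization, with respect to an orthonormal basis of V), the three conditions read
$${}^{t}A = A \quad \text{(symmetric)}, \qquad {}^{t}\overline{A} = A \quad \text{(hermitian)}, \qquad {}^{t}\overline{A}\,A = I \quad \text{(unitary)}.$$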
Serge Lang
Chapter VIII. Eigenvectors and Eigenvalues
Abstract
This chapter gives the basic elementary properties of eigenvectors and eigenvalues. We get an application of determinants in computing the characteristic polynomial. In §3, we also get an elegant mixture of calculus and linear algebra by relating eigenvectors to the problem of finding the maximum and minimum of a quadratic function on the sphere. Most students taking linear algebra will have had some calculus, but if calculus has to be avoided, the proof using complex numbers instead of the maximum principle can be used to obtain the real eigenvalues of a symmetric matrix. Basic properties of the complex numbers will be recalled in an appendix.
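Recall the defining relation: a nonzero vector v is an eigenvector of A with eigenvalue $\lambda$ if $Av = \lambda v$. For instance, the symmetric matrix
$$A = \left( \begin{array}{cc} 2 & 1 \\ 1 & 2 \end{array} \right)$$
has characteristic polynomial $(t - 2)^2 - 1 = (t - 1)(t - 3)$, hence real eigenvalues 1 and 3, with eigenvectors $(1, -1)$ and $(1, 1)$ respectively.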
Serge Lang
Chapter IX. Polynomials and Matrices
Abstract
Let K be a field. By a polynomial over K we shall mean a formal expression
$$f(t) = a_n t^n + \cdots + a_0,$$
where t is a “variable”. We have to explain how to form the sum and product of such expressions. Let
$$g(t) = b_m t^m + \cdots + b_0$$
be another polynomial with $b_j \in K$. If, say, $n \geq m$, we can write $b_j = 0$ for $j > m$, so that
$$g(t) = 0 t^n + \cdots + b_m t^m + \cdots + b_0,$$
and then we can write the sum f + g as
$$(f + g)(t) = (a_n + b_n) t^n + \cdots + (a_0 + b_0).$$
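For example, with $f(t) = 2t^2 + 1$ and $g(t) = t + 3$ (so $n = 2$, $m = 1$), we write $g(t) = 0t^2 + t + 3$ and obtain
$$(f + g)(t) = 2t^2 + t + 4.$$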
Serge Lang
Chapter X. Triangulation of Matrices and Linear Maps
Abstract
Let V be a finite dimensional vector space over the field K, and assume $n = \dim V \geq 1$. Let $A: V \to V$ be a linear map. Let W be a subspace of V. We shall say that W is an invariant subspace of A, or is A-invariant, if A maps W into itself. This means that if $w \in W$, then Aw is also contained in W. We also express this property by writing $AW \subset W$. By a fan of A (in V) we shall mean a sequence of subspaces $\{V_1, \ldots, V_n\}$ such that $V_i$ is contained in $V_{i+1}$ for each $i = 1, \ldots, n - 1$, such that $\dim V_i = i$, and finally such that each $V_i$ is A-invariant. We see that the dimensions of the subspaces $V_1, \ldots, V_n$ increase by 1 from one subspace to the next. Furthermore, $V = V_n$.
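For instance, if A is represented by an upper triangular matrix with respect to a basis $\{v_1, \ldots, v_n\}$ of V, then the subspaces $V_i$ generated by $v_1, \ldots, v_i$ form a fan of A: each $V_i$ has dimension i, and $Av_j$ is a linear combination of $v_1, \ldots, v_j$ for $j \leq i$, so $AV_i \subset V_i$.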
Serge Lang
Chapter XI. Polynomials and Primary Decomposition
Abstract
We have already defined polynomials, and their degree, in Chapter IX. In this chapter, we deal with the other standard properties of polynomials. The basic one is the Euclidean algorithm, or long division, taught (presumably) in all elementary schools.
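For polynomials the algorithm takes the following form: given f, g with $g \neq 0$, there exist polynomials q, r with $f = qg + r$ and $\deg r < \deg g$ (or $r = 0$). For example, dividing $f(t) = t^2 + 1$ by $g(t) = t - 1$ gives
$$t^2 + 1 = (t + 1)(t - 1) + 2,$$
so $q(t) = t + 1$ and $r(t) = 2$.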
Serge Lang
Chapter XII. Convex Sets
Abstract
Let S be a subset of $\mathbf{R}^m$. We say that S is convex if given points P, Q in S, the line segment joining P to Q is also contained in S.
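In formulas: S is convex if and only if for all points P, Q in S and all real t with $0 \leq t \leq 1$, the point
$$(1 - t)P + tQ$$
also lies in S.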
Serge Lang
Backmatter
Metadata
Title
Linear Algebra
Author
Serge Lang
Copyright year
1987
Publisher
Springer New York
Electronic ISBN
978-1-4757-1949-9
Print ISBN
978-1-4419-3081-1
DOI
https://doi.org/10.1007/978-1-4757-1949-9