
Table of Contents

Frontmatter

Chapter 0. Preliminaries

Abstract
In this chapter we explain a few preliminary concepts. Some of the results are stated without proofs. Those who are familiar with the topics may skip this chapter and go to the next.
A. Ramachandra Rao, P. Bhimasankaram

Chapter 1. Vector spaces

Abstract
In Physics we learn that a force applied at a point O has both magnitude and direction. It is represented by an arrow OP as in Figure 1.1.1, where the length OP represents the magnitude and the direction from O to P the direction of the force. If we now apply another force OQ at the point O, the resultant (also called the sum) of the two forces is obtained by the parallelogram law: it is OR, where OPRQ is a parallelogram. Also, if the strength of the force OP is doubled without changing the direction, the new force is OS, where S is the point on the line OP such that OS = 2 OP. If the direction of the force OP is reversed without altering the magnitude, the new force is OT, where T is the point on OP such that OT = −OP with the usual convention. In general, α times the force OP is OW, where W is a point on OP (extended either way, if necessary) such that OW/OP = α, where α may be positive, negative or zero.
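The parallelogram law and scalar multiplication described above can be sketched as plain coordinate arithmetic on 2-D vectors. The forces OP and OQ below are hypothetical numbers chosen for illustration, not values from the text.

```python
# Forces as 2-D vectors issuing from the origin O; a minimal sketch of
# the parallelogram law and of scalar multiplication of a force.

def add(p, q):
    """Resultant OR of forces OP and OQ (parallelogram law)."""
    return (p[0] + q[0], p[1] + q[1])

def scale(alpha, p):
    """alpha times the force OP; alpha may be positive, negative or zero."""
    return (alpha * p[0], alpha * p[1])

OP = (3.0, 1.0)   # hypothetical force OP
OQ = (1.0, 2.0)   # hypothetical force OQ

OR = add(OP, OQ)    # resultant by the parallelogram law
OS = scale(2, OP)   # strength doubled, direction unchanged
OT = scale(-1, OP)  # direction reversed, magnitude unchanged
```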
A. Ramachandra Rao, P. Bhimasankaram

Chapter 2. Algebra of matrices

Abstract
Linear Algebra is mainly the study of linear transformations on vector spaces and of the matrices that can be used to represent them. These occur in almost every branch of knowledge. One can cite any number of examples: rotations, reflections, projections, differentiation and integration operators, rigid body motion, shear transformations, linear models occurring in Statistics, Economics, etc. They can also be used as approximations to certain non-linear transformations.
A. Ramachandra Rao, P. Bhimasankaram

Chapter 3. Rank and inverse

Abstract
Some important problems in linear algebra are those of solving a system of linear equations, finding the dimension of a subspace or a flat, finding the inverse of a matrix, determining whether a set of vectors is linearly independent, finding a basis contained in a generating set of a subspace, etc. It turns out that the rank of a matrix, which is the maximum number of linearly independent rows (or columns), plays a crucial role in the solution of most of these problems. Some other closely related concepts are those of nullity and inverse of a matrix, both useful in solving linear equations. In this chapter we define rank and study its basic properties. We also study nullity, existence and properties of the inverse, and a few other topics like projectors and change of bases. Computational procedures will be taken up in the next chapter.
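The characterization of rank as the maximum number of linearly independent rows can be illustrated numerically; a minimal sketch using NumPy (the matrix below is a hypothetical example, not one from the book):

```python
import numpy as np

# The rank of a matrix is the maximum number of linearly independent
# rows (equivalently, columns). Here the second row is twice the first,
# so only two rows are independent.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 * first row, hence dependent
              [0.0, 1.0, 1.0]])

r = np.linalg.matrix_rank(A)
```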
A. Ramachandra Rao, P. Bhimasankaram

Chapter 4. Elementary operations and reduced forms

Abstract
In this chapter we shall study some simple operations, called elementary operations, which can be used to reduce any given matrix to one with a simple form, thereby facilitating the solution of several problems posed for the original matrix.
A. Ramachandra Rao, P. Bhimasankaram

Chapter 5. Linear equations

Abstract
Systems of linear equations occur in every branch of knowledge. We mention a few examples within Linear Algebra. Expressing b as a linear combination of a1, a2, …, ak is the same as solving Ax = b where A = [a1 : a2 : ⋯ : ak]. The vectors a1, a2, …, ak are linearly dependent iff Ax = 0 has a non-null solution. Solution of linear equations also plays an important role in obtaining approximate solutions of non-linear equations. In this chapter, we make a systematic study of the theoretical aspects of the solution of linear equations and give some computational procedures.
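The equivalence between expressing b as a combination of the columns of A and solving Ax = b can be sketched as follows; the 2×2 system is a hypothetical example chosen for illustration:

```python
import numpy as np

# Writing b as a linear combination of the columns a1, a2 of
# A = [a1 : a2] amounts to solving Ax = b for the coefficients x.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 5.0])

x = np.linalg.solve(A, b)
# Then b = x[0]*a1 + x[1]*a2:
recombined = x[0] * A[:, 0] + x[1] * A[:, 1]
```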
A. Ramachandra Rao, P. Bhimasankaram

Chapter 6. Determinants

Abstract
The determinant is a scalar associated with a square matrix in a particular way. One of the most important uses of determinants within Linear Algebra is in the study of eigenvalues (Chapter 8). They also occur in Cramer’s rule for solving linear equations and can be used to give a formula for the inverse of a non-singular matrix. In the Calculus of several variables, the Jacobian used in transforming a multiple integral is a determinant. This use arises from the fact that the determinant is the volume of a certain parallelepiped. Determinants are also useful in various other subjects like Physics, Astronomy and Statistics.
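Cramer’s rule mentioned above can be sketched for a 2×2 system: each unknown is a ratio of determinants, where the numerator replaces the corresponding column of A by b. The numbers below are hypothetical.

```python
import numpy as np

# Cramer's rule for Ax = b: x_i = det(A_i) / det(A), where A_i is A
# with its i-th column replaced by b (A must be non-singular).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 5.0])

d = np.linalg.det(A)                                     # 2*3 - 1*1 = 5
x1 = np.linalg.det(np.column_stack([b, A[:, 1]])) / d    # first unknown
x2 = np.linalg.det(np.column_stack([A[:, 0], b])) / d    # second unknown
```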
A. Ramachandra Rao, P. Bhimasankaram

Chapter 7. Inner product and orthogonality

Abstract
Till now we have studied concepts like subspace, dimension, linear transformations and their representations, linear equations, inverse and determinant, which are valid over any field F. We have also given the geometric interpretation of some of these over ℝ. In the Euclidean spaces ℝ² and ℝ³ there are two other concepts, viz., length (or distance) and angle, which have no analogues over a general field. In this chapter we study these two concepts.
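Both length and angle come from the standard inner product on ℝ²: the length of v is √(v·v), and the angle between u and v has cosine (u·v)/(‖u‖‖v‖). A minimal sketch with hypothetical vectors:

```python
import numpy as np

# Length and angle in R^2 via the standard inner product (dot product).
u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

length_v = np.sqrt(v @ v)                            # ||v|| = sqrt(2)
cos_angle = (u @ v) / (np.sqrt(u @ u) * length_v)    # cos of 45 degrees
```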
A. Ramachandra Rao, P. Bhimasankaram

Chapter 8. Eigenvalues

Abstract
In this chapter we study eigenvalues and eigenvectors associated with a complex square matrix. These are useful in the study of canonical forms of a matrix under similarity and in the study of quadratic forms. They have applications in many subjects like Geometry, Mechanics, Astronomy, Engineering, Economics and Statistics.
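The defining relation Av = λv for an eigenvalue λ and eigenvector v can be checked numerically; a sketch on a hypothetical real symmetric matrix (symmetric so the eigenvalues are real):

```python
import numpy as np

# Eigenvalues and eigenvectors of a real symmetric 2x2 matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v = vecs[:, 0]                   # eigenvector for the smallest eigenvalue
# v satisfies A v = vals[0] * v, the defining relation.
```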
A. Ramachandra Rao, P. Bhimasankaram

Chapter 9. Quadratic forms

Abstract
Quadratic forms are homogeneous polynomials of degree 2 in several variables, such as 2x² + y² − 3z² + 2xy + yz. They occur in the study of conics in geometry and of energy in Physics, and have wide applications in various subjects including Statistics. Within Linear Algebra they occur in the context of inner products.
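The example form 2x² + y² − 3z² + 2xy + yz can be written as xᵀAx with a symmetric matrix A whose off-diagonal entries are half the cross-term coefficients; a minimal sketch:

```python
import numpy as np

# The quadratic form 2x^2 + y^2 - 3z^2 + 2xy + yz as v^T A v with a
# symmetric A: diagonal entries are the square coefficients, each
# off-diagonal entry is half the corresponding cross coefficient.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 0.5],
              [0.0, 0.5, -3.0]])

def q(x, y, z):
    v = np.array([x, y, z])
    return float(v @ A @ v)
```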
A. Ramachandra Rao, P. Bhimasankaram

Backmatter
