

16.1 Matrices and Linear Transformations

Given a linear transformation T on a vector space and a basis, such as (i, j, k), we can describe T completely by giving its action on the basis vectors. Any vector is a linear combination of the basis vectors, and by T's linearity its image is the same linear combination of the images of the basis vectors:

T(a i + b j + c k) = a T i + b T j + c T k

We will consider transformations T acting on column vectors, and we put T to the left of the vector it acts on. (You can of course do the same thing with row vectors, in which case the transformation is written on the right of the vector it acts on.)
We associate a matrix with this transformation and basis by letting its kth column consist of the components of the vector into which the transformation takes the kth basis vector:

Example
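As a concrete illustration (a minimal numerical sketch in Python with NumPy; the particular transformation T, a rotation by 90 degrees about the z-axis, is chosen here only for demonstration and is not taken from the example above), we can build the matrix of a transformation column by column from the images of the basis vectors and then check the linearity relation above:

    import numpy as np

    # A sample linear transformation on R^3 (chosen only for illustration):
    # rotation by 90 degrees about the z-axis.
    def T(v):
        x, y, z = v
        return np.array([-y, x, z])

    i, j, k = np.eye(3)                      # the standard basis vectors

    # Matrix of T in this basis: its kth column is the image of the kth basis vector.
    M = np.column_stack([T(i), T(j), T(k)])

    # Linearity: T(a i + b j + c k) = a T i + b T j + c T k, which is just M applied to (a, b, c).
    a, b, c = 2.0, -1.0, 3.0
    v = a * i + b * j + c * k
    assert np.allclose(T(v), M @ v)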

A transformation can take a vector in a space of n dimensions into one in a space of m dimensions; its matrix will then have n columns and m rows.
Given two transformations, we can add them if they have the same domain and range, i.e., if their matrices have the same shape parameters n and m; we can also perform one transformation and then the other, provided the domain space of the second is the range space of the first.
In particular, if we have transformations A and B that take vectors into vectors in the same space, so that their matrices are square, we can act with B on the result of acting with A on v and form B(A(v)), or, as it is normally written, BAv. (Notice that the transformation performed later appears on the left here.)
We now ask: what is the relation between the matrix that represents this composite transformation BA and the matrices that represent A and B separately (all with reference to the same basis)?
In general this composite can exist only if the domain space of B is the range space of A, so that the length of a column of A (the number of its rows) must equal the number of columns of B. We will draw two conclusions:

1. The matrix representing the composition or product transformation BA, the result of performing A and then B, is the matrix product of the matrix B representing B and the matrix A representing A, in any basis. This matrix product has as its element in the ith row and jth column the dot product of the ith row of B with the jth column of A.

Proof
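As a sanity check of this first conclusion (a numerical sketch with arbitrary random matrices, assuming the standard basis and NumPy's @ operator for the matrix product):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))          # matrix representing A
    B = rng.standard_normal((3, 3))          # matrix representing B

    # Build the matrix of the composite column by column: the kth column is the
    # image of the kth basis vector under "A first, then B".
    composite = np.column_stack([B @ (A @ e) for e in np.eye(3)])
    assert np.allclose(composite, B @ A)     # it is exactly the matrix product

    # The entry in row i, column j of BA is the dot product of row i of B
    # with column j of A.
    i, j = 1, 2
    assert np.isclose((B @ A)[i, j], B[i, :] @ A[:, j])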

2. The determinant of BA is the product of the determinants of B and A.

Proof
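A quick numerical illustration of this second conclusion (a sketch with random 4 x 4 matrices; the size and the random seed are arbitrary choices, not from the text):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    # det(BA) = det(B) det(A), up to floating-point roundoff.
    assert np.isclose(np.linalg.det(B @ A),
                      np.linalg.det(B) * np.linalg.det(A))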

An application of the second fact is the statement that the determinant of a matrix M representing a transformation T is independent of the basis used in its definition. The determinant is an intrinsic property of the transformation itself.

Proof

Example
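The basis independence can also be checked numerically: if P is the change-of-basis matrix, the matrix of T in the new basis is P^(-1) M P, and by the second conclusion its determinant is det(P)^(-1) det(M) det(P) = det(M). A minimal sketch (the matrices M and P below are random stand-ins introduced only for this check):

    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.standard_normal((3, 3))    # matrix of T in the original basis
    P = rng.standard_normal((3, 3))    # change-of-basis matrix (invertible with probability 1)

    M_new = np.linalg.inv(P) @ M @ P   # matrix of the same T in the new basis
    assert np.isclose(np.linalg.det(M_new), np.linalg.det(M))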