## Projections and Eigenvectors

September 13th, 2011

I was thinking about the immutability of eigenvectors and the immutability of certain vectors under projection, and realized these two qualities are one and the same.  Eigenvectors can be viewed and explained in terms of a projection matrix, which may be a more intuitive way to understand them than the way they are commonly taught.  Certainly it relies on much less math — only the concept of rows or columns as vectors and a basic understanding of the canonical equation that defines eigenvectors and eigenvalues.

Projecting one vector onto another gives the scalar length of the first vector along the second.  It is the amount that the second vector’s direction “contributes” to the first.  This contribution is easily calculated by taking the dot product.

Here the vectors a and b are given as column vectors, and since their dot product must result in a scalar quantity, it is calculated as aᵀb.
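As a quick sketch (using plain Python lists for the column vectors; the function name is illustrative), the dot product aᵀb is just the sum of elementwise products:

```python
# Dot product of two column vectors, represented as plain Python lists.
# a^T b collapses the two vectors into a single scalar.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

a = [3, 4]
b = [2, 1]
print(dot(a, b))  # 3*2 + 4*1 = 10
```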

The projection of b onto a creates another vector, let’s call it p, that is in the same direction as a, but whose length is determined by the length and direction of b.

Strang’s Introduction to Linear Algebra book gives a great explanation of projection.  In it, he rearranges the terms of p = a (aᵀb / aᵀa) to get p = (a aᵀ / aᵀa) b, that is, p = Pa b with Pa = a aᵀ / aᵀa.

Pa is a rank 1 projection matrix, which makes sense because we are projecting onto the one-dimensional space spanned by a [1].  The projection matrix, Pa, is a transform that projects a vector onto a.  The result of this transform is a scaled version of the vector a.
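A minimal sketch of the rank-1 projection matrix Pa = a aᵀ / (aᵀa), again with plain Python lists (the function names here are my own, not from Strang):

```python
# Build Pa = (a a^T) / (a^T a): the outer product of a with itself,
# divided by the squared length of a.
def projection_matrix(a):
    norm_sq = sum(ai * ai for ai in a)                    # a^T a, a scalar
    return [[ai * aj / norm_sq for aj in a] for ai in a]  # outer product / norm_sq

# Apply an n-by-n matrix to an n-element vector.
def apply(P, v):
    return [sum(Pij * vj for Pij, vj in zip(row, v)) for row in P]

a = [1, 0]               # project onto the x-axis
P = projection_matrix(a)
print(apply(P, [3, 4]))  # [3.0, 0.0]: only the component along a survives
```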

Writing the projection as p = λa, the scalar λ gives the amount of scaling on a.  It tells how much of b lies along a, sometimes referred to as the component of b in the direction of a.

Projection is a way of moving one vector onto another, that is, the projection changes the direction of a vector to line up with another.  More generally, an n-by-n matrix is a transform that changes an n-element vector into another n-element vector, possibly pointing in a different direction.

The vectors that are able to survive this transform unmoved are called eigenvectors.  They may be scaled as a result of the transform, but they still point in the same direction they did before the transform was applied.  The amount by which the transform scales the vector is the eigenvalue.

This leads to the familiar linear algebra definition for eigenvalues and eigenvectors, Ax = λx.  The vector x is an eigenvector of A if the equation holds for some nonzero x.  Lambda gives the eigenvalue for that eigenvector, x.

Numerically, eigenvectors and eigenvalues look like this:

[1,3;4,5] [1;2] = [7;14] = 7 [1;2]

where [1;2] is an eigenvector of [1,3;4,5] with eigenvalue 7.  The vector [1;2] survives the transform pointing in the same direction, but with a different scale value (7).
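This check is easy to reproduce, here with the matrix and vector hard-coded as plain Python lists:

```python
# Verify that x = [1, 2] is an eigenvector of A = [[1, 3], [4, 5]].
A = [[1, 3], [4, 5]]
x = [1, 2]

# Multiply A by x one row at a time.
Ax = [sum(Aij * xj for Aij, xj in zip(row, x)) for row in A]
print(Ax)                                    # [7, 14], which is 7 * [1, 2]
print([axi / xi for axi, xi in zip(Ax, x)])  # [7.0, 7.0], the eigenvalue
```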

Some vectors do not survive the transform without a change in direction.  For instance, [1,3;4,5] maps [1;0] to [1;4], which does not point in the same direction as [1;0].
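For contrast, a short sketch of a vector that the same matrix does move off its original direction:

```python
# v = [1, 0] is NOT an eigenvector of A = [[1, 3], [4, 5]]:
# the transform changes its direction, not just its length.
A = [[1, 3], [4, 5]]
v = [1, 0]

Av = [sum(Aij * vj for Aij, vj in zip(row, v)) for row in A]
print(Av)  # [1, 4]: not a scalar multiple of [1, 0]
```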

The vectors that are able to survive this transform unmoved, i.e. pointed in the same direction, are the eigenvectors of that transform.  The eigenvalues are the scale values resulting from the transform.

[1] Strang, Gilbert.  Introduction to Linear Algebra.  Wellesley, MA: Wellesley-Cambridge Press, 2009.
