
Section 6.1 Eigenvectors and Eigenvalues

Subsection The assignment

  • Read section 6.1 of Strang (pages 283-292).
  • Read the following and complete the exercises below.

Subsection Discussion: Eigenvalues and Eigenvectors

We have discussed the “transformational view” of the geometry of a system of \(m\) linear equations \(Ax = b\) in \(n\) unknowns \(x\), where we view the \(m \times n\) matrix \(A\) as defining a function from \(\mathbb{R}^n\) to \(\mathbb{R}^m\). In the case of a square matrix (\(m=n\)), the domain and the target are the same space \(\mathbb{R}^n\), so we can think of \(A\) as a function from one space to itself.

This means it might be interesting to think about how \(A\) moves a vector around inside of \(\mathbb{R}^n\). Usually, a vector \(v\) gets turned into a vector \(Av\) that has a different length and points in a completely different direction. But every once in a while, \(v\) and \(Av\) point along the same line. That special vector \(v\) is an eigenvector.

A number \(\lambda\) is called an eigenvalue of the matrix \(A\) when the matrix \(A-\lambda I\) is singular. A vector \(v\) is called an eigenvector of \(A\) corresponding to \(\lambda\) when \(v\) is not zero but still lies in the null space of \(A-\lambda I\). We exclude \(0\) from being an eigenvector because it is boring. The zero vector lies in every subspace, including the nullspace of any matrix.
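To make the definition concrete, here is a quick check in plain Python. The matrix, vector, and eigenvalue below are hypothetical choices for illustration, not the example from this page's Sage cells.

```python
# Quick numeric check of the definition: v is an eigenvector of A with
# eigenvalue lam exactly when A*v equals lam*v and v is not the zero vector.
A = [[4, 1],
     [2, 3]]        # hypothetical 2x2 matrix
v = [1, 1]          # claimed eigenvector
lam = 5             # claimed eigenvalue

Av = [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
print(Av)                           # [5, 5]
print(Av == [lam * x for x in v])   # True
```

Note that the check never mentions \(A - \lambda I\) directly: saying \(v\) lies in the null space of \(A - \lambda I\) is exactly the same as saying \(Av = \lambda v\).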

As Strang discusses, the eigenvalues are found as the roots of the characteristic equation \(\det(A-\lambda I) = 0\). That's right, we only need to find the roots of a polynomial! Sounds great, but in general this is pretty hard, so don't get too excited. Have you heard this fact before? It is both depressing and interesting: there is no general formula, using only arithmetic operations and radicals, for the roots of a polynomial of degree 5 or more.

Subsection Sage and Eigenvectors

Since eigenvalues and eigenvectors are found using standard techniques — determinants, root finding, and null spaces — we can compute them with Sage commands we have already seen.

Subsubsection Using Nullspaces and Root-Finding Commands

Let's use basic Sage commands we have seen before to compute eigenvalues and eigenvectors. We start by finding the characteristic polynomial of the mundane example matrix \(X\) below.
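The interactive Sage cell with the matrix \(X\) is not reproduced in this text version. As a stand-in, here is a plain-Python sketch that computes a characteristic polynomial exactly for a hypothetical \(3 \times 3\) integer matrix, using the Faddeev–LeVerrier recurrence:

```python
def charpoly_coeffs(A):
    # Faddeev-LeVerrier recurrence: returns the coefficients of det(t*I - A),
    # highest degree first.  Exact for integer matrices.
    n = len(A)
    M = [[int(i == j) for j in range(n)] for i in range(n)]   # M starts as I
    coeffs = [1]
    for k in range(1, n + 1):
        AM = [[sum(A[i][p] * M[p][j] for p in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(AM[i][i] for i in range(n)) // k   # exact: c is an integer
        M = [[AM[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
        coeffs.append(c)
    return coeffs

X = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 4]]            # hypothetical stand-in for the example matrix X
print(charpoly_coeffs(X))  # [1, -9, 26, -24], i.e. t^3 - 9t^2 + 26t - 24
```

In Sage itself one would simply call the matrix's charpoly() method; the recurrence above is only meant to show that nothing mysterious is happening.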

Now we need the roots of that polynomial. Sage has a simple built-in for this, the polynomial's roots() method, which returns a list of pairs: (root, multiplicity).
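Mimicking the shape of that output, here is a plain-Python sketch that finds integer roots with their multiplicities, by trial evaluation and synthetic division. It is applied to a hypothetical cubic with three simple integer roots:

```python
def roots_with_multiplicity(coeffs):
    # Integer-root search returning (root, multiplicity) pairs, in the same
    # shape as Sage's roots().  coeffs lists coefficients, highest degree first.
    def eval_at(cs, t):
        v = 0
        for c in cs:
            v = v * t + c          # Horner's rule
        return v

    def deflate(cs, r):
        # synthetic division of cs by (t - r)
        out = [cs[0]]
        for c in cs[1:]:
            out.append(c + r * out[-1])
        assert out.pop() == 0      # remainder must vanish at a root
        return out

    pairs = []
    for cand in range(-30, 31):    # search small integer candidates only
        m = 0
        while len(coeffs) > 1 and eval_at(coeffs, cand) == 0:
            coeffs = deflate(coeffs, cand)
            m += 1
        if m:
            pairs.append((cand, m))
    return pairs

print(roots_with_multiplicity([1, -9, 26, -24]))   # [(2, 1), (3, 1), (4, 1)]
```

Real root finding is much subtler than this integer search, which is exactly why we are happy to let Sage do it for us.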

In this case each of the three roots has (algebraic) multiplicity equal to one. For now, we will look at just the first one. Let's pull it out of this list, give it a more convenient name, and use it to find a corresponding eigenvector.

So, that looks like a mess. We can get Sage to display the basis vector more nicely. This is our eigenvector.

Well, that probably needs a simplification or two. But there it is!
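The null-space step can be sketched in plain Python as well: row-reduce \(X - \lambda I\) over the rationals and read off a basis vector for each free column. The matrix below is a hypothetical stand-in, and \(\lambda = 2\) is its smallest eigenvalue.

```python
from fractions import Fraction

def kernel_basis(M):
    # Row-reduce M over the rationals, then read off one null-space basis
    # vector for each non-pivot (free) column.
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue               # no pivot here: this column is free
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    basis = []
    for free in (c for c in range(cols) if c not in pivots):
        v = [Fraction(0)] * cols
        v[free] = Fraction(1)
        for i, p in enumerate(pivots):
            v[p] = -M[i][free]
        basis.append(v)
    return basis

X = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 4]]   # hypothetical stand-in for X
lam = 2           # first root of its characteristic polynomial
M = [[X[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
print(kernel_basis(M))   # one basis vector, proportional to (1, 0, 0)
```

In Sage the same step is just asking for the kernel of \(X - \lambda I\); the point is that an eigenvector is nothing more than a null-space computation.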

Subsubsection Built-in Sage Commands

Sage has useful built-in commands that get at the same computations. But for them to work, your matrix must be defined over a set of numbers big enough to contain the roots of its characteristic polynomial. We will use AA, which stands for the field of real algebraic numbers.

The eigenvectors can be computed with the matrix's eigenvectors_right() method, which returns a list of triples: (eigenvalue, a list of corresponding eigenvectors, algebraic multiplicity).
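Whatever command produces them, any claimed eigenvalue–eigenvector pair can be checked directly against the definition \(Av = \lambda v\). Here is a plain-Python check, where the matrix and the pairs are hypothetical illustrations:

```python
# Check some hypothetical (eigenvalue, eigenvector) pairs against the
# definition: A*v must equal lam*v for each pair.
A = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 4]]
pairs = [(2, [1, 0, 0]), (3, [1, 1, 0]), (4, [1, 2, 2])]

for lam, v in pairs:
    Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
    print(Av == [lam * x for x in v])    # True for each pair
```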

Or you can ask for the eigenspaces with the eigenspaces_right() method, which returns a list of pairs: (eigenvalue, the subspace consisting of all corresponding eigenvectors).
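One consequence worth checking: an eigenspace is a subspace, so every scalar multiple of an eigenvector is again an eigenvector for the same eigenvalue. A plain-Python check with hypothetical data:

```python
# Scaling an eigenvector gives another eigenvector for the same eigenvalue.
# Matrix and (lam, v) pair are hypothetical illustrations.
A = [[2, 1, 0],
     [0, 3, 1],
     [0, 0, 4]]
lam, v = 3, [1, 1, 0]

for scale in (2, -5, 7):
    w = [scale * x for x in v]
    Aw = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
    print(Aw == [lam * x for x in w])    # True for every scale
```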

Subsubsection One more example...

This matrix has only one eigenvalue, but that eigenvalue has algebraic multiplicity 2.

But 5 has geometric multiplicity only one, because the corresponding eigenspace has dimension one.
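The original matrix is not reproduced here, but a standard hypothetical example with the same behavior is the \(2 \times 2\) matrix below: one eigenvalue, 5, with algebraic multiplicity 2 but geometric multiplicity 1.

```python
# Hypothetical 2x2 matrix with a repeated eigenvalue (not the matrix from
# the Sage cell): its characteristic polynomial is (t - 5)^2.
A = [[5, 1],
     [0, 5]]
trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print([1, -trace, det])     # [1, -10, 25], the coefficients of (t - 5)^2

# But (A - 5I) = [[0, 1], [0, 0]] has rank 1, so the eigenspace (its null
# space) has dimension 2 - 1 = 1: geometric multiplicity is only 1.
M = [[A[0][0] - 5, A[0][1]],
     [A[1][0], A[1][1] - 5]]
rank = sum(1 for row in M if any(x != 0 for x in row))  # M is already in echelon form
print(2 - rank)             # 1, the dimension of the eigenspace
```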

Subsection Questions for Section 6.1

Exercise 5 from section 6.1 of Strang.
Exercise 6 from section 6.1 of Strang.
Exercise 7 from section 6.1 of Strang.
Exercise 12 from section 6.1 of Strang.
Exercise 14 from section 6.1 of Strang.
Exercise 19 from section 6.1 of Strang.