
Section 2.5 Matrix Algebra

Subsection The Assignment

  • Read section 2.4 of Strang.
  • Watch a video or two from the YouTube series Essence of Linear Algebra by user 3Blue1Brown.
  • Read the following and complete the exercises below.

Subsection Learning Goals

Before class, a student should be able to:

  • Add and subtract matrices of the same size.
  • Multiply matrices of appropriate sizes by one method.
  • Compute powers \(A^p\) of a given square matrix \(A\).
  • Use the distributive law for matrix multiplication and matrix addition correctly.

Sometime after our meeting, a student should be able to:

  • Multiply block matrices.
  • Multiply matrices by three methods.
  • Give examples to show how matrix multiplication is not like ordinary multiplication of real numbers: including the trouble with commutativity, and the difficulty with inverses.

Subsection Discussion on Matrix Algebra

At the simplest level, this section is just about how to deal with the basic operations on matrices. We can add them and we can multiply them. We have already encountered matrix multiplication, and addition is even more natural.

But a subtle and important thing is happening here. Matrices are taking on a life of their own. They are becoming first-class objects, whose properties are interesting and possibly useful.

This is an instance of the beginnings of Modern Algebra, which is the study of the algebraic structure of abstracted objects. In this case, we study whole collections of matrices of a common shape, and we try to treat them like generalized numbers. The natural question is then: how much like “regular numbers” are these matrices?

Addition is about as well-behaved as you can expect, but multiplication is a bit trickier. Suddenly, two properties of multiplication for numbers don't quite work for matrices:

  • Multiplication does not necessarily commute: it need not be the case that \(AB\) is the same as \(BA\).
  • Inverses may fail to exist: a matrix \(A\) can be nonzero and yet have no matrix \(A^{-1}\) with \(AA^{-1} = I\). (A concrete example of both failures follows this list.)
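For a concrete example, take \(A = \left( \begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)\) and \(B = \left( \begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix}\right)\). Then \begin{equation*} AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \quad\text{but}\quad BA = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \end{equation*} so \(AB \neq BA\). This same \(A\) also has no inverse: \(A^2 = 0\), so if there were a matrix with \(AA^{-1} = I\), we would get \(A = A(AA^{-1}) = (AA)A^{-1} = 0\), contradicting \(A \neq 0\).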

Subsection SageMath and Matrix Algebra

SageMath is aware of the basic matrix operations, and it won't let you get away with nonsense. Matrix multiplication and matrix addition are only defined if the dimensions of the matrices line up properly.

Let's see which operations SageMath doesn't like. Can you predict, before evaluating the cells below, which of them will return an error?
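Here is a sketch of such cells; the particular matrices are illustrative choices. Try each of the last three lines in its own cell, since evaluation stops at the first error.

    # A is 2x3 and B is 2x2, so the shapes below matter.
    A = matrix(QQ, 2, 3, [1, 2, 3, 4, 5, 6])
    B = matrix(QQ, 2, 2, [1, 0, 0, 1])
    A + B    # error: addition needs matrices of the same size
    A * B    # error: (2x3)(2x2) is undefined
    B * A    # fine: (2x2)(2x3) gives a 2x3 matrix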

Subsubsection SageMath and Matrix Addition

Matrix addition works a lot like addition of integers, as long as you fix a size first.

  • There is a zero element.
  • There are additive inverses (i.e. negatives).
  • The operation is commutative.

Let us define a matrix \(A\) and the zero matrix \(Z\) of the same size, and add them:
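One possible pair, with illustrative entries:

    # A 2x2 matrix and the zero matrix of the same size.
    A = matrix(QQ, 2, 2, [1, 2, 3, 4])
    Z = zero_matrix(QQ, 2, 2)
    A + Z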

We can check that adding Z doesn't change anything.
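Continuing with the \(A\) and \(Z\) defined above:

    A + Z == A    # True: Z is the additive identity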

And we can do the natural thing to get an additive inverse.
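Negating every entry does the job:

    -A    # entrywise negation gives the additive inverse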

Finally, this last thing should return zero.
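Indeed:

    A + (-A)    # the zero matrix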

Subsubsection SageMath and Matrix Multiplication

SageMath already has the structure of matrix multiplication built in, and it can help with investigating the ways that matrix multiplication is different from regular multiplication of numbers.

We have seen above that Sage will not let us multiply matrices whose sizes do not match correctly. Of course, one way around that trouble is to stick to square matrices. But even there we can have trouble with the fact that matrix multiplication might not commute. It is rarely the case that \(XY = YX\).
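A quick check with two made-up matrices:

    X = matrix(QQ, 2, 2, [1, 2, 3, 4])
    Y = matrix(QQ, 2, 2, [0, 1, 1, 0])
    X*Y == Y*X    # False: these two do not commute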

For those of you who will eventually study Modern Algebra, the collection of all \(n\)-square matrices is an example of a non-commutative ring with unit.

Sage knows about the ring structure. We can check whether a matrix has an inverse.
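For instance, reusing the matrix X from the sketch above:

    X.is_invertible()    # True, since det(X) = -2 is nonzero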

And we can ask for the inverse in a couple of ways.
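Two equivalent spellings:

    X.inverse()
    X^(-1)    # the same matrix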

One can even construct the whole ring of all \(n\times n\) matrices and play around inside it.
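For example, the \(3\times 3\) matrices over the rationals:

    M = MatrixSpace(QQ, 3, 3)
    M    # Full MatrixSpace of 3 by 3 dense matrices over Rational Field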

It is then not too hard to construct the identity element, which is the regular identity matrix of the correct size.
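With M as above:

    M.identity_matrix()    # the 3x3 identity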

Also, the zero matrix of the correct size is easy to make.
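Likewise:

    M.zero_matrix()    # the 3x3 zero matrix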

In fact, this allows you to short-cut the construction of any matrix in M. This can be really useful if you are going to work with a lot of matrices of the same shape.
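For example, handing M a flat list of nine entries:

    B = M([1, 2, 3, 4, 5, 6, 7, 8, 9])
    B    # the 3x3 matrix with rows (1,2,3), (4,5,6), (7,8,9)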

You can even use this to make complicated expressions out of matrix operations. As long as everything makes sense, SageMath will do all the work.
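A made-up compound expression, reusing M from above:

    A = M([1, 0, 2, 0, 1, 0, 0, 0, 1])
    B = M([1, 2, 3, 4, 5, 6, 7, 8, 9])
    (A + B)^2 - 3*A*B + M.identity_matrix()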

SubsectionExercises

Task 41
Make an example of a \(2\times 3\) matrix and a \(3\times 3\) matrix, and use this to demonstrate the three most important ways to multiply matrices: organized by rows, or by columns, or by using dot products.
Task 42
Give an example of a pair of \(2\times 2\) matrices \(A\) and \(B\) so that \(AB = 0\) but \(BA\neq 0\), or explain why this is impossible.
Task 43
Give an example of a \(3\times 3\) matrix \(A\) such that neither \(A\) nor \(A^2\) is the zero matrix, but \(A^3=0\).
Task 44
Find all examples of matrices \(A\) which commute with both \(B = \left( \begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix}\right)\) and \(C = \left( \begin{smallmatrix} 0 & 1 \\ 0 & 0 \end{smallmatrix}\right)\). That is, find all matrices \(A\) so that \(AB = BA\) and \(AC= CA\). How do you know you have all such matrices?
Task 45

Consider the matrix \begin{equation*} A = \begin{pmatrix} 2 & 1 & 0 \\ -2 & 0 & 1 \\ 8 & 5 & 3 \end{pmatrix}. \end{equation*} Which elimination matrices \(E_{21}\) and \(E_{31}\) produce zeros in the \((2,1)\) and \((3,1)\) positions of \(E_{21}A\) and \(E_{31}A\)?

Find a single matrix \(E\) which produces both zeros at once. Multiply \(EA\) to verify your result.

Task 46
Suppose that we have already solved the equation \(Ax=b\) for the following three special choices of \(b\): \begin{equation*} Ax_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \quad Ax_2 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}, \quad\text{and}\quad Ax_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}. \end{equation*} If the three solutions are called \(x_1\), \(x_2\), and \(x_3\), and are then bundled together to make the columns of a matrix \begin{equation*}X = \begin{pmatrix} | & | & | \\ x_1 & x_2 & x_3 \\ | & | & | \end{pmatrix}, \end{equation*} what is the matrix \(AX\)? What does this mean about \(X\)?