## The Assignment

• Read section 2.5 of Strang.
• Watch a video from the YouTube series Essence of Linear Algebra by user 3Blue1Brown.
• Read the following and complete the exercises below.

## Learning Goals

Before class, a student should be able to:

• State the definition of invertible matrix.
• Solve an equation $Ax = b$ using the inverse of $A$ if it exists.
• State how inverses and multiplication interact.
• Use Gauss-Jordan elimination to compute the inverse of a matrix.
• State a test for invertibility of square matrices using pivots.

Some time after class, a student should be able to:

• Describe the connection between Gauss-Jordan elimination and solving $n$ different systems of equations.
• Describe the connection between Gauss-Jordan elimination, computing matrix inverses, and the process of elimination by matrix multiplication.
• State the definition of the determinant of a square matrix.
• State the connection between the determinant of a square matrix and invertibility.
• State the distinction between a matrix being invertible and a matrix being singular.

### Matrix Inverses

The main point of this section is to start focusing on the first big problem in linear algebra. How can you tell, in advance, that a system of $n$ equations in $n$ unknowns will have a solution?

Of course, like all things we have been studying, this will have several different faces, all of which are equivalent. The one front and center right now is this: When does an $n \times n$ square matrix have an inverse?

### Finding an Inverse: Gauss-Jordan Elimination

There is an effective method for finding the inverse: Gauss-Jordan elimination. (This is sometimes just called Gaussian elimination.) Essentially, you solve $n$ different $n \times n$ systems $Ax = b$ all at the same time, with specially chosen right-hand sides: the columns of the identity matrix.

The process is an algorithm, so it is very specific: the name applies to this exact sequence of steps. If you do it some other way, you aren't doing Gauss-Jordan elimination.

**Gauss-Jordan Elimination**

• Augment: Tack a copy of the identity matrix of the same size onto the right-hand side of your matrix. It should now look like $(A \mid I)$.
• Forward Pass: This is a nested procedure:
  • Preparation: If necessary, use a row swap to put a non-zero entry in the upper left position.
  • Make zeros: The upper left entry is our first pivot. Use the operation of adding a multiple of the first row to the other rows to kill the entries below this first pivot.
  • Step down: Step down to the second row and repeat the above, ignoring the rows and columns above and to the left. Repeat as necessary until you run out of rows.
  If at any point in the process you get a row consisting only of zeros, perform a row swap to shuffle it to the bottom. When the forward pass is complete, you should have an upper triangular matrix.
• Backward Pass: This is also nested, like the forward pass, except that instead of working down and to the right, you begin at the lower right with the last pivot and work up and to the left. When complete, the left-hand block should have at most one non-zero entry in each row. This entry will be a pivot.
• Rescale: Rescale rows to make the pivots into $1$'s.

At the end of the whole process, you should have something that looks like $(I \mid B)$. The wonderful part: $B$ is the inverse of $A$. Well, almost. The process can fail! If along the way you find that the left-hand block of your big augmented matrix doesn't have $n$ pivots, then your matrix was not invertible.
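The steps above can be sketched in plain Python (not SageMath) using exact rational arithmetic. This is a minimal illustration, not a production routine; the function name `gauss_jordan_inverse` is our own, and for brevity it clears entries above and below each pivot in a single loop rather than in separate forward and backward passes.

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    """Invert a square matrix by Gauss-Jordan elimination, or return None
    if the matrix is singular (fewer than n pivots)."""
    n = len(A)
    # Augment: build (A | I) with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)]
         + [Fraction(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        # Preparation: swap a row with a non-zero entry into pivot position.
        pivot_row = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot_row is None:
            return None  # no pivot in this column: the matrix is singular
        M[col], M[pivot_row] = M[pivot_row], M[col]
        # Make zeros above and below the pivot by adding multiples of its row.
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # Rescale each pivot to 1 and read off the right-hand block.
    return [[M[i][j + n] / M[i][i] for j in range(n)] for i in range(n)]
```

Running it on the $2 \times 2$ matrix from the first exercise below returns `[[6, -17], [-1, 3]]`, and running it on a singular matrix such as `[[1, 2], [2, 4]]` returns `None`.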

What you have computed in the left hand block with the Gauss-Jordan elimination is the reduced row-echelon form of your original matrix.

### The Big Theorem: invertibility, singularity, and the determinant

What is the key? An $n \times n$ matrix is invertible if and only if Gauss-Jordan elimination produces $n$ pivots.

This is huge. The algorithm is not difficult, and it answers an important question exactly.

Note that we said a square matrix was singular when it did not have enough pivots. So what the above says is that a matrix is invertible if and only if it is non-singular.

### A simple test

We can use the above to make a simple numerical test of whether a matrix is invertible. First do the forward pass of elimination to obtain an upper triangular matrix, then take the product of the diagonal entries. This product is zero if and only if one of the diagonal entries is zero, which happens exactly when there are fewer than $n$ pivots. The product is therefore a test for invertibility, and it deserves its own name: the determinant. (Row swaps during the forward pass can flip its sign, but they cannot change whether it is zero.) We shall learn more about this quantity later.
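The test can be sketched in plain Python. This is our own illustration (the function name `determinant_by_elimination` is not from the text): forward-eliminate with exact rationals, track the sign flips from row swaps, and multiply the diagonal.

```python
from fractions import Fraction

def determinant_by_elimination(A):
    """Forward-eliminate to upper triangular form, then return the
    sign-adjusted product of the diagonal entries."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for col in range(n):
        # Find a non-zero entry at or below the diagonal in this column.
        pivot_row = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot_row is None:
            return Fraction(0)  # no pivot here: fewer than n pivots overall
        if pivot_row != col:
            M[col], M[pivot_row] = M[pivot_row], M[col]
            sign = -sign  # each row swap flips the sign of the product
        # Kill the entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= M[i][i]
    return prod
```

For the matrices in the exercises below, this gives $1$ for the first $2 \times 2$ matrix and $-26$ for the $3 \times 3$ matrix $B$; a singular matrix such as `[[1, 2], [2, 4]]` gives $0$.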

## SageMath and Gauss-Jordan Elimination

We have already seen that SageMath has commands for constructing matrices and performing row operations. Those are the operations used to perform Gauss-Jordan Elimination. But there are several interesting and useful commands in this neighborhood we have not yet discussed.

Let us construct my favorite matrix so we have something to play with.
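The original code cell is not shown here, so as a stand-in we use the $2 \times 2$ matrix from the first exercise below; any invertible square matrix works the same way. (This is SageMath code, to be run in a Sage session, not plain Python.)

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
A
```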

We can use the `.is_invertible()` method to check that `A` is invertible. In general, this method returns `True` or `False`.
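For instance, with the stand-in matrix from the first exercise:

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
A.is_invertible()  # True
```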

And we can get SageMath to just compute the inverse for us.
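For example (again with the stand-in matrix; `A^-1` and `~A` are equivalent spellings):

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
A.inverse()
```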

Just so we can see what happens if the matrix is not invertible, we try another matrix.
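A sketch of what that looks like, using a matrix whose second row is twice its first (our choice of example, not the author's):

```sage
B = matrix(QQ, [[1, 2], [2, 4]])
B.is_invertible()  # False
B.inverse()        # raises an error, since B is singular
```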

We can also ask SageMath to compute determinants with the `.determinant()` method.
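With the stand-in matrix:

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
A.determinant()    # 1; the shorthand A.det() also works
```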

SageMath is also capable of computing the reduced row echelon form (the “rref”) of a matrix with the appropriately named `.rref()` method.

The method `.rref()` does not change the matrix `A`. There is another command which will work the same way for our purposes, `.echelon_form()`.
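With the stand-in matrix, both methods return the identity and leave `A` itself untouched (over $\mathbb{Q}$, `.echelon_form()` agrees with `.rref()`):

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
A.rref()
A.echelon_form()
A                  # still unchanged
```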

There is a related command which will find the rref and then update the matrix in place. It is called `.echelonize()`. Because I don't really want to mess with `A`, we will make a copy first.

Now we ask SageMath to print those out for us.
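A sketch with the stand-in matrix; note that `.echelonize()` modifies the matrix in place and returns `None`:

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
C = copy(A)
C.echelonize()
print(A)   # still the original matrix
print(C)   # now the 2 x 2 identity
```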

Now, we can be just a bit more hands-on with Gauss-Jordan elimination if we do it this way. We will combine commands we have used before to do this.

Now we do the algorithm.
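Here is a hypothetical hands-on run with the stand-in matrix; `augment` glues on the identity, and the row-operation methods carry out the algorithm (the multipliers shown are specific to this matrix):

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
M = A.augment(identity_matrix(QQ, 2), subdivide=True)
M.add_multiple_of_row(1, 0, -1/3)   # forward pass: kill the entry below the first pivot
M.add_multiple_of_row(0, 1, -51)    # backward pass: kill the 17 above the second pivot
M.rescale_row(0, 1/3)               # rescale the pivots to 1's
M.rescale_row(1, 3)
M                                   # now (I | B), with B the inverse of A
```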

That was good. But we only need the right-hand submatrix. We can get SageMath to report just that!

It is often convenient to chain methods together like this. Then you can read what happens from left to right.
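For instance, the whole computation of the inverse can be written as one chain, read left to right (using the stand-in matrix; the slice `[:, 2:]` keeps only the columns of the right-hand block):

```sage
A = matrix(QQ, [[3, 17], [1, 6]])
A.augment(identity_matrix(QQ, 2)).rref()[:, 2:]   # equals A.inverse()
```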

## Exercises

Keep this in mind. The computations are simple, but tedious. Perhaps you want to use an appropriate tool.

1. Use Gauss-Jordan elimination to find the inverse of the matrix $A$ below. \begin{equation*} A = \begin{pmatrix} 3 & 17 \\ 1 & 6 \end{pmatrix} \end{equation*} Be sure to clearly write down the operations you use and the matrices which perform the operations by left multiplication.

2. Use Gauss-Jordan elimination to find the inverse of the matrix $X$ below. \begin{equation*} X = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{equation*} Be sure to clearly write down the operations you use and the matrices which perform the operations by left multiplication.

3. Use Gauss-Jordan elimination to find the inverse of the matrix $B$ below. \begin{equation*} B = \begin{pmatrix} 3 & 4 & -1\\ 1 & 6 & 1 \\ 0 & 3 & -1 \end{pmatrix} \end{equation*} Be sure to clearly write down the operations you use and the matrices which perform the operations by left multiplication.

4. Use Gauss-Jordan elimination to find the inverse of the matrix $B$ below. \begin{equation*} B = \begin{pmatrix} 0 & 3 & 4 & -1\\ 0 & 1 & 6 & 1 \\ 2 & 0 & 3 & -1 \\ 5 & -1 & 1 & 3 \end{pmatrix} \end{equation*} Be sure to clearly write down the operations you use and the matrices which perform the operations by left multiplication.

5. Use Gauss-Jordan elimination to find the inverse of the matrix $D$ below. \begin{equation*} D = \begin{pmatrix} 3 & 17 & -1 & 3 & 1 \\ 1 & 6 & -2 & 1 & 1 \\ 2 & 2 & 1 & -5 & 1 \\ 0 & 0 & 3 & 1 & -3 \\ -2 & 3 & 4 & 1 & 1 \end{pmatrix} \end{equation*}

6. Suppose that for the matrix $D$ in the last exercise we imagine solving the matrix equation $Dx = b$ for some vector $b$ of the appropriate size. What might one mean by the row picture in this case? What might the column picture mean?

7. Design a $6 \times 6$ matrix which has the following properties: