# Section 3.4 Solving a System

# Subsection The Assignment

- Read chapter 3 section 4 of Strang.
- Read the following and complete the tasks below.

# Subsection Learning Goals

Before class, a student should be able to:

- Identify a particular solution to a matrix-vector equation \(Ax=b\). (Provided there is one.)
- Find the complete solution to a matrix-vector equation \(Ax=b\) as a parametrized object. (Provided there is one.)

After class, a student should be able to:

- Describe the complete solution to a matrix-vector equation \(Ax=b\) as an implicit object, cut out by equations.
- Describe the possibilities for the number of solutions to a matrix-vector equation \(Ax=b\) in terms of the shape of the matrix.

# Subsection Discussion: The Complete Solution to a System of Equations

This is the big day! We finally learn how to write out the general solution to a system of linear equations. We have spent so much time understanding the related ideas that this should go pretty quickly.

The tiny little facts underneath the analysis for this section are these: for a matrix \(A\), vectors \(v\) and \(w\), and a scalar \(\lambda\), all chosen so that the equations make sense, \begin{equation*} \begin{array}{rcl} A(v+w) &= &Av + Aw \\ A(\lambda v) &= &\lambda ( Av ) \end{array} \end{equation*}

The first is a kind of *distributive property*, and the second
is a kind of *commutative property*. When taken together, these
things say that the operation of “left-multiply by the matrix \(A\)”
is a special kind of function. The kind of function here is important
enough that we have a special word for this combined property: it is
called *linearity*. That is, left-multiplication by \(A\) is a
*linear operation* or a *linear transformation*.
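
These identities are easy to check on a concrete example. Here is a quick sanity check in Python with SymPy; the matrix and vectors below are arbitrary choices for illustration, not from the text.

```python
from sympy import Matrix

# Arbitrary small example (not from the text) to check both identities.
A = Matrix([[1, 2], [3, 4], [5, 6]])
v = Matrix([1, -1])
w = Matrix([2, 5])
lam = 7

assert A * (v + w) == A * v + A * w    # the "distributive" identity
assert A * (lam * v) == lam * (A * v)  # the "commutative" identity
```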

The linearity property makes it possible to check the following two results.

##### Theorem 3.3

Let \(Ax=b\) be a system of linear equations, and let \(Ax=0\) be the associated homogeneous system. If \(x_p\) is some particular solution to \(Ax=b\) and \(x_n\) is some solution to \(Ax=0\), then \(x_p + x_n\) is another solution to \(Ax=b\). Conversely, if \(x_1\) and \(x_2\) are both solutions to \(Ax=b\), then the difference \(x_1 - x_2\) is a solution to \(Ax=0\).

If we put these two results together, we find this result, which sounds fancier but has exactly the same content.

##### Theorem 3.4

The complete set of solutions to the system \(Ax=b\) is the set \begin{equation*} \left\{ x_p + x_n \mid x_n \in \mathrm{null}(A) \right\}, \end{equation*} where \(x_p\) is any one particular solution to \(Ax=b\).

This leads us to Strang's very sensible advice about finding the complete solution:

- Form the augmented matrix \(\left( A \mid b \right)\) and use Gauss-Jordan elimination to put it in reduced row echelon form \(\left( R \mid d \right)\).
- Use the information from the RREF to find a particular solution \(x_p\) by solving for the pivot variables from the vector \(d\) and setting the free variables to zero.
- Use the special solutions \(s_1, s_2, \dots, s_k\) (if any exist!) to describe the nullspace \(\mathrm{null}(A)\).
- Write down the resulting general solution: \begin{equation*} x = x_p + a_1 s_1 + a_2 s_2 + \dots + a_k s_k, \quad \text{for any scalars } a_i \in \mathbb{R}. \end{equation*}

# Subsection SageMath and Solving General Systems

SageMath has many built-in methods for solving systems of linear equations. We will investigate three common ones with a single example considered several times.

# Subsubsection Method One: RREF and the Nullspace

First we find a particular solution.
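
The original Sage cell is not reproduced here. As a stand-in, here is the analogous computation in Python with SymPy, run on an invented \(3\times 4\), rank-\(3\) matrix chosen to match the shape described below.

```python
from sympy import Matrix

# Invented 3x4, rank-3 example (a stand-in for the worked example).
A = Matrix([[1, 2, 3, 4],
            [0, 1, 2, 3],
            [0, 0, 0, 1]])
b = Matrix([1, 2, 3])

# Form the augmented matrix (A | b) and reduce it.
R, pivots = A.row_join(b).rref()
print(R)
print(pivots)   # -> (0, 1, 3): all pivots lie in columns of A
```

For this stand-in matrix the pivot columns are 0, 1, and 3, so no pivot falls in the augmented column and the system is consistent.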

The reduced matrix clearly has three pivots, and all of them belong to columns of the original matrix, not to the augmented column. So there will be a solution. We pull out the particular solution.
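
The Sage cell for this step is also missing. A self-contained SymPy sketch on an invented \(3\times 4\) example reads the pivot variables off the reduced augmented matrix, sets the free variable to zero, and then checks the result.

```python
from sympy import Matrix

# Invented 3x4, rank-3 example.
A = Matrix([[1, 2, 3, 4],
            [0, 1, 2, 3],
            [0, 0, 0, 1]])
b = Matrix([1, 2, 3])

R, pivots = A.row_join(b).rref()

# Particular solution: free variables set to 0, pivot variables read
# from the last column of the reduced augmented matrix.
x_p = Matrix([0] * A.cols)
for row, col in enumerate(pivots):
    x_p[col] = R[row, -1]

# Since we built x_p by hand from the RREF, check the work:
assert A * x_p == b
```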

Since we typed that in by hand, we should check our work.

Now we need to find the nullspace and the special solutions.
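
Again as a hedged SymPy stand-in for the missing Sage cell, on an invented \(3\times 4\), rank-\(3\) example:

```python
from sympy import Matrix

# Invented 3x4, rank-3 example.
A = Matrix([[1, 2, 3, 4],
            [0, 1, 2, 3],
            [0, 0, 0, 1]])

# The special solutions form a basis for the nullspace.
specials = A.nullspace()
print(len(specials))   # number of special solutions = number of free columns
for s in specials:
    assert A * s == Matrix([0, 0, 0])   # each one solves Ax = 0
```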

The basis has only one row, so there is only one special solution. This matches our expectation: our system is \(3\times 4\) and has rank \(3\), so there is only one free column, and hence only one special solution.

Now we can check the “general solution”.
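
A SymPy version of this check, on an invented \(3\times 4\), rank-\(3\) example, verifies symbolically that the general solution solves the system for every value of the parameter:

```python
from sympy import Matrix, symbols

# Invented 3x4, rank-3 example.
A = Matrix([[1, 2, 3, 4],
            [0, 1, 2, 3],
            [0, 0, 0, 1]])
b = Matrix([1, 2, 3])

# Particular solution: pivot variables from the RREF, free variable zero.
R, pivots = A.row_join(b).rref()
x_p = Matrix([0] * A.cols)
for row, col in enumerate(pivots):
    x_p[col] = R[row, -1]

# General solution x = x_p + a * s, with s the single special solution.
a = symbols('a')
s = A.nullspace()[0]
x = x_p + a * s

# A x - b should vanish identically, for every value of a.
assert (A * x - b).expand() == Matrix([0, 0, 0])
```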

# Subsubsection Method Two: A SageMath Built-in

SageMath has a built-in method that looks like “matrix division”. Here we “left divide” by the matrix, writing `A \ b`. This is odd notation, and is just something SageMath allows.

It is weird, but it works even when `A` is not invertible, as in our example.
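
The Sage “left division” cell is not shown here. SymPy has no backslash operator, but its `gauss_jordan_solve` is a reasonable stand-in: it solves the system even for a non-square, non-invertible matrix, and zeroing the free parameters picks out a single particular solution. The matrix below is an invented example.

```python
from sympy import Matrix

# Invented 3x4 example; A is not invertible (it is not even square).
A = Matrix([[1, 2, 3, 4],
            [0, 1, 2, 3],
            [0, 0, 0, 1]])
b = Matrix([1, 2, 3])

# gauss_jordan_solve returns the solution in terms of free parameters...
sol, params = A.gauss_jordan_solve(b)

# ...and setting the parameters to zero picks out one particular solution.
x_p = sol.subs({p: 0 for p in params})
assert A * x_p == b
```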

The downside to this particular method is that it only gives you one particular solution. It does not produce the complete solution. You have to do that part yourself, perhaps as above.

# Subsubsection Method Three: Another SageMath Built-in

Finally, SageMath will also try to solve the system if you apply the
`.solve_right()` method to `A`. You have to supply the vector
`b` as an argument to the command.
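
The Sage cell is again omitted. As a hedged SymPy analogue, `linsolve` solves the same problem; unlike `.solve_right()`, it returns the whole solution set, so here we extract one particular solution from it by zeroing the unknowns that remain as free parameters. The matrix is an invented example.

```python
from sympy import Matrix, linsolve, symbols

# Invented 3x4, rank-3 example.
A = Matrix([[1, 2, 3, 4],
            [0, 1, 2, 3],
            [0, 0, 0, 1]])
b = Matrix([1, 2, 3])

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')

# linsolve returns the complete solution set, parametrized by the free unknowns.
solset = linsolve((A, b), x1, x2, x3, x4)
sol = next(iter(solset))          # one parametrized solution tuple

# Zero out every remaining symbol to set the free variables to zero.
particular = Matrix([e.subs({x1: 0, x2: 0, x3: 0, x4: 0}) for e in sol])
assert A * particular == b
```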

Again, this only pulls out a single particular solution. It is up to you to figure out the rest.

# Subsection Exercises

##### Task 85

(Strang ex 3.4.4) Find the complete solution (also called the general solution) to \begin{equation*} \begin{pmatrix} 1 & 3 & 1 & 2 \\ 2 & 6 & 4 & 8 \\ 0 & 0 & 2 & 4 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \\ t \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 1 \end{pmatrix}. \end{equation*}

##### Task 86

(Strang ex 3.4.6) What conditions on \(b_1\), \(b_2\), \(b_3\), and \(b_4\) make each of these systems solvable? Find a solution in those cases.

- \begin{equation*} \begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 2 & 5 \\ 3 & 9 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}. \end{equation*}
- \begin{equation*} \begin{pmatrix} 1 & 2 & 3\\ 2 & 4 & 6\\ 2 & 5 & 7\\ 3 & 9 & 12\end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3\end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix}. \end{equation*}

##### Task 87

(Strang ex 3.4.11) It is impossible for a \(1 \times 3\) system of equations to have \(x_p = (2,4,0)\) and \(x_n = \text{ any multiple of } (1,1,1)\). Explain why.

##### Task 88

(Strang ex 3.4.13) Each of the statements below is false. Find a \(2\times 2\) counterexample to each one.

- The complete solution is any linear combination of \(x_p\) and \(x_n\).
- A system \(Ax=b\) has at most one particular solution.
- The solution \(x_p\) with all free variables zero is the shortest solution, in that it has the minimum norm \(||x_p||\).
- If \(A\) is an invertible matrix, there is no solution \(x_n\) in the nullspace.

##### Task 89

(Strang ex 3.4.21) Find the complete solution in the form \(x_p + x_n\) to these full-rank systems.

- \begin{equation*} x+y+z = 4\end{equation*}
- \begin{equation*} \begin{array}{ccccccc} x & + & y & + & z & = & 4 \\ x & - & y & + & z & = & 4 \end{array} \end{equation*}

##### Task 90

(Strang ex 3.4.24) Give examples of matrices \(A\) for which the number of solutions to \(Ax = b\) is

- \(0\) or \(1\), depending on \(b\);
- \(\infty\), regardless of \(b\);
- \(0\) or \(\infty\), depending on \(b\);
- \(1\), regardless of \(b\).

##### Task 91

(Strang ex 3.4.31) Find examples of matrices with the given property, or explain why it is impossible:

- The only solution of \(Ax = \left(\begin{smallmatrix} 1 \\ 2 \\ 3 \end{smallmatrix}\right)\) is \(x = \left(\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right)\).
- The only solution of \(Bx = \left(\begin{smallmatrix} 0 \\ 1 \end{smallmatrix}\right)\) is \(x = \left(\begin{smallmatrix} 1 \\ 2 \\ 3 \end{smallmatrix}\right)\).

##### Task 92

(Strang ex 3.4.33) The complete solution to the equation \(Ax = \left(\begin{smallmatrix} 1 \\ 3\end{smallmatrix}\right)\) is \(x = \left(\begin{smallmatrix} 1 \\ 0\end{smallmatrix}\right) + c\left(\begin{smallmatrix}0\\ 1 \end{smallmatrix}\right)\). Find the matrix \(A\). Write the set of equations that corresponds to \(Ax = b\). (This is the *implicit* description of this set!)