ELIAS EBNER


Singular Matrix of a Linear System

In the previous lesson, we covered non-singular matrices, which, as the coefficient matrix of a linear system, guarantee that the system has exactly one solution.

There is also the case where a matrix is a so-called singular matrix, in which case the system either has infinitely many solutions or no solution at all.

Just like with non-singular matrices, we can build our intuition about singular matrices from what we already know about scalar algebra.

Parallel Example From Scalar Algebra

We think of the equation

\[ ax = b. \]

In this equation, $x$ is unknown, whereas $a$ and $b$ are given. In the previous lesson, we simply divided by $a$ to solve the equation. But you can't always divide by $a$: when $a$ is $0$, division is undefined.

If $a = 0$, we have two scenarios: either $b = 0$ or $b \ne 0$.

If $b = 0$, when we plug the values for $a$ and $b$ into our equation we're left with

\[\begin{aligned} 0x &= 0 \\ 0 &= 0. \end{aligned}\]

That means that no matter what value we choose for $x$, the equation will always hold true. There are infinitely many solutions, since $x$ can take any value.

We now consider the case where $b \ne 0$. We only know the value of $a$, and we use that in our equation:

\[\begin{aligned} 0x &= b \\ 0 &= b. \end{aligned}\]

Now, we know for a fact that $b \ne 0$, since that is the case we are observing. So this equation can never be true: there is no value of $x$ that makes it hold. There is, in fact, no solution.
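The case analysis above can be sketched in a few lines of Python. The helper name and return format here are just illustrative assumptions, not part of the lesson:

```python
def solve_scalar(a, b):
    """Solve the scalar equation a*x = b, covering all three cases."""
    if a != 0:
        return ("unique", b / a)          # exactly one solution: x = b/a
    if b == 0:
        return ("infinitely many", None)  # 0x = 0 holds for every x
    return ("none", None)                 # 0x = b with b != 0 never holds

print(solve_scalar(2, 6))   # ('unique', 3.0)
print(solve_scalar(0, 0))   # ('infinitely many', None)
print(solve_scalar(0, 5))   # ('none', None)
```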

How This Applies to Matrices

A similar idea also applies to matrices. Not all linear systems have exactly one solution: some have infinitely many, and some have no solution at all.

If $\vec{x}$ has two components, you can think of the linear system as a bunch of lines in a two-dimensional space. If $\vec{x}$ has three components, you can think of the linear system as a bunch of planes in a three-dimensional space. A similar idea holds for greater dimensions, but we stick to two and three for now, since we can visualize those.

We observe a case in which $\vec{x}$ has two components:

\[\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.\]

From your studies, you might be used to seeing linear systems as a collection of equations. We can "convert" the above linear system, expressed as a matrix-by-vector product, into a system of equations:

\[\begin{cases} x_1 + x_2 = 1 \\ x_1 + x_2 = 2 \end{cases}\]

Geometrically, if you rewrite each equation to express $x_2$ in terms of $x_1$ and plot them in a coordinate system, they form two parallel lines:

\[\begin{cases} x_2 = 1 - x_1 \\ x_2 = 2 - x_1. \end{cases}\]

The red line is the first equation, and the blue line represents the second equation.

You can think of the solution to the system of equations as the intersection between these two lines. Since these two lines never cross, there is no intersection. That is, the system has no solution: you won't be able to find values for the vector $\vec{x}$ such that both equations are satisfied.
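You can also observe this numerically. NumPy's `np.linalg.solve` only handles systems with exactly one solution, so it raises an error for this coefficient matrix; a minimal sketch:

```python
import numpy as np

# The system from above: both rows encode the same left-hand side,
# but the right-hand sides differ.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

try:
    np.linalg.solve(A, b)
except np.linalg.LinAlgError:
    print("A is singular: no unique solution exists")
```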

Now consider the following case:

\[\begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.\]

That would leave us with these equations:

\[\begin{cases} x_1 + x_2 = 1 \\ 2x_1 + 2x_2 = 2. \end{cases}\]

Now, this is harder to visualize graphically, but what we can do as a first step is to simplify the second equation by dividing it by $2$:

\[\begin{cases} x_1 + x_2 = 1 \\ x_1 + x_2 = 1. \end{cases}\]

As you can see, we really have the same equation twice. If we plot these equations, we get the same line twice, one lying exactly on top of the other. Here is a visual to help you understand:

It’s a bit hard to see, but the thicker, blue line represents the first equation, and the red line represents the second equation.

In other words, they "intersect" at every point, which means there are infinitely many solutions. Look, try these values:

\[\vec{x} = \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix}.\]

With that, our linear system becomes

\[\begin{aligned} \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} 0.5 \\ 0.5 \end{bmatrix} &= \begin{bmatrix} 1 \\ 2 \end{bmatrix} \\ \begin{bmatrix} 1 \cdot 0.5 + 1 \cdot 0.5 \\ 2 \cdot 0.5 + 2 \cdot 0.5 \end{bmatrix} &= \begin{bmatrix} 1 \\ 2 \end{bmatrix} \\ \begin{bmatrix} 1 \\ 2 \end{bmatrix} &= \begin{bmatrix} 1 \\ 2 \end{bmatrix}. \end{aligned}\]

Okay, so we found one solution. But now try

\[\vec{x} = \begin{bmatrix} 0.25 \\ 0.75 \end{bmatrix}.\]

In fact, and you can try these yourself, you can use any values for $x_1$ and $x_2$ where $x_1 + x_2 = 1$. That means

\[\vec{x} = \begin{bmatrix} 0.2 \\ 0.8 \end{bmatrix},\]

or

\[\vec{x} = \begin{bmatrix} 0.6 \\ 0.4 \end{bmatrix},\]

or even

\[\vec{x} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.\]

There are infinitely many possible vectors. If you think of the solution as the intersection between the two lines, then every point along the line is now at our disposal (since they're the same line).

You could describe all solutions with

\[\vec{x} = \begin{bmatrix} t \\ 1 - t \end{bmatrix},\]

where $t$ can be any real number.
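Assuming the parametrization above, a quick NumPy check (the values of $t$ below are arbitrary picks) confirms that every such vector solves the system:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([1.0, 2.0])

# Every vector of the form [t, 1 - t] should satisfy A @ x = b.
for t in [0.0, 0.25, 0.5, 0.6, 1.0, -3.0]:
    x = np.array([t, 1.0 - t])
    assert np.allclose(A @ x, b)

print("every tested [t, 1 - t] solves the system")
```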

The same applies for planes in a three-dimensional space. When two different planes intersect, their intersection can be described by a line. When a third plane (different from the first two) intersects them, their common intersection is typically a point. But what if two of the planes are the same? Well, then we get a line as our solution set. And what if all three planes are the same? Well, then our solution set is an entire plane.

So What Is a Singular Matrix?

A singular matrix is a square matrix that has no inverse. That means there does not exist a matrix you can multiply it by to get the identity matrix. In other words, if $A$ is a singular matrix, there exists no $A^{-1}$ such that

\[AA^{-1} = I.\]
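This definition shows up directly in code: asking NumPy for the inverse of a singular matrix fails. A small sketch using the matrix from earlier:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])   # second row is twice the first: redundant

# A singular matrix has no inverse, so np.linalg.inv raises LinAlgError.
try:
    A_inv = np.linalg.inv(A)
    print("A is non-singular; found an inverse")
except np.linalg.LinAlgError:
    print("A is singular: no inverse exists")
```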

This can happen when the information in $A$'s rows or columns is "redundant": the rows or columns are not linearly independent, meaning that some row or column is a linear combination of the others.

Next Steps

We will get into the details of what this means in future lessons.
