ELIAS EBNER


Linear Dependence and Independence

A set of vectors is linearly dependent if one or more vectors in the set can be expressed as a linear combination of the others. If this is not possible, the vectors are said to be linearly independent.

The formal definition says that a set of vectors $\vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n$ is linearly independent if the only solution to

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + \cdots + c_n \vec{v}_n = 0$$

is

$$c_1 = c_2 = \cdots = c_n = 0.$$

The idea is β€œcan we combine the vectors to cancel each other out?”.
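This definition can also be checked mechanically: stack the vectors as columns of a matrix and ask whether the homogeneous system has only the trivial solution. A minimal sketch in Python, assuming NumPy is available (the function name is my own):

```python
import numpy as np

def is_independent(*vectors):
    """Return True when the given vectors are linearly independent.

    The vectors are independent exactly when the matrix that has them
    as columns has rank equal to the number of vectors, i.e. when
    c1*v1 + ... + cn*vn = 0 forces c1 = ... = cn = 0.
    """
    A = np.column_stack(vectors)
    return bool(np.linalg.matrix_rank(A) == len(vectors))
```

For example, two perpendicular axis vectors come out independent, while two parallel vectors do not.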

In this lesson I would like to build some intuition for this concept, as I think that, in general but especially in this case, it’s the most important step in linking this idea to other important concepts in linear algebra.

We start with numerical examples and then move on to a more visual way of thinking about linear dependence.

Numerical Examples

First Example

Take the vectors

$$\vec{u} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \quad \vec{v} = \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix}, \quad \vec{w} = \begin{bmatrix} 5 \\ 6 \\ 7 \end{bmatrix}.$$

Now, the vectors are linearly dependent, because

$$\begin{align*} \vec{w} &= 2\vec{u} + 1\vec{v} \\ &= 2 \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} + \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} 2 \\ 4 \\ 6 \end{bmatrix} + \begin{bmatrix} 3 \\ 2 \\ 1 \end{bmatrix} \\ &= \begin{bmatrix} 5 \\ 6 \\ 7 \end{bmatrix}. \end{align*}$$

As you can see, w⃗\vec{w}w can be expressed as a linear combination of u⃗\vec{u}u and v⃗\vec{v}v.

Second Example

There is an even simpler case. Take the vectors

$$\vec{v} = \begin{bmatrix} 4 \\ -3 \end{bmatrix}, \quad \vec{w} = \begin{bmatrix} -8 \\ 6 \end{bmatrix}.$$

With a single vector, a β€œlinear combination” of $\vec{v}$ is just $\vec{v}$ multiplied by some scalar. It’s a trivial case, but it is still a linear combination.

And as you can see, you can rewrite $\vec{w}$ as $-2\vec{v}$:

$$\vec{w} = -2\vec{v} = -2 \begin{bmatrix} 4 \\ -3 \end{bmatrix} = \begin{bmatrix} -8 \\ 6 \end{bmatrix}.$$

That means that $\vec{w}$ is linearly dependent on $\vec{v}$.

Third Example

Let’s now look at an example where the vectors are linearly independent.

Take the vectors

$$\vec{u} = \begin{bmatrix} -4 \\ 2 \\ -1 \end{bmatrix}, \quad \vec{v} = \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}, \quad \vec{w} = \begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix}.$$

To check whether these vectors are independent or not, we use the definition of linear dependence:

$$\vec{0} = c_1 \vec{u} + c_2 \vec{v} + c_3 \vec{w}.$$

If we can find a solution other than setting all coefficients to $0$, the vectors are dependent. If the only solution is

$$c_1 = c_2 = c_3 = 0,$$

then the vectors are linearly independent.

So, let’s proceed:

$$\begin{align*} \vec{0} &= c_1 \vec{u} + c_2 \vec{v} + c_3 \vec{w} \\ &= c_1 \begin{bmatrix} -4 \\ 2 \\ -1 \end{bmatrix} + c_2 \begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix} + c_3 \begin{bmatrix} 3 \\ -1 \\ 2 \end{bmatrix}. \end{align*}$$

We can rewrite this as a system of equations:

$$\begin{cases} -4c_1 + c_2 + 3c_3 = 0 \\ 2c_1 + c_2 - c_3 = 0 \\ -c_1 + 2c_2 + 2c_3 = 0 \end{cases}$$

You can solve this however you like; I will assume that you’re able to solve a linear system of equations. You will find that the only solution to the system is

$$c_1 = c_2 = c_3 = 0.$$

Therefore, the vectors $\vec{u}$, $\vec{v}$, and $\vec{w}$ are linearly independent.
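One way to confirm this without solving the system by hand: put $\vec{u}$, $\vec{v}$, and $\vec{w}$ as the columns of a matrix $A$. The system above is then $A\vec{c} = \vec{0}$, which has only the trivial solution exactly when $\det(A) \neq 0$. A sketch, assuming NumPy is available:

```python
import numpy as np

# Columns are u, v, w from this example.
A = np.array([[-4, 1,  3],
              [ 2, 1, -1],
              [-1, 2,  2]])

# A nonzero determinant means A @ c = 0 only for c = 0,
# so the columns are linearly independent.
print(np.linalg.det(A))  # -4, up to floating-point rounding
```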

Visual Intuition

The idea that I discuss in this section is extremely important, as we will build upon it a lot in later lessons.

Take the vectors

$$\vec{x} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \vec{y} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

As you might have already noticed, these vectors are unit vectors which point in the same direction as the axes of a two-dimensional coordinate system: $\vec{x}$ points in the direction of the $x$-axis, and $\vec{y}$ points in the same direction as the $y$-axis.

With these two vectors, you can reach any point on the $xy$-plane. When you have some point in a coordinate system, say $(3, 2)$, you are saying that to reach that point you have to β€œmove in the positive $x$ direction by $3$ units and move in the positive $y$ direction by $2$ units”. You can express that same idea with the $\vec{x}$ and $\vec{y}$ vectors defined above. To reach the vector

$$\vec{p} = \begin{bmatrix} 3 \\ 2 \end{bmatrix},$$

you can use

$$3\vec{x} + 2\vec{y}.$$

And just like with the point $(3, 2)$, you can reach any point on this plane. Therefore, the linear combinations of $\vec{x}$ and $\vec{y}$ describe the $xy$-plane.

The thing is, you don’t really need the vectors to point in the same direction as the axes to reach any point on the plane. Take the vectors

$$\vec{x} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, \quad \vec{y} = \begin{bmatrix} 5 \\ 6 \end{bmatrix}.$$

To reach the point $(3, 2)$, you can use

$$\begin{align*} \begin{bmatrix} 3 \\ 2 \end{bmatrix} &= -2\vec{x} + \vec{y} \\ &= -2 \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 5 \\ 6 \end{bmatrix} \\ &= \begin{bmatrix} -2 \\ -4 \end{bmatrix} + \begin{bmatrix} 5 \\ 6 \end{bmatrix} \\ &= \begin{bmatrix} 3 \\ 2 \end{bmatrix}. \end{align*}$$

Just like before, with some linear combination of $\vec{x}$ and $\vec{y}$, it is possible to reach any point on the $xy$-plane.
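Finding the coefficients of such a combination is itself just solving a small linear system: the columns of the matrix are $\vec{x}$ and $\vec{y}$, and the right-hand side is the target point. A sketch, assuming NumPy is available:

```python
import numpy as np

x = np.array([1, 2])
y = np.array([5, 6])
p = np.array([3, 2])

# Solve [x | y] c = p for the coefficients of the combination.
c = np.linalg.solve(np.column_stack([x, y]), p)
print(c)  # approximately [-2.  1.]
```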

But now consider the pair of vectors

$$\vec{x} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad \vec{y} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}.$$

In this case, we are not able to reach every point on the $xy$-plane. But why? What makes this pair of vectors different from the other two?

The answer is that these two vectors are linearly dependent, because $2\vec{x} = \vec{y}$.

But what does this mean visually? Well, in this simple case it’s easy: the two vectors point in the same direction. In fact, $\vec{y} = 2\vec{x}$ is exactly the definition of parallel vectors.

The second vector $\vec{y}$ does not β€œunlock” any new territory. Before, the second vector allowed us to move in a different direction than the first; now both vectors point in the same direction. By summing up scaled versions of these vectors (that is, by taking linear combinations), we can only move along a line. Most of the two-dimensional space cannot be reached.

This idea extends to higher dimensions. Take the vectors

$$\vec{x} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad \vec{y} = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \quad \vec{z} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}.$$

These are three-dimensional vectors. The vectors $\vec{x}$ and $\vec{y}$ together allow us to move along a plane in three-dimensional space. That is, their linear combinations allow us to reach all the points on a plane in this space.

Okay, so now we need a vector to move away from this plane and reach the rest of the three-dimensional space. Let’s look at $\vec{z}$. But wait, $\vec{z}$ is inside the plane formed by the linear combinations of $\vec{x}$ and $\vec{y}$. That is because

$$\vec{z} = \vec{x} + \vec{y}.$$

They’re linearly dependent. The vector $\vec{z}$ does not give us any β€œnew information”. You cannot reach spots of the 3D space that you weren’t able to reach by combining only $\vec{x}$ and $\vec{y}$.

Basically, each independent vector adds one new dimension. If a vector can be built from the others, it does not add new information.
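This counting of β€œnew directions” is exactly what the rank of a matrix measures (a concept we will meet properly later): each independent column raises the rank by one. A sketch with the three vectors above, assuming NumPy is available:

```python
import numpy as np

x = np.array([1, 0, 1])
y = np.array([0, 1, 0])
z = np.array([1, 1, 1])

# x and y span a plane: two independent directions.
print(np.linalg.matrix_rank(np.column_stack([x, y])))     # 2

# Adding z = x + y does not unlock a new direction.
print(np.linalg.matrix_rank(np.column_stack([x, y, z])))  # still 2
```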

If this concept is not entirely clear yet, don’t worry. We will cover this extensively in future lessons.
