ELIAS EBNER


Linear Algebra

  1. Vectors
    1. Scalars
    2. Vectors vs Sets
    3. Addition and Subtraction
    4. Scalar Multiplication
    5. Zero Vectors
    6. Linear Combinations
    7. Real Dot Product
    8. Length of a Vector
    9. Orthogonal Vectors
    10. Parallel Vectors
    11. Angle Between Vectors
    12. Unit Vectors
  2. Matrices
    1. Notation
    2. Indexing
    3. Submatrices
    4. Matrix-by-Vector Product
    5. Addition and Subtraction
    6. Scalar Multiplication
    7. Transpose
    8. Symmetries
    9. Matrix Multiplication
    10. Identity Matrix
    11. Non-Negative Integer Powers
    12. Reverse Order Law of Transposition
  3. Linear Systems
    1. Inverse Matrices
    2. Singular Matrices
    3. Linear Dependence
    4. Solutions
  4. Planes
    1. Vector Cross Product
  5. Gaussian Elimination

Linear Systems - Introduction

Recall that we introduced the matrix-by-vector product as a linear combination of the columns of a matrix, with the components of a vector as the coefficients.

Take the vectors

$$\vec{u} = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad \vec{v} = \begin{bmatrix} -1 \\ -1 \\ 0 \end{bmatrix}, \quad \vec{w} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$$

Their linear combinations are denoted by $c\vec{u} + d\vec{v} + e\vec{w}$, that is

$$c \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + d \begin{bmatrix} -1 \\ -1 \\ 0 \end{bmatrix} + e \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} c - d \\ -d \\ c + e \end{bmatrix}.$$

We now rewrite this linear combination as a matrix-by-vector product. We put the vectors in the columns of the matrix $A$ and put the scalars $c$, $d$, and $e$ in the column vector $\vec{x}$. That is

$$A \vec{x} = \begin{bmatrix} \vec{u} & \vec{v} & \vec{w} \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & -1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} c \\ d \\ e \end{bmatrix}.$$

Performing the matrix-by-vector multiplication produces exactly this linear combination of the columns of $A$: $c\vec{u} + d\vec{v} + e\vec{w}$.
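As a quick numerical sketch (assuming NumPy is available; the array values are taken from the example above, and the scalars $c$, $d$, $e$ are chosen arbitrarily), we can check that the product $A\vec{x}$ and the column combination agree:

```python
import numpy as np

# The columns u, v, w from the example above.
u = np.array([1, 0, 1])
v = np.array([-1, -1, 0])
w = np.array([0, 0, 1])

# Stack the vectors as the columns of A.
A = np.column_stack([u, v, w])

# Arbitrary scalars c, d, e collected into the vector x.
c, d, e = 2.0, 3.0, -1.0
x = np.array([c, d, e])

# The matrix-by-vector product equals the linear combination of columns.
print(A @ x)            # same result as the line below
print(c*u + d*v + e*w)  # c*u + d*v + e*w
```

Both lines print the same vector, illustrating that $A\vec{x}$ is nothing more than a linear combination of the columns of $A$.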

Conventionally, the resulting vector is denoted by $\vec{b}$.

Up until now, when performing matrix-by-vector multiplication, we were given a matrix $A$ and a vector $\vec{x}$ to multiply with, and we were tasked with finding the resulting vector $\vec{b}$. Now we change our goal: we are given some matrix $A$ and the vector $\vec{b}$ which is the result of the product $A\vec{x}$, and we are tasked with finding the vector $\vec{x}$ which leads to that result. We are essentially “reversing” the problem.

So before we were asking the question “what is the result of the linear combination $c\vec{u} + d\vec{v} + e\vec{w}$?”, and now we are asking the question “what linear combination of $\vec{u}$, $\vec{v}$, and $\vec{w}$ produces $\vec{b}$?”.

The expression $A\vec{x} = \vec{b}$ is referred to as a linear system, and solving the linear system means finding the vector $\vec{x}$ such that $A\vec{x} = \vec{b}$.
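As a sketch of this “reversed” problem (assuming NumPy), `numpy.linalg.solve` finds the $\vec{x}$ satisfying $A\vec{x} = \vec{b}$; here $A$ is the matrix built from $\vec{u}$, $\vec{v}$, $\vec{w}$ above, and $\vec{b}$ is a right-hand side chosen for illustration:

```python
import numpy as np

# The matrix whose columns are u, v, w from above.
A = np.array([[1, -1, 0],
              [0, -1, 0],
              [1,  0, 1]])

# An illustrative right-hand side b.
b = np.array([-1, -3, 1])

# Find x such that A x = b (this works when A is invertible).
x = np.linalg.solve(A, b)
print(x)  # the coefficients of the column combination

# Check: multiplying back reproduces b.
print(np.allclose(A @ x, b))  # True
```

The solver returns the coefficients that combine the columns of $A$ into $\vec{b}$, which is precisely what “solving the linear system” means.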

Conventionally, instead of $c$, $d$, and $e$, the symbols $x_1$, $x_2$, and $x_3$ are used to denote the unknowns. This is also consistent with the notation used for the components of a vector.

Similarly, the components of the resulting vector $\vec{b}$ are referred to as $b_1$, $b_2$, $b_3$, and so on.

In the context of linear systems, the columns of a matrix $A$ are often denoted by $\vec{a}_1, \vec{a}_2, \cdots$, as opposed to the usual notation $A_{\star 1}, A_{\star 2}, \cdots$. This new notation emphasizes that the columns of a matrix are simply column vectors.

Example

Take the linear system

$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \end{bmatrix}.$$

Solving the system would mean finding the vector

$$\vec{x} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}.$$

This means that the vector $\vec{b}$ can be rewritten as

$$3 \vec{a}_1 + 2 \vec{a}_2.$$

We can even check the solution:

$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 3 \\ 2 \end{bmatrix} = \begin{bmatrix} 1 \cdot 3 + 1 \cdot 2 \\ 1 \cdot 3 + (-1) \cdot 2 \end{bmatrix} = \begin{bmatrix} 5 \\ 1 \end{bmatrix}.$$
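The same worked example can be reproduced numerically (a sketch assuming NumPy): solve the $2 \times 2$ system and confirm that $\vec{b}$ is the combination $3\vec{a}_1 + 2\vec{a}_2$ of the columns:

```python
import numpy as np

# The 2x2 system from the example.
A = np.array([[1,  1],
              [1, -1]])
b = np.array([5, 1])

# Solve A x = b.
x = np.linalg.solve(A, b)
print(x)  # [3. 2.]

# Equivalently: b is 3*a1 + 2*a2, where a1, a2 are the columns of A.
a1, a2 = A[:, 0], A[:, 1]
print(3*a1 + 2*a2)  # [5 1]
```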