Their linear combinations are denoted by $cu+dv+ew$, that is

$$c\begin{bmatrix}1\\0\\1\end{bmatrix}+d\begin{bmatrix}-1\\-1\\0\end{bmatrix}+e\begin{bmatrix}0\\0\\1\end{bmatrix}=\begin{bmatrix}c-d\\-d\\c+e\end{bmatrix}.$$
We now rewrite this linear combination as a matrix-by-vector product. We put the vectors
in the columns of the matrix A and put the scalars c, d, and e in
the column vector x. That is
$$Ax=\begin{bmatrix}u & v & w\end{bmatrix}\begin{bmatrix}c\\d\\e\end{bmatrix}=\begin{bmatrix}1&-1&0\\0&-1&0\\1&0&1\end{bmatrix}\begin{bmatrix}c\\d\\e\end{bmatrix}.$$
Performing the matrix–vector multiplication produces exactly this linear combination of the columns of A: cu+dv+ew.
Conventionally, the resulting vector is denoted by b.
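This column-combination view is easy to verify numerically. Here is a minimal NumPy sketch using the vectors u, v, and w from above; the scalar values for c, d, and e are arbitrary choices for illustration:

```python
import numpy as np

# The columns u, v, w from the text, assembled into the matrix A
u = np.array([1, 0, 1])
v = np.array([-1, -1, 0])
w = np.array([0, 0, 1])
A = np.column_stack([u, v, w])

# Arbitrary scalars c, d, e stacked into the column vector x
c, d, e = 2.0, 3.0, 4.0
x = np.array([c, d, e])

# A @ x produces exactly the linear combination c*u + d*v + e*w
b = A @ x
print(b)                  # [-1. -3.  6.]
print(c*u + d*v + e*w)    # the same vector
```

Both printed vectors agree componentwise, matching the formula $(c-d,\,-d,\,c+e)$ derived above.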
Up until now, when performing matrix-by-vector multiplication, we were given a matrix A
and a vector x to multiply with, and we were tasked with finding the resulting
vector b. Now we reverse the problem: we are given some matrix A
and the vector b which results from the product Ax, and we are
tasked with finding the vector x which leads to that result.
So before we were asking the question “what is the result of the linear combination
$cu+dv+ew$?”, and now we are asking the question
“what linear combination of u, v, and w produces b?”.
The expression Ax=b is referred to as a linear system, and solving
the linear system means finding the vector x such that Ax=b.
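As an illustrative sketch (assuming NumPy is available), solving the linear system for the matrix A above amounts to a single call to a linear solver; the right-hand side b here is an arbitrary example vector:

```python
import numpy as np

# The matrix A whose columns are u, v, w
A = np.array([[1, -1, 0],
              [0, -1, 0],
              [1,  0, 1]])

# A given right-hand side b; we ask which x satisfies A x = b
b = np.array([-1.0, -3.0, 6.0])

# np.linalg.solve finds x such that A @ x == b
# (A must be square and invertible for this to work)
x = np.linalg.solve(A, b)
print(x)                      # the coefficients of the linear combination
print(np.allclose(A @ x, b))  # True
```

The solver recovers the coefficients $x_1, x_2, x_3$ that combine the columns of A into b, which is exactly the “reversed” question posed above.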
Conventionally, instead of c, d, and e, the symbols
$x_1$, $x_2$, and $x_3$ are used to denote the unknowns. This is also consistent with the
notation used for denoting the components of a vector.
Similarly, the components of the resulting vector b are referred to as
$b_1$, $b_2$, $b_3$, and so on.
In the context of linear systems, the columns of a matrix A are often denoted by $a_1, a_2, \dots$, as opposed to the usual notation $A_{\star 1}, A_{\star 2}, \dots$.
This new notation really emphasizes that the columns of a matrix
are simply column vectors.
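The same idea shows up in NumPy slicing, where extracting a column of a matrix yields a plain vector (a small sketch; note that NumPy column indices are 0-based, so column $a_1$ is `A[:, 0]`):

```python
import numpy as np

A = np.array([[1, -1, 0],
              [0, -1, 0],
              [1,  0, 1]])

# A[:, j] extracts column j of A as a one-dimensional vector
a1 = A[:, 0]
print(a1)   # [1 0 1], i.e. the first column u
```

Each column really is just a column vector, which is what the $a_1, a_2, \dots$ notation emphasizes.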