Work in Progress

Linear Algebra the Conceptual Way

by Kevin LaMaster, proud Member of the Math Squad.


Many students are able to skate by in linear algebra by following equations and procedures without understanding the intuitive nature of matrices, vectors, and their operations. This tutorial is not meant as a replacement for the course but should rather be used as a supplement, to understand why the operations work the way they do. It is intended to be approachable by a wide range of individuals, including past linear algebra students wanting review, present students seeking help, and my friends that I inevitably force to read my work.


  1. Vectors
    1. Addition/Subtraction
    2. Scalar Multiplication
    3. Unit Vectors
    4. Dot Product
    5. Cross Product


For computer science students, vectors can be seen as ordered lists; for engineering students focused on physics, they can be seen as a direction and a length. For linear algebra, they can be approached from any and every angle (given $ 0\leq\theta<2\pi $, of course).

For the purposes of this tutorial, think of a vector as a way to move a point (normally at the origin) to another point.

As a warning, most of this page will be movement oriented, and I will try my best to demonstrate that graphically.

So, for example, the vector written $ \begin{bmatrix} 1\\ 2\end{bmatrix} $ will move a point from the origin to the point (1,2).


If we want vectors to have all the properties of numbers, then what should a vector plus a vector result in?

What if we make it one movement and then the other? This way $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix} $ will be the result of moving right 1 and up 2, followed by moving left 3 and up 2.

[Animation: Vector Addition]

As displayed by the animation, this is the same as adding the x components of the two vectors and the y components of the two vectors.

In this way $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix}=\begin{bmatrix}1-3\\2+2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix} $
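The componentwise rule is easy to check numerically. Here is a minimal sketch in Python (the helper name `vec_add` is my own, not anything standard):

```python
def vec_add(u, v):
    """Add two vectors componentwise."""
    return [a + b for a, b in zip(u, v)]

print(vec_add([1, 2], [-3, 2]))  # [-2, 4]
```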

Vector subtraction is built on essentially the same process in reverse, as could be expected.

One way to derive the exact method is to rewrite $ \vec{u}-\vec{v}=\vec{w} $ as $ \vec{u}=\vec{w}+\vec{v} $.

We know from before that $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix} $. So in that case $ \begin{bmatrix} 1\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}-\begin{bmatrix} -3\\ 2\end{bmatrix} $, or $ \begin{bmatrix}-2\\4\end{bmatrix}-\begin{bmatrix} -3\\ 2\end{bmatrix}=\begin{bmatrix} -2-(-3)\\ 4-2\end{bmatrix}=\begin{bmatrix}1\\2\end{bmatrix} $

So vector subtraction works very much the way that we would expect as well.
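Subtraction can be sketched the same componentwise way; the check below undoes the addition from above (again, `vec_sub` is just a name I made up):

```python
def vec_sub(u, v):
    """Subtract v from u componentwise."""
    return [a - b for a, b in zip(u, v)]

# Undoing the earlier addition: [-2, 4] - [-3, 2] gives back [1, 2]
print(vec_sub([-2, 4], [-3, 2]))  # [1, 2]
```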

You may be thinking that $ \begin{bmatrix} 1\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}-\begin{bmatrix} -3\\ 2\end{bmatrix} $ could also be proven by writing it as $ \begin{bmatrix} 1\\ 2\end{bmatrix}=\begin{bmatrix}-2\\4\end{bmatrix}+(-1)*\begin{bmatrix} -3\\ 2\end{bmatrix} $.

We haven't defined scalar multiplication yet, but imagine it as adding a vector to itself, so $ 2*\begin{bmatrix} 1\\2\end{bmatrix} $ is the same as $ \begin{bmatrix} 1\\2\end{bmatrix}+\begin{bmatrix} 1\\2\end{bmatrix} $

This would just be $ \begin{bmatrix} 1+1\\2+2\end{bmatrix} $ or $ \begin{bmatrix} 2*1\\2*2\end{bmatrix} $

So multiplying a vector by a scalar is the same as multiplying the individual components of the vector by that number.
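A quick numeric sketch of that equivalence (the helper name `scalar_mul` is mine):

```python
def scalar_mul(k, v):
    """Multiply each component of v by the scalar k."""
    return [k * a for a in v]

# 2 * [1, 2] should equal [1, 2] + [1, 2]
doubled = scalar_mul(2, [1, 2])
summed = [1 + 1, 2 + 2]
print(doubled, doubled == summed)  # [2, 4] True
```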

Unit Vectors

Since we seem to be using this specific example vector a lot, why don't we combine these two concepts to start writing $ \begin{bmatrix}1\\2\end{bmatrix} $ as $ \begin{bmatrix}1\\0\end{bmatrix}+2*\begin{bmatrix}0\\1\end{bmatrix} $?

The two vectors $ \begin{bmatrix}1\\0\end{bmatrix} $ and $ \begin{bmatrix}0\\1\end{bmatrix} $ seem like we'll use them a lot, so let's name them $ \vec{i} $ and $ \vec{j} $ respectively.

Now instead of me typing out all the code required to show $ \begin{bmatrix}1\\2\end{bmatrix} $ $ _{\text{Which is a lot actually I don't know why I even bother}} $, I can just say $ \vec{i}+2\vec{j} $.

This also conveniently helps us with vector addition and subtraction. $ \begin{bmatrix} 1\\ 2\end{bmatrix}+\begin{bmatrix} -3\\ 2\end{bmatrix} $ becomes $ \vec{i}+2\vec{j}-3\vec{i}+2\vec{j} $

Combining like terms in exactly the way you would expect, you get $ -2\vec{i}+4\vec{j} $, or $ \begin{bmatrix}-2\\4\end{bmatrix} $
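The unit-vector bookkeeping above can be sketched in code; `lin_comb` is a hypothetical helper that builds $ a\vec{i}+b\vec{j} $ componentwise:

```python
i_hat = [1, 0]
j_hat = [0, 1]

def lin_comb(a, b):
    """Return the vector a*i_hat + b*j_hat, built componentwise."""
    return [a * i_hat[0] + b * j_hat[0], a * i_hat[1] + b * j_hat[1]]

# (i + 2j) + (-3i + 2j) should equal -2i + 4j
u = lin_comb(1, 2)    # [1, 2]
v = lin_comb(-3, 2)   # [-3, 2]
print([u[0] + v[0], u[1] + v[1]])  # [-2, 4]
print(lin_comb(-2, 4))             # [-2, 4]
```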

This process of writing a vector as the sum of multiples of other vectors is a vital one; the result is called a linear combination.

We can describe any vector in our system as a linear combination of $ \vec{i} $ and $ \vec{j} $.

Don't forget about this process, because it becomes important later.


You may be confused about what exactly the "linear" means in "linear combination".

Think of it in terms of a linear function from your basic algebra course.

Let's say we have the function $ F(x)=2x $

For any $ x $ that I choose, if I put in $ 2x $ instead, I will get out $ 2*2x $, or $ 2*F(x) $

This holds true for any constant $ a $, meaning $ F(ax)=aF(x) $

Also, if we put in $ x+2 $, we will get out $ F(x+2)=2(x+2)=2x+2*2=F(x)+F(2) $

Again, this holds true in general: $ F(x+y)=F(x)+F(y) $

These are our two essential conditions of linearity,

$ F(ax)=aF(x) $ and $ F(x+y)=F(x)+F(y) $
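Both conditions are easy to spot-check numerically for $ F(x)=2x $; a minimal sketch:

```python
def F(x):
    """The linear function F(x) = 2x."""
    return 2 * x

a, x, y = 3, 5, 7
print(F(a * x) == a * F(x))      # True: scaling condition F(ax) = aF(x)
print(F(x + y) == F(x) + F(y))   # True: addition condition F(x+y) = F(x)+F(y)
```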

If you didn't understand the general concept, it's okay; we'll apply it to the vectors we've been dealing with.

Let's label the function that breaks down a vector into its unit components as the function $ LC(\vec{x}) $ for linear combination.

So, for example $ LC(\begin{bmatrix}1\\2\end{bmatrix})=\vec{i}+2\vec{j} $

We would hope that this linear combination has the same properties of linearity.

Checking the first one we find that $ LC(k*\begin{bmatrix}1\\2\end{bmatrix})=LC(\begin{bmatrix}k\\2k\end{bmatrix})=k\vec{i}+2k\vec{j}=k*LC(\begin{bmatrix}1\\2\end{bmatrix}) $

Checking the second one we find that $ LC(\begin{bmatrix}3\\1\end{bmatrix}+\begin{bmatrix}1\\2\end{bmatrix})=LC(\begin{bmatrix}4\\3\end{bmatrix})=4\vec{i}+3\vec{j}=3\vec{i}+\vec{j}+\vec{i}+2\vec{j}=LC(\begin{bmatrix}3\\1\end{bmatrix})+LC(\begin{bmatrix}1\\2\end{bmatrix}) $

Since both characteristics hold true our linear combinations are by definition linear.
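Both checks can be replayed in code. Here `LC` is a toy stand-in for the function described above: in the $ \vec{i},\vec{j} $ basis it simply reads off the coefficients.

```python
def LC(v):
    """Coefficients (a, b) such that v = a*i-hat + b*j-hat."""
    return (v[0], v[1])

u, w, k = [3, 1], [1, 2], 5

# Scaling condition: LC(k*u) = k*LC(u)
assert LC([k * u[0], k * u[1]]) == (k * LC(u)[0], k * LC(u)[1])

# Addition condition: LC(u + w) = LC(u) + LC(w), added coefficient by coefficient
assert LC([u[0] + w[0], u[1] + w[1]]) == (LC(u)[0] + LC(w)[0], LC(u)[1] + LC(w)[1])

print("both linearity conditions hold")
```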

Matrices as Transformations

Let's say we get tired of our base units of $ \vec{i} $ and $ \vec{j} $ and want to switch things up.

Let's switch our base units to my favorite two vectors $ \begin{bmatrix}1\\1\end{bmatrix} $ and $ \begin{bmatrix}-2\\3\end{bmatrix} $

So how in the world do we rewrite the vector $ \begin{bmatrix}1\\6\end{bmatrix} $ in our fancy new base system?

Well, to write it in our new base, we would have to find a linear combination of $ \begin{bmatrix}1\\1\end{bmatrix} $ and $ \begin{bmatrix}-2\\3\end{bmatrix} $ such that $ c_1\begin{bmatrix}1\\1\end{bmatrix}+c_2\begin{bmatrix}-2\\3\end{bmatrix}=\begin{bmatrix}1\\6\end{bmatrix} $

We then have to solve the system of equations $ \begin{matrix} c_1-2c_2=1\\ c_1+3c_2=6 \end{matrix} $. Ruining the fun of solving it yourself, the solution of this system is $ c_1=3 $, $ c_2=1 $. That means that we can multiply $ \begin{bmatrix}1\\1\end{bmatrix} $ by three and add it to $ \begin{bmatrix}-2\\3\end{bmatrix} $ to get $ \begin{bmatrix}1\\6\end{bmatrix} $
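If you want the machine to do the bookkeeping, a 2x2 system like this can be solved with Cramer's rule. A minimal sketch, with a helper name (`change_of_basis`) of my own invention:

```python
def change_of_basis(b1, b2, target):
    """Solve c1*b1 + c2*b2 = target for (c1, c2) using Cramer's rule.

    b1, b2, target are 2-component vectors; b1 and b2 must be
    linearly independent (nonzero determinant).
    """
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = (target[0] * b2[1] - b2[0] * target[1]) / det
    c2 = (b1[0] * target[1] - target[0] * b1[1]) / det
    return c1, c2

# Rewrite [1, 6] in the basis {[1, 1], [-2, 3]}
print(change_of_basis([1, 1], [-2, 3], [1, 6]))  # (3.0, 1.0)
```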

Dot Products

So we've multiplied a vector by a scalar already, but that's kind of dull; I want to smash two vectors together in ways other than addition.

The dot product is one of the ways we can do this, and it has its roots in integer multiplication.

It may seem very pedantic, but let's go over regular multiplication in relation to the number line real quick.

Work in Progress
