unnamed lin alg website: Simple linear algebra explanations!

This website is a work in progress!

Created by Eldrick Chen, creator of calculusgaming.com. Based on A First Course in Linear Algebra by Robert A. Beezer


Table of Contents


Website Update History (Last update: )

Important: You might have to refresh the tab to view the latest updates to this website.

2025-05-17: Initial Release


Website Settings

Switch to a dark theme for those of you studying late at night! (This setting does not affect any of the images on this page, so they will stay bright.)



If the bright images in dark mode bother you, you can invert the colors of graphs using this setting. Warning: this will change the colors of points and curves on each graph, making graph captions inaccurate in some cases.



Scientific Notation Format

Control the way very large and small numbers are displayed on this website. (Primarily intended for those of you who enjoy incremental games!)


Font Settings

Change this website’s font to a font of your choice! (Note: Font must be installed on your device)

Enter font name:

Font size multiplier (scale the font size by this amount):


Color Settings

Background color:

Text color:


Background Image (or GIF)

Background image size: pixels

Background image horizontal offset: pixels

Background image vertical offset: pixels

Background opacity: 30%


What Is This Website?

A note about links on this page: Internal links (links that bring you to another spot on this page) are colored in light blue. External links (links that open a different website) are colored in dark blue. External links will always open in a new tab.

This is one of the websites in the “unnamed ____ website” series (you can find the rest at calculusgaming.com). For more information about these websites, read the “What Is This Website?” section of unnamed calc website.

The lessons on this page are based on A First Course in Linear Algebra, a free online linear algebra resource. I strongly recommend viewing that book for more detailed explanations and proofs of linear algebra concepts!

Unit Progress
Systems of Linear Equations 5/6
Vectors 0/6
Matrices 0/6
Vector Spaces 0/6
Determinants 0/2
Eigenvalues 0/3
Linear Transformations 0/4
Representations 0/4
All Units

Unit 1: Systems of Linear Equations

A First Course in Linear Algebra link: http://linear.pugetsound.edu/html/chapter-SLE.html

Intro to Systems of Linear Equations

In algebra, you’ve studied linear systems of equations before. In linear algebra, we’ll study linear systems in more detail, so let’s review them.

A linear equation is an equation of the form \(a_1 x_1 + a_2 x_2 + \cdots + a_n x_n = b\), where \(a_1\) through \(a_n\) are constant coefficients and \(b\) is a constant. A linear system of equations is a set of linear equations.

When we solve a linear system of equations, that means finding the values of \(x_1\), \(x_2\), ..., \(x_n\) that make every equation in the system true at the same time.

The set of all solutions to a linear system of equations is known as its solution set. There are three possibilities for the solution set of a linear system of equations.

Linear systems with one solution

Here’s an example of this type of system:

\[ 2x_1 + 3x_2 = 5 \] \[ x_1 - x_2 = 5 \]

The only solution to this system of equations is \(x_1 = 4\) and \(x_2 = -1\).

We can visually represent this system of equations as two lines: the first equation can be represented by the line \(2x + 3y = 5\) and the second equation by the line \(x - y = 5\). These two lines intersect at exactly one point, which is our solution.

The lines intersect at one point, so the system has one solution.
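As a quick sketch of this idea in code (assuming NumPy is available; code examples aren’t part of the original lessons), we can solve this system numerically. `np.linalg.solve` finds the unique solution when one exists:

```python
import numpy as np

# Coefficient matrix and constants for:
#   2*x1 + 3*x2 = 5
#   1*x1 - 1*x2 = 5
A = np.array([[2.0, 3.0],
              [1.0, -1.0]])
b = np.array([5.0, 5.0])

# The unique solution: x1 = 4, x2 = -1
x = np.linalg.solve(A, b)
print(x)
```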

Linear systems with infinitely many solutions

Here’s an example of this type of system:

\[ x_1 + 3x_2 = 2 \] \[ 2x_1 + 6x_2 = 4 \]

Notice how the second equation is just the first equation multiplied by 2, so these two equations are really asking for the same thing! Therefore, there are infinitely many pairs \((x_1, x_2)\) which satisfy both equations.

Visually, we can represent this system of equations with the lines \(x + 3y = 2\) and \(2x + 6y = 4\). These lines are the exact same, so they have infinitely many intersection points!

The lines overlap, so the system has infinitely many solutions.

Linear systems with no solutions

Here’s an example of this type of system:

\[ x_1 + 3x_2 = 2 \] \[ 2x_1 + 6x_2 = 5 \]

There are no possible values of \(x_1\) and \(x_2\) that will make both of these equations simultaneously true.

Visually, we can represent this as two lines \(x + 3y = 2\) and \(2x + 6y = 5\). These lines are parallel to each other but not the same line, so there are no intersection points.

The lines are parallel to each other and never overlap, so the system has no solutions.

Equation operations

When we have a system of equations, there are three operations we can perform on them without changing the solution set.

  1. Swapping the order of two equations
  2. Multiplying an equation by a nonzero constant
  3. Adding a constant multiple of one equation to another equation

Here’s an example with this system of equations. We’re going to perform each equation operation once on this system.

\[ 2x_1 + 3x_2 + 5x_3 = 7\] \[ 3x_1 - 6x_2 + 9x_3 = 12 \] \[ x_1 + x_3 = 2 \]

An example of the first operation is swapping the order of the first and second equations to get:

\[ \class{red}{3x_1 - 6x_2 + 9x_3 = 12} \] \[ \class{red}{2x_1 + 3x_2 + 5x_3 = 7} \] \[ x_1 + x_3 = 2 \]

We have swapped the order of the highlighted equations.

An example of the second operation is multiplying the third equation by 2 to get:

\[ 3x_1 - 6x_2 + 9x_3 = 12 \] \[ 2x_1 + 3x_2 + 5x_3 = 7 \] \[ \class{red}{2x_1 + 2x_3 = 4} \]

We have multiplied the highlighted equation by 2.

An example of the third operation is adding 3 times the second equation to the third equation to get:

\[ 3x_1 - 6x_2 + 9x_3 = 12 \] \[ \class{blue}{2x_1 + 3x_2 + 5x_3 = 7} \] \[ \class{red}{8x_1 + 9x_2 + 17x_3 = 25} \]

We have added 3 times the blue equation to the red equation.
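The three equation operations above can be sketched as array operations on the system’s coefficients and constants (a rough illustration assuming NumPy; each row holds one equation’s coefficients followed by its constant):

```python
import numpy as np

# Each row: coefficients of x1, x2, x3, then the constant.
M = np.array([[2.0, 3.0, 5.0, 7.0],
              [3.0, -6.0, 9.0, 12.0],
              [1.0, 0.0, 1.0, 2.0]])

# Operation 1: swap equations 1 and 2 (rows at indices 0 and 1).
M[[0, 1]] = M[[1, 0]]

# Operation 2: multiply equation 3 (index 2) by the nonzero constant 2.
M[2] *= 2

# Operation 3: add 3 times equation 2 (index 1) to equation 3 (index 2).
M[2] += 3 * M[1]

print(M)  # final row is 8, 9, 17, 25 -- matching the example above
```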

Vectors and Matrices

In the section after this one, we will learn how to represent systems of linear equations using vectors and matrices. But let’s first talk about what vectors and matrices even are.

Vectors

A vector is a list of numbers. They can be represented in multiple ways: they can be written out like coordinates (e.g. \((1, 2, 3)\)), or as a column vector, where the numbers are stacked vertically:

\[ \begin{bmatrix} 1\\ 2\\ 3 \end{bmatrix} \]

There is a special type of vector known as a zero vector, and it’s a vector that only contains zeros. Here’s an example of a zero vector:

\[ \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}\]

Matrices

A matrix is a 2-dimensional grid of numbers. You can think of them as a group of column vectors stacked side by side. Here’s an example:

\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} \]

A matrix with \(m\) rows and \(n\) columns is known as an \(m \times n\) matrix. For this example, because our matrix has 3 rows and 3 columns, it is a \(3 \times 3\) matrix.

In future sections, we will learn how we can use vectors and matrices to describe systems of equations and discover their properties. We will also eventually learn about the operations we can perform on vectors and matrices.
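As a small illustration (assuming NumPy; not part of the original lessons), vectors and matrices can be represented as arrays, and the `shape` attribute reports the \(m \times n\) dimensions:

```python
import numpy as np

v = np.array([[1], [2], [3]])     # a column vector: 3 rows, 1 column
zero = np.zeros((3, 1))           # a zero vector: all entries are 0
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])         # a 3 x 3 (square) matrix

print(v.shape)   # (3, 1)
print(A.shape)   # (3, 3) -> m rows, n columns
```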

Representing Systems of Linear Equations

We can represent a system of linear equations using a matrix.

The coefficient matrix

The coefficient matrix is a way to represent the coefficients of a linear system of equations. Each row in the coefficient matrix represents the coefficients in one equation. For example, consider the following system of equations:

\[ \class{red}{2}x_1 + \class{red}{3}x_2 + \class{red}{4}x_3 = 5 \] \[ \class{red}{8}x_1 \class{red}{- 7}x_2 + \class{red}{6}x_3 = 5 \] \[ \class{red}{-3}x_1 \class{red}{- 6}x_2 \class{red}{- 9}x_3 = 12 \]

The coefficients of each equation are highlighted in red. If we put these coefficients into a matrix, we get the coefficient matrix. The coefficient matrix \(A\) for this system of equations is:

\[ A = \begin{bmatrix} \class{red}{2} & \class{red}{3} & \class{red}{4}\\ \class{red}{8} & \class{red}{-7} & \class{red}{6}\\ \class{red}{-3} & \class{red}{-6} & \class{red}{-9}\\ \end{bmatrix} \]

Don’t forget about the signs of each coefficient!

The vector of constants

The vector of constants holds the constants that each linear expression in our system of equations equals. Typically, these are the constants on the right-hand side of each equation. Let’s go back to our system of equations:

\[ {2}x_1 + {3}x_2 + {4}x_3 = \class{blue}{5} \] \[ {8}x_1 - 7x_2 + {6}x_3 = \class{blue}{5} \] \[ -3x_1 - 6x_2 - 9x_3 = \class{blue}{12} \]

This time, I’ve highlighted the constants in this system. Putting these constants into a column vector gives us the vector of constants. The vector of constants \(\mathbf{b}\) for this system is:

\[ \mathbf{b} = \begin{bmatrix} \class{blue}{5} \\ \class{blue}{5} \\ \class{blue}{12} \end{bmatrix} \]

The augmented matrix

If we add another column to the right-hand side of the coefficient matrix and fill it up with the vector of constants, we get the augmented matrix for a system of equations. In this example, the augmented matrix is:

\[ \begin{bmatrix} \class{red}{2} & \class{red}{3} & \class{red}{4} & \class{blue}{5}\\ \class{red}{8} & \class{red}{-7} & \class{red}{6} & \class{blue}{5}\\ \class{red}{-3} & \class{red}{-6} & \class{red}{-9} & \class{blue}{12}\\ \end{bmatrix} \]
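Putting the pieces together in code (a sketch assuming NumPy), the augmented matrix is just the coefficient matrix with the vector of constants appended as one more column:

```python
import numpy as np

# Coefficient matrix A and vector of constants b from the system above.
A = np.array([[2.0, 3.0, 4.0],
              [8.0, -7.0, 6.0],
              [-3.0, -6.0, -9.0]])
b = np.array([5.0, 5.0, 12.0])

# The augmented matrix: A with b attached as an extra (fourth) column.
aug = np.column_stack([A, b])
print(aug.shape)  # (3, 4)
print(aug)
```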

Reduced Row-Echelon Form

Now that we know how to represent systems of equations with matrices, how can we use this knowledge to actually solve them? To do this, we need to reduce our systems to a simpler form. One way is to convert a system’s augmented matrix into a simpler form known as reduced row-echelon form.

A matrix is in reduced row-echelon form when it meets these conditions:

  1. If a row only contains zeros (this is known as a zero row), it is below all rows that aren’t zero rows.
  2. The leftmost nonzero number of every row is a 1 (unless the row is a zero row); this 1 is known as a leading 1.
  3. If a column has a leading 1, it is the only nonzero number in that column.
  4. Consider any two leading 1s in the matrix. If the row number of the second leading 1 is greater than the row number of the first leading 1, the column number of the second leading 1 must be greater than the column number of the first leading 1.
    • In symbols: Let’s say the first leading 1 is in row \(i\) and column \(j\) and the second leading 1 is in row \(s\) and column \(t\). It must always be true that if \(s \gt i\), then \(t \gt j\).

Here’s a matrix that is not in reduced row-echelon form because it violates condition 1:

\[ \begin{bmatrix} 1 & 0 & 3\\ \class{red}{0} & \class{red}{0} & \class{red}{0}\\ 0 & 1 & 2 \end{bmatrix}\]

The highlighted row is not below all other rows with nonzero terms.

Here’s a matrix that violates condition 2:

\[ \begin{bmatrix} \class{red}{2} & 0 & 3\\ 0 & 1 & 2\\ 0 & 0 & 0\\ \end{bmatrix}\]

The highlighted entry is a leading nonzero term of the first row but is not 1.

Here’s a matrix that violates condition 3:

\[ \begin{bmatrix} 1 & \class{red}{2} & 3\\ 0 & \class{red}{1} & 2\\ 0 & 0 & 0\\ \end{bmatrix}\]

The leading 1 in the second row is not the only nonzero entry in its column (as shown by the highlighted entries).

And finally, here’s a matrix that violates condition 4:

\[ \begin{bmatrix} 1 & 0 & 3\\ 0 & \class{red}{1} & 2\\ \class{blue}{1} & 0 & 0\\ \end{bmatrix}\]

The row number of the blue leading 1 is greater than the row number of the red leading 1, but the column number of the blue leading 1 is less than the column number of the red leading 1.

Here is a matrix in reduced row-echelon form:

\[ \begin{bmatrix} 1 & 0 & 3\\ 0 & 1 & 2\\ 0 & 0 & 0\\ \end{bmatrix}\]

Reduced row-echelon form is useful because once we get an augmented matrix into reduced row-echelon form (you will learn how to do this in the next section), it’s easy to find the solutions to the corresponding system of equations.

For example, the corresponding system of equations to the above matrix is:

\[ 1x_1 + 0x_2 = 3 \] \[ 0x_1 + 1x_2 = 2 \] \[ 0x_1 + 0x_2 = 0 \]

The last line simplifies to \(0 = 0\), so it is always true no matter what the values of \(x_1\) and \(x_2\) are. Therefore, we can disregard that equation. The other two lines directly give us the values of \(x_1\) and \(x_2\): \(x_1 = 3\) and \(x_2 = 2\).

For a matrix in reduced row-echelon form, a column with a leading 1 is known as a pivot column. In this example, column 1 and column 2 are pivot columns.

\[ \begin{bmatrix} \class{red}{1} & 0 & 3\\ 0 & \class{red}{1} & 2\\ 0 & 0 & 0\\ \end{bmatrix}\]

A column with a leading 1 is a pivot column.

I will sometimes use the term “row-reducing” to refer to analyzing the reduced row-echelon form of a matrix (without actually changing the original matrix).
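If you want to experiment with this yourself, SymPy (an assumption; this site’s lessons don’t depend on it) can row-reduce a matrix and report its pivot columns in one call. Note that `rref` numbers columns starting from 0:

```python
from sympy import Matrix

# The matrix from the example above, already in reduced row-echelon form.
M = Matrix([[1, 0, 3],
            [0, 1, 2],
            [0, 0, 0]])

rref_M, pivots = M.rref()
print(rref_M)   # unchanged: M is already in reduced row-echelon form
print(pivots)   # (0, 1) -> columns 1 and 2 are pivot columns (0-indexed)
```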

Gauss-Jordan Elimination

Gauss-Jordan elimination is a systematic way to turn a matrix into reduced row-echelon form. The basic idea is to go through our matrix column by column and perform row operations to turn this matrix into reduced row-echelon form.

Row operations

There are three row operations we can perform on a matrix without changing the solution set of the corresponding system of equations:

  1. Swap the order of two rows
  2. Multiply every entry in a row by a nonzero constant
  3. Add a constant multiple of one row to another row (i.e. multiply every entry of a row by a constant multiple and add it to the entries of another row without changing the original row)

Notice the similarities between the row operations and equation operations mentioned in Intro to Systems of Linear Equations. When one matrix can be transformed into another matrix through these row operations, the matrices are known as row-equivalent.

Let’s go through the process of Gauss-Jordan elimination with the following matrix:

\[ A = \begin{bmatrix} 1 & 1 & 1 & 2\\ 2 & -3 & 1 & -9\\ -4 & 0 & 5 & -14 \end{bmatrix}\]

We first need to define some variables to keep track of where we are in the process. We will define the variables \(j\) and \(r\) and set them both to 0. \(j\) will serve as a counter to keep track of what column we’re on. In addition, we’ll define \(m\) as the number of rows in the matrix \(A\) and \(n\) as the number of columns (i.e. \(A\) is an \(m \times n\) matrix).

The first column (\(j = 1\))

We start off by increasing \(j\) by 1. The variable \(j\) is now 1, meaning that we’re working on the first column.

\[\begin{bmatrix} \class{red}{1} & 1 & 1 & 2\\ \class{red}{2} & -3 & 1 & -9\\ \class{red}{-4} & 0 & 5 & -14 \end{bmatrix}\]

Now we look at the entries of \(A\) in this column (in this case the first column). If all of the entries in this column from row \(r + 1\) to \(m\) are zero, then we skip this column. \(r + 1\) is currently 1 in this case, so we need to look at all of the entries in this column. These entries are not all zero, so we proceed.

Now we choose a row from rows \(r + 1\) to \(m\) such that the entry in column \(j\) is nonzero. We’ll call the index of this row \(i\). In this case, we can choose any of the rows, so I’ll choose row 1.

Now we increase \(r\) by 1 (after incrementing, \(r\) is 1 now). If \(i\) and \(r\) are different, we swap rows \(i\) and \(r\) of the matrix. In this case, because \(i\) and \(r\) are both 1, we don’t need to do anything here.

Now, we multiply row \(r\) by a constant to make the entry at column \(j\) 1. In this case, the entry at row 1 and column 1 is already 1, so we skip this step.

We then add constant multiples of row \(r\) to all other rows to make all other entries of column \(j\) (in this case, the first column) zero.

Adding -2 times row 1 to row 2:

\[ \begin{bmatrix} 1 & 1 & 1 & 2\\ 0 & -5 & -1 & -13\\ -4 & 0 & 5 & -14 \end{bmatrix} \]

Adding 4 times row 1 to row 3:

\[ \begin{bmatrix} 1 & 1 & 1 & 2\\ 0 & -5 & -1 & -13\\ 0 & 4 & 9 & -6 \end{bmatrix} \]

Now we can move on to the next column.

The second column (\(j = 2\))

We add 1 to \(j\), so \(j\) is currently 2. This means that we’re focusing on the second column.

\[ \begin{bmatrix} 1 & \class{red}{1} & 1 & 2\\ 0 & \class{red}{-5} & -1 & -13\\ 0 & \class{red}{4} & 9 & -6 \end{bmatrix} \]

Now we need to choose a row from row \(r + 1\) (which is currently 2) to \(m\) (which is 3) with a nonzero entry in column \(j\) (which is currently 2). I’ll choose row 2 for this example, so \(i = 2\).

Now we increase \(r\) by 1, so it’s currently 2. Because we chose row 2 and \(r = 2\), we don’t need to swap any rows.

Now we need to set the entry at row \(r\) and column \(j\) to a 1 by multiplying row \(r\) by a constant. To do this, we multiply row 2 by \(-1/5\) to turn the entry at row 2 and column 2 to a 1.

\[ \begin{bmatrix} 1 & 1 & 1 & 2\\ 0 & 1 & 1/5 & 13/5\\ 0 & 4 & 9 & -6 \end{bmatrix} \]

Now we have to zero out the other entries of column 2 by adding constant multiples of row 2 to every other row. Let’s start by adding -1 times row 2 to row 1:

\[ \begin{bmatrix} 1 & 0 & 4/5 & -3/5\\ 0 & 1 & 1/5 & 13/5\\ 0 & 4 & 9 & -6 \end{bmatrix} \]

Now let’s add -4 times row 2 to row 3:

\[ \begin{bmatrix} 1 & 0 & 4/5 & -3/5\\ 0 & 1 & 1/5 & 13/5\\ 0 & 0 & 41/5 & -82/5 \end{bmatrix} \]

Now we move on to column 3.

The third column (\(j = 3\))

We need to choose a row from \(r + 1 = 3\) to \(m = 3\). Our only choice in this case is the 3rd row. We then increase \(r\) by 1, so \(r = 3\) now. Therefore, we don’t need to swap any rows.

Now we want the entry at row 3 and column 3 to be a 1, so we multiply row 3 by \(5/41\).

\[ \begin{bmatrix} 1 & 0 & 4/5 & -3/5\\ 0 & 1 & 1/5 & 13/5\\ 0 & 0 & 1 & -2 \end{bmatrix} \]

Now we just have to zero out the other entries of column 3. Adding \(-1/5\) times the 3rd row to the 2nd row:

\[ \begin{bmatrix} 1 & 0 & 4/5 & -3/5\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & -2 \end{bmatrix} \]

Adding \(-4/5\) times row 3 to row 1:

\[ \begin{bmatrix} 1 & 0 & 0 & 1\\ 0 & 1 & 0 & 3\\ 0 & 0 & 1 & -2 \end{bmatrix} \]

Our matrix is now in reduced row-echelon form! In this form, we can easily read out the solutions to the corresponding system of equations. Let’s translate this matrix into its corresponding system of equations now:

\[ 1x_1 + 0x_2 + 0x_3 = 1 \] \[ 0x_1 + 1x_2 + 0x_3 = 3 \] \[ 0x_1 + 0x_2 + 1x_3 = -2 \]

In this form, we can easily tell that the solution is \(x_1 = 1\), \(x_2 = 3\), and \(x_3 = -2\).
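The column-by-column procedure we just walked through can be sketched in code. This is my rough translation of the steps above into plain Python (the function name `gauss_jordan` is just a label I chose); it uses exact fractions so the \(1/5\)s and \(41/5\)s don’t pick up floating-point error:

```python
from fractions import Fraction

def gauss_jordan(matrix):
    """Row-reduce a matrix to reduced row-echelon form, following the
    column-by-column procedure described above."""
    A = [[Fraction(x) for x in row] for row in matrix]
    m, n = len(A), len(A[0])
    r = 0                                   # pivot rows found so far
    for j in range(n):                      # work through the columns
        # Find a row at or below row r with a nonzero entry in column j.
        i = next((k for k in range(r, m) if A[k][j] != 0), None)
        if i is None:
            continue                        # all zeros from row r+1 down: skip
        A[r], A[i] = A[i], A[r]             # swap the chosen row up to row r
        pivot = A[r][j]
        A[r] = [x / pivot for x in A[r]]    # scale so the leading entry is 1
        for k in range(m):                  # zero out the rest of column j
            if k != r and A[k][j] != 0:
                factor = A[k][j]
                A[k] = [a - factor * b for a, b in zip(A[k], A[r])]
        r += 1
    return A

M = [[1, 1, 1, 2],
     [2, -3, 1, -9],
     [-4, 0, 5, -14]]
for row in gauss_jordan(M):
    print(row)   # rows of the reduced matrix: [1,0,0,1], [0,1,0,3], [0,0,1,-2]
```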

Note that every matrix has only one row-equivalent matrix in reduced row-echelon form.

Consistent Systems of Equations and Free/Dependent Variables

Some systems of equations have no solutions, and some systems of equations have one or infinitely many solutions. When a system of equations has at least one solution, we call it a consistent system. A system of equations with no solutions is an inconsistent system.

Consider the following augmented matrix:

\[ \begin{bmatrix} 1 & 1 & 1 & 2 \\ 2 & 2 & 2 & 4\\ 3 & 2 & 4 & 6 \end{bmatrix} \]

When we convert this matrix into reduced row-echelon form, we get:

\[ \begin{bmatrix} 1 & 0 & 2 & 2 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]

The corresponding system of equations is:

\[ 1x_1 + 0x_2 + 2x_3 = 2 \] \[ 0x_1 + 1x_2 - 1x_3 = 0 \] \[ 0x_1 + 0x_2 + 0x_3 = 0 \]

We can write this system more simply as:

\[ x_1 + x_3 = 2 \] \[ x_2 - x_3 = 0 \] \[ 0 = 0 \]

The last equation \(0 = 0\) is always true, so we can disregard it. But notice what happens with the other two equations. We can rewrite them as follows:

\[ x_1 = 2 - x_3\] \[ x_2 = x_3\]

Notice how when we write the solutions in this way, \(x_3\) is free to take on any value, and the values of \(x_1\) and \(x_2\) depend on \(x_3\). Because \(x_3\) can take on any value, there are infinitely many solutions.

We can describe the solution set of this system as all ordered pairs of the form \((x_1, x_2, x_3) = (2 - x_3, x_3, x_3)\) where \(x_3\) is any real number (or even any complex number).
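A parametric solution set like this can also be produced with SymPy’s `linsolve` (assuming SymPy is available), which writes the dependent variables in terms of the free one:

```python
from sympy import symbols, linsolve

x1, x2, x3 = symbols('x1 x2 x3')

# The two meaningful equations from above, written as expressions equal to 0:
#   x1 + x3 = 2   and   x2 - x3 = 0
sol = linsolve([x1 + x3 - 2, x2 - x3], [x1, x2, x3])
print(sol)   # the solution set, with x3 left free
```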

Dependent and free variables

Suppose \(A\) is the augmented matrix of a system of equations and \(B\) is a row-equivalent matrix in reduced row-echelon form. If column \(j\) of \(B\) is a pivot column, then the variable \(x_j\) is known as a dependent variable. All other variables are known as free variables.

Let’s look back at our previous example. Here is the matrix in reduced row-echelon form:

\[ \begin{bmatrix} \class{red}{1} & 0 & 2 & 2 \\ 0 & \class{red}{1} & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]

The leading 1s are highlighted in red.

Notice that columns 1 and 2 are pivot columns. This means that in the corresponding system of equations, the variables \(x_1\) and \(x_2\) are dependent, and \(x_3\) is free. This is related to how \(x_3\) was able to take on any value in the solution of our system of equations, while the values of \(x_1\) and \(x_2\) depended on the value of \(x_3\).

Determining consistency of systems

We can tell if a system is consistent by looking at the reduced row-echelon form of its corresponding augmented matrix. If this row-reduced matrix has a pivot column at column \(n + 1\), where \(n\) is the number of variables in the system, then the system is inconsistent. Otherwise it is consistent.

For example, consider this system:

\[ x_1 + x_2 + x_3 = 1 \] \[ 2x_1 + 2x_2 + 2x_3 = 3 \] \[ x_1 + 2x_2 + 3x_3 = 4 \]

The augmented matrix of this system row-reduces to:

\[ \begin{bmatrix} 1 & 0 & -1 & 0\\ 0 & 1 & 2 & 0\\ 0 & 0 & 0 & \class{red}{1} \end{bmatrix}\]

Because column \(n + 1 = 4\) of this matrix is a pivot column (as indicated by the highlighted leading 1), this system is inconsistent. (Notice how it is impossible for the equations \(x_1 + x_2 + x_3 = 1\) and \(2x_1 + 2x_2 + 2x_3 = 3\) to be true at the same time!)

In addition, if the row-reduced matrix of a consistent system has \(r\) pivot columns, it is guaranteed that \(r \le n\). If \(r \lt n\), then the system has infinitely many solutions, and if \(r = n\), then the system has exactly one solution.

If a consistent system has more variables than it has equations, then the system has infinitely many solutions.

In conclusion, there are three possibilities for a linear system with \(n\) variables with augmented matrix \(A\):

  • If column \(n + 1\) of the reduced row-echelon form of \(A\) is a pivot column, the system is inconsistent and has no solutions.
  • Otherwise:
    • If the reduced row-echelon form of \(A\) has the same number of pivot columns as it has variables, the system has one solution.
    • If the reduced row-echelon form of \(A\) has fewer pivot columns than it has variables, the system has infinitely many solutions.
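The three-way classification above can be sketched as a short function (assuming SymPy; the helper name `classify` is just mine). SymPy’s `rref` reports pivot columns 0-indexed, so a pivot in column \(n + 1\) shows up as the index \(n\):

```python
from sympy import Matrix

def classify(augmented, n_vars):
    """Classify a linear system from its augmented matrix:
    'none', 'one', or 'infinite' solutions."""
    _, pivots = Matrix(augmented).rref()
    if n_vars in pivots:              # pivot in column n+1 -> inconsistent
        return 'none'
    if len(pivots) == n_vars:         # r = n -> exactly one solution
        return 'one'
    return 'infinite'                 # r < n -> infinitely many solutions

# The inconsistent example from above:
print(classify([[1, 1, 1, 1],
                [2, 2, 2, 3],
                [1, 2, 3, 4]], 3))   # none

# The earlier example with a free variable:
print(classify([[1, 1, 1, 2],
                [2, 2, 2, 4],
                [3, 2, 4, 6]], 3))   # infinite
```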

Counting free variables

We can tell how many free variables a consistent system has by looking at its corresponding matrix in reduced row-echelon form. If this system has \(n\) variables and the matrix in reduced row-echelon form has \(r\) nonzero rows (rows that don’t only contain zeros), then we can describe the solutions of the system with \(n - r\) free variables.

Going back to our first example, the matrix in reduced row-echelon form has two nonzero rows, so \(r = 2\). The system of equations has 3 variables \(x_1\), \(x_2\), and \(x_3\), so \(n = 3\). Therefore, the system of equations has \(n - r = 3 - 2 = 1\) free variable.

\[ \begin{bmatrix} \class{red}{1} & 0 & 2 & 2 \\ 0 & \class{red}{1} & -1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\]

This matrix in reduced row-echelon form has 2 pivot columns (as indicated by the highlighted leading 1s). Since the system has 3 variables, there is \(3 - 2 = 1\) free variable in the system.

To contrast this example, let’s say a system has augmented matrix \(A\) and the reduced row-echelon form of \(A\) is:

\[ \begin{bmatrix} \class{red}{1} & 0 & 0 & 0\\ 0 & \class{red}{1} & 0 & 0\\ 0 & 0 & \class{red}{1} & 0 \end{bmatrix} \]

In this case, the system has 3 variables while the reduced matrix has 3 pivot columns, so there are \(3 - 3 = 0\) free variables.

Homogeneous Systems of Equations and Null Spaces

A homogeneous system of equations is a special type of linear system of equations where all of the constants are zero.

Here’s an example of a homogeneous system:

\[ x_1 + x_2 + x_3 = \class{red}{0} \] \[ 2x_1 - x_2 + 4x_3 = \class{red}{0} \] \[ -x_1 + 2x_2 = \class{red}{0} \]

Notice how the constants (highlighted in red) are all zero. Here is the augmented matrix for this system:

\[ \begin{bmatrix} 1 & 1 & 1 & 0\\ 2 & -1 & 4 & 0\\ -1 & 2 & 0 & 0 \end{bmatrix} \]

Homogeneous systems are always consistent: you can always find a solution to a homogeneous system by setting all of the variables to zero (this is known as the trivial solution).

In this case, setting \(x_1\), \(x_2\), and \(x_3\) all to zero results in all three equations becoming \(0 = 0\).

Are there any other solutions to this system? We can find that out by row-reducing the augmented matrix to get:

\[ \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}\]

Converting this back into a system of equations, we have:

\[ x_1 = 0 \] \[ x_2 = 0 \] \[ x_3 = 0 \]

Therefore, the trivial solution to this system is also the only solution.

If a homogeneous system has more variables than equations, it has infinitely many solutions (this is because homogeneous systems are always consistent and a consistent system with more variables than equations has infinitely many solutions).

Null spaces of matrices

The null space of a matrix \(A\) is the solution set of the system of equations with \(A\) as its coefficient matrix and the zero vector as its vector of constants (i.e. all of the constants are zero).

Let’s look at the coefficient matrix for our previous system of equations:

\[ \begin{bmatrix} 1 & 1 & 1\\ 2 & -1 & 4\\ -1 & 2 & 0 \end{bmatrix}\]

The null space of this matrix is the solution set to our previous system of equations. In this case, the null space only contains the zero vector, since that’s the only solution to our system of equations.

Now let’s look at another matrix:

\[ \begin{bmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \\ 2 & 0 & 2\\ \end{bmatrix} \]

The augmented matrix for the homogeneous system corresponding to this matrix is:

\[ \begin{bmatrix} 1 & 2 & 3 & 0\\ -1 & -2 & -3 & 0\\ 2 & 0 & 2 & 0\\ \end{bmatrix} \]

Row-reducing this matrix results in:

\[ \begin{bmatrix} \class{red}{1} & 0 & 1 & 0\\ 0 & \class{red}{1} & 1 & 0\\ 0 & 0 & 0 & 0 \end{bmatrix} \]

The leading 1s are highlighted.

Because there are 2 pivot columns and 3 variables, there is \(3 - 2 = 1\) free variable. Therefore, there are infinitely many solutions to the system. As a result, the null space of our original matrix has infinitely many elements, with each element being a solution to the system.
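SymPy can compute null spaces directly (again assuming SymPy is installed). `nullspace()` returns a basis for the null space; one basis vector here corresponds to the one free variable, and every multiple of it is a solution:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [-1, -2, -3],
            [2, 0, 2]])

basis = A.nullspace()
print(len(basis))       # 1 basis vector -> 1 free variable
print(A * basis[0])     # multiplying gives the zero vector, as expected
```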

Singular Matrices

Before we talk about singular matrices, let’s define some special types of matrices.

Square matrices

A square matrix is a matrix with the same number of rows and columns.

Here’s an example of a square matrix:

\[ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}\]

This matrix has 3 rows and 3 columns, so it is square.

Here’s an example of a non-square matrix:

\[ \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \end{bmatrix}\]

This matrix has 3 rows and 4 columns, so it is not square.

Identity matrices

An identity matrix is a square matrix with all 1s on the main diagonal and 0s everywhere else. More formally, the \(n \times n\) identity matrix, denoted by \(I_n\), is defined by:

\[ [I_n]_{ij} = \begin{cases} 1 \;\text{ if } i = j\\ 0 \;\text{ if } i \ne j\\ \end{cases} \quad \text{ for } 1\le i \le n,\, 1 \le j \le n\]

In simple words, the entry at row \(i\) and column \(j\) is 1 if \(i\) and \(j\) are equal and 0 otherwise.

Here are some examples of identity matrices:

\[ I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \] \[ I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \] \[ I_4 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \]

Singular matrices

A square matrix \(A\) is singular if the system of equations with \(A\) as its coefficient matrix and all-zero constants has infinitely many solutions (equivalently, the system has non-trivial solutions). If the system of equations has only the trivial solution, the matrix \(A\) is nonsingular.

In the previous example, we looked at this matrix:

\[ \begin{bmatrix} 1 & 1 & 1\\ 2 & -1 & 4\\ -1 & 2 & 0 \end{bmatrix}\]

We found that the only solution to the corresponding homogeneous system of equations was the trivial solution where all of the variables were set to zero. Therefore, this matrix is nonsingular.

\[ x_1 + x_2 + x_3 = 0 \] \[ 2x_1 - x_2 + 4x_3 = 0 \] \[ -x_1 + 2x_2 = 0 \]

This is the corresponding homogeneous system. The system only has the trivial solution, so the coefficient matrix is nonsingular.

We also looked at this matrix:

\[ \begin{bmatrix} 1 & 2 & 3 \\ -1 & -2 & -3 \\ 2 & 0 & 2\\ \end{bmatrix} \]

We found that the corresponding homogeneous system had infinitely many solutions, so this matrix is singular.

\[ x_1 + 2x_2 + 3x_3 = 0 \] \[ -x_1 - 2x_2 - 3x_3 = 0 \] \[ 2x_1 + 2x_3 = 0 \]

This is the corresponding homogeneous system. The system has infinitely many solutions, so the coefficient matrix is singular.

An interesting fact about nonsingular matrices is that reducing any nonsingular matrix to reduced row-echelon form always results in the identity matrix. More formally, a square matrix is nonsingular if and only if its reduced row-echelon form is an identity matrix.

Here are the reduced row-echelon forms of the previous two matrices:

\[ \begin{bmatrix} 1 & 1 & 1\\ 2 & -1 & 4\\ -1 & 2 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} \]

This matrix is nonsingular, so it row-reduces to an identity matrix.

\[ \begin{bmatrix} 1 & 2 & 3\\ -1 & -2 & -3\\ 2 & 0 & 2 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 1\\ 0 & 0 & 0 \end{bmatrix} \]

This matrix is singular, so it does not row-reduce to an identity matrix.

The null space of a nonsingular matrix contains just one element: the zero vector. A square matrix \(A\) is nonsingular if and only if its null space only contains the zero vector.

In addition, a square matrix \(A\) is nonsingular if and only if the system of equations with coefficient matrix \(A\) and vector of constants \(\mathbf{b}\) has a single unique solution for any possible choice for \(\mathbf{b}\).

To summarize, for any square matrix \(A\), the following properties are equivalent (meaning each property is true if and only if all of the others are true):

  1. \(A\) is a nonsingular matrix.
  2. The reduced row-echelon form of \(A\) is an identity matrix.
  3. The null space of \(A\) contains only the zero vector.
  4. The linear system of equations with coefficient matrix \(A\) and vector of constants \(\mathbf{b}\) has a single unique solution for every possible choice of \(\mathbf{b}\).

Credits / Special Thanks

All code, diagrams, and explanations (except those in the “Guest Explanations” section) were created by Eldrick Chen (also known as “calculus gaming”). This page is open-source - view the GitHub repository here.

Feel free to modify this website in any way! If you have ideas for how to improve this website, feel free to make those changes and publish them yourself, as long as you follow the terms of the GNU General Public License v3.0 (scroll down to view this license).

👋 Hello! I’m Eldrick, and I originally started making educational math websites as a passion project to help people at my school.

Despite being (mostly) the only one to directly work on this project, it wouldn’t have been possible if it wasn’t for the work of many others. Here are some people and organizations I want to credit for allowing me to build this website in the first place.

Tools used to create this page

Fonts used on this page

Special thanks

Legal information