
Section 12.2 Matrix Inversion

In Chapter 5 we defined the inverse of an \(n \times n\) matrix. We noted that not all matrices have inverses, but when the inverse of a matrix exists, it is unique. This enables us to define the inverse of an \(n \times n\) matrix \(A\) as the unique matrix \(B\) such that \(A B = B A =I\), where \(I\) is the \(n \times n\) identity matrix. In order to get some practical experience, we developed a formula that allowed us to determine the inverse of invertible \(2\times 2\) matrices. We will now use the Gauss-Jordan procedure for solving systems of linear equations to compute the inverses, when they exist, of \(n\times n\) matrices, \(n \geq 2\). The following procedure for a \(3\times 3\) matrix can be generalized for \(n\times n\) matrices, \(n\geq 2\).
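The \(2\times 2\) formula recalled above, \(\left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right)^{-1} = \frac{1}{a d - b c}\left( \begin{array}{cc} d & -b \\ -c & a \\ \end{array} \right)\) when \(a d - b c \neq 0\), can be sketched in code before we turn to the general procedure. This is an illustrative sketch, not notation from the text: the function name `inverse_2x2` and the use of Python's `fractions` module for exact arithmetic are choices of the sketch.

```python
from fractions import Fraction

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the determinant formula.

    Returns None when the determinant a*d - b*c is zero,
    i.e. when the matrix has no inverse."""
    det = Fraction(a * d - b * c)
    if det == 0:
        return None
    return [[Fraction(d) / det, Fraction(-b) / det],
            [Fraction(-c) / det, Fraction(a) / det]]

inverse_2x2(1, 2, 3, 4)   # [[-2, 1], [3/2, -1/2]]
inverse_2x2(1, 2, 2, 4)   # None: determinant is 0
```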

Given the matrix \(A = \left( \begin{array}{ccc} 1 & 1 & 2 \\ 2 & 1 & 4 \\ 3 & 5 & 1 \\ \end{array} \right)\), we want to find its inverse, the matrix \(B=\left( \begin{array}{ccc} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \\ \end{array} \right)\), if it exists, such that \(A B = I\) and \(B A = I\). We will concentrate on finding a matrix that satisfies the first equation and then verify that \(B\) also satisfies the second equation.

The equation \[\left( \begin{array}{ccc} 1 & 1 & 2 \\ 2 & 1 & 4 \\ 3 & 5 & 1 \\ \end{array} \right)\left( \begin{array}{ccc} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \\ \end{array} \right)= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right)\] is equivalent to \[\left( \begin{array}{ccc} x_{11}+x_{21}+2 x_{31} & x_{12}+x_{22}+2 x_{32} & x_{13}+x_{23}+2 x_{33} \\ 2 x_{11}+x_{21}+4 x_{31} & 2 x_{12}+x_{22}+4 x_{32} & 2 x_{13}+x_{23}+4 x_{33} \\ 3 x_{11}+5 x_{21}+x_{31} & 3 x_{12}+5 x_{22}+x_{32} & 3 x_{13}+5 x_{23}+x_{33} \\ \end{array} \right)= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right)\]

By definition of equality of matrices, this gives us three systems of equations to solve. The augmented matrix of one of the systems, the one equating the first columns of the two matrices is: \begin{align} \left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 2 & 1 & 4 & 0 \\ 3 & 5 & 1 & 0 \\ \end{array} \right)\label{eq-col-1}\tag{12.2.1} \end{align}

Using the Gauss-Jordan algorithm, we have: \begin{equation*} \begin{split} \left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 2 & 1 & 4 & 0 \\ 3 & 5 & 1 & 0 \\ \end{array} \right) & \overset{-2 R_1+R_2}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & -1 & 0 & -2 \\ 3 & 5 & 1 & 0 \\ \end{array} \right) \overset{-3 R_1+R_3}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & -1 & 0 & -2 \\ 0 & 2 & -5 & -3 \\ \end{array} \right)\\ & \textrm{ }\overset{-1 R_2}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 1 & 2 & 1 \\ 0 & 1 & 0 & 2 \\ 0 & 2 & -5 & -3 \\ \end{array} \right)\\ & \textrm{ }\overset{-R_2+R_1\textrm{ and} -2R_2+R_3}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & -5 & -7 \\ \end{array} \right)\\ & \overset{-\frac{1}{5} R_3}{\longrightarrow }\textrm{ } \left( \begin{array}{ccc|c} 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & 7/5 \\ \end{array} \right)\overset{-2 R_3+R_1}{\longrightarrow }\textrm{ }\left( \begin{array}{ccc|c} 1 & 0 & 0 & -\frac{19}{5} \\ 0 & 1 & 0 & 2 \\ 0 & 0 & 1 & \frac{7}{5} \\ \end{array} \right)\\ \end{split} \end{equation*} So \(x_{11}= -19/5, x_{21}=2\) and \(x_{31}=7/5\), which gives us the first column of \(B\).
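The sequence of row operations above can be carried out mechanically. Below is a minimal sketch in Python, assuming exact rational arithmetic via the standard `fractions` module; the function name `gauss_jordan` is a hypothetical helper, not notation from the text, and the sketch assumes a nonzero pivot can always be found.

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] (a list of Fraction rows)
    to reduced row echelon form; works on a copy and returns it.
    Assumes a nonzero pivot can always be found (A invertible)."""
    m = [row[:] for row in aug]
    n = len(m)
    for col in range(n):
        # swap up a row with a nonzero entry in the pivot column
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # scale the pivot row so the pivot is 1 (e.g. the -1/5 R3 step)
        m[col] = [x / m[col][col] for x in m[col]]
        # subtract multiples of the pivot row from the other rows
        # (e.g. the -2 R1 + R2 and -3 R1 + R3 steps)
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return m

aug = [[Fraction(v) for v in row]
       for row in [[1, 1, 2, 1], [2, 1, 4, 0], [3, 5, 1, 0]]]
column_one = [row[-1] for row in gauss_jordan(aug)]
# column_one == [Fraction(-19, 5), Fraction(2, 1), Fraction(7, 5)]
```

The final column reproduces \(x_{11}= -19/5\), \(x_{21}=2\), and \(x_{31}=7/5\) from the hand computation.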

The matrix form of the system to obtain \(x_{12}\), \(x_{22}\), and \(x_{32}\), the second column of \(B\), is: \begin{align} \left( \begin{array}{ccc|c} 1 & 1 & 2 & 0 \\ 2 & 1 & 4 & 1 \\ 3 & 5 & 1 & 0 \\ \end{array} \right)\label{col-2}\tag{12.2.2} \end{align} which reduces to \begin{align} \left( \begin{array}{ccc|c} 1 & 0 & 0 & \frac{9}{5} \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -\frac{2}{5} \\ \end{array} \right)\label{col-2-inverse}\tag{12.2.3} \end{align} The critical thing to note here is that the coefficient matrix in (12.2.2) is the same as the matrix in (12.2.1); hence the sequence of row operations used in the reduction is the same in both cases.

To determine the third column of \(B\), we reduce \[\left( \begin{array}{ccc|c} 1 & 1 & 2 & 0 \\ 2 & 1 & 4 & 0 \\ 3 & 5 & 1 & 1 \\ \end{array} \right)\] to obtain \(x_{13}= 2/5, x_{23}=0\) and \(x_{33}=-1/5\). Here again it is important to note that the sequence of row operations used to solve this system is exactly the same as those we used in the first system. Why not save ourselves a considerable amount of time and effort and solve all three systems simultaneously? We can do this by augmenting the coefficient matrix by the identity matrix \(I\). We then have, by applying the same sequence of row operations as above, \[\left( \begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 2 & 1 & 4 & 0 & 1 & 0 \\ 3 & 5 & 1 & 0 & 0 & 1 \\ \end{array} \right)\longrightarrow \left( \begin{array}{ccc|ccc} 1 & 0 & 0 & -\frac{19}{5} & \frac{9}{5} & \frac{2}{5} \\ 0 & 1 & 0 & 2 & -1 & 0 \\ 0 & 0 & 1 & \frac{7}{5} & -\frac{2}{5} & -\frac{1}{5} \\ \end{array} \right)\] So that \[B =\left( \begin{array}{ccc} -\frac{19}{5} & \frac{9}{5} & \frac{2}{5} \\ 2 & -1 & 0 \\ \frac{7}{5} & -\frac{2}{5} & -\frac{1}{5} \\ \end{array} \right)\] The reader should verify that \(B A = I\) so that \(A^{-1} = B\).
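The augment-with-identity procedure just described can also be sketched as a short program. This is a minimal sketch assuming exact rational arithmetic via Python's `fractions` module; the function name `invert` is a choice of the sketch, and the code assumes its input is invertible (a missing pivot raises `StopIteration`).

```python
from fractions import Fraction

def invert(A):
    """Row-reduce the augmented matrix [A | I] to [I | A^-1].

    Entries are converted to Fractions so the arithmetic is exact.
    Assumes A is invertible; a missing pivot raises StopIteration."""
    n = len(A)
    # augment A with the n x n identity matrix
    m = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # choose a row at or below the diagonal with a nonzero pivot
        pivot = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[pivot] = m[pivot], m[col]
        # scale the pivot row so the pivot entry is 1
        m[col] = [x / m[col][col] for x in m[col]]
        # clear the pivot column in every other row
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]  # the right half is now the inverse

A = [[1, 1, 2], [2, 1, 4], [3, 5, 1]]
B = invert(A)
# first row of B: [Fraction(-19, 5), Fraction(9, 5), Fraction(2, 5)]
```

Multiplying the result by \(A\) on either side gives the identity matrix, matching the hand computation above.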

As the following theorem indicates, the verification that \(B A = I\) is not necessary. The proof of the theorem is beyond the scope of this text. The interested reader can find it in most linear algebra texts.

Theorem 12.2.1
If \(A\) and \(B\) are \(n\times n\) matrices and \(A B = I\), then \(B A = I\), and hence \(B = A^{-1}\).

It is clear from Chapter 5 and our discussions in this chapter that not all \(n \times n\) matrices have inverses. How do we determine whether a matrix has an inverse using this method? The answer is quite simple: the technique we developed to compute inverses is a matrix approach to solving several systems of equations simultaneously. If any one of those systems has no solution, the matrix has no inverse.

Example 12.2.2 Recognition of a non-invertible matrix

The reader can verify that if \(A=\left( \begin{array}{ccc} 1 & 2 & 1 \\ -1 & -2 & -1 \\ 0 & 5 & 8 \\ \end{array} \right)\) then the augmented matrix \(\left( \begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & 5 & 8 & 0 & 0 & 1 \\ \end{array} \right)\) reduces to \begin{align} \left( \begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 5 & 8 & 0 & 0 & 1 \\ \end{array} \right)\label{reduced-with-zero}\tag{12.2.4} \end{align}

Although this matrix can be row-reduced further, it is not necessary to do so since, in equation form, we have:

\(\begin{array}{l} x_{11}+2 x_{21}+x_{31}=1 \\ 0=1 \\ 5 x_{21}+8 x_{31}=0 \\ \end{array}\) \(\begin{array}{l} x_{12}+2 x_{22}+x_{32}=0 \\ 0=1 \\ 5 x_{22}+8 x_{32}=0 \\ \end{array}\) \(\begin{array}{l} x_{13}+2 x_{23}+x_{33}=0 \\ 0=0 \\ 5 x_{23}+8 x_{33}=1 \\ \end{array}\)

Clearly, there are no solutions to the first two systems, therefore \(A^{-1}\) does not exist. From this discussion it should be obvious to the reader that the zero row of the coefficient matrix together with the nonzero entry in the fourth column of that row in matrix (12.2.4) tells us that \(A^{-1}\) does not exist.
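The detection rule just described, a zero row on the left of the bar paired with a nonzero entry on the right, corresponds in a mechanical reduction to the failure to find a nonzero pivot for some column. A minimal sketch follows; the helper name `try_invert` and the use of Python's `fractions` module are choices of the sketch, not notation from the text.

```python
from fractions import Fraction

def try_invert(A):
    """Attempt to invert A by reducing [A | I]; return None when some
    column has no nonzero pivot, which is exactly the zero-row-on-the-left,
    nonzero-entry-on-the-right situation described above."""
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:       # no usable pivot in this column:
            return None         # A is not invertible
        m[col], m[pivot] = m[pivot], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

A = [[1, 2, 1], [-1, -2, -1], [0, 5, 8]]
# try_invert(A) returns None, matching the conclusion of the example
```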

Subsection 12.2.1 Exercises for Section 12.2

1

In order to develop an understanding of the technique of this section, work out all the details of Example 12.2.2.

2

Use the method of this section to find the inverses of the following matrices whenever possible. If an inverse does not exist, explain why.

  1. \(\left( \begin{array}{cc} 1 & 2 \\ -1 & 3 \\ \end{array} \right)\)

  2. \(\left( \begin{array}{cccc} 0 & 3 & 2 & 5 \\ 1 & -1 & 4 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 3 & -1 \\ \end{array} \right)\)

  3. \(\left( \begin{array}{ccc} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \\ \end{array} \right)\)

  4. \(\left( \begin{array}{ccc} 1 & 2 & 1 \\ -2 & -3 & -1 \\ 1 & 4 & 4 \\ \end{array} \right)\)

  5. \(\left( \begin{array}{ccc} 6 & 7 & 2 \\ 4 & 2 & 1 \\ 6 & 1 & 1 \\ \end{array} \right)\)

  6. \(\left( \begin{array}{ccc} 2 & 1 & 3 \\ 4 & 2 & 1 \\ 8 & 2 & 4 \\ \end{array} \right)\)

3

Use the method of this section to find the inverses of the following matrices whenever possible. If an inverse does not exist, explain why.

  1. \(\left( \begin{array}{cc} \frac{1}{3} & 2 \\ \frac{1}{5} & -1 \\ \end{array} \right)\)

  2. \(\left( \begin{array}{cccc} 1 & 0 & 0 & 3 \\ 2 & -1 & 0 & 6 \\ 0 & 2 & 1 & 0 \\ 0 & -1 & 3 & 2 \\ \end{array} \right)\)

  3. \(\left( \begin{array}{ccc} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \\ \end{array} \right)\)

  4. \(\left( \begin{array}{ccc} 1 & 0 & 0 \\ 2 & 2 & -1 \\ 1 & -1 & 1 \\ \end{array} \right)\)

  5. \(\left( \begin{array}{ccc} 2 & 3 & 4 \\ 3 & 4 & 5 \\ 4 & 5 & 6 \\ \end{array} \right)\)

  6. \(\left( \begin{array}{ccc} 1 & \frac{1}{2} & \frac{1}{3} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ \end{array} \right)\)

4

  1. Find the inverses of the following matrices.

    1. \(\left( \begin{array}{ccc} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \\ \end{array} \right)\)

    2. \(\left( \begin{array}{cccc} -1 & 0 & 0 & 0 \\ 0 & \frac{5}{2} & 0 & 0 \\ 0 & 0 & \frac{1}{7} & 0 \\ 0 & 0 & 0 & \frac{3}{4} \\ \end{array} \right)\)

  2. If \(D\) is a diagonal matrix whose diagonal entries are nonzero, what is \(D^{-1}\)?

5

Express each system of equations in Exercise 12.1.1.1 in the form \(A x = B\). When possible, solve each system by first finding the inverse of the matrix of coefficients.
