Diagonalize the matrix
\begin{equation*}
A= \left(
\begin{array}{ccc}
1 & 12 & -18 \\
0 & -11 & 18 \\
0 & -6 & 10 \\
\end{array}
\right)\text{.}
\end{equation*}
First, we find the eigenvalues of \(A\text{.}\)
\begin{equation*}
\begin{split}
\det (A-\lambda I) &=\det \left(
\begin{array}{ccc}
1-\lambda & 12 & -18 \\
0 & -\lambda -11 & 18 \\
0 & -6 & 10-\lambda \\
\end{array}
\right)\\
&=(1-\lambda ) \det \left(
\begin{array}{cc}
-\lambda -11 & 18 \\
-6 & 10-\lambda \\
\end{array}
\right)\\
&=(1-\lambda ) ((-\lambda -11)(10-\lambda )+108) = (1-\lambda ) \left(\lambda ^2+\lambda -2\right)
\end{split}
\end{equation*}
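The last equality uses the expansion
\begin{equation*}
(-\lambda -11)(10-\lambda )+108= \left(\lambda ^2+\lambda -110\right)+108= \lambda ^2+\lambda -2\text{.}
\end{equation*}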
Hence, the characteristic equation \(\det (A-\lambda I)=0\) becomes
\begin{equation*}
(1-\lambda ) \left(\lambda ^2+\lambda -2\right) =- (\lambda -1)^2(\lambda +2)=0\text{.}
\end{equation*}
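The factored form follows from \(\lambda ^2+\lambda -2= (\lambda -1) (\lambda +2)\) and \(1-\lambda = -(\lambda -1)\text{.}\)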
Therefore, the eigenvalues of \(A\) are \(\lambda_1= -2\) and \(\lambda_2=1\text{,}\) where \(\lambda_2\) is a double root. We note that we do not have three distinct eigenvalues, but we proceed as in the previous example.
Case 1. For \(\lambda_1= -2\text{,}\) the equation \((A-\lambda I)\vec{x}= \vec{0}\) becomes
\begin{equation*}
\left(
\begin{array}{ccc}
3 & 12 & -18 \\
0 & -9 & 18 \\
0 & -6 & 12 \\
\end{array}
\right) \left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}
\right)= \left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)
\end{equation*}
We can row reduce the matrix of coefficients to \(\left(
\begin{array}{ccc}
1 & 0 & 2 \\
0 & 1 & -2 \\
0 & 0 & 0 \\
\end{array}
\right)\text{.}\)
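Row reduction sequences are not unique, but one natural route is to divide Row 1 by \(3\text{,}\) Row 2 by \(-9\text{,}\) and Row 3 by \(-6\) to obtain
\begin{equation*}
\left(
\begin{array}{ccc}
1 & 4 & -6 \\
0 & 1 & -2 \\
0 & 1 & -2 \\
\end{array}
\right)\text{,}
\end{equation*}
and then subtract 4 times Row 2 from Row 1 and subtract Row 2 from Row 3 to arrive at the matrix above.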
The matrix equation is then equivalent to the equations \(x_1 = -2x_3 \textrm{ and } x_2= 2x_3\text{.}\) Therefore, the solution set, or eigenspace, corresponding to \(\lambda_1=-2\) consists of vectors of the form
\begin{equation*}
\left(
\begin{array}{c}
-2x_3 \\
2x_3 \\
x_3 \\
\end{array}
\right)= x_3\left(
\begin{array}{c}
-2 \\
2 \\
1 \\
\end{array}
\right)
\end{equation*}
Therefore \(\left(
\begin{array}{c}
-2 \\
2 \\
1 \\
\end{array}
\right)\) is an eigenvector corresponding to the eigenvalue \(\lambda_1=-2\text{,}\) and can be used for our first column of \(P\text{:}\)
\begin{equation*}
P= \left(
\begin{array}{ccc}
-2 & ? & ? \\
2 & ? & ? \\
1 & ? & ? \\
\end{array}
\right)
\end{equation*}
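As a quick check that this column really is an eigenvector, multiply it by \(A\text{:}\)
\begin{equation*}
A\left(
\begin{array}{c}
-2 \\
2 \\
1 \\
\end{array}
\right)= \left(
\begin{array}{c}
-2+24-18 \\
-22+18 \\
-12+10 \\
\end{array}
\right)= \left(
\begin{array}{c}
4 \\
-4 \\
-2 \\
\end{array}
\right)= -2\left(
\begin{array}{c}
-2 \\
2 \\
1 \\
\end{array}
\right)\text{,}
\end{equation*}
as required for the eigenvalue \(\lambda_1=-2\text{.}\)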
Before we continue, we make an observation: the eigenspace \(E_1\) is a subspace of \(\mathbb{R}^3\) with basis \(\left\{P^{(1)}\right\}\text{,}\) where \(P^{(1)}\) denotes the first column of \(P\text{,}\) and so \(\dim E_1 =
1\text{.}\)
Case 2. If \(\lambda_2= 1\text{,}\) then the equation \((A-\lambda I)\vec{x}= \vec{0}\) becomes
\begin{equation*}
\left(
\begin{array}{ccc}
0 & 12 & -18 \\
0 & -12 & 18 \\
0 & -6 & 9 \\
\end{array}
\right) \left(
\begin{array}{c}
x_1 \\
x_2 \\
x_3 \\
\end{array}
\right)= \left(
\begin{array}{c}
0 \\
0 \\
0 \\
\end{array}
\right)
\end{equation*}
Without the aid of any computer technology, it should be clear that the three equations corresponding to this matrix equation, \(12x_2-18x_3=0\text{,}\) \(-12 x_2+18x_3=0\text{,}\) and \(-6x_2+9x_3=0\text{,}\) are each a multiple of \(2 x_2-3x_3= 0\text{,}\) or \(x_2= \frac{3}{2}x_3\text{.}\) Notice that \(x_1\) can take on any value, so any vector of the form
\begin{equation*}
\left(
\begin{array}{c}
x_1 \\
\frac{3}{2}x_3 \\
x_3 \\
\end{array}
\right)=x_1\left(
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right)+x_3\left(
\begin{array}{c}
0 \\
\frac{3}{2} \\
1 \\
\end{array}
\right)
\end{equation*}
will solve the matrix equation.
We note that the solution set contains two independent variables, \(x_1\) and \(x_3\text{.}\) Further, note that we cannot express the eigenspace \(E_2\) as the set of scalar multiples of a single vector, as we did in Case 1. However, it can be written as
\begin{equation*}
E_2= \left\{x_1\left(
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right)+x_3\left(
\begin{array}{c}
0 \\
\frac{3}{2} \\
1 \\
\end{array}
\right) \mid x_1,x_3\in \mathbb{R}\right\}.
\end{equation*}
We can replace any vector in a basis with a nonzero multiple of that vector. Simply for aesthetic reasons, we will multiply the second vector that generates \(E_2\) by 2. Therefore, the eigenspace \(E_2\) is a subspace of \(\mathbb{R}^3\) with basis \(\left\{\left(
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right),\left(
\begin{array}{c}
0 \\
3 \\
2 \\
\end{array}
\right)\right\}\) and so \(\dim E_2 = 2\text{.}\)
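Both basis vectors can be checked directly against \(A\text{:}\)
\begin{equation*}
A\left(
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right)= \left(
\begin{array}{c}
1 \\
0 \\
0 \\
\end{array}
\right) \textrm{ and } A\left(
\begin{array}{c}
0 \\
3 \\
2 \\
\end{array}
\right)= \left(
\begin{array}{c}
36-36 \\
-33+36 \\
-18+20 \\
\end{array}
\right)= \left(
\begin{array}{c}
0 \\
3 \\
2 \\
\end{array}
\right)\text{,}
\end{equation*}
so each is an eigenvector associated with \(\lambda_2=1\text{.}\)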
What this means with respect to the diagonalization process is that \(\lambda_2= 1\) gives us both Column 2 and Column 3 of the diagonalizing matrix. The order of these two columns is not important, so we have
\begin{equation*}
P= \left(
\begin{array}{ccc}
-2 & 1 & 0 \\
2 & 0 & 3 \\
1 & 0 & 2 \\
\end{array}
\right)
\end{equation*}
The reader can verify (see Exercise 5 of this section) that \(P^{-1}= \left(
\begin{array}{ccc}
0 & 2 & -3 \\
1 & 4 & -6 \\
0 & -1 & 2 \\
\end{array}
\right)\) and \(P^{-1}A P = \left(
\begin{array}{ccc}
-2 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)\text{.}\)
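A quicker way to confirm this last identity without computing \(P^{-1}\) directly is to check the equivalent equation \(AP=PD\text{,}\) where \(D\) is the diagonal matrix above: each column of \(AP\) should be the corresponding column of \(P\) scaled by its eigenvalue, and indeed
\begin{equation*}
AP= \left(
\begin{array}{ccc}
4 & 1 & 0 \\
-4 & 0 & 3 \\
-2 & 0 & 2 \\
\end{array}
\right)= \left(
\begin{array}{ccc}
-2 & 1 & 0 \\
2 & 0 & 3 \\
1 & 0 & 2 \\
\end{array}
\right) \left(
\begin{array}{ccc}
-2 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \\
\end{array}
\right)= PD\text{.}
\end{equation*}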