
Section 12.5 Some Applications

A large and varied number of applications involve computations of powers of matrices. These applications can be found in science, the social sciences, economics, engineering, and many other areas where mathematics is used. Here we consider a few diverse examples of the mathematics behind these applications.

We begin by developing a helpful technique for computing \(A^m\), \(m > 1\). If \(A\) can be diagonalized, then there is a matrix \(P\) such that \(P^{-1}A P = D\), where \(D\) is a diagonal matrix and \begin{gather} A^m= P D^m P^{-1} \textrm{ for all } m\geq 1\label{eq-matrix-power}\tag{12.5.1} \end{gather}

The proof of this identity was an exercise in Section 5.4. The condition that \(D\) be a diagonal matrix is not necessary but when it is, the calculation on the right side is particularly easy to perform. Although the formal proof is done by induction, the reason why it is true is easily seen by writing out an example such as \(m=3\): \begin{equation*} \begin{split} A^3 & = \left(P D P^{-1}\right)^3\\ & =\left(P D P^{-1}\right)\left(P D P^{-1}\right)\left(P D P^{-1}\right)\\ &= P D \left(P^{-1}P\right) D \left(P^{-1}P \right)D P^{-1}\quad\textrm{by associativity of matrix multiplication}\\ &= P D I D I D P^{-1}\\ &= P D D D P^{-1}\\ &= P D^3 P^{-1}\\ \end{split} \end{equation*}
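As a concrete illustration of (12.5.1), the short Python sketch below checks that \(A^3 = P D^3 P^{-1}\) for the matrix \(A=\left( \begin{array}{cc} 2 & 1 \\ 2 & 3 \\ \end{array} \right)\), which is diagonalized later in this section (eigenvalues 1 and 4). The helper names are our own, not from the text.

```python
# Check A^3 = P D^3 P^(-1) for A = [[2, 1], [2, 3]], whose
# diagonalization (eigenvalues 1 and 4) appears later in this section.
# Exact arithmetic via Fractions, so the comparison is exact.
from fractions import Fraction as Fr

A = [[2, 1], [2, 3]]
P = [[1, 1], [-1, 2]]
P_inv = [[Fr(2, 3), Fr(-1, 3)], [Fr(1, 3), Fr(1, 3)]]

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

A_cubed_direct = mat_mult(mat_mult(A, A), A)

# Cubing the diagonal matrix D = diag(1, 4) just cubes its entries.
D_cubed = [[1, 0], [0, 64]]
A_cubed_diag = [[int(x) for x in row]
                for row in mat_mult(mat_mult(P, D_cubed), P_inv)]
```

Both computations give \(\left( \begin{array}{cc} 22 & 21 \\ 42 & 43 \\ \end{array} \right)\), and only the diagonal entries of \(D\) had to be raised to the third power.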

Example 12.5.1 Application to Recursion: Matrix Computation of the Fibonacci Sequence

Consider the computation of terms of the Fibonacci sequence. Recall that \(F_0= 1\), \(F_1= 1\) and \(F_k= F_{k-1}+F_{k-2}\) for \(k\geq 2\).

In order to formulate the calculation in matrix form, we introduce the “dummy equation” \(F_{k-1}= F_{k-1}\) so that now we have two equations \begin{equation*} \begin{array}{c} F_k= F_{k-1}+F_{k-2}\\ F_{k-1}= F_{k-1}\\ \end{array} \end{equation*} If \( A = \left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \\ \end{array} \right)\), these two equations can be expressed in matrix form as \begin{equation*} \begin{split} \left( \begin{array}{c} F_k \\ F_{k-1} \\ \end{array} \right) &=\left( \begin{array}{cc} 1 & 1 \\ 1 & 0 \\ \end{array} \right)\left( \begin{array}{c} F_{k-1} \\ F_{k-2} \\ \end{array} \right) \quad\textrm{ if } k\geq 2\\ &= A \left( \begin{array}{c} F_{k-1} \\ F_{k-2} \\ \end{array} \right)\\ &= A^2\left( \begin{array}{c} F_{k-2} \\ F_{k-3} \\ \end{array} \right) \quad\textrm{ if } k\geq 3\\ & etc.\\ \end{split} \end{equation*} We can use induction to prove that if \(k\geq 2\), \[\left( \begin{array}{c} F_k \\ F_{k-1} \\ \end{array} \right)=A^{k-1} \left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right)\]

Next, by diagonalizing \(A\) and using the fact that \(A^m = P D^m P^{-1}\), we can show that \[F_k= \frac{1}{\sqrt{5}}\left( \left(\frac{1+\sqrt{5}}{2}\right)^{k+1}- \left(\frac{1-\sqrt{5}}{2}\right)^{k+1}\right)\] Note that the exponent is \(k+1\) rather than \(k\) because of the initial conditions \(F_0 = F_1 = 1\).
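A minimal Python sketch of this computation follows; the helper names are ours. With the initial conditions \(F_0 = F_1 = 1\), the closed form uses exponent \(k + 1\).

```python
# Compute F_k (with F_0 = F_1 = 1) two ways: by powering the matrix
# A = [[1, 1], [1, 0]] as in the example, and by the closed form
# obtained from diagonalizing A.  All helper names are ours.
import math

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def fib_matrix(k):
    """(F_k, F_{k-1})^T = A^(k-1) (1, 1)^T for k >= 2."""
    if k < 2:
        return 1
    A = [[1, 1], [1, 0]]
    power = [[1, 0], [0, 1]]            # identity; becomes A^(k-1)
    for _ in range(k - 1):
        power = mat_mult(power, A)
    return power[0][0] + power[0][1]    # A^(k-1) applied to (1, 1)^T

def fib_closed(k):
    """Closed form; exponent k+1 matches the conditions F_0 = F_1 = 1."""
    phi = (1 + math.sqrt(5)) / 2
    psi = (1 - math.sqrt(5)) / 2
    return round((phi ** (k + 1) - psi ** (k + 1)) / math.sqrt(5))
```

The matrix version uses exact integer arithmetic, while the closed form relies on floating point and needs rounding; the two agree for moderate \(k\).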

Some comments on this example:

  1. An equation of the form \(F_k = a F_{k-1} + b F_{k-2}\), where \(a\) and \(b\) are given constants, is referred to as a linear homogeneous second-order difference equation. The conditions \(F_0=c_0\) and \(F_1= c_1\), where \(c_0\) and \(c_1\) are constants, are called initial conditions. Those of you who are familiar with differential equations may recognize that this language parallels what is used in differential equations. Difference (aka recurrence) equations move forward discretely; that is, in a finite number of positive steps. On the other hand, a differential equation moves continuously; that is, takes an infinite number of infinitesimal steps.

  2. A recurrence relationship of the form \(S_k = a S_{k-1} + b\), where \(a\) and \(b\) are constants, is called a first-order difference equation. In order to write out the sequence, we need to know one initial condition. Equations of this type can be solved similarly to the method outlined in the example by introducing the superfluous equation \(1 = 0\cdot S_{k-1}+1\) to obtain the matrix equation: \[\left( \begin{array}{c} S_k \\ 1 \\ \end{array} \right)=\left( \begin{array}{cc} a & b \\ 0 & 1 \\ \end{array} \right)\left( \begin{array}{c} S_{k-1} \\ 1 \\ \end{array} \right)\textrm{ }\Rightarrow \left( \begin{array}{c} S_k \\ 1 \\ \end{array} \right)=\left( \begin{array}{cc} a & b \\ 0 & 1 \\ \end{array} \right)^k\left( \begin{array}{c} S_0 \\ 1 \\ \end{array} \right)\]
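The augmented-matrix method in comment 2 can be sketched in Python as follows. The helper names and the constants \(a = 3\), \(b = 2\), \(S_0 = 1\) used for testing are our illustration, not from the text.

```python
# Solve S_k = a*S_{k-1} + b by powering the augmented matrix
# [[a, b], [0, 1]], as described in comment 2 above.

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def solve_first_order(a, b, s0, k):
    """(S_k, 1)^T = M^k (S_0, 1)^T with M = [[a, b], [0, 1]]."""
    M = [[a, b], [0, 1]]
    power = [[1, 0], [0, 1]]        # identity; becomes M^k
    for _ in range(k):
        power = mat_mult(power, M)
    return power[0][0] * s0 + power[0][1]

def solve_by_iteration(a, b, s0, k):
    """Direct iteration of the recurrence, for comparison."""
    s = s0
    for _ in range(k):
        s = a * s + b
    return s
```

Since \(M^k = \left( \begin{array}{cc} a^k & b\left(a^{k-1}+ \cdots +a+1\right) \\ 0 & 1 \\ \end{array} \right)\), powering \(M\) recovers exactly the terms produced by iterating the recurrence.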

In the next example, we apply the following theorem, which can be proven by induction.

Theorem 12.5.2 If \(A\) is the adjacency matrix of a graph with vertices \(\{v_1, v_2, \ldots , v_n\}\), then the entry \(\left(A^k\right)_{ij}\) is the number of paths of length \(k\) from node \(v_i\) to node \(v_j\).

Example 12.5.3 Counting Paths with Diagonalization

Consider the graph in Figure 12.5.4.

Figure 12.5.4 Counting Numbers of Paths

As we saw in Section 6.4, the adjacency matrix of this graph is \(A=\left( \begin{array}{ccc} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ \end{array} \right)\).

Recall that \(A^k\) is the adjacency matrix of the relation \(r^k\), where \(r\) is the relation \(\{(a, a), (a, b), (b, a), (b, c), (c, b), (c, c)\}\) of the above graph. Also recall that in computing \(A^k\), we used Boolean arithmetic. What happens if we use “regular” arithmetic? If we square \(A\) we get \(A^2 = \left( \begin{array}{ccc} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \\ \end{array} \right)\)

How can we interpret this? We note that \(\left(A^2\right)_{33} = 2\) and that there are two paths of length two from \(c\) (the third node) to \(c\text{.}\) Also, \(\left(A^2\right)_{13} = 1\), and there is one path of length two from \(a\) to \(c\text{.}\) The reader should verify these claims by examining the graph.

How do we compute \(A^k\) for possibly large values of \(k\text{?}\) From the discussion at the beginning of this section, we know that \(A^k= P D^kP^{-1}\) if \(A\) is diagonalizable. We leave to the reader to show that \(\lambda = 1, 2,\textrm{ and } -1\) are eigenvalues of \(A\) with eigenvectors \[\left( \begin{array}{c} 1 \\ 0 \\ -1 \\ \end{array} \right),\left( \begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array} \right), \textrm{ and } \left( \begin{array}{c} 1 \\ -2 \\ 1 \\ \end{array} \right)\]

Then \[A^k= P \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 2^k & 0 \\ 0 & 0 & (-1)^k \\ \end{array} \right)P^{-1}\] where \(P=\left( \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 1 & -2 \\ -1 & 1 & 1 \\ \end{array} \right)\) and \(P^{-1}=\left( \begin{array}{ccc} \frac{1}{2} & 0 & -\frac{1}{2} \\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{3} & \frac{1}{6} \\ \end{array} \right)\)
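The diagonalization above can be checked with a short Python sketch (helper names are ours): it computes \(A^k = P D^k P^{-1}\) with exact fractions and compares the result with repeated integer matrix multiplication.

```python
# Count paths in the graph of this example by computing
# A^k = P D^k P^(-1) with exact fractions (eigenvalues 1, 2, -1),
# and compare against repeated integer matrix multiplication.
from fractions import Fraction as Fr

A = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
P = [[1, 1, 1], [0, 1, -2], [-1, 1, 1]]
P_inv = [[Fr(1, 2), Fr(0), Fr(-1, 2)],
         [Fr(1, 3), Fr(1, 3), Fr(1, 3)],
         [Fr(1, 6), Fr(-1, 3), Fr(1, 6)]]

def mat_mult(X, Y):
    """Multiply two square matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][m] * Y[m][j] for m in range(n)) for j in range(n)]
            for i in range(n)]

def paths_via_diagonalization(k):
    """Entry (i, j) of A^k = number of paths of length k from i to j."""
    Dk = [[1, 0, 0], [0, 2 ** k, 0], [0, 0, (-1) ** k]]
    return [[int(x) for x in row]
            for row in mat_mult(mat_mult(P, Dk), P_inv)]

def paths_via_powers(k):
    """A^k by repeated multiplication, for comparison (k >= 1)."""
    power = A
    for _ in range(k - 1):
        power = mat_mult(power, A)
    return power
```

For \(k = 2\) this reproduces the matrix \(A^2\) computed above, and for larger \(k\) only the three diagonal entries of \(D^k\) need to be raised to the \(k^{\textrm{th}}\) power.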

See the exercises of this section for the completion of this example.

Example 12.5.5 Matrix Calculus - Exponentials

Those who have studied calculus recall that the Maclaurin series is a useful way of expressing many common functions. For example, \(e^x=\sum _{k=0}^{\infty } \frac{x^k}{k!}\). Indeed, calculators and computers use these series for calculations. Given a polynomial \(f(x)\), we defined the matrix-polynomial \(f(A)\) for square matrices in Chapter 5. Hence, we are in a position to describe \(e^A\) for an \(n \times n\) matrix \(A\) as a limit of polynomials, the partial sums of the series. Formally, we write \[e^A= I + A + \frac{A^2}{2! }+ \frac{A^3}{3!}+ \cdots = \sum _{k=0}^{\infty } \frac{A^k}{k!}\]

Again we encounter the need to compute high powers of a matrix. Let \(A\) be an \(n\times n\) diagonalizable matrix. Then there exists an invertible \(n\times n\) matrix \(P\) such that \(P^{-1}A P = D\), a diagonal matrix, so that \begin{equation*} \begin{split} e^A &=e^{P D P^{-1} }\\ & =\sum _{k=0}^{\infty } \frac{\left(P D P^{-1}\right)^k}{k!} \\ & = P \left(\sum _{k=0}^{\infty } \frac{ D^k}{k!}\right)P^{-1} \end{split} \end{equation*}

The infinite sum in the middle of this final expression is easy to evaluate when \(D\) is diagonal: every power of \(D\) is diagonal, so all off-diagonal entries of the sum are zero, and the \(i^{\textrm{th}}\) diagonal entry is \[\left(\sum _{k=0}^{\infty } \frac{ D^k}{k!}\right)_{i i}=\sum _{k=0}^{\infty } \frac{ D_{i i}^{\;k}}{k!}= e^{D_{ii}}\]

For example, if \(A=\left( \begin{array}{cc} 2 & 1 \\ 2 & 3 \\ \end{array} \right)\), the first matrix we diagonalized in Section 12.3, we found that \(P =\left( \begin{array}{cc} 1 & 1 \\ -1 & 2 \\ \end{array} \right)\) and \(D= \left( \begin{array}{cc} 1 & 0 \\ 0 & 4 \\ \end{array} \right)\).

Therefore, \begin{equation*} \begin{split} e^A &=\left( \begin{array}{cc} 1 & 1 \\ -1 & 2 \\ \end{array} \right) \left( \begin{array}{cc} e & 0 \\ 0 & e^4 \\ \end{array} \right) \left( \begin{array}{cc} \frac{2}{3} & -\frac{1}{3} \\ \frac{1}{3} & \frac{1}{3} \\ \end{array} \right)\\ &= \left( \begin{array}{cc} \frac{2 e}{3}+\frac{e^4}{3} & -\frac{e}{3}+\frac{e^4}{3} \\ -\frac{2 e}{3}+\frac{2 e^4}{3} & \frac{e}{3}+\frac{2 e^4}{3} \\ \end{array} \right)\\ & \approx \left( \begin{array}{cc} 20.0116 & 17.2933 \\ 34.5866 & 37.3049 \\ \end{array} \right) \end{split} \end{equation*}
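A small Python sketch reproducing this computation numerically (variable names are ours):

```python
# Reproduce e^A = P e^D P^(-1) for A = [[2, 1], [2, 3]], using the
# P and D from the text (eigenvalues 1 and 4, so e^D = diag(e, e^4)).
import math

P = [[1, 1], [-1, 2]]
P_inv = [[2 / 3, -1 / 3], [1 / 3, 1 / 3]]
exp_D = [[math.e, 0.0], [0.0, math.e ** 4]]

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

exp_A = mat_mult(mat_mult(P, exp_D), P_inv)
# exp_A should match the approximate matrix above,
# e.g. exp_A[0][0] is approximately 20.0116.
```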


Many of the ideas of calculus can be developed using matrices. For example, if \(A(t) = \left( \begin{array}{cc} t^3 & 3 t^2+8t \\ e^t & 2 \\ \end{array} \right)\) then \(\frac{d A(t)}{d t}=\left( \begin{array}{cc} 3 t^2 & 6 t+8 \\ e^t & 0 \\ \end{array} \right)\)

Many of the basic formulas in calculus are true in matrix calculus. For example, \[\frac{d (A(t)+B(t))}{d t} = \frac{d A(t)}{d t}+ \frac{d B(t)}{d t}\] and if \(A\) is a constant matrix, \[\frac{d e^{A t}}{d t}= A e^{A t}\]
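The second identity can be spot-checked numerically. The Python sketch below (our own helper functions; a truncated series stands in for the exact matrix exponential) compares a central-difference estimate of \(\frac{d e^{A t}}{d t}\) with \(A e^{A t}\) for the matrix diagonalized above:

```python
# Numerical check of d/dt e^(At) = A e^(At) for A = [[2, 1], [2, 3]].
# The matrix exponential is approximated by its truncated power series.
from math import factorial

A = [[2.0, 1.0], [2.0, 3.0]]

def mat_mult(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][m] * Y[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

def matrix_exp(M, terms=40):
    """I + M + M^2/2! + ... ; enough terms for a small matrix like At."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mat_mult(power, M)
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / factorial(k)
    return result

def exp_At(t):
    return matrix_exp([[entry * t for entry in row] for row in A])

t, h = 0.5, 1e-5
Ep, Em = exp_At(t + h), exp_At(t - h)
derivative = [[(Ep[i][j] - Em[i][j]) / (2 * h) for j in range(2)]
              for i in range(2)]
expected = mat_mult(A, exp_At(t))   # A e^(At): the two should agree
```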

Matrix calculus can be used to solve systems of differential equations in a similar manner to the procedure used in ordinary differential equations.

Note 12.5.7 Sage Note - Matrix Exponential

Sage's matrix exponential method is exp.

Subsection 12.5.1 Exercises for Section 12.5


  1. Write out all the details of Example 12.5.1 to show that the formula for \(F_k\) given in the text is correct.

  2. Use induction to prove the assertion made in the example that \(\left( \begin{array}{c} F_k \\ F_{k-1} \\ \end{array} \right)=A^{k-1} \left( \begin{array}{c} 1 \\ 1 \\ \end{array} \right)\)


  1. Do Example 8.3.8 of Chapter 8 using the method outlined in Example 12.5.1. Note that the terminology characteristic equation, characteristic polynomial, and so on, introduced in Chapter 8, comes from the language of matrix algebra.

  2. What is the significance of Algorithm 8.3.1, part c, with respect to this section?


Solve \(S(k) = 5S(k - 1) + 4\), with \(S(0) = 0\), using the method of this section.


How many paths are there of length 6 from vertex 1 to vertex 3 in Figure 12.5.8? How many paths from vertex 2 to vertex 2 of length 6 are there?

Figure 12.5.8 Graph for exercise 4

Regarding Example 12.5.3,

  1. Use matrices to determine the number of paths of length 1 that exist from vertex \(a\) to each of the vertices in the given graph. Verify using the graph. Do the same for vertices \(b\) and \(c\text{.}\)

  2. Verify all the details provided in the example.

  3. Use matrices to determine the number of paths of length 4 that exist between each pair of nodes in the graph. Verify your results using the graph.


Let \(A =\left( \begin{array}{cc} 2 & -1 \\ -1 & 2 \\ \end{array} \right)\)

  1. Find \(e^A\)

  2. Recall that \(\sin x = \sum _{k=0}^{\infty } \frac{(-1)^k x^{2 k+1}}{(2 k+1)!}\) and compute \(\sin A\).

  3. Formulate a reasonable definition of the natural logarithm of a matrix and compute \(\ln A\).


We noted in Chapter 5 that since matrix algebra is not commutative under multiplication, certain difficulties arise. Let \(A=\left( \begin{array}{cc} 1 & 1 \\ 0 & 0 \\ \end{array} \right)\) and \(B=\left( \begin{array}{cc} 0 & 0 \\ 0 & 2 \\ \end{array} \right)\).

  1. Compute \(e^A\), \(e^B\), and \(e^{A+B}\). Compare \(e^A e^{B}\), \(e^B e^A\), and \(e^{A+B}\).

  2. Show that if \(\pmb{0}\) is the \(2\times 2\) zero matrix, then \(e^{\pmb{0}}= I\).

  3. Prove that if \(A\) and \(B\) are two matrices that do commute, then \(e^{A+B}=e^Ae^B\), thereby proving that \(e^A\) and \(e^B\) commute.

  4. Prove that for any matrix \(A\text{,}\) \(\left(e^A\right)^{-1}= e^{-A}\).


Another observation for adjacency matrices: For the matrix in Example 12.5.3, note that the sum of the elements in the row corresponding to the node \(a\) (that is, the first row) gives the outdegree of \(a\text{.}\) Similarly, the sum of the elements in any given column gives the indegree of the node corresponding to that column.

Figure 12.5.9 Graph for exercise 8

  1. Using the matrix \(A\) of Example 12.5.3, find the outdegree and the indegree of each node. Verify by the graph.

  2. Repeat part (a) for the directed graphs in Figure 12.5.9.