
Applied Discrete Structures

Section 5.4 Matrix Oddities

Subsection 5.4.1 Dissimilarities with elementary algebra

We have seen that matrix algebra is similar in many ways to elementary algebra. Indeed, if we want to solve the matrix equation \(A X = B\) for the unknown \(X\text{,}\) we imitate the procedure used in elementary algebra for solving the equation \(a x = b\text{.}\) One assumption we need is that \(A\) is a square matrix that has an inverse. Notice how exactly the same properties are used in the following detailed solutions of both equations.
Table 5.4.1.
Equation in the algebra of real numbers | Equation in matrix algebra | Property used
\(a x = b\) | \(A X = B\) |
\(a^{-1}(a x) = a^{-1} b\text{,}\) if \(a \neq 0\) | \(A^{-1}(A X) = A^{-1} B\text{,}\) if \(A^{-1}\) exists |
\(\left(a^{-1} a\right)x = a^{-1} b\) | \(\left(A^{-1} A\right)X = A^{-1} B\) | Associative Property
\(1x = a^{-1} b\) | \(I X = A^{-1} B\) | Inverse Property
\(x = a^{-1} b\) | \(X = A^{-1} B\) | Identity Property
Certainly the solution process for solving \(A X = B\) is the same as that of solving \(a x = b\text{.}\)
The solution of \(x a = b\) is \(x = b a^{-1} = a^{-1}b\text{.}\) In fact, we usually write the solution of both equations as \(x =\frac{b}{a}\text{.}\) In matrix algebra, the solution of \(X A = B\) is \(X = B A^{-1}\text{,}\) which is not necessarily equal to \(A^{-1} B\text{.}\) So in matrix algebra, since multiplication is not commutative, we have to be more careful about the methods we use to solve equations.
It is clear from the above that if we wrote the solution of \(A X = B\) as \(X=\frac{B}{A}\text{,}\) we would not know how to interpret \(\frac{B}{A}\text{.}\) Does it mean \(A^{-1} B\) or \(B A^{-1}\text{?}\) Because of this, \(A^{-1}\) is never written as \(\frac{I}{A}\text{.}\)
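To see the distinction concretely, here is a brief Python/SymPy sketch (the matrices are chosen here only for illustration and are not from the text). It solves \(A X = B\) by left-multiplying by \(A^{-1}\) and confirms that \(B A^{-1}\) is a different matrix.

    from sympy import Matrix

    # Illustrative matrices (chosen for this sketch, not taken from the text).
    A = Matrix([[1, 2], [3, 5]])   # det(A) = -1, so A^{-1} exists
    B = Matrix([[1, 0], [2, 1]])

    X_left = A.inv() * B           # solves A X = B
    X_right = B * A.inv()          # solves X A = B

    print(A * X_left == B)         # True: X_left satisfies A X = B
    print(X_left == X_right)       # False: A^{-1} B and B A^{-1} differ in general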

Observation 5.4.2. Matrix Oddities.

Some of the main dissimilarities between matrix algebra and elementary algebra are that in matrix algebra:
  1. \(A B\) may be different from \(B A\text{.}\)
  2. There exist matrices \(A\) and \(B\) such that \(A B = \pmb{0}\text{,}\) and yet \(A\neq \pmb{0}\) and \(B\neq \pmb{0}\text{.}\)
  3. There exist matrices \(A\) where \(A \neq \pmb{0}\text{,}\) and yet \(A^2 = \pmb{0}\text{.}\)
  4. There exist matrices \(A\) where \(A^2=A\text{,}\) with \(A\neq I\) and \(A\neq \pmb{0}\text{.}\)
  5. There exist matrices \(A\) where \(A^2=I\text{,}\) with \(A\neq I\) and \(A\neq -I\text{.}\)
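As a quick computational illustration of the first oddity, the following Python/SymPy sketch multiplies two arbitrarily chosen \(2\times 2\) matrices in both orders; matrices witnessing the remaining oddities are asked for in Exercise 2 below.

    from sympy import Matrix

    # Arbitrary 2x2 matrices chosen for illustration.
    A = Matrix([[1, 2], [3, 4]])
    B = Matrix([[0, 1], [1, 0]])

    print(A * B)            # Matrix([[2, 1], [4, 3]])
    print(B * A)            # Matrix([[3, 4], [1, 2]])
    print(A * B == B * A)   # False: A B and B A differ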

Exercises 5.4.2 Exercises

1.

Discuss each of the “Matrix Oddities” with respect to elementary algebra.
Answer.
In elementary algebra (the algebra of real numbers), none of the given oddities can occur.
  • \(AB\) may be different from \(BA\text{.}\) Not so in elementary algebra, since \(a b = b a\) by the commutative law of multiplication.
  • There exist matrices \(A\) and \(B\) such that \(AB = \pmb{0}\text{,}\) yet \(A\neq \pmb{0}\) and \(B\neq \pmb{0}\text{.}\) In elementary algebra, the only way \(ab = 0\) is if either \(a\) or \(b\) is zero. There are no exceptions.
  • There exist matrices \(A\text{,}\) \(A\neq \pmb{0}\text{,}\) yet \(A^2=\pmb{0}\text{.}\) In elementary algebra, \(a^2=0\Leftrightarrow a=0\text{.}\)
  • There exist matrices \(A\) where \(A^2=A\text{,}\) with \(A\neq \pmb{0}\) and \(A\neq I\text{.}\) In elementary algebra, \(a^2=a\Leftrightarrow a=0 \textrm{ or } 1\text{.}\)
  • There exist matrices \(A\) where \(A^2=I\) but \(A\neq I\) and \(A\neq -I\text{.}\) In elementary algebra, \(a^2=1\Leftrightarrow a=1\textrm{ or }-1\text{.}\)

2.

Determine \(2\times 2\) matrices which show that each of the “Matrix Oddities” is true.

3.

Prove or disprove the following implications.
  1. \(A^2= A\) and \(\det A \neq 0 \Rightarrow A =I\)
  2. \(A^2 = I \textrm{ and } \det A \neq 0 \Rightarrow A = I \textrm{ or } A = -I\text{.}\)
Answer.
  1. \(\det A \neq 0\Rightarrow A^{-1}\) exists, and if you multiply both sides of the equation \(A^2=A\) by \(A^{-1}\text{,}\) you obtain \(A=I\text{.}\)
  2. Counterexample: \(A=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \\ \end{array} \right)\)
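A quick SymPy check of this counterexample (assuming SymPy is available):

    from sympy import Matrix, eye

    A = Matrix([[1, 0], [0, -1]])

    print(A**2 == eye(2))              # True: A^2 = I
    print(A.det())                     # -1, so det(A) != 0
    print(A == eye(2), A == -eye(2))   # False False: A is neither I nor -I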

4.

Let \(M_{n\times n}(\mathbb{R})\) be the set of real \(n\times n\) matrices. Let \(P \subseteq M_{n\times n}(\mathbb{R})\) be the subset of matrices defined by \(A \in P\) if and only if \(A^2 = A\text{.}\) Let \(Q \subseteq P\) be defined by \(A\in Q\) if and only if \(\det A \neq 0\text{.}\)
  1. Determine the cardinality of \(Q\text{.}\)
  2. Consider the special case \(n = 2\) and prove that a sufficient condition for \(A \in P \subseteq M_{2\times 2}(\mathbb{R})\) is that \(A\) has a zero determinant (i.e., \(A\) is singular) and \(tr(A) = 1\text{,}\) where \(tr(A) = a_{11}+ a_{22}\) is the sum of the main diagonal elements of \(A\text{.}\)
  3. Is the condition of part b a necessary condition?

5.

Write each of the following systems in the form \(A X = B\text{,}\) and then solve the systems using matrices.
  1. \(\displaystyle \begin{array}{c}2x_1+x_2=3\\ x_1-x_2= 1\\ \end{array}\)
  2. \(\displaystyle \begin{array}{c}2x_1-x_2=4\\ x_1 -x_2= 0\\ \end{array}\)
  3. \(\displaystyle \begin{array}{c}2x_1+x_2=1\\ x_1 -x_2= 1\\ \end{array}\)
  4. \(\displaystyle \begin{array}{c}2x_1+x_2=1\\ x_1 -x_2= -1\\ \end{array}\)
  5. \(\displaystyle \begin{array}{c}3x_1+2x_2=1 \\ 6 x_1 +4x_2= -1\\ \end{array}\)
Answer.
  1. \(A^{-1}=\left( \begin{array}{cc} 1/3 & 1/3 \\ 1/3 & -2/3 \\ \end{array} \right)\text{,}\) giving \(x_1=4/3\) and \(x_2=1/3\text{.}\)
  2. \(A^{-1}=\left( \begin{array}{cc} 1 & -1 \\ 1 & -2 \\ \end{array} \right)\text{,}\) giving \(x_1=4\) and \(x_2=4\text{.}\)
  3. \(A^{-1}=\left( \begin{array}{cc} 1/3 & 1/3 \\ 1/3 & -2/3 \\ \end{array} \right)\text{,}\) giving \(x_1=2/3\) and \(x_2=-1/3\text{.}\)
  4. \(A^{-1}=\left( \begin{array}{cc} 1/3 & 1/3 \\ 1/3 & -2/3 \\ \end{array} \right)\text{,}\) giving \(x_1=0\) and \(x_2=1\text{.}\)
  5. The matrix of coefficients for this system has a zero determinant; therefore, it has no inverse. The system cannot be solved by this method. In fact, the system has no solution.
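These answers can be checked with SymPy; the sketch below verifies part (a) and confirms that the coefficient matrix in part (e) is singular.

    from sympy import Matrix

    # Part (a): 2*x1 + x2 = 3, x1 - x2 = 1.
    A = Matrix([[2, 1], [1, -1]])
    B = Matrix([[3], [1]])
    print(A.inv())       # Matrix([[1/3, 1/3], [1/3, -2/3]])
    print(A.inv() * B)   # Matrix([[4/3], [1/3]]), so x1 = 4/3 and x2 = 1/3

    # Part (e): the coefficient matrix has determinant 0, so it has no inverse.
    C = Matrix([[3, 2], [6, 4]])
    print(C.det())       # 0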

6.

Recall that \(p(x) = x^2- 5x + 6\) is called a polynomial, or more specifically, a polynomial over \(\mathbb{R}\text{,}\) where the coefficients are elements of \(\mathbb{R}\) and \(x \in \mathbb{R}\text{.}\) Also, recall the method of solving, and the solutions of, \(x^2- 5x + 6= 0\text{.}\) We would like to define the analogous situation for \(2\times 2\) matrices. First define, for a \(2\times 2\) matrix \(A\text{,}\) \(p(A) = A^2 - 5A + 6I\text{.}\) Discuss the method of solving, and the solutions of, \(A^2 - 5A + 6I=\pmb{0}\text{.}\)
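One way to begin is to evaluate \(p(A)\) directly for candidate matrices. The SymPy sketch below checks the candidate \(A = 2I\) suggested by the scalar roots; whether other, less obvious solutions exist is the point of the exercise.

    from sympy import Matrix, eye, zeros

    def p(A):
        # Evaluate p(A) = A^2 - 5*A + 6*I for a square matrix A.
        return A**2 - 5*A + 6*eye(A.rows)

    # The scalar roots 2 and 3 suggest trying A = 2I.
    A = 2 * eye(2)
    print(p(A) == zeros(2, 2))   # True: A = 2I satisfies p(A) = 0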

7.

For those who know calculus:
  1. Write the series expansion for \(e^a\) centered around \(a=0\text{.}\)
  2. Use the idea of exercise 6 to write what would be a plausible definition of \(e^A\) where \(A\) is an \(n \times n\) matrix.
  3. If \(A=\left( \begin{array}{cc} 1 & 1 \\ 0 & 0 \\ \end{array} \right)\) and \(B =\left( \begin{array}{cc} 0 & -1 \\ 0 & 0 \\ \end{array} \right)\text{,}\) use the series in part (b) to show that \(e^A= \left( \begin{array}{cc} e & e-1 \\ 0 & 1 \\ \end{array} \right)\) and \(e^B= \left( \begin{array}{cc} 1 & -1 \\ 0 & 1 \\ \end{array} \right)\text{.}\)
  4. Show that \(e^Ae^B\neq e^Be^A\text{.}\)
  5. Show that \(e^{A+B}= \left( \begin{array}{cc} e & 0 \\ 0 & 1 \\ \end{array} \right)\text{.}\)
  6. Is \(e^Ae^B=e^{A+B}\text{?}\)
Solution.
The power series expansion of \(e^a\) is \(\sum_{k=0}^{\infty } \frac{a^k}{k!}\text{.}\) Therefore, it is reasonable to define the matrix exponential \(e^A\) to be \(\sum_{k=0}^{\infty } \frac{A^k}{k!}\text{,}\) assuming this sum converges.
If \(A=\left( \begin{array}{cc} 1 & 1 \\ 0 & 0 \\ \end{array} \right)\text{,}\) then we observe that \(A^k =A\) for all positive \(k\text{.}\) Therefore
\begin{equation*} e^A=I + \sum_{k=1}^{\infty } \frac{A}{k!}= I + \sum_{k=1}^{\infty} \left( \begin{array}{cc} \frac{1}{k!} & \frac{1}{k!} \\ 0 & 0 \\ \end{array} \right) = I +\left( \begin{array}{cc} \sum_{k=1}^{\infty} \frac{1}{k!} & \sum_{k=1}^{\infty} \frac{1}{k!} \\ 0 & 0 \\ \end{array} \right)=I + \left( \begin{array}{cc} e-1 & e-1 \\ 0 & 0 \\ \end{array} \right), \end{equation*}
which agrees with the stated value in the problem. The value of \(e^B\) is even easier to derive since \(B^k\) is the zero matrix for \(k \geq 2\text{.}\) Thus, \(e^B=I +B\text{,}\) which equals the matrix that is given in the problem.
We observe that \(e^Ae^B\neq e^{A+B}\text{.}\) They disagree in the first row, second column.
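For readers who want to check these calculations, SymPy's Matrix.exp() evaluates the same series symbolically (a verification sketch, assuming SymPy is available):

    from sympy import Matrix

    A = Matrix([[1, 1], [0, 0]])
    B = Matrix([[0, -1], [0, 0]])

    expA = A.exp()           # Matrix([[E, E - 1], [0, 1]])
    expB = B.exp()           # Matrix([[1, -1], [0, 1]])
    expAB = (A + B).exp()    # Matrix([[E, 0], [0, 1]])

    print(expA * expB - expB * expA)   # nonzero, so e^A e^B != e^B e^A
    print(expA * expB - expAB)         # nonzero, so e^A e^B != e^{A+B}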