Subsection 5.4.1 Dissimilarities with elementary algebra
We have seen that matrix algebra is similar in many ways to elementary algebra. Indeed, if we want to solve the matrix equation \(A X = B\) for the unknown \(X\text{,}\) we imitate the procedure used in elementary algebra for solving the equation \(a x = b\text{.}\) One assumption we need is that \(A\) is a square matrix that has an inverse. Notice how exactly the same properties are used in the following detailed solutions of both equations.
Table 5.4.1.
\(\begin{array}{lcl}
\textrm{Equation in the algebra of real numbers} & & \textrm{Equation in matrix algebra} \\
a x = b & & A X = B \\
a^{-1}(a x) = a^{-1} b \textrm{ if } a \neq 0 & & A^{-1}(A X) = A^{-1} B \textrm{ if } A^{-1} \textrm{ exists} \\
\left(a^{-1} a\right)x = a^{-1} b & \textrm{Associative Property} & \left(A^{-1} A\right)X = A^{-1} B \\
1x = a^{-1} b & \textrm{Inverse Property} & I X = A^{-1} B \\
x = a^{-1} b & \textrm{Identity Property} & X = A^{-1} B
\end{array}\)
Certainly the process for solving \(A X = B\) is the same as that for solving \(a x = b\text{.}\)
The solution of \(x a = b\) is \(x = b a^{-1} = a^{-1}b\text{.}\) In fact, we usually write the solution of both equations as \(x =\frac{b}{a}\text{.}\) In matrix algebra, the solution of \(X A = B\) is \(X = B A^{-1}\text{,}\) which is not necessarily equal to \(A^{-1} B\text{.}\) So in matrix algebra, since the commutative law (under multiplication) is not true, we have to be more careful in the methods we use to solve equations.
It is clear from the above that if we wrote the solution of \(A X = B\) as \(X=\frac{B}{A}\text{,}\) we would not know how to interpret \(\frac{B}{A}\text{.}\) Does it mean \(A^{-1} B\) or \(B A^{-1}\text{?}\) Because of this, \(A^{-1}\) is never written as \(\frac{I}{A}\text{.}\)
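To see the ambiguity concretely, consider an illustrative pair of matrices (our own choice): let \(A=\left(
\begin{array}{cc}
1 & 1 \\
0 & 1 \\
\end{array}
\right)\text{,}\) so that \(A^{-1}=\left(
\begin{array}{cc}
1 & -1 \\
0 & 1 \\
\end{array}
\right)\text{,}\) and let \(B=\left(
\begin{array}{cc}
1 & 0 \\
1 & 1 \\
\end{array}
\right)\text{.}\) Then the solution of \(A X = B\) is \(A^{-1}B = \left(
\begin{array}{cc}
0 & -1 \\
1 & 1 \\
\end{array}
\right)\text{,}\) while the solution of \(X A = B\) is \(B A^{-1} = \left(
\begin{array}{cc}
1 & -1 \\
1 & 0 \\
\end{array}
\right)\text{,}\) and the two are different matrices.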
Observation 5.4.2. Matrix Oddities.
Some of the main dissimilarities between matrix algebra and elementary algebra are that in matrix algebra:
\(A B\) may be different from \(B A\text{.}\)
There exist matrices \(A\) and \(B\) such that \(A B = \pmb{0}\text{,}\) and yet \(A\neq \pmb{0}\) and \(B\neq \pmb{0}\text{.}\)
There exist matrices \(A\) where \(A \neq \pmb{0}\text{,}\) and yet \(A^2 = \pmb{0}\text{.}\)
There exist matrices \(A\) where \(A^2=A\text{,}\) with \(A\neq I\) and \(A\neq \pmb{0}\text{.}\)
There exist matrices \(A\) where \(A^2=I\text{,}\) with \(A\neq I\) and \(A\neq -I\text{.}\)
Exercises 5.4.2 Exercises
1.
Discuss each of the “Matrix Oddities” with respect to elementary algebra.
In elementary algebra (the algebra of real numbers), none of the given oddities can occur.
\(AB\) may be different from \(BA\text{.}\) Not so in elementary algebra, since \(a b = b a\) by the commutative law of multiplication.
There exist matrices \(A\) and \(B\) such that \(AB = \pmb{0}\text{,}\) yet \(A\neq \pmb{0}\) and \(B\neq \pmb{0}\text{.}\) In elementary algebra, the only way \(ab = 0\) is if either \(a\) or \(b\) is zero. There are no exceptions.
There exist matrices \(A\text{,}\) \(A\neq \pmb{0}\text{,}\) yet \(A^2=\pmb{0}\text{.}\) In elementary algebra, \(a^2=0\Leftrightarrow a=0\text{.}\)
There exist matrices \(A\) where \(A^2=A\text{,}\) with \(A\neq \pmb{0}\) and \(A\neq I\text{.}\) In elementary algebra, \(a^2=a\Leftrightarrow a=0 \textrm{ or } 1\text{.}\)
There exist matrices \(A\) where \(A^2=I\) but \(A\neq I\) and \(A\neq -I\text{.}\) In elementary algebra, \(a^2=1\Leftrightarrow a=1\textrm{ or }-1\text{.}\)
2.
Determine \(2\times 2\) matrices which show that each of the “Matrix Oddities” is true.
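One possible set of witnesses, sketched here with our own choices (many other choices work equally well):
Let \(A=\left(
\begin{array}{cc}
0 & 1 \\
0 & 0 \\
\end{array}
\right)\text{,}\) \(B=\left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)\text{,}\) and \(C=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right)\text{.}\) Then \(AB = A \neq \pmb{0} = BA\text{,}\) so \(AB \neq BA\text{,}\) and \(BA=\pmb{0}\) with \(A\neq \pmb{0}\) and \(B\neq \pmb{0}\text{;}\) also \(A^2=\pmb{0}\) with \(A \neq \pmb{0}\text{;}\) \(B^2=B\) with \(B\neq \pmb{0}\) and \(B\neq I\text{;}\) and \(C^2=I\) with \(C\neq I\) and \(C\neq -I\text{.}\)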
3.
Prove or disprove the following implications.
\(A^2 = A \textrm{ and } \det A \neq 0 \Rightarrow A = I\text{.}\)
\(A^2 = I \textrm{ and } \det A \neq 0 \Rightarrow A = I \textrm{ or } A = -I\text{.}\)
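A sketch of the reasoning (the counterexample below is our own choice): for the first implication, since \(\det A \neq 0\text{,}\) \(A^{-1}\) exists, and multiplying both sides of \(A^2 = A\) on the left by \(A^{-1}\) gives \(A = I\text{,}\) so the implication is true. The second implication is false: \(A = \left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right)\) satisfies \(A^2 = I\) and \(\det A = -1 \neq 0\text{,}\) yet \(A \neq I\) and \(A \neq -I\text{.}\)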
4.
Let \(M_{n\times n}(\mathbb{R})\) be the set of real \(n\times n\) matrices. Let \(P \subseteq M_{n\times n}(\mathbb{R})\) be the subset of matrices defined by \(A \in P\) if and only if \(A^2 = A\text{.}\) Let \(Q \subseteq P\) be defined by \(A\in Q\) if and only if \(\det A \neq 0\text{.}\)
Determine the cardinality of \(Q\text{.}\)
Consider the special case \(n = 2\) and prove that a sufficient condition for \(A \in P \subseteq M_{2\times 2}(\mathbb{R})\) is that \(A\) has a zero determinant (i.e., \(A\) is singular) and \(tr(A) = 1\text{,}\) where \(tr(A) = a_{11} + a_{22}\) is the sum of the main diagonal elements of \(A\text{.}\)
Is the condition of part b a necessary condition?
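Here is a sketch of how each part can go. For part (a), the first implication of Exercise 3 shows that \(A^2 = A\) together with \(\det A \neq 0\) forces \(A = I\text{,}\) so \(Q = \{I\}\) and the cardinality of \(Q\) is 1. For part (b), if \(A\) is a \(2\times 2\) matrix with \(\det A = 0\) and \(tr(A) = 1\text{,}\) then direct computation with a general \(2\times 2\) matrix (or the Cayley-Hamilton identity \(A^2 - tr(A)\,A + \det (A)\, I = \pmb{0}\)) gives \(A^2 = A\text{,}\) so \(A \in P\text{.}\) For part (c), the condition is not necessary: \(I \in P\) but \(\det I = 1 \neq 0\text{,}\) and \(\pmb{0} \in P\) but \(tr(\pmb{0}) = 0 \neq 1\text{.}\)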
5.
Write each of the following systems in the form \(A X = B\text{,}\) and then solve the systems using matrices.
The matrix of coefficients for this system has a zero determinant; therefore, it has no inverse. The system cannot be solved by this method. In fact, the system has no solution.
6.
Recall that \(p(x) = x^2- 5x + 6\) is called a polynomial, or more specifically, a polynomial over \(\mathbb{R}\text{,}\) where the coefficients are elements of \(\mathbb{R}\) and \(x \in \mathbb{R}\text{.}\) Also, recall the method of solving, and the solutions of, \(x^2- 5x + 6= 0\text{.}\) We would like to define the analogous situation for \(2\times 2\) matrices. First define, for a \(2\times 2\) matrix \(A\text{,}\) \(p(A) = A^2 - 5A + 6I\text{.}\) Discuss the method of solving, and the solutions of, \(A^2 - 5A + 6I=\pmb{0}\text{.}\)
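As a hint of what changes in the matrix setting: just as \(x^2 - 5x + 6 = (x-2)(x-3)\text{,}\) we can factor \(p(A) = (A - 2I)(A - 3I)\text{,}\) since scalar matrices commute with \(A\text{.}\) Hence \(A = 2I\) and \(A = 3I\) are solutions, but, because a product of nonzero matrices can be \(\pmb{0}\) (the second Matrix Oddity), they are not the only ones. For example (our own illustrative choice), \(A = \left(
\begin{array}{cc}
2 & 0 \\
0 & 3 \\
\end{array}
\right)\) gives \(A - 2I = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1 \\
\end{array}
\right)\) and \(A - 3I = \left(
\begin{array}{cc}
-1 & 0 \\
0 & 0 \\
\end{array}
\right)\text{,}\) whose product is \(\pmb{0}\text{.}\)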
7.
For those who know calculus:
Write the series expansion for \(e^a\) centered around \(a=0\text{.}\)
Use the idea of exercise 6 to write what would be a plausible definition of \(e^A\) where \(A\) is an \(n \times n\) matrix.
If \(A=\left(
\begin{array}{cc}
1 & 1 \\
0 & 0 \\
\end{array}
\right)\) and \(B =\left(
\begin{array}{cc}
0 & -1 \\
0 & 0 \\
\end{array}
\right)\text{,}\) use the series in part (b) to show that \(e^A= \left(
\begin{array}{cc}
e & e-1 \\
0 & 1 \\
\end{array}
\right)\) and \(e^B= \left(
\begin{array}{cc}
1 & -1 \\
0 & 1 \\
\end{array}
\right)\text{.}\)
Show that \(e^Ae^B\neq e^Be^A\text{.}\)
Show that \(e^{A+B}= \left(
\begin{array}{cc}
e & 0 \\
0 & 1 \\
\end{array}
\right)\text{.}\)
The power series expansion of \(e^a\) is \(\sum_{k=0}^{\infty } \frac{a^k}{k!}\text{.}\) Therefore, it is reasonable to define the matrix exponential \(e^A\) to be \(\sum_{k=0}^{\infty } \frac{A^k}{k!}\text{,}\) assuming this sum converges.
If \(A=\left(
\begin{array}{cc}
1 & 1 \\
0 & 0 \\
\end{array}
\right)\text{,}\) then we observe that \(A^k =A\) for all positive \(k\text{.}\) Therefore
\(e^A = I + A + \frac{A}{2!} + \frac{A}{3!} + \cdots = I + \left(1 + \frac{1}{2!} + \frac{1}{3!} + \cdots \right)A = I + (e-1)A = \left(
\begin{array}{cc}
e & e-1 \\
0 & 1 \\
\end{array}
\right)\text{,}\)
which agrees with the stated value in the problem. The value of \(e^B\) is even easier to derive since \(B^k\) is the zero matrix for \(k \geq 2\text{.}\) Thus, \(e^B=I +B\text{,}\) which equals the matrix that is given in the problem.
Direct computation gives \(e^Ae^B= \left(
\begin{array}{cc}
e & -1 \\
0 & 1 \\
\end{array}
\right)\) and \(e^Be^A= \left(
\begin{array}{cc}
e & e-2 \\
0 & 1 \\
\end{array}
\right)\text{,}\) so \(e^Ae^B\neq e^Be^A\text{.}\) We also observe that \(e^Ae^B\neq e^{A+B}\text{;}\) they disagree in the first row, second column.
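For readers who want to confirm these matrix exponentials numerically, here is a minimal sketch in Python, assuming NumPy and SciPy are available (scipy.linalg.expm computes the matrix exponential; the variable names are our own):

import numpy as np
from scipy.linalg import expm  # matrix exponential

# The matrices A and B from the exercise.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0, -1.0],
              [0.0, 0.0]])

e = np.e
print(np.allclose(expm(A), [[e, e - 1.0], [0.0, 1.0]]))   # True: e^A as derived
print(np.allclose(expm(B), [[1.0, -1.0], [0.0, 1.0]]))    # True: e^B = I + B
print(np.allclose(expm(A) @ expm(B), expm(B) @ expm(A)))  # False: e^A e^B != e^B e^A
print(np.allclose(expm(A + B), [[e, 0.0], [0.0, 1.0]]))   # True: e^(A+B) as stated
print(np.allclose(expm(A) @ expm(B), expm(A + B)))        # False: they disagree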