Section 15.5 Coding Theory, Group Codes

A Transmission Problem

In this section, we will introduce the basic ideas involved in coding theory and consider solutions of a coding problem by means of group codes.

Imagine a situation in which information is being transmitted between two points. The information takes the form of high and low pulses (for example, radio waves or electric currents), which we will label 1 and 0, respectively. As these pulses are sent and received, they are grouped together in blocks of fixed length. The length determines how much information can be contained in one block. If the length is \(r\), there are \(2^r\) different values that a block can have. If the information being sent takes the form of text, each block might be a character. In that case, the length of a block may be seven, so that \(2^7 = 128\) block values can represent letters (both upper and lower case), digits, punctuation, and so on. Figure 15.5.1 illustrates the problem that can be encountered if information is transmitted between two points: during the transmission of data, noise can alter the signal so that what is received differs from what is sent.

Figure 15.5.1 A noisy transmission

Noise is a fact of life for anyone who tries to transmit information. Fortunately, in most situations, we can expect a high percentage of the pulses that are sent to be received properly. However, when large numbers of pulses are transmitted, there are usually some errors due to noise. For the remainder of the discussion, we will make assumptions about the nature of the noise and the message that we want to send. Henceforth, we will refer to the pulses as bits.

We will assume that our information is being sent along a binary symmetric channel. By this, we mean that any single bit that is transmitted will be received improperly with a certain fixed probability, \(p\). The value of \(p\) is usually quite small. To illustrate the process, we will assume that \(p = 0.001\), which, in the real world, would be considered somewhat large. Since \(1 - p = 0.999\), we can expect \(99.9\%\) of all bits to be properly received.
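To make the channel model concrete, here is a minimal sketch in Python (our own illustration, not part of the text) of a binary symmetric channel that flips each bit independently with probability \(p\):

```python
import random

def transmit(bits, p=0.001):
    """Simulate a binary symmetric channel: flip each bit
    independently with probability p."""
    return [bit ^ (random.random() < p) for bit in bits]

# Example: send 3000 random bits and count how many arrive garbled.
sent = [random.randint(0, 1) for _ in range(3000)]
received = transmit(sent)
print(sum(s != r for s, r in zip(sent, received)))  # about 3, on average
```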

Suppose that our message consists of 3,000 bits of information, to be sent in blocks of three bits each. Two factors will be considered in evaluating a method of transmission. The first is the probability that the message is received with no errors. The second is the number of bits that will be transmitted in order to send the message. This quantity is called the rate of transmission: \[\textrm{Rate} = \frac{\textrm{Message length}}{\textrm{Number of bits transmitted}}\] As you might expect, as we devise methods to improve the probability of success, the rate will decrease.

Suppose that we ignore the noise and transmit the message “as is.” The probability of success is \(0.999^{3000}= 0.0497124\). Therefore, the message is received in totally correct form less than \(5\%\) of the time. The rate of \(\frac{3000}{3000} = 1\) certainly doesn't offset this poor probability.
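These two figures can be checked directly with a quick computation, assuming only the values stated above:

```python
p = 0.001
n = 3000

prob_success = (1 - p) ** n  # all 3000 bits must arrive correctly
rate = n / n                 # no extra bits are transmitted

print(f"{prob_success:.7f}")  # 0.0497124
print(rate)                   # 1.0
```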

Our strategy for improving our chances of success will be to send an encoded message across the binary symmetric channel. The encoding will be done in such a way that small errors can be identified and corrected. This idea is illustrated in Figure 15.5.2.

Figure 15.5.2 The Coding Process

In our examples, the functions that will correspond to our encoding and decoding devices will all be homomorphisms between Cartesian products of \(\mathbb{Z}_2\).

Subsection 15.5.1 Error Detection

Suppose that each block of three bits \(a = \left(a_1, a_2, a_3 \right)\) is encoded with the function \(e: \mathbb{Z}_2{}^3\to \mathbb{Z}_2{}^4\), where \[e(a) = \left(a_1, a_2, a_3, a_1+_2a_2+_2a_3 \right)\] When the encoded block is received, the first three bits are probably part of the message (all three are correct approximately \(99.7\%\) of the time), but the added bit makes it possible to detect single errors in the block. Note that when \(e(a)\) is transmitted, the sum of its components is \(a_1+_2 a_2 +_2 a_3+_2 \left( a_1+_2 a_2+_2 a_3\right)= 0\), since \(a_i+_2a_i=0\) in \(\mathbb{Z}_2\).

If any single bit is garbled by noise, the sum of the received bits will be 1. The last bit of \(e(a)\) is called the parity bit. A parity error occurs if the sum of the received bits is 1. Since more than one error is unlikely when \(p\) is small, a high percentage of all errors can be detected.
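As a sketch of this scheme (the function names are ours), the encoder appends the mod-2 sum of the three message bits, and a received block is flagged when its bits sum to 1:

```python
def e(a):
    """Encode a three-bit block by appending the parity bit
    a1 + a2 + a3 (mod 2)."""
    a1, a2, a3 = a
    return (a1, a2, a3, (a1 + a2 + a3) % 2)

def parity_error(b):
    """A parity error occurred iff the mod-2 sum of the received bits is 1."""
    return sum(b) % 2 == 1

print(e((1, 0, 1)))                # (1, 0, 1, 0)
print(parity_error((1, 0, 1, 0)))  # False: block accepted
print(parity_error((1, 1, 1, 0)))  # True: one bit was flipped in transit
```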

At the receiving end, the decoding function acts on the four-bit block \(b = \left(b_1, b_2, b_3, b_4 \right)\) with the function \(d: \mathbb{Z}_2{}^4\to \mathbb{Z}_2{}^4\), where \[d(b) = \left(b_1, b_2, b_3, b_1+_2b_2+_2b_3+_2b_4 \right)\] The fourth bit of \(d(b)\) is called the parity-check bit. If no parity error occurs, the first three bits are recorded as part of the message. If a parity error occurs, we will assume that a retransmission of that block can be requested. This request can take the form of automatically having the parity-check bit of \(d(b)\) sent back to the source. If 1 is received, the previous block is retransmitted; if 0 is received, the next block is sent. This assumption of two-way communication is significant, but it is necessary to make this coding system useful. It is reasonable to expect that the probability of a transmission error in the opposite direction is also 0.001. Without going into the details, we will report that the probability of success is approximately 0.990 and the rate is approximately 3/5. The rate includes the transmission of the parity-check bit to the source.
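A sketch of the decoding side, following the definition of \(d\) just given; the comments mark the action taken in each case:

```python
def d(b):
    """Decoding function d: Z2^4 -> Z2^4. The fourth coordinate of d(b)
    is the parity-check bit (0 means no parity error was detected)."""
    b1, b2, b3, b4 = b
    return (b1, b2, b3, (b1 + b2 + b3 + b4) % 2)

print(d((1, 0, 1, 0)))  # (1, 0, 1, 0): accept (1, 0, 1) as the message
print(d((1, 1, 1, 0)))  # (1, 1, 1, 1): parity error, request retransmission
```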

Subsection 15.5.2 Error Correction

Next, we will consider a coding process that can correct errors at the receiving end so that only one-way communication is needed. Before we begin, recall that every element of \(\mathbb{Z}_2{}^n\), \(n \geq 1\), is its own inverse; that is, \(-b = b\). Therefore, \(a - b = a + b\).

Three-bit message blocks are difficult to transmit reliably because they are so similar to one another. If \(a\) and \(b\) are in \(\mathbb{Z}_2{}^3\), their difference, \(a +_2 b\), can be thought of as a measure of how close they are. If \(a\) and \(b\) differ in only one bit position, one error can change one into the other. The encoding that we will introduce takes a block \(a = \left(a_1, a_2, a_3 \right)\) and produces a block of length 6 called the code word of \(a\). The code words are selected so that they are farther from one another than the messages are. In fact, each code word will differ from each other code word by at least three bits. As a result, any single error will not push a code word close enough to another code word to cause confusion. Now for the details.

Let \(G=\left( \begin{array}{cccccc} 1 & 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 \\ \end{array} \right)\). We call \(G\) the generator matrix for the code, and let \(a = \left(a_1, a_2, a_3 \right)\) be our message.

Define \(e : \mathbb{Z}_2{}^3 \to \mathbb{Z}_2{}^6\) by \[e(a) = a G = \left(a_1, a_2, a_3, a_4, a_5, a_6\right)\] where \[\begin{split} a_4 &= a_1 +_2 a_2\\ a_5 &= a_1 +_2 a_3\\ a_6 &= a_2 +_2 a_3\\ \end{split}\]
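A sketch of this encoding in Python, computing \(aG\) over \(\mathbb{Z}_2\) (the name encode is ours):

```python
G = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def encode(a):
    """e(a) = aG over Z2: each code-word bit is the mod-2 dot product
    of the message block with one column of G."""
    return tuple(sum(a[i] * G[i][j] for i in range(3)) % 2 for j in range(6))

print(encode((1, 0, 1)))  # (1, 0, 1, 1, 0, 1)
```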

Notice that \(e\) is a homomorphism. Also, if \(a\) and \(b\) are distinct elements of \(\mathbb{Z}_2{}^3\), then \(c = a + b\) has at least one coordinate equal to 1. Now consider the difference between \(e(a)\) and \(e(b)\): \begin{equation*} \begin{split} e(a) + e(b) &= e(a + b) \\ & = e(c)\\ & = \left(c_1, c_2, c_3, c_4, c_5, c_6\right)\\ \end{split} \end{equation*} Whether \(c\) has 1, 2, or 3 ones, \(e(c)\) must have at least three ones: a single 1 among \(c_1, c_2, c_3\) forces two of the parity bits \(c_4, c_5, c_6\) to equal 1; two ones among \(c_1, c_2, c_3\) also force two of the parity bits to equal 1; and three ones already suffice on their own. Therefore \(e(a)\) and \(e(b)\) differ in at least three bits.
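Reusing encode from the sketch above, we can enumerate all eight code words and confirm that distinct code words differ in at least three bits:

```python
from itertools import product

code_words = [encode(a) for a in product((0, 1), repeat=3)]
min_distance = min(
    sum(x != y for x, y in zip(u, v))   # number of differing bit positions
    for u in code_words for v in code_words
    if u != v
)
print(min_distance)  # 3
```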

Now consider the problem of decoding the code words. Imagine that a code word, \(e(a)\), is transmitted, and \(b= \left(b_1, b_2, b_3, b_4, b_5, b_6\right)\) is received. At the receiving end, we know the formula for \(e(a)\), and if no error has occurred in transmission, \[\begin{array}{c} b_1= a_1 \\ b_2=a_2 \\ b_3=a_3 \\ b_4=a_1+_2a_2 \\ b_5=a_1+_2a_3 \\ b_6=a_2+_2a_3 \\ \end{array} \quad\Rightarrow\quad \begin{array}{r} b_1+_2 b_2+_2 b_4=0 \\ b_1+_2 b_3+_2 b_5=0 \\ b_2+_2 b_3+_2 b_6=0 \\ \end{array}\] The three equations on the right are called parity-check equations. If any of them is not true, an error has occurred. This error checking can be described in matrix form.

Let \[P=\left( \begin{array}{ccc} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right)\]

\(P\) is called the parity-check matrix for this code. Now define \(p:\mathbb{Z}_2{}^6 \to \mathbb{Z}_2{}^3\) by \(p(b) = b P\). We call \(p(b)\) the syndrome of the received block. For example, \(p(0,1,0,1,0,1)=(0,0,0)\) and \(p(1,1,1,1,0,0)=(1,0,0)\).
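A sketch of the syndrome computation, reproducing the two examples just given:

```python
P = [
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]

def syndrome(b):
    """p(b) = bP over Z2."""
    return tuple(sum(b[i] * P[i][j] for i in range(6)) % 2 for j in range(3))

print(syndrome((0, 1, 0, 1, 0, 1)))  # (0, 0, 0)
print(syndrome((1, 1, 1, 1, 0, 0)))  # (1, 0, 0)
```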

Note that \(p\) is also a homomorphism. If the syndrome of a block is \((0, 0, 0)\), we can be almost certain that the message block is \(\left(b_1, b_2, b_3\right)\).

Next we turn to the method of correcting errors. Despite the fact that there are only eight code words, one for each three-bit block value, the set of possible received blocks is \(\mathbb{Z}_2{}^6\), with 64 elements. Suppose that \(b\) is not a code word, but that it differs from a code word by exactly one bit. In other words, it is the result of a single error in transmission. Suppose that \(w\) is the code word that \(b\) is closest to and that they differ in the first bit. Then \(b + w = (1, 0, 0, 0, 0, 0)\) and \begin{equation*} \begin{split} p(b) & = p(b) + p(w)\quad \textrm{ since }p(w) = (0, 0, 0)\\ &=p(b+w)\quad\textrm{ since }p \textrm{ is a homomorphism}\\ & =p(1,0,0,0,0,0)\\ & =(1,1,0)\\ \end{split} \end{equation*}

Note that we haven't specified \(b\) or \(w\), only that they differ in the first bit. Therefore, if \(b\) is received and \(p(b) = (1, 1, 0)\), there was probably an error in the first bit; the transmitted code word was probably \(b + (1, 0, 0, 0, 0, 0)\) and the message block was \(\left(b_1+_2 1, b_2, b_3\right)\). The same analysis can be done if \(b\) and \(w\) differ in any of the other five bits.

This process can be described in terms of cosets. Let \(W\) be the set of code words; that is, \(W = e\left(\mathbb{Z}_2{}^3 \right)\). Since \(e\) is a homomorphism, \(W\) is a subgroup of \(\mathbb{Z}_2{}^6\). Consider the factor group \(\left.\mathbb{Z}_2{}^6\right/W\): \begin{equation*} \lvert \mathbb{Z}_2{}^6/W \rvert =\frac{\lvert \mathbb{Z}_2{}^6 \rvert }{\lvert W \rvert}=\frac{64}{8}=8 \end{equation*}

Suppose that \(b_1\) and \(b_2\) are representatives of the same coset. Then \(b_1= b_2+w\) for some \(w\) in \(W\), and so \(b_1 + w = b_2\). Therefore, \begin{equation*}\begin{split} p\left(b_1\right) &= p\left(b_1\right) + p(w)\quad \textrm{ since } p(w) = (0, 0, 0)\\ &= p\left(b_1 + w\right)\\ &= p\left(b_2 \right)\\ \end{split} \end{equation*} and so \(b_1\) and \(b_2\) have the same syndrome.

Finally, suppose that \(d_1\) and \(d_2\) are distinct and both have only a single coordinate equal to 1. Then \(d_1+d_2\) has exactly two ones. Note that the identity of \(\mathbb{Z}_2{}^6\), \((0, 0, 0, 0, 0, 0)\), must be in \(W\). Since \(d_1+d_2\) differs from the identity by only two bits, while distinct code words differ by at least three bits, \(d_1+d_2 \notin W\). Hence \(d_1\) and \(d_2\) belong to distinct cosets. The reasoning above shows that the identity and the six blocks with a single 1 represent seven distinct cosets of \(W\), each with its own syndrome.

Now we can describe the error-correcting process. First match each of the blocks with a single 1 with its syndrome. In addition, match the identity of \(W\) with the syndrome \((0, 0, 0)\), as in the table below. Since there are eight cosets of \(W\), one coset remains, the one with syndrome \((1, 1, 1)\); select any of its representatives to complete the table. \[\begin{array}{c|c} \textrm{Syndrome} & \textrm{Error Correction} \\ \hline (0,0,0) & (0,0,0,0,0,0) \\ (1,1,0) & (1,0,0,0,0,0) \\ (1,0,1) & (0,1,0,0,0,0) \\ (0,1,1) & (0,0,1,0,0,0) \\ (1,0,0) & (0,0,0,1,0,0) \\ (0,1,0) & (0,0,0,0,1,0) \\ (0,0,1) & (0,0,0,0,0,1) \\ (1,1,1) & (1,0,0,0,0,1) \\ \end{array}\]

When block \(b\) is received, you need only compute the syndrome, \(p(b)\), and add to \(b\) the error correction that matches \(p(b)\).
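Putting the pieces together (and reusing syndrome from the sketch above), a single-error-correcting decoder is a table lookup followed by an addition in \(\mathbb{Z}_2{}^6\):

```python
# Syndrome-to-error-correction table from the text; the entry for
# (1, 1, 1) is one arbitrary representative of the eighth coset.
CORRECTION = {
    (0, 0, 0): (0, 0, 0, 0, 0, 0),
    (1, 1, 0): (1, 0, 0, 0, 0, 0),
    (1, 0, 1): (0, 1, 0, 0, 0, 0),
    (0, 1, 1): (0, 0, 1, 0, 0, 0),
    (1, 0, 0): (0, 0, 0, 1, 0, 0),
    (0, 1, 0): (0, 0, 0, 0, 1, 0),
    (0, 0, 1): (0, 0, 0, 0, 0, 1),
    (1, 1, 1): (1, 0, 0, 0, 0, 1),
}

def correct(b):
    """Add the error pattern that matches the syndrome of b, then
    read off the first three bits as the decoded message."""
    error = CORRECTION[syndrome(b)]
    fixed = tuple((x + y) % 2 for x, y in zip(b, error))
    return fixed[:3]

print(correct((1, 1, 1, 1, 0, 0)))  # syndrome (1, 0, 0): message (1, 1, 1)
```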

We will conclude this example by computing the probability of success for our hypothetical situation. A six-bit code word is decoded correctly if it arrives with no errors or with exactly one error, so the probability of success is \(\left(0.999^6 + 6 \times 0.999^5 \times 0.001\right)^{1000}=0.985151\). The rate for this code is \(\frac{3000}{6000}=\frac{1}{2}\).
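The arithmetic behind these figures, for checking:

```python
p = 0.001
# Per code word: zero errors, or exactly one (correctable) error.
per_block = (1 - p) ** 6 + 6 * (1 - p) ** 5 * p
print(per_block ** 1000)  # about 0.985151, for the 1000 blocks of the message
print(3000 / 6000)        # rate: 3000 message bits in 6000 transmitted bits
```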

Subsection 15.5.3 Exercises for Section 15.5

1

If the error-detecting code is being used, how would you act on the following received blocks?

  1. \((1, 0, 1, 1)\)

  2. \((1, 1, 1, 1)\)

  3. \((0, 0, 0, 0)\)

2

Express the encoding and decoding functions for the error-detecting code using matrices.

3

If the error-correcting code is being used, how would you decode the following blocks? Expect a problem with one of these. Why?

  1. \((1,0,0,0,1,1)\)

  2. \((1,0,1,0,1,1)\)

  3. \((0,1,1,1,1,0)\)

  4. \((0,0,0,1,1,0)\)

4

Describe how the triple-repetition code with encoding function, \(e:\mathbb{Z}_2\to \mathbb{Z}_2{}^3\), where \(e\left(a_1\right) = \left(a_1,a_1,a_1\right)\) can allow us to correct a single error. What is the probability of success for the \(p = 0.001\), 3000-bit situation? What are the generator and parity-check matrices for this code?

5

Consider the linear code defined by the generator matrix \(G=\left( \begin{array}{cccc} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 1 \\ \end{array} \right)\).

  1. What size blocks does this code encode and what is the length of the code words?

  2. What are the code words for this code?

  3. With this code, can you detect single bit errors? Can you correct all, some, or no single bit errors?

6 Rectangular codes

To build a rectangular code, you partition your message into blocks of length \(m\) and then factor \(m\) into \(k_1\cdot k_2\) and arrange the bits in a \(k_1 \times k_2\) rectangular array as in the figure below. Then you add parity bits along the right side and bottom of the rows and columns. The code word is then read row by row. \[\begin{array}{cccccc} \blacksquare & \blacksquare & \blacksquare & \cdots & \blacksquare & \square \\ \blacksquare & \blacksquare & \blacksquare & \cdots & \blacksquare & \square \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ \blacksquare & \blacksquare & \blacksquare & \cdots & \blacksquare & \square \\ \square & \square & \square & \cdots & \square & \\ \end{array} \quad \begin{array}{l} \blacksquare = \textrm{message bit} \\ \square = \textrm{parity bit} \\ \end{array}\]

For example, if \(m\) is 4, then our only choice is a 2 by 2 array. The message 1101 would be encoded as \begin{equation*}\begin{array}{cc|c} 1 & 1 & 0\\ 0 & 1 & 1\\ \hline 1 & 0 &\\ \end{array}\end{equation*} and the code word is the string 11001110.

  1. Suppose that you were sent four-bit messages using this code and you received the following strings. What were the messages?

    1. 11011000

    2. 01110010

    3. 10001111

  2. If you encoded \(n^2\) bits in this manner, what would be the rate of the code?

  3. Rectangular codes are linear codes. For the 3 by 2 rectangular code, what are the generator and parity check matrices?