
# Hamming (7,4) code with hard decision decoding

September 29, 2009

In previous posts, we have discussed convolutional codes with Viterbi decoding (hard decision, soft decision and with finite traceback). Let us now discuss a block coding scheme where a group of $k$ information bits is mapped into $n$ coded bits. Such codes are referred to as $(n,k)$ codes. We will restrict the discussion to Hamming $(7,4)$ codes, where 4 information bits are mapped into 7 coded bits. The performance with and without coding is compared using BPSK modulation in an AWGN-only scenario.

## Hamming (7,4) codes

With a $(7,4)$ Hamming code, we have 4 information bits and we need to add 3 parity bits to form the 7 coded bits. There are seven valid non-zero combinations of the three parity bits (excluding the all-zero combination), i.e. $(001), (010), (011), (100), (101), (110), (111)$.

The coding operation can be denoted in matrix algebra as follows:

$c = mG$

where,

$m$ is the message sequence of dimension $[1\mbox{ x }k]$,

$G$ is the coding matrix of dimension $[k\mbox{ x }n]$,

$c$ is the coded sequence of dimension $[1\mbox{ x }n]$.

Using the example provided in chapter eight (Example 8.1-1) of Digital Communications by John Proakis, let the coding matrix $G$ be,

$G = \left[\begin{array}{ccccccc}1 & 0 & 0 & 0 & 1 & 0 & 1\\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{array}\right]$.

This matrix can be thought of as,

$G=\left[ I_k|P\right]$ ,

where,

$I_k$ is a $[k\mbox{ x }k]$ identity matrix and

$P$ is a $[k\mbox{ x }(n-k)]$ parity matrix.

Since $I_k$ is an identity matrix, the first $k$ coded bits are identical to the source message bits and the remaining $(n-k)$ bits are the parity bits.

A code matrix $G$ of this type, where the raw message bits are sent as-is, is called a systematic code.

Assuming that the message sequence is $m = \left[\begin{array}{cccc}m_0 & m_1 & m_2 & m_3\end{array}\right]$, then the coded output sequence is:

$c = \left[\begin{array}{ccccccc}m_0 & m_1 & m_2 & m_3 & p_0 & p_1 & p_2\end{array}\right]$, where

$p_0 = m_0 \oplus m_1 \oplus m_2$,

$p_1 = m_1 \oplus m_2 \oplus m_3$,

$p_2 = m_0 \oplus m_1 \oplus m_3$.

The operator $\oplus$ denotes the exclusive-OR (XOR) operation.

| Sl No | $m_0$ | $m_1$ | $m_2$ | $m_3$ | $p_0$ | $p_1$ | $p_2$ |
|-------|-------|-------|-------|-------|-------|-------|-------|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 0 | 0 | 0 | 1 | 0 | 1 | 1 |
| 2 | 0 | 0 | 1 | 0 | 1 | 1 | 0 |
| 3 | 0 | 0 | 1 | 1 | 1 | 0 | 1 |
| 4 | 0 | 1 | 0 | 0 | 1 | 1 | 1 |
| 5 | 0 | 1 | 0 | 1 | 1 | 0 | 0 |
| 6 | 0 | 1 | 1 | 0 | 0 | 0 | 1 |
| 7 | 0 | 1 | 1 | 1 | 0 | 1 | 0 |
| 8 | 1 | 0 | 0 | 0 | 1 | 0 | 1 |
| 9 | 1 | 0 | 0 | 1 | 1 | 1 | 0 |
| 10 | 1 | 0 | 1 | 0 | 0 | 1 | 1 |
| 11 | 1 | 0 | 1 | 1 | 0 | 0 | 0 |
| 12 | 1 | 1 | 0 | 0 | 0 | 1 | 0 |
| 13 | 1 | 1 | 0 | 1 | 0 | 0 | 1 |
| 14 | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| 15 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

Table: Coded output sequence for all possible input sequence
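The parity equations above can be sketched in a few lines of Python (a hypothetical `hamming74_encode` helper; the original post uses a Matlab/Octave script). It reproduces the rows of the table:

```python
def hamming74_encode(m):
    """Map 4 message bits to a 7-bit systematic Hamming (7,4) code word."""
    m0, m1, m2, m3 = m
    p0 = m0 ^ m1 ^ m2   # parity equations from the text
    p1 = m1 ^ m2 ^ m3
    p2 = m0 ^ m1 ^ m3
    return [m0, m1, m2, m3, p0, p1, p2]

print(hamming74_encode([0, 0, 0, 1]))  # row 1:  [0, 0, 0, 1, 0, 1, 1]
print(hamming74_encode([1, 1, 1, 1]))  # row 15: [1, 1, 1, 1, 1, 1, 1]
```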

## Hamming Decoding

### Minimum distance

Hamming distance computes the number of differing positions when comparing two code words. For the coded output sequence listed in the table above, we can see that the minimum separation between a pair of code words $d_{min}$ is 3.

If an error of weight $d_{min}$ occurs, it can transform one code word into another valid code word, and such an error cannot be detected. So, the number of errors which can always be detected is $d_{min}-1$.

To determine the error correction capability, let us visualize that we have $2^k$ valid code words out of $2^n$ possible words. If each code word is visualized as the center of a sphere of radius $t$, then the largest value of $t$ which does not result in overlap between the spheres is,

$t = \lfloor \frac{1}{2}(d_{min}-1) \rfloor$

where,

$\lfloor x \rfloor$ is the largest integer less than or equal to $x$.

Any received word that lies within a sphere is decoded into the valid code word at the center of that sphere.

So, the error correction capability of a code with minimum distance $d_{min}$ is $t = \lfloor \frac{1}{2}(d_{min}-1) \rfloor$.

In our example, as $d_{min}=3$, we can correct up-to 1 error.
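As a sanity check, $d_{min}$ can be found by brute force over all 16 code words; a small Python sketch (not part of the original script), using the coding matrix $G$ from the text:

```python
from itertools import product, combinations

# coding matrix G from the text
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

def encode(m):
    # c = m G (modulo-2)
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

codewords = [encode(list(m)) for m in product([0, 1], repeat=4)]
d_min = min(sum(a != b for a, b in zip(c1, c2))
            for c1, c2 in combinations(codewords, 2))
t = (d_min - 1) // 2
print(d_min, t)  # 3 1
```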

### Parity Check Matrix

For any linear block code of dimension $(n,k)$, there exists a dual code of dimension $[(n-k)\mbox{ x }n]$. Any code word $c$ is orthogonal to every row of the dual code. For the chosen coding matrix $G$, the dual code $H$ is,

$H=\left[\begin{array}{ccccccc}1& 1 &1 &0 &1 &0 &0 \\ 0& 1& 1& 1& 0 &1 &0 \\ 1& 1& 0& 1& 0& 0& 1 \end{array}\right]$.

It can be seen that the modulo-2 multiplication of the coding matrix $G$ with the transpose of the dual code matrix $H$ is all zeros, i.e.

$GH^T = \left[\begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0\end{array}\right]$.

This dual code $H$ is also known as parity check matrix.
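This orthogonality can be verified numerically; a brief Python sketch (hypothetical helper code, not from the original script):

```python
# coding matrix G and parity check matrix H from the text
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

# G H^T modulo-2 should be the 4x3 all-zero matrix
GHt = [[sum(G[i][k] * H[j][k] for k in range(7)) % 2 for j in range(3)]
       for i in range(4)]
print(GHt)  # [[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]
```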

### Maximum Likelihood decoding

A simple method to perform maximum likelihood decoding is to compare the received coded sequence with all possible $2^k$ coded sequences, count the number of differences and choose the code word which has the minimum number of errors.

As stated in Chapter 8.1-5 of Digital Communications by John Proakis, a more efficient way (with identical performance) is to use the parity check matrix $H$.

Let the system model be,

$y=mG + e$, where

$y$ is the received code word of dimension $[1\mbox{ x } n]$,

$m$ is the raw message bits of dimension $[1\mbox{ x } k]$,

$G$ is the coding matrix of dimension $[k\mbox{ x } n]$,

$e$ is the error vector of dimension $[1\mbox{ x } n]$.

Multiplying the received code word with the parity check matrix,
$\begin{array}{lll}yH^T & = & (mG+e)H^T\\ & = & mGH^T + eH^T \\ & = & eH^T \end{array}$.

The term $eH^T$ is called the syndrome of the error pattern and is of dimension $[1\mbox{ x } (n-k)]$. As the term $mGH^T= \left[\begin{array}{ccc}0 & 0 & 0 \end{array}\right]$, the syndrome is affected only by the error sequence.
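A quick numerical check of this identity, using the same $G$ and $H$ as above (a Python sketch with hypothetical helper names, not from the original script):

```python
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(m):
    # c = m G (modulo-2)
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(y):
    # s = y H^T (modulo-2)
    return [sum(y[k] * H[j][k] for k in range(7)) % 2 for j in range(3)]

c = encode([1, 0, 1, 1])           # a valid code word
e = [0, 0, 0, 0, 1, 0, 0]          # single error at position 4
y = [a ^ b for a, b in zip(c, e)]  # received word y = c + e
print(syndrome(c))                 # [0, 0, 0]: valid code words have zero syndrome
print(syndrome(y) == syndrome(e))  # True: the syndrome depends only on e
```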

Let us assume that the message sequence is

$m= \left[\begin{array}{cccc}0 & 0 & 0 & 0\end{array}\right]$.

The code output sequence is

$c= \left[\begin{array}{ccccccc}0 & 0 & 0 & 0 & 0 & 0 & 0\end{array}\right]$.

Let us find the error syndrome for all possible one bit error locations.

| Sl No | $c_0$ | $c_1$ | $c_2$ | $c_3$ | $c_4$ | $c_5$ | $c_6$ | $s_0$ | $s_1$ | $s_2$ |
|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| 2 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 |
| 3 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
| 4 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 |
| 5 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
| 6 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| 7 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 |

Table: Syndrome for all possible one bit error locations

Observations

1. If there are no errors (first row), the syndrome takes all zero values

2. For a one bit error, the syndrome takes one of the 7 valid non-zero values.

3. If there is more than one error location, the syndrome still falls into one of the 8 valid syndrome sequences, so the error pattern is mistaken for a zero- or one-bit error and cannot be corrected.
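The syndrome table amounts to a lookup from syndrome to error position; a minimal Python sketch of the single-error corrector (hypothetical names, assuming the same $H$ as above):

```python
# parity check matrix H from the text
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def syndrome(y):
    # s = y H^T (modulo-2)
    return tuple(sum(y[k] * H[j][k] for k in range(7)) % 2 for j in range(3))

# build the syndrome -> error-position lookup (the table above)
SYN2POS = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    SYN2POS[syndrome(e)] = pos

def hamming74_decode(y):
    """Correct at most one bit error, then return the 4 message bits."""
    s = syndrome(y)
    if s in SYN2POS:          # the all-zero syndrome is not in the table
        y = y[:]              # copy before flipping the flagged bit
        y[SYN2POS[s]] ^= 1
    return y[:4]

# code word for m = 0011 is 0011101; flip bit 0 and decode
print(hamming74_decode([1, 0, 1, 1, 1, 0, 1]))  # [0, 0, 1, 1]
```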

## Simulation Model

The Matlab/Octave script performs the following:

(a) Generate a random binary sequence of 0's and 1's.

(b) Group them into four bits, add three parity bits and convert them into 7 coded bits using the Hamming (7,4) systematic code.

(c) Perform BPSK modulation and pass the symbols through an AWGN channel.

(d) Perform hard decision decoding.

(e) Compute the error syndrome for groups of 7 bits and correct the single bit errors.

(f) Count the number of errors.

(g) Repeat for multiple values of $\frac{E_b}{N_0}$ and plot the simulation results.
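The steps above can be sketched in Python as follows (a hypothetical port; the original is a Matlab/Octave script and all function names here are my own). Note the $\frac{4}{7}$ rate scaling of the noise variance, since each coded bit carries only $\frac{4}{7}$ of the energy per information bit:

```python
import math
import random

# coding and parity check matrices from the text
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]
H = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [1, 1, 0, 1, 0, 0, 1]]

def encode(m):
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(y):
    return tuple(sum(y[k] * H[j][k] for k in range(7)) % 2 for j in range(3))

# syndrome -> error-position lookup for single-bit correction
SYN2POS = {}
for pos in range(7):
    e = [0] * 7
    e[pos] = 1
    SYN2POS[syndrome(e)] = pos

def ber_hamming74(EbN0_dB, nbits=40000, seed=1):
    """Simulated BER of Hamming (7,4), BPSK over AWGN, hard decision decoding."""
    rng = random.Random(seed)
    # energy per coded bit Ec = (4/7) Eb, noise variance N0/2 per dimension
    sigma = math.sqrt(1.0 / (2 * (4 / 7) * 10 ** (EbN0_dB / 10)))
    errors = 0
    for _ in range(nbits // 4):
        m = [rng.randint(0, 1) for _ in range(4)]           # (a)
        c = encode(m)                                       # (b)
        r = [(2 * b - 1) + rng.gauss(0, sigma) for b in c]  # (c) BPSK + AWGN
        y = [1 if x > 0 else 0 for x in r]                  # (d) hard decision
        s = syndrome(y)                                     # (e) correct 1 error
        if s in SYN2POS:
            y[SYN2POS[s]] ^= 1
        errors += sum(a != b for a, b in zip(m, y[:4]))     # (f)
    return errors / nbits

print(ber_hamming74(2), ber_hamming74(8))                   # (g) sweep Eb/N0
```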

Figure: BER plot for Hamming (7,4) code with hard decision decoding in AWGN

Observations

1. For $\frac{E_b}{N_0}$ above 6 dB, the Hamming (7,4) code starts showing an improved bit error rate over uncoded BPSK.

## Reference

Digital Communications by John Proakis
