
Hamming (7,4) code with hard decision decoding

Posted by Krishna Sankar on September 29, 2009, in Coding

In previous posts, we have discussed convolutional codes [1] with Viterbi decoding (hard decision [2], soft decision [3] and with finite traceback [4]). Let us now discuss a block coding scheme where a group of $k$ information bits is mapped into $n$ coded bits. Such codes are referred to as $(n,k)$ block codes. We will restrict the discussion to Hamming (7,4) codes, where 4 information bits are mapped into 7 coded bits. The performance with and without coding is compared using BPSK modulation in an AWGN [5] only scenario.

Hamming (7,4) codes

With a Hamming (7,4) code, we have 4 information bits and we need to add 3 parity bits to form the 7 coded bits. There can be seven valid combinations of the three-bit parity sequence (excluding the all-zero combination), i.e. $2^3 - 1 = 7$.

The coding operation can be denoted in matrix algebra as follows:

$\mathbf{c} = \mathbf{m}\mathbf{G}$

where,

$\mathbf{m}$ is the message sequence of dimension $1 \times 4$,

$\mathbf{G}$ is the coding matrix of dimension $4 \times 7$,

$\mathbf{c}$ is the coded sequence of dimension $1 \times 7$.

Using the example provided in chapter eight (example 8.1-1) of Digital Communications by John Proakis [6], let the coding matrix be,

$\mathbf{G} = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix}$.

This matrix can be thought of as,

$\mathbf{G} = [\mathbf{I}_4 \ \mathbf{P}]$,

where,

$\mathbf{I}_4$ is a $4 \times 4$ identity matrix and

$\mathbf{P} = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix}$ is the $4 \times 3$ parity matrix.

Since $\mathbf{G}$ contains an identity matrix, the first 4 coded bits are identical to the source message bits and the remaining 3 bits form the parity bits.

This type of code matrix, where the raw message bits are sent as-is, is called a systematic code.

Assuming that the message sequence is $\mathbf{m} = [m_0\ m_1\ m_2\ m_3]$, then the coded output sequence is:

$\mathbf{c} = [m_0\ m_1\ m_2\ m_3\ p_0\ p_1\ p_2]$, where

$p_0 = m_0 \oplus m_1 \oplus m_2$,

$p_1 = m_1 \oplus m_2 \oplus m_3$,

$p_2 = m_0 \oplus m_1 \oplus m_3$.

The operator $\oplus$ denotes the exclusive-OR (XOR) operation.
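For example, for the message $\mathbf{m} = [1\ 0\ 1\ 1]$, the parity bits are $p_0 = 1 \oplus 0 \oplus 1 = 0$, $p_1 = 0 \oplus 1 \oplus 1 = 0$ and $p_2 = 1 \oplus 0 \oplus 1 = 0$, giving the coded sequence $\mathbf{c} = [1\ 0\ 1\ 1\ 0\ 0\ 0]$, which matches row 11 of the table below.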

 

Sl No m0 m1 m2 m3 p0 p1 p2
0 0 0 0 0 0 0 0
1 0 0 0 1 0 1 1
2 0 0 1 0 1 1 0
3 0 0 1 1 1 0 1
4 0 1 0 0 1 1 1
5 0 1 0 1 1 0 0
6 0 1 1 0 0 0 1
7 0 1 1 1 0 1 0
8 1 0 0 0 1 0 1
9 1 0 0 1 1 1 0
10 1 0 1 0 0 1 1
11 1 0 1 1 0 0 0
12 1 1 0 0 0 1 0
13 1 1 0 1 0 0 1
14 1 1 1 0 1 0 0
15 1 1 1 1 1 1 1

Table: Coded output sequences for all possible input sequences
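The table above can be reproduced with a few lines of Matlab/Octave. This is a minimal sketch (the variable names are mine, not taken from the downloadable script):

% Systematic Hamming (7,4) encoding: c = m*G (modulo-2)
G = [1 0 0 0 1 0 1;
     0 1 0 0 1 1 1;
     0 0 1 0 1 1 0;
     0 0 0 1 0 1 1];
M = dec2bin(0:15, 4) - '0';   % all 16 possible 4-bit messages
C = mod(M*G, 2);              % 16 x 7 matrix of coded sequences
disp([(0:15)' C]);            % serial number followed by m0..m3 p0 p1 p2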

Hamming Decoding

Minimum distance

Hamming distance [7] counts the number of positions in which two code words differ. For the coded output sequences listed in the table above, we can see that the minimum separation between any pair of code words is 3.
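This can be verified numerically; a small Matlab/Octave sketch, assuming the matrix G defined earlier:

% Find the minimum Hamming distance over all pairs of code words
M = dec2bin(0:15, 4) - '0';
C = mod(M*G, 2);
dmin = 7;
for a = 1:15
  for b = a+1:16
    dmin = min(dmin, sum(C(a,:) ~= C(b,:)));
  end
end
disp(dmin)   % prints 3

Since the code is linear, the same number can also be obtained as the minimum weight among the non-zero code words.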

If an error of weight $d_{min}$ occurs, it is possible to transform one valid code word into another valid code word and the error cannot be detected. So, the number of errors which can be detected is $d_{min} - 1$.

To determine the error correction capability, let us visualize that we can have $2^4$ valid code words out of $2^7$ possible values. If each code word is visualized as a sphere of radius $t$, then the largest value of $t$ which does not result in overlap between the spheres is,

$t = \left\lfloor \frac{d_{min}-1}{2} \right\rfloor$

where,

$\lfloor x \rfloor$ is the largest integer contained in $x$.

Any received word that lies within a sphere is decoded into the valid code word at the center of that sphere.

So the error correction capability of a code with minimum distance $d_{min}$ is $t = \left\lfloor \frac{d_{min}-1}{2} \right\rfloor$.

In our example, as $d_{min} = 3$, we can correct up to 1 error.

Parity Check Matrix

For any $(n,k)$ linear block code, there exists a dual code of dimension $(n, n-k)$. Any code word is orthogonal to any row of the dual code. For the chosen coding matrix $\mathbf{G}$, the dual code is,

$\mathbf{H} = [\mathbf{P}^T\ \mathbf{I}_3] = \begin{bmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$.

It can be seen that the modulo-2 multiplication of the coding matrix with the transpose of the dual code matrix is all zeros, i.e.

$\mathbf{G}\mathbf{H}^T = \mathbf{0}$.

This dual code $\mathbf{H}$ is also known as the parity check matrix.
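A quick Matlab/Octave check of this orthogonality (a sketch, with P, G and H as defined above):

% Verify that every code word is orthogonal to every row of H
P = [1 0 1; 1 1 1; 1 1 0; 0 1 1];
G = [eye(4) P];
H = [P' eye(3)];
disp(mod(G*H', 2))   % expect a 4x3 all-zero matrix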

Maximum Likelihood decoding

A simple method to perform maximum likelihood decoding is to compare the received coded sequence with all possible coded sequences, count the number of differences, and choose the code word with the minimum number of differences.

As stated in Chapter 8.1-5 of Digital Communications by John Proakis [8], a more efficient way (with identical performance) is to use the parity check matrix $\mathbf{H}$.

Let the system model be,

$\mathbf{r} = \mathbf{m}\mathbf{G} \oplus \mathbf{e}$, where

$\mathbf{r}$ is the received code word of dimension $1 \times 7$,

$\mathbf{m}$ is the raw message bits of dimension $1 \times 4$,

$\mathbf{G}$ is the coding matrix of dimension $4 \times 7$,

$\mathbf{e}$ is the error locations of dimension $1 \times 7$.

Multiplying the received code word with the transpose of the parity check matrix,

$\mathbf{s} = \mathbf{r}\mathbf{H}^T = (\mathbf{m}\mathbf{G} \oplus \mathbf{e})\mathbf{H}^T = \mathbf{m}\mathbf{G}\mathbf{H}^T \oplus \mathbf{e}\mathbf{H}^T = \mathbf{e}\mathbf{H}^T$.

The term $\mathbf{s} = \mathbf{e}\mathbf{H}^T$ is called the syndrome of the error pattern and is of dimension $1 \times 3$. As the term $\mathbf{m}\mathbf{G}\mathbf{H}^T = \mathbf{0}$, the syndrome is affected only by the error sequence.

Let us assume that the message sequence is

$\mathbf{m} = [0\ 0\ 0\ 0]$.

The coded output sequence is

$\mathbf{c} = [0\ 0\ 0\ 0\ 0\ 0\ 0]$.

Let us find the error syndrome for all possible one bit error locations.

 

Sl No c0 c1 c2 c3 c4 c5 c6 s0 s1 s2
0 0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 1 0 1
2 0 1 0 0 0 0 0 1 1 1
3 0 0 1 0 0 0 0 1 1 0
4 0 0 0 1 0 0 0 0 1 1
5 0 0 0 0 1 0 0 1 0 0
6 0 0 0 0 0 1 0 0 1 0
7 0 0 0 0 0 0 1 0 0 1

Table: Syndrome for all possible one bit error locations

Observations

1. If there are no errors (first row), the syndrome takes all-zero values.

2. For a one bit error, the syndrome takes one of the 7 valid non-zero values.

3. If there is more than one error location, the syndrome still falls into one of the 8 valid syndrome sequences and hence the errors cannot be corrected.
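Since each single-bit error produces a syndrome equal to the corresponding column of $\mathbf{H}$, the error position can be found by matching the syndrome against the columns of $\mathbf{H}$. A minimal Matlab/Octave sketch (H as defined above; the received word r is my example, not from the downloadable script):

% Single-error correction by syndrome lookup
H = [1 1 1 0 1 0 0;
     0 1 1 1 0 1 0;
     1 1 0 1 0 0 1];
r = [1 0 0 0 0 0 0];                  % all-zero code word with an error in c0
s = mod(r*H', 2);                     % syndrome, here [1 0 1]
[hit, pos] = ismember(s, H', 'rows'); % match syndrome against columns of H
if hit
  r(pos) = 1 - r(pos);                % flip the bit in error
end
disp(r)                               % corrected back to the all-zero code word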

Simulation Model

The Matlab/Octave script performs the following

(a) Generate a random binary sequence of 0's and 1's.

(b) Group them into blocks of four bits, add three parity bits and convert them into 7 coded bits using the Hamming (7,4) systematic code

(c) Add White Gaussian Noise

(d) Perform hard decision decoding

(e) Compute the error syndrome for groups of 7 bits, correct the single bit errors

(f) Count the number of errors

(g) Repeat for multiple values of $E_b/N_0$ and plot the simulation results. A condensed sketch of these steps is shown below.
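The following is a condensed Matlab/Octave sketch of the steps above, assuming BPSK mapping $0 \rightarrow -1$, $1 \rightarrow +1$ and real AWGN; it is an illustrative rewrite, not the downloadable script:

% BER for Hamming (7,4) with hard decision decoding, BPSK in AWGN (sketch)
N = 4*10^5;                             % number of message bits (multiple of 4)
EbN0dB = 0:10;
P = [1 0 1; 1 1 1; 1 1 0; 0 1 1];
G = [eye(4) P];                         % generator matrix
H = [P' eye(3)];                        % parity check matrix
ber = zeros(size(EbN0dB));
for jj = 1:length(EbN0dB)
  m = rand(N/4, 4) > 0.5;               % (a) random message bits
  c = mod(m*G, 2);                      % (b) encode, one code word per row
  x = 2*c - 1;                          % BPSK mapping
  % (c) AWGN; Es/N0 = Eb/N0 + 10*log10(4/7) accounts for the code rate
  EsN0dB = EbN0dB(jj) + 10*log10(4/7);
  y = x + 10^(-EsN0dB/20)/sqrt(2)*randn(size(x));
  chat = double(y > 0);                 % (d) hard decision
  s = mod(chat*H', 2);                  % (e) syndrome for each 7-bit word
  [hit, pos] = ismember(s, H', 'rows'); % single-error positions
  idx = sub2ind(size(chat), find(hit), pos(hit));
  chat(idx) = 1 - chat(idx);            % correct single bit errors
  mhat = chat(:, 1:4);                  % systematic code: first 4 bits
  ber(jj) = sum(sum(m ~= mhat))/N;      % (f) count errors
end
semilogy(EbN0dB, ber, 'o-'); grid on;
xlabel('Eb/No, dB'); ylabel('Bit Error Rate');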

Click here to download

Matlab/Octave script for computing BER with Hamming (7,4) systematic code with hard decision decoding [9]

Figure: BER plot for Hamming (7,4) code with hard decision decoding in AWGN

Observations

1. For $E_b/N_0$ above 6 dB, the Hamming (7,4) code starts showing an improved bit error rate.

Reference

Digital Communications by John Proakis [8]




URL to article: http://www.dsplog.com/2009/09/29/hamming-74-code-with-hard-decision-decoding/

URLs in this post:

[1] convolutional codes: http://www.dsplog.com/2009/01/04/convolutional-code/

[2] hard decision: http://www.dsplog.com/2009/01/04/viterbi/

[3] soft decision: http://www.dsplog.com/2009/01/14/soft-viterbi/

[4] with finite traceback: http://www.dsplog.com/2009/07/27/viterbi-with-finite-survivor-state-memory/

[5] BPSK modulation in AWGN: http://www.dsplog.com/2007/08/05/bit-error-probability-for-bpsk-modulation/

[6] Digital Communications by John Proakis: http://www.dsplog.com/redirect.html?ie=UTF8&location=http%3A%2F%2Fwww.amazon.com%2FDigital-Communications-John-Proakis%2Fdp%2F0072321113&tag=dl04-20&linkCode=ur2&camp=1789&creative=9325

[7] Hamming distance: http://en.wikipedia.org/wiki/Hamming_distance

[8] Digital Communications by John Proakis: http://www.dsplog.com/redirect.html?ie=UTF8&location=http%3A%2F%2Fwww.amazon.com%2FDigital-Communications-John-Proakis%2Fdp%2F0072321113&tag=dl04-20&linkCode=ur2&camp=1789&creative=9325

[9] Matlab/Octave script for computing BER with Hamming (7,4) systematic code with hard decision decoding: http://www.dsplog.com/download/2009/09/script_bpsk_ber_awgn_hamming_7_4_code.m


Copyright © 2007-2012 dspLog.com. All rights reserved. This article may not be reused in any fashion without written permission from http://www.dspLog.com.