
Hamming (7,4) code with soft and hard decoding

Posted By Krishna Sankar On March 15, 2012 In Coding

In an earlier post, we discussed hard decision decoding for a Hamming (7,4) code [1] and simulated the bit error rate. In this post, let us focus on soft decision decoding for the Hamming (7,4) code, and quantify the bounds on the performance gain.

Hamming (7,4) codes

With a Hamming (7,4) code, we have 4 information bits and we need to add 3 parity bits to form the 7 coded bits.

The coding operation can be denoted in matrix algebra as follows:

$\mathbf{c} = \mathbf{m}\mathbf{G}$

where,

$\mathbf{m}$ is the message sequence of dimension $1 \times 4$,

$\mathbf{G}$ is the coding matrix of dimension $4 \times 7$,

$\mathbf{c}$ is the coded sequence of dimension $1 \times 7$.

Using the example provided in chapter eight (example 8.1-1) of Digital Communications by John Proakis [2], let the coding matrix $\mathbf{G}$ be,

$\mathbf{G} = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{bmatrix}$.

This matrix can be thought of as,

$\mathbf{G} = [\mathbf{I}_4 \ \mathbf{P}]$,

where,

$\mathbf{I}_4$ is a $4 \times 4$ identity matrix and

$\mathbf{P}$ is a $4 \times 3$ parity matrix.

Since $\mathbf{I}_4$ is an identity matrix, the first $4$ coded bits are identical to the source message bits and the remaining $3$ bits are the parity bits.

This type of code matrix $\mathbf{G}$, where the raw message bits are sent as-is, is called a systematic code.

Assuming that the message sequence is $\mathbf{m} = [m_0\ m_1\ m_2\ m_3]$, the coded output sequence is:

$\mathbf{c} = [m_0\ m_1\ m_2\ m_3\ p_0\ p_1\ p_2]$, where

$p_0 = m_0 \oplus m_1 \oplus m_2$,

$p_1 = m_1 \oplus m_2 \oplus m_3$,

$p_2 = m_0 \oplus m_1 \oplus m_3$.

The operator $\oplus$ denotes the exclusive-OR (XOR) operation.

The matrix of valid coded sequences $\mathbf{C}$, of dimension $16 \times 7$, is listed below.

Sl No m0 m1 m2 m3 p0 p1 p2
0 0 0 0 0 0 0 0
1 0 0 0 1 0 1 1
2 0 0 1 0 1 1 0
3 0 0 1 1 1 0 1
4 0 1 0 0 1 1 1
5 0 1 0 1 1 0 0
6 0 1 1 0 0 0 1
7 0 1 1 1 0 1 0
8 1 0 0 0 1 0 1
9 1 0 0 1 1 1 0
10 1 0 1 0 0 1 1
11 1 0 1 1 0 0 0
12 1 1 0 0 0 1 0
13 1 1 0 1 0 0 1
14 1 1 1 0 1 0 0
15 1 1 1 1 1 1 1

Table: Coded output sequences for all possible input sequences
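This table can be generated programmatically. Below is a minimal Octave/Matlab sketch, written for this post rather than taken from the downloadable script, that builds the $16 \times 7$ matrix $\mathbf{C}$ of valid code words from the coding matrix $\mathbf{G}$:

% Build the matrix of all valid Hamming (7,4) code words
G = [ 1 0 0 0 1 0 1; ...
      0 1 0 0 1 1 1; ...
      0 0 1 0 1 1 0; ...
      0 0 0 1 0 1 1 ];                   % coding matrix G = [I4 P]
mAll = zeros(16, 4);
for ii = 0:15
    mAll(ii+1, :) = bitget(ii, 4:-1:1);  % message bits m0..m3, m0 as MSB
end
C = mod(mAll*G, 2);                      % 16x7 matrix of valid code words
disp(C)                                  % rows match the table above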

Minimum distance

Hamming distance [3] counts the number of positions in which two code words differ. For the coded output sequences listed in the table above, we can see that the minimum separation between any pair of code words is $d_{min} = 3$.

If an error of weight $d_{min}$ occurs, it can transform one valid code word into another valid code word, and such an error cannot be detected. So, the number of errors which can be detected is $d_{min} - 1$.

To determine the error correction capability, let us visualize that we have $2^4 = 16$ valid code words out of $2^7 = 128$ possible received values. If each code word is visualized as a sphere of radius $t$, then the largest value of $t$ which does not result in overlap between the spheres is,

$t = \left\lfloor \frac{d_{min}-1}{2} \right\rfloor$

where,

$\lfloor x \rfloor$ denotes the largest integer contained in $x$.

Any received sequence that lies within a sphere is decoded into the valid code word at the center of that sphere.

So the error correction capability of a code with minimum distance $d_{min}$ is $t = \left\lfloor \frac{d_{min}-1}{2} \right\rfloor$.

In our example, as $d_{min} = 3$, we can correct up to 1 error.
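As a quick check, the sketch below (continuing from the encoding snippet above, so the matrix C is assumed to be in the workspace) computes the minimum Hamming distance over all pairs of code words:

% Minimum Hamming distance over all pairs of valid code words
dmin = 7;
for ii = 1:16
    for jj = ii+1:16
        d = sum(mod(C(ii,:) + C(jj,:), 2));  % number of differing positions
        dmin = min(dmin, d);
    end
end
t = floor((dmin - 1)/2);  % error correction capability
% dmin evaluates to 3, so t = 1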

Soft decision decoding

Let the received code word be,

$\mathbf{r} = \mathbf{s} + \mathbf{n}$, where

$\mathbf{s} = 2\mathbf{c} - 1$ is the transmit code word $\mathbf{c}$ mapped to BPSK levels (bit 0 is sent as $-1$, bit 1 as $+1$),

$\mathbf{n}$ is the additive white Gaussian noise with mean $0$ and variance $\sigma^2$, and

$r_0, r_1, \ldots, r_6$ form the elements of the received code word $\mathbf{r}$.

Given that there are $16$ known code words, the goal is to find the correlation of the received vector with each of the valid code words.

The correlation vector is,

$\mathbf{E} = \mathbf{r}\,(2\mathbf{C} - 1)^T$

where,

$\mathbf{r}$ is the received coded sequence of dimension $1 \times 7$,

$\mathbf{C}$ is the matrix of valid code words of dimension $16 \times 7$ and

$\mathbf{E}$ is the vector of correlation values for each valid code word and is of dimension $1 \times 16$.

From the correlation values, the index at which $\mathbf{E}$ is maximum corresponds to the maximum likelihood transmit code word.

Note:

The term $(2\mathbf{C} - 1)$ gives weights to the code word bits, i.e. 0 is given a weight $-1$ and 1 is given a weight $+1$.
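Continuing the Octave/Matlab sketch (reusing C from above), soft decision decoding of a single received vector might look like this; the noise standard deviation of 0.5 is an arbitrary choice for illustration:

% Soft decision decoding of one received vector
c = C(11, :);             % pick a valid code word to transmit
s = 2*c - 1;              % BPSK mapping: 0 -> -1, 1 -> +1
r = s + 0.5*randn(1, 7);  % add white Gaussian noise
E = r*(2*C - 1).';        % 1x16 correlation with all valid code words
[~, idx] = max(E);        % index of maximum correlation
cHat = C(idx, :);         % maximum likelihood code word
mHat = cHat(1:4);         % systematic code: first 4 bits are the message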

Hard decision decoding

To recap the discussion from the previous post [1], hard decision decoding is done using the parity check matrix $\mathbf{H}$.

Let the system model be,

$\mathbf{r} = \mathbf{c} \oplus \mathbf{e}$, where

$\mathbf{r}$ is the received code word of dimension $1 \times 7$,

$\mathbf{m}$ is the raw message bits of dimension $1 \times 4$,

$\mathbf{c} = \mathbf{m}\mathbf{G}$ is the transmitted code word of dimension $1 \times 7$ and

$\mathbf{e}$ is the error vector of dimension $1 \times 7$, with a $1$ marking each error location.

Multiplying the received code word with the transpose of the parity check matrix,

$\mathbf{s} = \mathbf{r}\mathbf{H}^T = (\mathbf{m}\mathbf{G} \oplus \mathbf{e})\mathbf{H}^T = \mathbf{m}\mathbf{G}\mathbf{H}^T \oplus \mathbf{e}\mathbf{H}^T = \mathbf{e}\mathbf{H}^T$.

The term $\mathbf{s} = \mathbf{e}\mathbf{H}^T$ is called the syndrome of the error pattern and is of dimension $1 \times 3$. As the term $\mathbf{G}\mathbf{H}^T = \mathbf{0}$, the syndrome is affected only by the error sequence.

Assuming that the error hits only one bit,

a) There can be $7$ possible error locations.

b) If the syndrome is $0$, then there are no errors.

c) Otherwise, the syndrome takes one of the $7$ valid non-zero values. From the value of the syndrome, we can figure out which bit in the coded sequence is in error and correct it.

Note :

a) If more than one bit is in error, the syndrome still falls into one of the 8 valid syndrome sequences; the error pattern is then mistaken for a single-bit error and hence cannot be corrected.

b) For the chosen Hamming (7,4) coding matrix $\mathbf{G}$, the dual code matrix is,

$\mathbf{H} = \begin{bmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$.

It can be seen that the modulo-2 multiplication of the coding matrix $\mathbf{G}$ with the transpose of the dual code matrix $\mathbf{H}$ is all zeros, i.e.

$\mathbf{G}\mathbf{H}^T = \mathbf{0}$.
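A corresponding sketch for hard decision decoding, again reusing G and the transmitted code word c from the snippets above. The syndrome of a single-bit error equals the matching column of H, which gives the position of the bit to flip:

% Hard decision decoding using the syndrome
H = [ 1 1 1 0 1 0 0; ...
      0 1 1 1 0 1 0; ...
      1 1 0 1 0 0 1 ];              % dual code (parity check) matrix
all(all(mod(G*H.', 2) == 0))        % sanity check: G*H' = 0 (mod 2)
e = [0 0 1 0 0 0 0];                % single bit error in position 3
rHard = mod(c + e, 2);              % received hard decisions
syn = mod(rHard*H.', 2);            % 1x3 syndrome, depends only on e
[hit, pos] = ismember(syn, H.', 'rows');  % which column of H matches?
if hit
    rHard(pos) = 1 - rHard(pos);    % flip the bit in error
end
mHat = rHard(1:4);                  % recovered message bits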

Asymptotic Coding gains

From Chapter 12.2.1 of Digital Communication: Third Edition, by John R. Barry, Edward A. Lee, David G. Messerschmitt [4], the asymptotic coding gains with soft decision decoding and hard decision decoding are given as,

$G_{soft} = 10\log_{10}\left(R\,d_{min}\right)$ dB,

$G_{hard} = 10\log_{10}\left(R\,(t+1)\right)$ dB,

where,

$R$ is the coding rate,

$d_{min}$ is the minimum distance between the code words and

$t$ is the maximum number of errors which can be corrected.
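Plugging in the numbers for the Hamming (7,4) code, with $R = 4/7$, $d_{min} = 3$ and $t = 1$:

$G_{soft} = 10\log_{10}\left(\tfrac{4}{7} \times 3\right) \approx 2.34$ dB,

$G_{hard} = 10\log_{10}\left(\tfrac{4}{7} \times 2\right) \approx 0.58$ dB.

So, asymptotically, soft decision decoding buys roughly 1.76 dB over hard decision decoding.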

Simulation Model

The Matlab/Octave script performs the following:

(a) Generate a random binary sequence of 0's and 1's.

(b) Group them into blocks of four bits, add three parity bits and convert each block into 7 coded bits using the Hamming (7,4) systematic code.

(c) Add white Gaussian noise.

(d) Perform hard decision decoding: compute the error syndrome for each group of 7 bits and correct the single bit errors.

(e) Perform soft decision decoding: correlate the received symbols with all valid code words and pick the most likely one.

(f) Count the number of errors for both hard decision and soft decision decoding.

(g) Repeat for multiple values of $E_b/N_0$ and plot the simulation results.

Click here to download Matlab/Octave script for computing BER for BPSK in Hamming (7,4) code with soft and hard decision decoding [5].
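For reference, here is a compact, simplified version of such a simulation. It is a sketch written for this post, not the downloadable script itself; the number of trials and the $E_b/N_0$ range are arbitrary choices:

% Simplified BER simulation for Hamming (7,4), hard and soft decoding
clear;
G = [1 0 0 0 1 0 1; 0 1 0 0 1 1 1; 0 0 1 0 1 1 0; 0 0 0 1 0 1 1];
H = [1 1 1 0 1 0 0; 0 1 1 1 0 1 0; 1 1 0 1 0 0 1];
mAll = zeros(16, 4);
for ii = 0:15, mAll(ii+1, :) = bitget(ii, 4:-1:1); end
C = mod(mAll*G, 2);                  % all valid code words
W = (2*C - 1).';                     % BPSK weights for correlation
R = 4/7;                             % coding rate
EbN0dB = 0:10;
nBlocks = 10^4;                      % code words per Eb/N0 point
berHard = zeros(size(EbN0dB));
berSoft = zeros(size(EbN0dB));
for kk = 1:length(EbN0dB)
    sigma = sqrt(1/(2*R*10^(EbN0dB(kk)/10)));  % noise standard deviation
    errHard = 0; errSoft = 0;
    for pp = 1:nBlocks
        m = double(rand(1, 4) > 0.5);          % random message bits
        c = mod(m*G, 2);                       % encode
        r = (2*c - 1) + sigma*randn(1, 7);     % BPSK + AWGN
        [~, idx] = max(r*W);                   % soft: maximum correlation
        errSoft = errSoft + sum(m ~= C(idx, 1:4));
        rh = double(r > 0);                    % hard decisions
        syn = mod(rh*H.', 2);                  % syndrome
        [hit, pos] = ismember(syn, H.', 'rows');
        if hit, rh(pos) = 1 - rh(pos); end     % correct single bit error
        errHard = errHard + sum(m ~= rh(1:4));
    end
    berSoft(kk) = errSoft/(4*nBlocks);
    berHard(kk) = errHard/(4*nBlocks);
end
semilogy(EbN0dB, berHard, 'bd-', EbN0dB, berSoft, 'rs-');
grid on; xlabel('Eb/No, dB'); ylabel('Bit Error Rate');
legend('hard decision', 'soft decision');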

Figure : BER plot for Hamming (7,4) code with soft and hard decision decoding

 

Observations

a) At low bit error rates, we can see that the coding gains corresponding to hard and soft decision decoding tend towards the asymptotic coding gain numbers.

References

Digital Communications by John Proakis [6]

Digital Communication: Third Edition, by John R. Barry, Edward A. Lee, David G. Messerschmitt [4]


Article printed from DSP log: http://www.dsplog.com

URL to article: http://www.dsplog.com/2012/03/15/hamming-code-soft-hard-decode/

URLs in this post:

[1] hard decision decoding for a Hamming (7,4) code: http://www.dsplog.com/2009/09/29/hamming-74-code-with-hard-decision-decoding/

[2] Digital Communications by John Proakis: http://www.amazon.com/gp/redirect.html?ie=UTF8&location=http%3A%2F%2Fwww.amazon.com%2FDigital-Communications-John-Proakis%2Fdp%2F0072321113&tag=dl04-20&linkCode=ur2&camp=1789&creative=9325

[3] Hamming distance: http://en.wikipedia.org/wiki/Hamming_distance

[4] Digital Communication: Third Edition, by John R. Barry, Edward A. Lee, David G. Messerschmitt: http://www.amazon.com/gp/redirect.html?ie=UTF8&location=http%3A%2F%2Fwww.amazon.com%2FDigital-Communication-John-R-Barry%2Fdp%2F0792375483&tag=dl04-20&linkCode=ur2&camp=1789&creative=9325

[5] download Matlab/Octave script for computing BER for BPSK in Hamming (7,4) code with soft and hard decision decoding: http://www.dsplog.com/db-install/wp-content/uploads/2012/03/script_bpsk_ber_awgn_hamming_7_4_code_soft_hard_decoding.m

[6] Digital Communications by John Proakis: http://www.amazon.com/gp/redirect.html?ie=UTF8&location=http%3A%2F%2Fwww.amazon.com%2FDigital-Communications-John-Proakis%2Fdp%2F0072321113&tag=dl04-20&linkCode=ur2&camp=1789&creative=9325


Copyright © 2007-2012 dspLog.com. All rights reserved. This article may not be reused in any fashion without written permission from http://www.dspLog.com.