- DSP log - http://www.dsplog.com -

# GATE-2012 ECE Q38 (communication)

Posted By Krishna Sankar On November 3, 2012 @ 8:03 am In GATE | 13 Comments

Question 38 on Communication from GATE (Graduate Aptitude Test in Engineering) 2012 Electronics and Communication Engineering paper.

## Solution

The solution to the problem seems deceptively simple. Quoting from the Wiki entry on Binary Symmetric Channel [1]:

“A binary symmetric channel (or BSC) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver receives a bit. It is assumed that the bit is usually transmitted correctly, but that it will be “flipped” with a small probability (the “crossover probability”). This channel is used frequently in information theory because it is one of the simplest channels to analyze.”

In our example, the transition probability, i.e. the probability of a transition (1 becoming 0 or 0 becoming 1), is given as 1/8 for each alphabet. The probability of correct transmission for each alphabet is therefore 7/8.

The transition probability diagram with the above data can be drawn as,

Figure : Transition probability diagram for a binary symmetric channel

The probabilities of the source alphabets are:

$\begin{array}{lcl}p(x=0)&=&\frac{9}{10}\\p(x=1)&=&\frac{1}{10}\end{array}$.

The transition probabilities when $x=0$ are:

$\begin{array}{lcl}p(y=0|x=0)&=&\frac{7}{8}\\p(y=1|x=0)&=&\frac{1}{8}\end{array}$.

Similarly, the transition probabilities when $x=1$ are:

$\begin{array}{lcl}p(y=0|x=1)&=&\frac{1}{8}\\p(y=1|x=1)&=&\frac{7}{8}\end{array}$.

The probability that the received alphabet is not equal to the transmitted alphabet is:

$\begin{array}{lcl}P(X \ne Y)&=&p(y=1|x=0)\,p(x=0) + p(y=0|x=1)\,p(x=1)\\&=&\frac{1}{8}\cdot\frac{9}{10}+\frac{1}{8}\cdot\frac{1}{10}\\&=&\frac{1}{8}\end{array}$.
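The arithmetic can be checked with exact fractions; this short Python sketch (my addition, not part of the original post) recomputes $P(X\ne Y)$:

```python
from fractions import Fraction as F

# priors and crossover probability from the problem statement
p_x0, p_x1 = F(9, 10), F(1, 10)   # p(x=0), p(x=1)
eps = F(1, 8)                     # crossover (bit-flip) probability

# P(X != Y) = p(y=1|x=0) p(x=0) + p(y=0|x=1) p(x=1)
p_mismatch = eps * p_x0 + eps * p_x1
print(p_mismatch)  # 1/8
```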

Hold on, we are not done yet.

## Update (9th Nov 2012):

The above gives the probability that the received symbol is not equal to the transmitted symbol, and not the probability of error for an optimum detector. Thanks to Mr. Raghava GD who rightly pointed out in the comment section that – “the calculation that you did just gives the probability of receiving the bit erroneously and it doesn’t tell anything about the probability of error for an optimum decoder.”

After digesting his comments, and referring to Section 1.2 of Chapter 1 of Prof. John M. Cioffi’s book [2]:

Consider a channel model which produces a discrete vector output for a discrete vector input. The detector chooses a message $m_i$, $i=0,1,\ldots,M-1$, from the set of possible transmit messages. The channel input vector $x$ results in a corresponding channel output vector $y$. The decision device translates the received vector $y$ into an estimate of the transmitted vector $\hat{x}$, and a decoder converts $\hat{x}$ into the estimate of the message $\hat{m}$.

Figure : Vector channel model (Reference : Figure 1.13 in Chapter 1 of Prof. John M. Cioffi’s book [2])

The probability of error is defined as,

$P_e\equiv P\{\hat{m}\ne m\}$.

Correspondingly, the probability of being correct is,

$P_c=1-P_e=1-P\{\hat{m}\ne m\}=P\{\hat{m}= m\}$.

The optimum detector chooses $\hat{m}$ to minimize the probability of error $P_e$ or equivalently maximize $P_c$.

The probability of making the correct decision $\hat{m}=m_i$, given the received vector $y$, is

$P_c(\hat{m}=m_i)=p(m_i|y)\,p(y)=p(x_i|y)\,p(y)$.

The above quantity is known as the a posteriori probability [3], and an optimum decision device will try to maximize it. The Maximum a Posteriori (MAP) detector is defined as the detector that chooses the index $i$ to maximize $p(x_i|y)\,p(y)$ for the given received vector $y$.

Using Bayes’ theorem [4], the a posteriori probability can be written in terms of the prior probability $p(x_i)$ and the channel transition probability $p(y|x_i)$:

$p(x_i|y)\,p(y)=p(y|x_i)\,p(x_i)$.

Note:

The term $p(y)=\sum_{j=0}^{M-1}p(y|x_j)\,p(x_j)$ is a constant and can be ignored when trying to maximize the a posteriori probability $p(x_i|y)$.
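As a sanity check on the total-probability expression, a short Python sketch (my addition, using exact fractions) computes $p(y)$ for both output symbols:

```python
from fractions import Fraction as F

p_x = {0: F(9, 10), 1: F(1, 10)}                  # priors p(x)
p_y_given_x = {(0, 0): F(7, 8), (1, 0): F(1, 8),  # p(y|x), keyed (y, x)
               (0, 1): F(1, 8), (1, 1): F(7, 8)}

# total probability: p(y) = sum_j p(y|x_j) p(x_j)
p_y = {y: sum(p_y_given_x[(y, x)] * p_x[x] for x in p_x) for y in (0, 1)}
print(p_y[0], p_y[1])  # 4/5 1/5
```

The two values sum to 1, as a valid output distribution must.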

The MAP detection rule is,

$\hat{m}\Rightarrow m_i,\quad\mbox{if }\frac{p(y|x_i)\,p(x_i)}{p(y|x_j)\,p(x_j)}\ge 1,\quad\forall\ j\ne i$.
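The MAP rule can be sketched in a few lines of Python (an illustrative re-implementation, not code from the post; `map_detect` is a hypothetical helper name):

```python
from fractions import Fraction as F

p_x = {0: F(9, 10), 1: F(1, 10)}                  # priors p(x_i)
p_y_given_x = {(0, 0): F(7, 8), (1, 0): F(1, 8),  # p(y|x), keyed (y, x)
               (0, 1): F(1, 8), (1, 1): F(7, 8)}

def map_detect(y):
    # choose the index i that maximizes p(y|x_i) p(x_i)
    return max(p_x, key=lambda x: p_y_given_x[(y, x)] * p_x[x])

print(map_detect(0), map_detect(1))  # 0 0
```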

Note:

This is discussed briefly in Section 5.1.3, The Optimum Detector, of Digital Communications, Fourth Edition, John G. Proakis (buy from Amazon.com [5], buy from Flipkart.com [6]).

Applying the MAP detection rule to the problem at hand :

We have two candidates, 0 and 1, for the source message, i.e.

a) $m_i\in\{0,\ 1\}$,

b) the modulator is a pass-through, i.e. $x_i=m_i$, and

c) the received symbol $y_i$ can be 0 or 1.

The goal is to form a decision rule based on the observed received symbol $y_i$.

Applying MAP detection rule,

$\begin{array}{llrll}\mbox{when } y = 1, &\mbox{then }\hat{m}\Rightarrow 0,&\mbox{if }\frac{p(y=1|x=0)\,p(x=0)}{p(y=1|x=1)\,p(x=1)}&\ge&1\\&&\frac{\frac{1}{8}\cdot\frac{9}{10}}{\frac{7}{8}\cdot\frac{1}{10}}=\frac{9}{7}&\ge&1\Rightarrow\mbox{ true}\end{array}$.

$\begin{array}{llrll}\mbox{when } y = 0, &\mbox{then }\hat{m}\Rightarrow 0,&\mbox{if }\frac{p(y=0|x=0)\,p(x=0)}{p(y=0|x=1)\,p(x=1)}&\ge&1\\&&\frac{\frac{7}{8}\cdot\frac{9}{10}}{\frac{1}{8}\cdot\frac{1}{10}}=\frac{63}{1}&\ge&1\Rightarrow\mbox{ true}\end{array}$.

In both cases, i.e. when the received symbol is $y_i=0\mbox{ or }1$, the optimum MAP detection rule suggests that the estimated message symbol is $\hat{m}=0$.

Intuitively, this makes sense: given that $p(x=0)=\frac{9}{10}$, the receiver is better off assuming that the transmitted symbol is always 0. With this, the probability of making an error in the decision is $\frac{1}{10}$.
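This can be verified exactly: summing the joint probabilities $p(y|x)\,p(x)$ over the outcomes where the always-zero decision differs from $x$ gives $P_e$. A Python sketch (my addition, using exact fractions):

```python
from fractions import Fraction as F

p_x = {0: F(9, 10), 1: F(1, 10)}                  # priors p(x)
p_y_given_x = {(0, 0): F(7, 8), (1, 0): F(1, 8),  # p(y|x), keyed (y, x)
               (0, 1): F(1, 8), (1, 1): F(7, 8)}
m_hat = {0: 0, 1: 0}  # MAP decision for each received symbol: always 0

# P_e = sum of joint probabilities p(y|x) p(x) where the decision differs from x
p_error = sum(p_y_given_x[(y, x)] * p_x[x]
              for x in p_x for y in (0, 1) if m_hat[y] != x)
print(p_error)  # 1/10
```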

## Matlab example

clear all; close all;
% number of observations
N = 10^5;
% generating x with p(x=0) = 9/10, p(x=1) = 1/10
x = (rand(1,N) > 9/10);
% generating the flip indicator c with p(c=1) = 1/8 (the crossover probability)
c = (rand(1,N) < 1/8);
% binary symmetric channel output
y = mod(x+c,2);
% optimum detector from the analysis above: always decide 0, ignoring y
xhat = 0;
% counting errors
nErr    = size(find(xhat-x),2);
errProb = nErr/N
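For readers without Matlab, an equivalent Monte Carlo sketch in Python (an assumed translation of the script above, not from the original post):

```python
import random

random.seed(1)
N = 10**5

# source bits with p(x=1) = 1/10
x = [1 if random.random() < 1/10 else 0 for _ in range(N)]
# binary symmetric channel: flip each bit with crossover probability 1/8
y = [xi ^ (random.random() < 1/8) for xi in x]
# optimum detector from the analysis: always decide 0 (y is ignored)
xhat = 0
err_prob = sum(xi != xhat for xi in x) / N
print(err_prob)  # approximately 0.1
```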

Based on the above, the right choice is (D) 1/10.

## References

[1] Wiki entry on Binary Symmetric Channel: http://en.wikipedia.org/wiki/Binary_symmetric_channel

[2] Chapter 1 of Prof. John M. Cioffi’s book: http://www.stanford.edu/group/cioffi/book/chap1.pdf

[3] Posterior probability: http://en.wikipedia.org/wiki/Posterior_probability

[4] Bayes’ theorem: http://en.wikipedia.org/wiki/Bayes

[5] Digital Communications, Fourth Edition, John G. Proakis (Amazon.com)

[6] Digital Communications, Fourth Edition, John G. Proakis (Flipkart.com)

[7] GATE Examination Question Papers [Previous Years], Indian Institute of Technology, Madras: http://gate.iitm.ac.in/gateqps/2012/ec.pdf

URL to article: http://www.dsplog.com/2012/11/03/gate-2012-ece-q38-communication/