Viterbi with finite survivor state memory

In the posts on the Viterbi decoder and the soft input Viterbi decoder, we discussed a convolutional encoding scheme with rate 1/2, constraint length K = 3 and generator polynomials G1 = 111 and G0 = 101 (7 and 5 in octal). If the number of uncoded bits is N, then, including the two tail bits used to flush the encoder, the number of coded bits at the output of the convolutional encoder is 2(N+2). Decoding the convolutionally encoded bits by the Viterbi algorithm consists of the following steps.
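As a quick illustration, here is a minimal Octave sketch of such an encoder. It assumes the generator polynomials 111 and 101 and two tail zeros to flush the encoder back to state 00; the variable names are illustrative and not taken from the downloadable scripts.

  % rate 1/2, K = 3 convolutional encoder (sketch)
  N  = 100;                           % number of uncoded bits (example)
  ip = double(rand(1, N) > 0.5);      % random input bits
  ipTail = [ip zeros(1, 2)];          % K-1 = 2 tail zeros drive the encoder to state 00
  g1 = [1 1 1];                       % generator polynomial 111 (7 in octal)
  g0 = [1 0 1];                       % generator polynomial 101 (5 in octal)
  c1 = mod(conv(ipTail, g1), 2);      % first coded bit stream
  c0 = mod(conv(ipTail, g0), 2);      % second coded bit stream
  c1 = c1(1:N+2); c0 = c0(1:N+2);     % one output pair per input bit
  cip = zeros(1, 2*(N+2));
  cip(1:2:end) = c1;                  % interleave: two coded bits per uncoded bit,
  cip(2:2:end) = c0;                  % total length 2(N+2)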

1. For each group of two received bits, compute the Hamming distance or the Euclidean distance (for hard decision or soft decision decoding respectively)

2. For each of the 4 states, compute the path metric, knowing the following.

  • Each state can be reached from only two previous states (out of the 4 possible states).
  • For each of the two possible transitions, add the path metric of the previous state to the Hamming/Euclidean branch metric of that transition
  • Select the smaller of the two sums and store it as the path metric for the current state

3. Store the survivor predecessor state for each state.

4. Once we have processed all the received coded bits, we know that the convolutional encoder has reached state 00. We start the traceback from state 00 and, given that we know the current and previous state at each step, the input bits can be decoded. A sketch of these four steps follows.
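The four steps above can be sketched in Octave as follows. The sketch assumes hard decision decoding, the 4-state trellis of the code above, the interleaved coded vector cip from the encoder sketch (noiseless, for brevity) and a full survivor matrix; it is meant to illustrate the flow rather than reproduce the downloadable scripts.

  % state = [b(k-1) b(k-2)], i.e. the two previous input bits (4 states)
  nStates = 4;
  nextSt  = zeros(nStates, 2);        % next state for input bit 0/1
  outBits = zeros(nStates, 2, 2);     % coded bit pair for each (state, input)
  for st = 0:nStates-1
    b1 = bitand(bitshift(st, -1), 1); % b(k-1)
    b2 = bitand(st, 1);               % b(k-2)
    for b = 0:1
      nextSt(st+1, b+1)     = 2*b + b1;                       % new state [b b(k-1)]
      outBits(st+1, b+1, :) = [mod(b+b1+b2, 2) mod(b+b2, 2)]; % g1 = 111, g0 = 101
    end
  end

  rx = reshape(cip, 2, []).';         % one row of two received bits per stage
  nStages = size(rx, 1);
  pm   = [0; Inf; Inf; Inf];          % path metrics, encoder starts in state 00
  surv = zeros(nStates, nStages);     % survivor predecessor state per state/stage

  for k = 1:nStages
    newPm = Inf(nStates, 1);
    for st = 0:nStates-1
      for b = 0:1
        ns   = nextSt(st+1, b+1);
        bm   = sum(rx(k, :) ~= squeeze(outBits(st+1, b+1, :)).'); % step 1: Hamming distance
        cand = pm(st+1) + bm;         % step 2: previous path metric + branch metric
        if cand < newPm(ns+1)         % add-compare-select
          newPm(ns+1)   = cand;
          surv(ns+1, k) = st;         % step 3: store survivor predecessor
        end
      end
    end
    pm = newPm;
  end

  % step 4: traceback from state 00 (the tail bits force the encoder there)
  st = 0; dec = zeros(1, nStages);
  for k = nStages:-1:1
    dec(k) = bitshift(st, -1);        % input bit is the MSB of the current state
    st = surv(st+1, k);
  end
  ipHat = dec(1:end-2);               % drop the two tail bits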

However, when the number of input uncoded bits N is large, the memory requirement for a survivor state matrix of dimension 4 x (N+2) (states x time instants) becomes impractical to manage.

Traceback with finite survivor memory

To be able to decode with finite survivor memory, let us say we need to start the traceback at some time instant T_b + T_d, where T_d is the decoding depth and T_b is the traceback depth.

The sequence of events is as follows:

  • At time instant T_b + T_d, we start the traceback and continue tracing back through the survivor path T_b times. Once we are done with these T_b traceback steps, we start estimating the decoded bits (for time instants T_d down to 1), knowing the current state and the previous state at each step.
  • Similarly, we again start the traceback at time T_b + 2T_d, do the traceback T_b times and then decode the bits from T_d + 1 to 2T_d, and so on (a sketch of this scheduling follows the figure below).
  • Once we reach the end of the input sequence at time instant N + 2, we know that the trellis has converged to state 00, and we then do the final decoding with a traceback starting from state 00.


Figure: Viterbi traceback with finite survivor memory
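The scheduling in the figure can be sketched in Octave as follows; this is a sketch under the assumptions of the previous snippet (it reuses surv and nStages) and, for simplicity, always starts the traceback from state 00 (the alternative of starting from the minimum path metric state is discussed below). In an actual finite-memory implementation only the most recent stages would sit in a circular buffer; the full matrix is indexed here only to show the schedule. Tb and Td are illustrative values, not taken from the downloadable scripts.

  Tb = 15; Td = 15;                   % traceback depth and decoding depth (example)
  dec2 = zeros(1, nStages);
  k = Tb + Td;                        % first traceback instant
  while k <= nStages
    st = 0;                           % start state for this traceback (state 00)
    for j = k:-1:k-Tb+1               % Tb traceback-only steps (no bits decoded)
      st = surv(st+1, j);
    end
    for j = k-Tb:-1:k-Tb-Td+1         % next Td steps: trace back and decode
      dec2(j) = bitshift(st, -1);     % input bit is the MSB of the current state
      st = surv(st+1, j);
    end
    k = k + Td;                       % next traceback starts Td stages later
  end
  st = 0;                             % at stage N+2 the trellis is in state 00
  for j = nStages:-1:k-Td-Tb+1        % final traceback decodes the remaining bits
    dec2(j) = bitshift(st, -1);
    st = surv(st+1, j);
  end
  ipHat2 = dec2(1:end-2);             % drop the two tail bits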

Given that the traceback speed is equal to the speed at which the incoming coded bits are stored, another T_b + T_d trellis stages arrive while one traceback-and-decode pass of T_b + T_d steps is in progress; hence the above algorithm can be implemented with a survivor memory on the order of 2(T_b + T_d) trellis stages, independent of the number of uncoded bits N.
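As a back-of-the-envelope check of that figure, taking the memory as roughly 2(T_b + T_d) stages with a 2-bit predecessor label per state per stage (the exact count depends on the survivor memory architecture, which, as noted below, is not explored here):

  Tb = 15; Td = 15;                       % example depths
  memStages    = 2*(Tb + Td);             % stages held simultaneously = 60
  bitsPerStage = 4*2;                     % 4 states x 2 bits per predecessor label
  memFinite    = memStages*bitsPerStage   % = 480 bits, independent of N
  N = 100;                                % for comparison: full survivor matrix
  memFull      = 4*(N+2)*2                % grows linearly with N (816 bits here)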

Note: The literature on different architectures for survivor state management is vast and is not explored in this post.

Selecting start state for traceback

At time instant T_b + T_d (and at each subsequent traceback instant), there are multiple ways to select the state from which to start the traceback; a small sketch follows the list.

  • Always start from a defined state (for example state 00)
  • Start from the state which has the minimum path metric
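The second option only needs the path metric vector available at the traceback instant. A minimal sketch, assuming the 4x1 path metric vector pm from the decoder sketch above:

  [minPm, idx] = min(pm);     % state with the smallest path metric at this instant
  startState   = idx - 1;     % 0-based state label, e.g. 0 corresponds to state 00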

From simulations, we can identify the minimum value of the traceback depth T_b which results in performance comparable to the infinite survivor path memory case. The plot below shows the bit error rate versus traceback depth at Eb/N0 = 4dB, with the BER from the post on hard decision Viterbi decoding (infinite survivor memory) as the reference.


Figure: BER vs traceback depth at Eb/N0 = 4dB

From the simulations it has been observed that a traceback depth of around 15 ensures that the BER obtained with finite survivor state memory is close to the BER with infinite survivor state memory. Note that T_b = 15 corresponds to around 5 times the constraint length K = 3.

Additionally, as expected, starting the traceback from the state with the minimum path metric shows better performance than always starting at state 00.

Simulation results

Click here to download Matlab/Octave script for simulating bit error rate vs traceback for Eb/N0 = 4dB.

Click here to download the Matlab/Octave script for simulating BER for BPSK with convolutional coding, with hard/soft decision Viterbi decoding and with finite survivor state memory.

Warning: This simulation took around 10 hours to complete. I am thinking of moving to C code for simulations involving lots of for-loops.


Figure: BER for BPSK with convolutional coding with finite survivor state memory

Reference

Digital Communication: Third Edition, by John R. Barry, Edward A. Lee, David G. Messerschmitt

Digital Communications by John Proakis
