
[ecc] Turbo decoder problem



I will be grateful if someone can help me get over this problem.

I have implemented the well-known max-log-MAP parallel concatenated 
RSC convolutional turbo decoder in C. As usual, the extrinsic 
information (LLR) is passed between the two constituent decoders, and 
the a priori + intrinsic + extrinsic information is used for decoding. 

The initial conditions of alpha and beta for both decoders are correctly 
implemented, and the last state of the frame is forced to zero for the 
1st decoder. I have taken a frame size of 10000. I haven't normalised any 
of the LLR, alpha, beta or gamma metrics, because normalisation affects 
the max* operation. Also, while implementing max*(a,b), for computational 
purposes I used the approximation max*(a,b) = b if b > a and (b-a) > 30, 
so that the exp term is evaluated only with a negative argument and only 
when the difference between the metrics is small.

Now the problem: with iterations, the bit error rate decreases to zero 
and then increases again to about 6 times the initial value and oscillates 
there, i.e., after zero BER is reached, if iterations are continued the 
BER worsens.

For a frame of all-zero bits or all-one bits the decoder behaves 
properly, i.e., with iterations the bit error rate decreases, goes to 
zero and stays there. However, the problem is with frames containing both 
zeros and ones. For these frames, if no Gaussian noise is added during 
transmission, the BER is zero in the 1st iteration, as expected, but the 
errors increase to ~250 and oscillate there for higher iteration numbers. 
When Gaussian noise is added, for a 0 dB signal the frame initially has 
some 60-150 errors. Until the BER goes to zero the decoder behaves 
properly, i.e., the BER decreases rapidly with iteration number and 
touches zero at the 2nd or 3rd iteration. But it starts getting worse if 
we continue iterating, and the errors go up to 250-300. 

I have noticed that limiting the LLRs decreases the number of these 
unexpected errors. Decreasing the frame length also decreases them. 
Because of the absence of any normalisation, the alphas and betas grow to 
very high values at the end of the frame. The LLRs too are increasing 
with iterations, reaching 50-100 after 4-6 iterations, so that the LLR 
term dominates the gamma calculation.
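
In case it is relevant: since max*(a-c, b-c) = max*(a,b) - c, subtracting the same constant from every state metric at a trellis stage only shifts all the metrics equally and leaves every LLR difference unchanged. So a per-stage normalisation like the following sketch should be safe against the blow-up described above. NUM_STATES and the alpha array are illustrative names, not from my code:

```c
#define NUM_STATES 8

/* Subtract the per-stage maximum from all state metrics.  In the log
 * domain this removes a common constant, so LLR differences are
 * preserved while the metrics stay bounded (max becomes 0). */
void normalize_stage(double alpha[NUM_STATES])
{
    double m = alpha[0];
    for (int s = 1; s < NUM_STATES; s++)
        if (alpha[s] > m)
            m = alpha[s];
    for (int s = 0; s < NUM_STATES; s++)
        alpha[s] -= m;
}
```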

What might be the mistake in my program? I believe my decoding is right, 
since for zero noise, zero BER is obtained in the 1st iteration 
consistently, and all-zero and all-one frames are decoded properly. 
I have lost a lot of time on this problem. It would be very kind of you 
if someone could help me out with it.

Thanking you, 
Yours sincerely
J.Srivatsava.
--
To unsubscribe from ecc mailing list please visit http://www.opencores.org/mailinglists.shtml