## Matlab UMTS/LTE Turbo Code

This Matlab code plots the BER, EXIT chart and iterative decoding trajectories for the UMTS and LTE turbo codes, when using BPSK modulation for transmission over an AWGN channel. Functions are provided to generate the UMTS and LTE interleavers, as well as to perform component encoding and decoding. Furthermore, functions are provided for generating a priori LLRs having a particular mutual information, as well as for measuring the mutual information of some LLRs.

**main_ber.m** draws a BER plot for the UMTS turbo code.

**main_exit.m** draws an EXIT chart for the UMTS turbo code.

**main_traj.m** plots the iterative decoding trajectories of the UMTS turbo code.

**get_UMTS_interleaver.m** provides a function for generating the UMTS interleaver.

**component_encoder.m** provides an encoder function for the UMTS component codes.

**component_decoder.m** provides a BCJR decoder function for the UMTS component codes.

**jac.m** provides a function for performing the exact, lookup-table-aided and approximate Jacobian logarithms.

**generate_llrs.m** provides a function for generating Gaussian distributed a priori LLRs.

**measure_mutual_information_histogram.m** measures the mutual information of some LLRs using the histogram method.

**measure_mutual_information_averaging.m** measures the mutual information of some LLRs using the averaging method.
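The three variants implemented by jac.m all evaluate the Jacobian logarithm ln(e^a + e^b). As an illustrative sketch (in Python rather than Matlab, with hypothetical function names and table parameters):

```python
import math

def jac_exact(a, b):
    # Exact Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def jac_approx(a, b):
    # Max-Log approximation: drop the correction term entirely
    return max(a, b)

def jac_lookup(a, b, table_step=0.5, table_max=5.0):
    # Lookup-table-aided variant: quantise |a - b| and use a precomputed
    # correction term (here computed on the fly for brevity)
    d = min(abs(a - b), table_max)
    d_q = round(d / table_step) * table_step
    return max(a, b) + math.log1p(math.exp(-d_q))
```

The exact version is what the log-domain BCJR uses to combine transition metrics; the approximate version trades a small BER loss for lower complexity.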

You can download the Matlab code here. You can also download the LTE interleaver and LTE puncturer. The operation of the UMTS turbo decoder is described in Section 2.2 of Liang Li’s nine month report and this document.

Copyright © 2010 Robert G. Maunder. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

August 12th, 2010 at 8:29 am

Dear Rob,

In your main_exit.m file in the Matlab UMTS turbo code, the EXIT chart is based on Figure 2.13. Does this mean that the EXIT chart is plotted for only one decoder? I understand that by swapping the axes we can display the second curve, since the output of one decoder is the input of the second. I just wonder whether we need to repeat the process of Figure 2.13 in Figure 2.11. If not, the function of the interleaver or deinterleaver is not considered, am I right?

August 12th, 2010 at 8:34 am

Hello again Annena,

In an EXIT chart, each EXIT function depends only on one of the decoders - it is completely independent of the other decoder and of the interleaver. Since the upper and lower codes in a UMTS turbo code are identical, their EXIT functions are also identical. For this reason, there is no need to run two simulations. Instead, you can just take the EXIT function for the upper code and invert its axes to get the EXIT function of the lower code. Please note that while the EXIT functions are completely independent of the interleaver, the iterative decoding trajectories do depend on the interleaver design. The trajectory will only match the EXIT functions if the interleaver is able to randomise the order of the LLRs sufficiently well.

Hope this helps, Rob.

August 12th, 2010 at 8:39 am

Dear Rob,

Thank you so much, now I clearly understand. I think I need to go back and check the main_traj.m file first.

August 13th, 2010 at 7:35 am

Hi Rob,

May I know, do you consider channel reliability (Lc) in your BCJR decoding?

Annena

August 13th, 2010 at 8:47 am

Hi again,

In case I want to use your component_decoder.m function for a different generator, what modifications should I consider? Is it OK to change only the trellis matrix?

Annena

August 13th, 2010 at 9:22 am

Hi Annena,

In the main_ber.m, main_exit.m and main_traj.m files, I use lines like the following to perform soft BPSK demodulation…

a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

If you expand the squares, you can rearrange this to a form including the channel reliability…

a_c = Lc*real(a_rx);

… where…

Lc = 4/N0;

If you want to change the generator polynomials, all you need to do is change the transitions matrix in component_decoder.m and the equations in component_encoder.m.

Hope this helps, Rob.
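As a quick numeric sanity check of that expansion (a Python sketch with an arbitrary example sample; expanding the squares gives |a+1|² − |a−1|² = 4·Re(a)):

```python
N0 = 0.5           # example noise power spectral density
a_rx = 0.8 - 0.3j  # example received BPSK sample

# Soft demodulation as written in main_ber.m
a_c = (abs(a_rx + 1)**2 - abs(a_rx - 1)**2) / N0

# Expanding the squares: |a+1|^2 - |a-1|^2 = 4*Re(a),
# giving the channel-reliability form with Lc = 4/N0
Lc = 4 / N0
a_c_alt = Lc * a_rx.real
```

Both expressions produce the same LLR for any received sample, which is why the two forms are interchangeable.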

August 18th, 2010 at 2:51 am

Hello Rob,

I want to ask about the main_exit.m file, on the encoder part. The data is transmitted using rate 1/2. In the file, you are transmitting the systematic bits and the parity bits from the first encoder only, right? This means that you did not apply any puncturing in the encoder.

Please correct me if I have misunderstood your program.

TQ.

August 18th, 2010 at 7:53 am

Hi Annena,

You are correct. Only the information that is used by the upper decoder is transmitted in this simulation. This allows the upper decoder to be characterised in isolation from the lower decoder. The simulation required to characterise the lower decoder is identical to this one. That is why the results are plotted twice, once for the upper decoder and inverted for the lower decoder. By the way, the UMTS turbo code does not use puncturing.

Take care, Rob.

August 19th, 2010 at 3:03 am

Hi,

Tq so much for your help.

I want to know,

1. If I run the main_exit.m file using my encoder and decoder, and the result decreases for an increasing IA_index, what does that mean? I believe there must be something wrong with the decoder part of my program, but does it indicate anything in particular?

2. Referring to the standard parallel concatenated code in Stephan ten Brink's paper, Fig. 1, the first BCJR decoder requires inputs of Z1 (systematic and p1) and A1.

a) If I want to use your main_exit.m file, does it mean that A1 = a_a?

b) The length of Z1 is equal to the length of a_c + c_c + e_c (termination bits). Since a_a is generated from a, which is without the termination bits, the length of a_a is not equal to that of Z1. If A1 = a_a, can I just append zero bits to a_a so that it has the same length as Z1?

I’ll really appreciate your help.

August 19th, 2010 at 9:32 am

Hi Annena,

1. It sounds like there is a problem with either your encoder or your decoder. You can confirm this by comparing the EXIT functions that you get using the averaging method of measuring mutual information with the EXIT functions you get using the histogram method. If these EXIT functions are significantly different then it implies that there is a bug in your encoder, modulator, demodulator or your decoder.

2a. That’s right. Here, I’m assuming that you mean this paper…

http://scholar.google.co.uk/scholar?cluster=15764897033271139190&hl=en&as_sdt=2000

2b. Note that the length of A1 is not equal to the length of Z1 in that paper - it is half the length of Z1. The operation of main_exit, main_ber and main_traj is illustrated in Figures 2.11 and 2.13 of…

http://users.ecs.soton.ac.uk/rm/wp-content/liang_li_nine_month_report.pdf

Here, if the interleaver has a length of N bits, then a_c has a length of N, c_c and d_c have lengths of N+3, while e_c and f_c have lengths of 3. Instead of zeros as you suggest, e_c is concatenated onto a_a+a_c in order to give y_a, which has the same length as c_c of N+3.

Hope this helps, Rob.

August 23rd, 2010 at 2:02 am

Hi Rob,

Thank you so much for your help. I feel so happy because I have finally managed to plot my first EXIT chart using my own encoder and decoder.

Now, I’m learning to plot the trajectory part.

1) May I know why the main_traj.m file plots trajectories for different frames? Should we just average the values so that we get a single smooth trajectory plot?

2) From the trajectory figure, can we plot the EXIT curve by taking the peak of each trajectory point, or do we need to map it onto the figure from main_exit.m?

Really appreciate your help.

Regards,

Annena

August 23rd, 2010 at 9:00 am

Hi Annena,

1) You can average the trajectories, but you sometimes find that the result doesn’t give a very good match with the EXIT functions. Also, if you take the average of the trajectories, you don’t get a feel for how much variation there is from frame to frame. It is this variation that explains why short frame lengths give a poorer performance than longer frames.

2) You can do this, but the result will typically not be very smooth and will also depend on your interleaver design. Furthermore, the whole point of EXIT chart analysis is that it can be done without requiring lengthy iterative decoding simulations.

Hope this helps, Rob.

August 27th, 2010 at 12:35 am

Hi Rob,

I noticed that your BPSK modulator maps the bit 1 to -1 and the bit 0 to +1. In your component_decoder.m file, extrinsic_uncoded_llrs(bit_index) = prob0-prob1; does this mean that prob0 is for the symbol +1 and prob1 is for the symbol -1? In case I modulate the other way around, 1 → +1 and 0 → -1, can I still use your decoder file?

I appreciate your advice.

August 27th, 2010 at 8:48 am

Hi Annena,

I find that this causes a lot of confusion for people. It all comes down to whether LLRs are defined as LLR=ln(P0/P1) or LLR=ln(P1/P0). J Hagenauer uses the former definition and C Berrou uses the latter one. My code uses the definition LLR=ln(P0/P1). If another part of your system uses the other definition (such as your BPSK demodulator), all you need to do is multiply your LLRs by -1 when you pass them from one part of the system to the other.

Hope this helps, Rob.
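The sign flip between the two conventions can be seen directly (a Python sketch with example probabilities):

```python
import math

p0, p1 = 0.8, 0.2  # example bit probabilities

llr_hagenauer = math.log(p0 / p1)  # LLR = ln(P0/P1), the convention used in this code
llr_berrou = math.log(p1 / p0)     # LLR = ln(P1/P0)

# Interfacing two components that disagree on the convention only needs
# a multiplication by -1 when LLRs are passed between them
converted = -llr_hagenauer
```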

November 11th, 2010 at 8:29 am

Hi Rob,

I have developed my own encoder file in which only my first encoder is terminated and the second is not. To develop the decoder, how should I modify your component_decoder.m file?

Please advise.

TQ

November 11th, 2010 at 12:13 pm

Hi Mila,

All you need to do for the unterminated decoder is change the line…

betas(1,length(apriori_uncoded_llrs))=0;

…to…

betas(:,length(apriori_uncoded_llrs))=0;

Take care, Rob.
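The effect of that one-line change is to make every final trellis state equally likely, rather than pinning the decoder to the known termination state. A Python sketch of the two initialisations (array dimensions are illustrative):

```python
NUM_STATES, NUM_BITS = 8, 40   # the UMTS component code has 8 states
NEG_INF = float("-inf")

# betas[state][bit] initialised to -inf, as in component_decoder.m
betas = [[NEG_INF] * NUM_BITS for _ in range(NUM_STATES)]

terminated = False
if terminated:
    # Trellis termination: the encoder is known to end in state 1, so only
    # that state gets log-probability 0   (betas(1,end) = 0 in MATLAB)
    betas[0][NUM_BITS - 1] = 0.0
else:
    # Unterminated: every final state is equally likely, so all states
    # get log-probability 0               (betas(:,end) = 0 in MATLAB)
    for state in range(NUM_STATES):
        betas[state][NUM_BITS - 1] = 0.0
```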

November 22nd, 2010 at 2:46 am

Hi Rob,

Thank you for your response.

May I know, is it possible to plot the EXIT chart using your functions when the first decoder is terminated and the second is not?

As I understand it, for a parallel turbo code we use only one decoder to plot the EXIT chart.

Please advise

TQ

November 28th, 2010 at 5:35 am

Hi Mila,

You should find that termination makes only a negligible difference to the EXIT function, particularly if you have a long interleaver. My advice would be to simply mirror one or other of the EXIT functions - either the terminated one or the unterminated one.

Take care, Rob.

February 14th, 2011 at 8:40 pm

Hi Rob,

1. Thank you so much for your code. It is really hard to find something so helpful about the BCJR algorithm in MATLAB on the Internet.

2. I have a question. I want to implement a convolutional code decoder with the BCJR based on your code, but in my scenario the rate of the code is different, e.g. 2/3.

(1) Are the equations in Liang Li’s report still applicable?

(2) I know I need to change the “transitions” matrix; for the “UncodedBit” and “EncodedBit” columns, do I need to expand them into several binary columns, or simply use decimal representations of the bits, such as 7 for 111?

(3) Is there anything else I need to take care of?

Thank you very much for your help.

February 16th, 2011 at 8:59 pm

Hello Lai,

The equations in Liang Li’s report are still applicable, but they need to be generalized. You can see generalized versions at…

http://users.ecs.soton.ac.uk/rm/wp-content/LogBCJR.pdf

You need to insert an additional column into the transitions matrix for each additional input and output bit. For example, a k=2, n=3 convolutional code would have a transitions matrix with seven columns: two for the states, two for the input bits and three for the output bits. You also need five sets of gamma calculations, one for each input and output bit. Finally, you need two sets of delta and extrinsic LLR calculations, one for each input bit.

Hope this helps, Rob.

February 23rd, 2011 at 10:04 am

Somebody has asked me how to concatenate the UMTS turbo decoder with an additional inner decoder, such as a SISO MIMO detector. I have drawn a schematic for this and uploaded it to…

http://users.ecs.soton.ac.uk/rm/wp-content/concat.png

Take care, Rob.

June 15th, 2011 at 7:31 pm

Hi Rob,

Nice algorithm; I’ve been using it to run some tests within a larger system. However, ECC is not my domain, and I need to obtain the LLRs of the coded bits (lc+ld+le+lf), and not just the LLRs of the data bits (la), at the end of the turbo decoding step. How may I achieve this? Thanks in advance.

June 16th, 2011 at 9:26 am

Hi José,

This is not too difficult to achieve. You just need to modify component_decoder.m so that it has an additional output called extrinsic_encoded_llrs. You can generate this using the following lines of code…

% Calculate the encoded extrinsic transition log-confidences. This is similar to
% Equation 2.18 in Liang Li's nine month report or Equation 4 in the BCJR paper.
deltas2 = zeros(size(transitions,1),length(apriori_encoded_llrs));
for bit_index = 1:length(apriori_encoded_llrs)
    for transition_index = 1:size(transitions,1)
        deltas2(transition_index, bit_index) = alphas(transitions(transition_index,1),bit_index) + uncoded_gammas(transition_index, bit_index) + betas(transitions(transition_index,2),bit_index);
    end
end

% Calculate the encoded extrinsic LLRs. This is similar to Equation 2.19 in
% Liang Li's nine month report.
extrinsic_encoded_llrs = zeros(1,length(apriori_encoded_llrs));
for bit_index = 1:length(apriori_encoded_llrs)
    prob0 = -inf;
    prob1 = -inf;
    for transition_index = 1:size(transitions,1)
        if transitions(transition_index,4) == 0
            prob0 = jac(prob0, deltas2(transition_index,bit_index));
        else
            prob1 = jac(prob1, deltas2(transition_index,bit_index));
        end
    end
    extrinsic_encoded_llrs(bit_index) = prob0 - prob1;
end

June 16th, 2011 at 8:58 pm

Hi Rob,

Thanks for your help, it works just fine. Much appreciated.

September 20th, 2011 at 3:48 pm

Dear Rob

Can you please tell me something about turbo codes: why is it necessary that the information fed into the encoder be as uncorrelated as possible?

What if it is correlated; apart from the turbo decoding, what else does it affect? Can you please explain?

thanks

swap

September 20th, 2011 at 4:33 pm

Hi Swap,

If the occurrence of two events A and B is correlated then their joint probability is given by P(A,B) = P(A|B)*P(B). If they are uncorrelated then the joint probability is given by P(A,B) = P(A)*P(B). The second situation is much easier to work with because we only need one P(A) value for each outcome of A. By contrast, the first situation requires a P(A|B) value for each possible combination of outcomes of A and B. To save on complexity, the BCJR assumes that P(A,B) = P(A)*P(B), i.e. that the LLRs are uncorrelated. If they are correlated, then this assumption is false and the performance of the BCJR suffers.

Take care, Rob.

October 30th, 2011 at 2:32 am

Hi Rob,

First of all, thanks for the code; it’s really helpful. For the component turbo code, you have separated the information, parity and termination bits when inputting them to the channel. I wonder whether the result would be the same if they were combined into one sequence before being input to the channel?

The reason I ask is that I’m not sure whether the randn function in Matlab is statistically independent when called for the second and subsequent times. Assuming it is independent, then in your case the AWGN noise is independent for each of the systematic, parity and termination sequences (a_rx, c_rx, d_rx, e_rx and f_rx)?

Looking forward for your explanation.

Many Thanks,

Mory

October 31st, 2011 at 9:52 am

Hi Mory,

I kept the different bit sequences separate because it is very easy to make a mistake when programming their concatenation and de-concatenation. If they were concatenated, all the results would be exactly the same.

Take care, Rob.

November 3rd, 2011 at 10:08 pm

Hello,

Thanks for your great work & handy reference. I think I found a tiny typo/bug in the code: in main_exit.m, the value 0.3037 should be 0.3073 ( from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1499068 ).

Regards,

-Colin

November 4th, 2011 at 10:03 am

Hi Colin,

Thank you for this. However, I’m not sure where in my code or the paper you mean - I can’t find any values of 0.3037 or 0.3073 in my code or that paper.

Take care, Rob.

November 8th, 2011 at 4:03 am

oops… I meant generate_llrs.m

November 8th, 2011 at 11:34 am

Hi Colin,

Thanks for noticing this typo - I have updated all the occurrences of this mistake in the code on my website (I think). This shouldn’t make any difference to the results though, because of the while loop in generate_llrs.m.

Take care, Rob.

March 6th, 2012 at 3:57 am

Dear Rob,

I have some questions about component_decoder.m.

Why are there two parts, uncoded_gammas and encoded_gammas, while in bcjr_decoder there is just gammas? And why is uncoded_gammas used in the computation of the alphas and betas, but not the deltas? By the way, in the computation of the alphas and betas, why is bit_index-1 used? (encoded_gammas(transition_index, bit_index-1)) Many thanks.

March 6th, 2012 at 6:37 am

alphas(transitions(transition_index,2),bit_index) = jac(alphas(transitions(transition_index,2),bit_index),alphas(transitions(transition_index,1),bit_index-1) + uncoded_gammas(transition_index, bit_index-1) + encoded_gammas(transition_index, bit_index-1));

Based on Equation 5 in the BCJR paper, it should be uncoded_gammas(transition_index, bit_index) + encoded_gammas(transition_index, bit_index). Is that right?

But I don’t understand why just the encoded_gammas is used to compute the deltas in component_decoder, while both encoded_gammas and uncoded_gammas are used in bcjr. Waiting for your answer. Thanks a million.

March 6th, 2012 at 8:09 am

Dear Rob,

In main_ber :a_p = a_a + a_c + a_e;

In main_traj :a_p = a_a + a_e;

So why are they not the same?

March 6th, 2012 at 11:21 am

Hi Lingjun,

This is the difference between a BCJR decoder that generates extrinsic LLRs and one that generates a posteriori LLRs. A decoder that generates a posteriori LLRs has the advantage of not needing to make a distinction between uncoded_gammas and encoded_gammas, therefore giving a lower complexity. However, there is a small disadvantage, which relates to the fact that the extrinsic LLRs must then be calculated by subtracting the a priori LLRs from the a posteriori LLRs - if the a priori and a posteriori LLRs are both infinite in magnitude, then subtracting one from the other gives an undetermined result. This is a very minor disadvantage, so in practice decoders that generate a posteriori LLRs are preferred, owing to their lower complexity.

By the way, the line a_p = a_a + a_e; in main_traj.m should say a_p = a_a + a_c + a_e; I have fixed this bug and uploaded a new version of the code.

Take care, Rob.
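The subtraction, and the corner case it suffers from, can be illustrated numerically (a Python sketch with example LLR values):

```python
import math

apriori = 2.0
extrinsic = 1.5
aposteriori = apriori + extrinsic  # a posteriori = a priori + extrinsic

# A decoder that outputs a posteriori LLRs recovers the extrinsic part
# by subtracting the a priori LLR afterwards
recovered = aposteriori - apriori

# The corner case: if both LLRs are infinite in magnitude, the
# subtraction is undetermined (inf - inf is NaN in IEEE arithmetic)
undetermined = math.inf - math.inf
```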

April 12th, 2012 at 12:12 am

Dear Rob,

I want to implement puncturing of an LDPC code over two parallel (AWGN) channels. I understand the concept of puncturing, but I am having difficulty implementing it.

Could you please suggest any help in this matter?

April 12th, 2012 at 9:12 am

Hi Ideal,

To puncture a bit sequence, you just remove some of the bits in the transmitter. To perform the corresponding depuncturing operation in the receiver, you just replace the punctured bits with zero-valued LLRs.

Take care, Rob.

April 12th, 2012 at 12:29 pm

Hi Rob,

That is possible for a single channel. Actually, I want to implement the puncturing over two channels (parallel optical and RF channels).

I think there should be a way to generate a puncturing pattern using a single encoder/decoder.

I can assume channel state information at the transmitter and puncture the codeword accordingly, but what the ensemble can be, and how to send the punctured bits over the different channels, is a challenge for me. That’s why I am looking for a technique which can give me insight into the puncturing. Thanks.

April 12th, 2012 at 12:31 pm

Hi Ideal,

I think that you are entering new ground with what you are talking about and so I think that you will have to investigate how best to do the puncturing yourself.

Take care, Rob.

April 13th, 2012 at 2:12 am

Hi Rob,

I have got a few ideas, but one thing:

If I puncture some bits at the transmitter side, then for the depuncturing at the receiver side, how can I add zeros at the exact locations? I.e.

let x = [1 0 1 1 1 1 0 1] be a codeword, and I puncture either random or a few bits from x.

With a few bits punctured, xp = [p 0 1 1 1 p 0 p], meaning I punctured the 1st, 6th and 8th bits; how does the receiver know about this? I mean, in Matlab, how can I write a command for this?

Secondly, if I have a half-rate (r = 0.5) codeword of length 1024 and I puncture 171 bits randomly out of the 1024 to obtain a rate of rp = 0.6, how can I introduce zeros at the particular positions of the punctured codeword?

The simple question is: for random puncturing, how can I write a command in Matlab which punctures the bits at the transmitter and adds zeros at those (punctured) locations at the receiver side? Please send me this type of Matlab script. Thanks.

April 13th, 2012 at 9:39 am

Hi Ideal,

The receiver needs to have knowledge of the puncturing pattern. This code does everything you are talking about - it also interleaves the bits, which is useful in a Rayleigh fading channel.

temp = randperm(1024);
pattern = temp(1:853);

% In the transmitter
punctured_bits = bits(pattern);

% In the receiver
llrs = zeros(size(bits));
llrs(pattern) = punctured_llrs;

Take care, Rob.

April 15th, 2012 at 6:30 am

Dear Rob,

I really very much appreciate your help.

One thing more, please:

1. Are the bits the coded bits after the encoder?

2. In the line llrs(pattern) = punctured_llrs; you didn’t define punctured_llrs. I think punctured_llrs = (2/sigma^2)*(punctured_bits + noise); for an AWGN channel?

Am I right in the above two questions?

Again, I really appreciate your help. Thanks.

April 16th, 2012 at 9:05 am

Hi Ideal,

Both of your assumptions are correct.

Take care, Rob.

April 17th, 2012 at 4:05 am

Dear Rob,

Thanks for the answer. One more question, please:

I just want to understand the concept of capacity-approaching low-density parity-check codes (some researchers calculate the bit error rate curve for a certain LDPC code and claim that it is a capacity-approaching code).

Do you have any suggestions/advice about this?

Thanks for that.

April 17th, 2012 at 10:16 am

Hi Ideal,

You should look into Irregular LDPC codes - these can operate extremely closely to the Shannon capacity.

Take care, Rob.

April 18th, 2012 at 3:23 am

Dear Rob,

Currently I am interested in low-rate, regular-structure LDPC codes, and then introducing puncturing to increase the code rate, using a simple BPSK channel.

But I am confused about the threshold of the code and the capacity. I don’t understand how to calculate the threshold of the code, how to prove that it is a capacity-approaching code for the simple BPSK AWGN channel, or whether even the punctured code with a higher rate performs well. These are the questions I have in mind, please.

Thanks for the response

April 18th, 2012 at 8:30 am

Hi Ideal,

You can draw the EXIT chart of your LDPC codes and find the lowest SNR at which the EXIT chart tunnel remains open. Then, you can compare this SNR with that at which the capacity of the channel becomes equal to the throughput of your LDPC code. If the SNRs are similar, then your code is a near-capacity code.

Take care, Rob.

April 18th, 2012 at 11:55 pm

Dear Rob,

Thanks, now I understand the threshold.

What is the relation between the throughput of an LDPC code and the channel capacity (sorry for the silly question)?

As I understand it, throughput = symbol rate x capacity, but I am not sure how to calculate the throughput of an LDPC code and compare it with the channel capacity.

I can calculate the EXIT chart and the BPSK channel capacity, but I am not sure how to compare the throughput of the LDPC code with the channel capacity at a particular SNR.

Thanks for your help.

April 19th, 2012 at 8:57 am

Hi Ideal,

The throughput is measured in bits per symbol (or equivalently bits per second per hertz) - it is given by R*log2(M), where R is the LDPC coding rate and M is the number of constellation points in your modulation scheme, e.g. M=16 in 16QAM. The capacity is plotted at http://users.ecs.soton.ac.uk/rm/resources/matlabcapacity/

Take care, Rob.
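For example (a Python sketch of the R*log2(M) formula above):

```python
import math

def throughput(R, M):
    # Throughput in bits per symbol: coding rate times bits per constellation point
    return R * math.log2(M)

bpsk = throughput(0.5, 2)     # rate-1/2 BPSK: 0.5 bit/symbol
qam16 = throughput(0.75, 16)  # rate-3/4 16QAM: 3.0 bit/symbol
```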

April 20th, 2012 at 12:22 am

Dear Rob,

I understand the calculation of the capacity. Thanks.

I understand that the threshold is the value of SNR at which the check node curve and the variable node curve just touch.

Also, as we know, I_A = J(sigma), where sigma = sqrt(8*R*Es/N0)?

Sorry for not getting the point:

1. EXIT chart (SNR) = threshold (where the two curves touch).

2. Channel capacity (SNR(threshold)).

You said: "Then, you can compare this SNR (threshold) with that at which the capacity of the channel becomes equal to the throughput (R*log2(M)) of your LDPC code. If the SNRs are similar, then your code is a near-capacity code."

It means that for a half-rate code and BPSK modulation, the LDPC throughput will be 0.5. Then, for a near-capacity code, the capacity of the BPSK channel should be near 0.5 at SNR(threshold). I think I am right now.

April 20th, 2012 at 5:04 pm

Hi Ideal, You’ve got it. Rob.

April 22nd, 2012 at 12:49 am

Dear Rob,

Can you please tell me about any code for generating the LDPC parity check matrix (H) and the corresponding generator matrix (G)?

I know about the PEG algorithm for generating the H matrix, but I would like some other script if you know of one. Thanks.

April 23rd, 2012 at 9:00 am

Dear Rob,

Is there any easy way to calculate the decoder trajectory (stairs) in Matlab? Thanks.

April 23rd, 2012 at 11:28 am

Hi Ideal,

I have uploaded some code for generating G and H matrices to…

http://users.ecs.soton.ac.uk/rm/wp-content/generate_H.m

The trouble with this code is that it tries too hard to find matrices that have exactly the specified degrees, and so it takes a long time to run. You should probably modify it to make it quicker to run.

You can download main_traj.m from the top of this page - it shows you how to plot the trajectory.

Take care, Rob.

April 23rd, 2012 at 12:01 pm

Dear Rob,

It’s really a great help. I will look into the code. Thanks, dear Sir.

April 24th, 2012 at 8:02 pm

Hi Rob,

I’m trying to use your BCJR to decode my rate-1/3 turbo coded stream.

I don’t understand the way you’re calculating the “gammas” (Equation 2.14 in the nine month report). Specifically, how do you derive that (1-y(t)) .. formula, and why don’t you use the conditional pdf directly instead? Is this specific to the UMTS code?

Could you please explain?

Thanks,

Elnaz

April 25th, 2012 at 12:11 am

Hi Rob,

I have another question:

In calculating the “betas”, according to the report you cited, shouldn’t we calculate each beta using the gamma value of the same stage, i.e. following the backward recursive move? In your code you are using the gamma value for the next stage. Am I missing something here?

I see the same thing when you calculate the “deltas”, i.e. “alphas”, “gammas” and “betas” are all indexed with “bit_index”, but I think “betas” should be indexed by “bit_index”+1 (one stage ahead).

Thanks,

Elnaz

April 25th, 2012 at 1:08 am

Dear Rob,

I have tried to understand the code of main_traj.m, but couldn’t. I am using an LDPC code, and its trajectory will be the information exchange between the variable and check nodes. So I think there should be a simple program for calculating the trajectory without using the sum-product algorithm.

At the moment I am using the sum-product algorithm and calculating the trajectory based on the information exchanged as the SP algorithm runs. I was wondering whether the trajectory could be calculated without running the decoder. I am not sure if that is possible? Thanks.

April 25th, 2012 at 10:58 am

Hi Ideal,

If you have the EXIT chart then it is possible to predict the trajectory, without running the iterative decoder. You just need to manually draw a staircase that bounces between the two EXIT functions. The trouble is that this predicted trajectory will only be valid in the case of a very long block length. This approach will not let you consider the variation in the trajectory that results when using shorter block lengths.

Take care, Rob.

April 25th, 2012 at 11:05 am

Hi Elnaz,

Equation 2.14 is saying that if a transition is labelled with a zero, then the corresponding gamma should take on the value of the corresponding a priori LLR. If the transition is labelled with a one, then the corresponding gamma should take on the value of zero.

This approach has a slightly lower complexity than the one originally proposed for the Log-MAP. This says that if a transition is labelled with a zero, then the corresponding gamma should take on the value of the corresponding a priori LLR divided by 2. If the transition is labelled with a one, then the corresponding gamma should take on the value of the corresponding a priori LLR divided by -2.

Betas should be calculated using the gammas of the transitions to the right of the corresponding states. I think that the notation in the original explanation of the Log-MAP may be confusing you. It is because this notation is so confusing that we decided to explain things in this different way.

Take care, Rob.
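The two gamma conventions differ only by a constant that is common to every transition in a trellis stage, so they lead to the same LLRs. A numeric sketch (Python, with an example a priori LLR):

```python
La = 1.8  # example a priori LLR

# Convention used in this code: the LLR for a zero-labelled transition, 0 for a one
gamma0_a, gamma1_a = La, 0.0

# Convention in the original Log-MAP: +LLR/2 for a zero, -LLR/2 for a one
gamma0_b, gamma1_b = La / 2, -La / 2

# The two conventions differ by a constant (La/2) shared by both transitions,
# so the difference that the output LLRs depend on is identical
diff_a = gamma0_a - gamma1_a
diff_b = gamma0_b - gamma1_b
```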

April 25th, 2012 at 1:35 pm

Hi Rob,

Thanks for your explanation.

As for my first question, I don’t see you dividing the LLRs by 2? The way I explained it to myself is that you’re normalizing the LLRs with respect to the probability of the bit value being 1 or 0 (if you know what I mean).

Actually, this brings up another question, because with this normalization you have LLR = P(bit=1)/P(bit=0), but at the end of the program you calculate the extrinsic LLR with the P(bit=0)/P(bit=1) ratio?

Also, why are you omitting the “uncoded_gammas” when calculating the “deltas”?

Thank you for answering my questions,

Elnaz

April 25th, 2012 at 1:58 pm

Hi Elnaz,

There’s no need to divide by two if you do it the way I do here - this is because the absolute value of the gammas doesn’t matter; it’s the difference between the gammas associated with ones and zeros that matters.

I think that I’m using LLR=ln(p0/p1) throughout this code, but I’ll check this.

The reason why uncoded_gammas is omitted from the delta calculation is that this causes the algorithm to generate extrinsic LLRs directly, rather than a posteriori LLRs.

Take care, Rob.

April 25th, 2012 at 3:18 pm

Thanks Rob,

So for the “gammas” to take the value of the LLR when the input bit is zero, we should have P1 = 1/(1+exp(LLR)) and P0 = exp(LLR)/(1+exp(LLR)), which means the LLR is log(P1/P0). Because with this we’ll have log(P1) = -log(1+exp(LLR)) and log(P0) = LLR - log(1+exp(LLR)), where by omitting the common term we end up with what you do in the code. Is this correct?

Another point of confusion for me is where, in calculating the “deltas”, you’re using the “betas” of bit_index, while according to the textbooks we should use the next stage’s “betas” here. Shouldn’t we?

Thanks,

Elnaz

April 25th, 2012 at 3:25 pm

Hi Elnaz,

Actually, P1 = 1/(1+exp(LLR)) and P0= exp(LLR)/(1+exp(LLR)) implies that LLR = log(P0/P1), not LLR = log(P1/P0).

To put the calculation into words for you, the delta of a transition is equal to the sum of

* the encoded gamma of that transition,

* the alpha of the state that the left end of the transition connects to and

* the beta of the state that the right end of the transition connects to.

This is what the text books say, although they often use confusing notation to do it.

Take care, Rob.
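Put as a sketch (Python, with made-up log-domain values for a single transition), the delta calculation in words above is just a three-term sum:

```python
# Per-state forward and backward metrics for a toy two-state trellis stage
alphas = {1: -0.2, 2: -1.6}  # alpha of the state at the LEFT end of a transition
betas = {1: -0.9, 2: -0.4}   # beta of the state at the RIGHT end of a transition

def delta(from_state, to_state, encoded_gamma):
    # delta = alpha(left state) + encoded gamma(transition) + beta(right state)
    return alphas[from_state] + encoded_gamma + betas[to_state]

d = delta(1, 2, -0.5)  # -0.2 + (-0.5) + (-0.4)
```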

April 25th, 2012 at 4:29 pm

You’re right; maybe the original explanation of the Log-MAP is confusing me. I think there is only one gamma for each transition between two states; however, the direction of movement changes when calculating the “alphas” vs. the “betas”. For example, gamma 1 changes the state from 1 to 2. In calculating beta 1, I think we should do (beta of 2) x (gamma of 1), not (gamma of 2), because gamma 2 takes state 2 to state 3. But you’re using gamma 2 in the code!

Elnaz

April 25th, 2012 at 5:49 pm

Hi Elnaz,

I’m afraid I’m not sure what you mean here. You should note that the gamma and delta values correspond to particular transitions. By contrast, the alphas and betas correspond to particular states. I’d encourage you to look closer at the code - I’m quite confident that there is no mistake in it.

Take care, Rob.

April 25th, 2012 at 7:59 pm

Hi Rob,

Surely, the code is correct. I’m trying to resolve my points of confusion, which are partly about the way you’re indexing “gammas” in calculating “betas”. In my previous note I wrote out one of the iterations in your nested loops (where you calculate “betas”). So, for example, you get betas(…, bit_index=1) from betas(…, bit_index=2) (next state) and gammas(…, bit_index=2). My confusion is that I think it should be using gammas(…, bit_index=1) instead of gammas(…, bit_index=2), because it is gammas(…, bit_index=1) that takes us from stage 1 to stage 2. gammas(…, bit_index=2) will take us from stage 2 to stage 3.

Did I succeed explaining what I don’t understand?

Thanks,

Elnaz

April 25th, 2012 at 8:15 pm

Hi Elnaz,

I see what you mean now - you are looking at the following line…

deltas(transition_index, bit_index) = alphas(transitions(transition_index,1),bit_index) + encoded_gammas(transition_index, bit_index) + betas(transitions(transition_index,2),bit_index);

Since bit_index is the same for alphas and betas, it looks like the states that correspond to these are in the same horizontal position within the trellis. If you look closely though, you’ll see that bit_index refers to a transition’s horizontal position, not a state’s horizontal position. I’ve set things up so that a particular value in alphas corresponds to a state at the left end of the transition indexed by bit_index. By contrast, betas is set up so that each of its values corresponds to a state at the right end of the transition indexed by bit_index.

Take care, Rob.

April 25th, 2012 at 8:33 pm

Hi Rob,

Actually, I’m referring to where you calculate “betas”, line:

betas(…, bit_index) = jac(betas(…, bit_index),betas(…, bit_index+1) + uncoded_gammas(…, bit_index+1) + encoded_gammas(…, bit_index+1));

I think we should have:

betas(…, bit_index) = jac(betas(…, bit_index),betas(…, bit_index+1) + uncoded_gammas(…, bit_index) + encoded_gammas(…, bit_index));

Because it is gammas(…, bit_index) that moves us horizontally from bit_index to bit_index+1 position in trellis.

According to text books:

Beta (stage=k, state=p) = sum over different states of (gamma(stage=k, state=p to state=q) * Beta(stage=k+1, state=q).

i.e. in calculating beta in stage=1(horizontal position in trellis) we use beta of the position to the right (next stage, stage=2) and gammas of the same position (same stage, stage=1) which takes us from stage=1 to stage=2. Translating this to your code, it means we should use gammas(…, bit_index) instead of gammas(…, bit_index+1).

Elnaz

April 25th, 2012 at 8:42 pm

Hi again Elnaz,

Taking my previous reply a step further, betas(…, bit_index) refers to a state that is to the left of the transition referred to by uncoded_gammas(…, bit_index+1). Similarly, betas(…, bit_index+1) refers to a state that is to the right of that transition.

Take care, Rob.

April 25th, 2012 at 9:14 pm

I see that Rob; and that is exactly my point of confusion because there is only one gammas for each transition and we use it in calculating both betas and alphas.

The way you calculate alphas exactly matches the text books, but the betas do not. What makes it confusing to me is that we use, say, gammas(…, bit_index=2) to go from betas(…, bit_index=2) to betas(…, bit_index=1)!

I’m sorry it seems I’m repeating my question here, but it seems to me that each transition is marked with one gamma only, and in the code we’re referring to the same transition by different gammas when calculating alphas versus betas.

Thank you,

Elnaz

April 25th, 2012 at 9:21 pm

Hi Elnaz,

That’s right: the gammas used for the alpha calculations are the same as the ones used for the beta calculations. The only difference between the alpha calculations and the beta calculations is that the former uses a recursion from left to right, while the latter uses a recursion from right to left.

Take care, Rob.

April 25th, 2012 at 9:38 pm

Yes, exactly! So when we move from time k (bit_index=k) to time k+1 (bit_index=k+1) in the trellis, we’re marking this transition with gammas(…, bit_index=k). Now, here, we move to the right from alphas(…, k) to alphas(…, k+1) and move to the left from betas(k+1) to betas(k) on the same transition marked with gammas(..,k).

The question is that you’re not doing the above in the code. You have alphas(…, k+1) and betas(…, k) referring to the same node, i.e. the same point in time!

This is where I don’t get it!

Elnaz

April 25th, 2012 at 9:42 pm

Ah, but you are assuming that alphas and betas are using the same time reference - i.e. that alphas(…, bit_index) and betas(…, bit_index) are referring to the same state. Actually the state that alphas(…, bit_index) refers to is one position to the left of the state that betas(…, bit_index) refers to.
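This indexing convention can be sketched as follows (a Python analogue with a toy two-state trellis; the names and shapes are illustrative, not copied from component_decoder.m). Because betas(…, k) refers to the state at the right end of transition k, which is also the left end of transition k+1, the backward recursion at position k consumes the gammas of position k+1, just as in the Matlab code:

```python
import numpy as np

def jac_max(a, b):
    # Max-log approximation of the Jacobian logarithm: ln(e^a + e^b) ≈ max(a, b).
    return np.maximum(a, b)

# Toy 2-state trellis; each tuple is (from_state, to_state). Illustrative only.
transitions = [(0, 0), (0, 1), (1, 0), (1, 1)]
n_states, n_bits = 2, 4
gammas = np.zeros((len(transitions), n_bits))   # per-transition branch metrics

# betas[s, k] is the state at the RIGHT end of transition k (equivalently, the
# LEFT end of transition k+1), so position k uses the gammas of position k+1.
betas = np.full((n_states, n_bits), -np.inf)
betas[:, -1] = 0.0                              # unterminated trellis
for k in range(n_bits - 2, -1, -1):
    for t, (frm, to) in enumerate(transitions):
        betas[frm, k] = jac_max(betas[frm, k], betas[to, k + 1] + gammas[t, k + 1])
```

With all-zero gammas every beta recurses to zero, but the point is the index offset: the same gamma column feeds the alpha step that lands on position k+1 and the beta step that lands on position k.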

Take care, Rob.

April 26th, 2012 at 12:42 am

Thank you Rob. I get it now. It is just the way you name your betas. I was thinking your theory might be different, but it is not.

Actually, I want to use two BCJR decoders, concatenated in parallel, to decode a rate-1/3 turbo code. I know that I should pass the output of each decoder (the function here) as the apriori_uncoded_llrs input to the other function and iterate in a figure-8 shape. But where should I use the systematic output of the channel? I can’t feed both the systematic and parity streams to the decoder function.

Thank you,

Elnaz

April 26th, 2012 at 10:04 am

Hi Elnaz,

Take a look at Figure 2.11 in the nine month report. The systematic LLRs can be added to the interleaved extrinsic LLRs.
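As a structural sketch of that arrangement (Python, with a placeholder component decoder standing in for component_decoder.m; all names here are illustrative assumptions, not the real code):

```python
import numpy as np

def component_decode(apriori_uncoded_llrs, apriori_encoded_llrs):
    # Placeholder for a BCJR component decoder; it just returns zero extrinsic
    # LLRs so the loop structure can be demonstrated.
    return np.zeros_like(apriori_uncoded_llrs)

n = 8
rng = np.random.default_rng(0)
interleaver = rng.permutation(n)

systematic_llrs = rng.normal(size=n)   # channel LLRs for the systematic bits
parity1_llrs = rng.normal(size=n)      # channel LLRs for the first parity stream
parity2_llrs = rng.normal(size=n)      # channel LLRs for the second parity stream

extrinsic2_deinterleaved = np.zeros(n)
for iteration in range(4):
    # Decoder 1 sees the systematic LLRs plus decoder 2's deinterleaved extrinsic.
    apriori1 = systematic_llrs + extrinsic2_deinterleaved
    extrinsic1 = component_decode(apriori1, parity1_llrs)

    # Decoder 2 sees the interleaved sum of the systematic LLRs and decoder 1's
    # extrinsic: this is the "add the systematic LLRs" step from Figure 2.11.
    apriori2 = (systematic_llrs + extrinsic1)[interleaver]
    extrinsic2 = component_decode(apriori2, parity2_llrs)
    extrinsic2_deinterleaved = np.zeros(n)
    extrinsic2_deinterleaved[interleaver] = extrinsic2

# Final a posteriori LLRs combine everything available.
aposteriori = systematic_llrs + extrinsic1 + extrinsic2_deinterleaved
```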

Take care, Rob.

April 30th, 2012 at 5:31 am

Hi Rob,

I learned many things just reading the above questions and answers, so thanks for that.

I just have a small question about puncturing LDPC codes.

In my matlab script, I puncture a portion of the information bits in a systematic codeword in the transmitter side.

My problem is with the receiver side. In an earlier comment you mentioned that I need to replace the punctured bits with zero-valued LLRs. Did you mean the LLR values from the channel to the variable node, which are used to generate the a posteriori LLR values?

Even though I perform the above, I get a BER curve exactly the same as that for an LDPC code without puncturing. I would really appreciate it if you could explain this to me.

Thanks in advance.

April 30th, 2012 at 9:53 am

Hi Anna,

Yes, I mean the LLRs from the channel to the variable nodes. I can’t think why your BER plot stays the same when you introduce puncturing - puncturing should degrade the performance. It sounds like you think you are puncturing the bits, but actually you are still transmitting them…

Take care, Rob.

May 4th, 2012 at 4:12 pm

Elnaz says…

“Hi Rob,

I am using two BCJR decoders, parallely concatenated, to decode a rate 1/3 turbo code. As you said earlier, I am adding the systematic LLR to the output of one decoder and feed them as apriori_uncoded_llr to the other decoder.

My question is with the BER plot, where I add noise of variance “sigma2” (using the “randn” command) to my sequence and simply plot the error rate vs. (Eb=1)/(2*sigma2). However, with small sigma2 values (smaller than 1.3), the waterfall discontinues and a series of up and down fluctuations begins. The fluctuations are at about BER = 10^-4. Why is this happening?

Thank you,

Elnaz”

Hi Elnaz,

It may be that you have reached the error floor of your turbo code. The position of your error floor will depend on the design of your turbo code. In particular, it will depend on your interleaver length. An error floor of 10^-4 suggests to me that your interleaver length is quite small - does this sound correct to you?

Take care, Rob.

May 4th, 2012 at 6:15 pm

Hi Rob,

Thanks for your answer.

I am interleaving the whole sequence (about 4000 bits) at once with Matlab’s “randintrlvd”. What do you think?

Elnaz

May 4th, 2012 at 6:39 pm

Hi Elnaz,

That sounds okay to me. I normally use Matlab’s randperm function though…

interleaver = randperm(4000);

interleaved_bits = bits(interleaver);

deinterleaved_bits(interleaver) = interleaved_bits;

Take care, Rob.

May 4th, 2012 at 6:55 pm

Yes, Rob; doing exactly as above, the fluctuations around 10^-3 remain. They happen at SNRs higher than -5 dB.

May 4th, 2012 at 7:03 pm

Hi again Elnaz,

My advice would be to try using a 10-times longer interleaver length, to see how that affects things.

Take care, Rob.

May 4th, 2012 at 7:11 pm

Hi Rob,

Can it be longer than the whole data length? Currently I am using

interleaver = randperm(4000); which covers my whole data length (entire stream of original bits).

Do you mean to somehow zero-pad the data?

Thanks,

Elnaz

May 7th, 2012 at 6:18 am

Dear Rob,

I was studying the paper “Analysis of low-density parity-check codes based on EXIT functions” by E. Sharon, in IEEE Transactions on Communications. It describes the EXIT trajectories very well.

I tried to implement the EXIT trajectory using the update rules in equations (8, 9, 10), but I couldn’t. Can anyone implement these trajectory update rules for me please?

In this way, it is easy to calculate the trajectories. Thanks

May 7th, 2012 at 11:34 am

Hi Elnaz,

I would suggest to repeat your 4000-bit message ten times, to create a 40000-bit message.

Take care, Rob.

May 7th, 2012 at 11:49 am

Hi Ideal,

It is Equations 11 and 12 from that paper that you should use to draw the EXIT functions and trajectories.

Take care, Rob.

May 7th, 2012 at 12:00 pm

Dear Rob,

You are right, Equation 11 and 12, we can get the extrinsic mutual information for the variable node using 11 and we can get the extrinsic mutual information for the check node using equation 12.

But how can we plot the trajectory curve (the staircase curve)? I don’t understand the trajectory curve. I can plot the variable node curve and the check node curve, but I am not able to plot the trajectory.

Secondly, will the trajectory curve also be the same with the puncturing scheme? Thanks, I am asking so many questions, but I learn a lot with your supervision. Thanks

May 7th, 2012 at 1:34 pm

Hi Ideal,

You need to use Equation 11 and 12 alternately. You start with an input MI of zero, then use the output MI of each equation as the input to the other. This process is the same whether you use puncturing or not.

Take care, Rob.

May 7th, 2012 at 3:37 pm

Hi Rob,

Even with increasing the length to 40000, it still jumps around 10^-4; not much of an improvement, I guess. Could you guide me as to what the standard BER floor of a 1/3-rate turbo decoder is?

I don’t know if 10^-4 is normal or if there is something wrong with my code.

Thanks,

Elnaz

May 7th, 2012 at 5:28 pm

Hi Elnaz,

It sounds to me like you have a problem with a few bits in each frame. This suggests to me that there may be a problem with the way you are terminating the trellis. Perhaps you are not using termination in the encoder, but your decoder assumes otherwise…

Take care, Rob.

May 7th, 2012 at 5:49 pm

Hi Rob,

Actually, for the encoding part, I am using the same encoder twice, once with the systematic output and once without it:

trellis1 = poly2trellis(3,[7 5],7);

code1 = convenc(msg,trellis1);

Then I interleave:

intrlvd = msg(interleaver);

trellis2 = poly2trellis(3,5,7);

code2 = convenc(intrlvd,trellis2);

Using convenc, do I have control over termination?

After this I am using your BCJR decoder in a turbo-decoder and decode the whole stream at once (not frame-by-frame).

Could you please explain more as to what you mean by “decoder assumes otherwise…”?

Thanks,

Elnaz

May 7th, 2012 at 8:23 pm

Hi Rob,

Let me add this: in high SNRs I am having, as you said, a few bits of error.

Also, in my turbo decoder, I initialize the extrinsic LLR of the 2nd decoder with zeros before iterations start.

Does this mean that the encoders should be in state zero at the end of the stream? If yes, how can I force them to the zero state while using “convenc”? I would appreciate it if you could clarify this for me further.

Thanks,

Elnaz

May 7th, 2012 at 9:29 pm

Hi Elnaz,

I think my suspicion is correct. I don’t think that convenc uses termination. You can stop my component_decoder.m from using termination by changing the line…

betas(1,length(apriori_uncoded_llrs))=0;

…to…

betas(:,length(apriori_uncoded_llrs))=0;

As you say, termination forces the encoder into the state zero at the end of the stream. If you want to achieve this using convenc, I think you will need to add some additional bits to the end of your message, as in the UMTS turbo encoder.
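To illustrate why a couple of tail bits suffice, here is a hypothetical Python sketch of a recursive encoder with feedback polynomial 1+D+D^2 (the “7” in poly2trellis(3,[7 5],7)) and parity generator 1+D^2 (the “5”). Termination chooses each tail input so that it cancels the feedback, which drives the two-bit shift register to zero in two steps. This is a sketch of the principle, not the Matlab code:

```python
def rsc_step(state, bit):
    # One step of a (7,5) RSC: feedback taps D and D^2, parity taps 1 and D^2.
    s1, s2 = state
    feedback = bit ^ s1 ^ s2
    parity = feedback ^ s2
    return (feedback, s1), parity

def terminate(state):
    # Feed the input that makes the feedback zero, once per memory element.
    tail_bits, tail_parity = [], []
    for _ in range(2):
        s1, s2 = state
        bit = s1 ^ s2            # this input cancels the feedback
        state, p = rsc_step(state, bit)
        tail_bits.append(bit)
        tail_parity.append(p)
    return state, tail_bits, tail_parity

# Whatever state the message leaves the encoder in, the tail drives it to zero.
state = (1, 0)
for bit in [1, 0, 1, 1]:
    state, _ = rsc_step(state, bit)
final_state, tail, _ = terminate(state)
```

The tail bits and their parity would be appended to the transmitted frame, as in the UMTS turbo encoder.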

Take care, Rob.

May 8th, 2012 at 5:18 am

Dear Rob,

Thanks for reply, I have one more question please.

In the capacity approaching codes, we usually compare the SNR_threshold of LDPC code with the corresponding channel capacity.

I found the channel capacity equation is the J function, i.e., C = J(sigma_ch), where the sigma_ch = sqrt(8 * R * Eb/No).

My question is whether the equation I wrote is OK to compare with the SNR_threshold. If yes, then what should the value of R be? Should it be the mother code rate or not? Thanks

May 8th, 2012 at 5:34 am

Hi Rob,

Changing the code as you said nulled out the errors at high SNRs; now I have zero errors there. But the BER still doesn’t fall smoothly. For example, with #iterations=20 it sharply falls from 10^-2 to -inf. And with #iterations=2 it fluctuates between 10^-4 and -inf (the step size is very small). I never see 10^-6, 10^-7 or…

Let me add that, here, since I just want to test the decoder, I only encode, then add noise, then decode. I’m thinking maybe this is the cause, since I’m not doing any modulation at all.

Do you think the decoder is what it should be? Is not using termination causing this?

Thanks a lot,

Elnaz

May 8th, 2012 at 9:31 am

Hi Ideal,

I’m not sure where you have got that relationship between the channel capacity and J function from - I’m not sure that you can use it in this context. Presumably, you are trying to quantify the BPSK DCMC capacity. I would suggest doing this using the code at…

http://users.ecs.soton.ac.uk/rm/resources/matlabcapacity/

Take care, Rob.

May 8th, 2012 at 9:33 am

Hi Elnaz,

What you are describing sounds like a steep turbo cliff, which is usual for turbo codes. I think that the reason why you cannot observe low BERs is because your simulation is not long enough. To observe a BER of 10^-7, you need to be transmitting 10^9 or 10^10 bits, which requires a very long simulation (perhaps taking one week to run).
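A quick back-of-the-envelope calculation shows why. A common rule of thumb (my assumption here, not a statement from the original code) is to keep simulating until roughly 100 bit errors have been observed:

```python
# Rule of thumb: simulate until about 100 bit errors have been observed,
# so the BER estimate is statistically reliable.
target_errors = 100

for ber in [1e-4, 1e-6, 1e-7]:
    bits_needed = target_errors / ber
    frames_needed = bits_needed / 4000  # e.g. 4000-bit frames, as discussed above
    print(f"BER {ber:g}: about {bits_needed:.0e} bits ({frames_needed:.0f} frames)")
```

So observing a BER of 10^-7 with 100 errors already requires a billion transmitted bits.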

Take care, Rob.

May 8th, 2012 at 11:59 am

Dear Rob,

I got it the plot of channel capacity vs the coding threshold. Thanks again.

I am a bit weak at writing tricky MATLAB code. Regarding the EXIT trajectory, you gave the idea that the two equations are used alternately and in this way the information staircase ends up at (1,1), but when I start writing the code I face problems. Can you please give me a small clue? Thanks

May 8th, 2012 at 12:25 pm

Hi Ideal,

Something like this should do the trick…

IA = 0;

IE = 0;

results = [IA,IE];

for iteration_index = 1:100

IE = variable_node(IA);

results = [results; [IA,IE]];

IA = check_node(IE);

results = [results; [IA,IE]];

end

plot(results(:,1), results(:,2));

May 16th, 2012 at 7:52 am

dear rob,

I want to test image transmission over a wireless sensor network using turbo codes. Can this be done using the code…?

THANK YOU

May 16th, 2012 at 8:28 am

Hi Sowmya,

This depends on if your wireless sensor network can run Matlab code. If not, you can build a turbo code in the C programming language using the code at…

http://users.ecs.soton.ac.uk/rm/resources/cfixedbcjr/

Take care, Rob.

May 16th, 2012 at 3:40 pm

Hi Rob,

Actually, I have a question on iterative timing recovery and decoding. Since you helped me a lot previously with the BCJR decoder, I thought I would ask this one as well.

I’m using GD (the gradient descent algorithm) to estimate timing points and a turbo decoder to estimate the user bits. My question is: at each iteration, when I receive the estimated user bits, how should I use the output of the channel to update the timing estimate?

Since the channel output is constructed using coded bits, I can’t use the same channel output in my GD unless I re-encode the decoded bits of the turbo decoder before feeding them to GD. Am I correct in assuming this? How does this generally work?

Thanks a lot,

Elnaz

May 16th, 2012 at 6:57 pm

Dear Rob,

In answer to my last comment, I think I need to actually change the BCJR blocks in my turbo decoder to make decisions about the coded bits as opposed to the user bits. I know I should change the sums to sum over different terms in alpha, gamma and beta, but I’m not sure how exactly. Could you please guide me on how to change your code to get there?

Thanks,

Elnaz

May 17th, 2012 at 9:57 am

Hi Elnaz,

You are quite right - you need the BCJR decoders to generate extrinsic LLRs (or perhaps a posteriori LLRs would suit your application better) for the coded bits. You can do this by creating an additional set of deltas using the same for loops, but with the line…

deltas2(transition_index, bit_index) = alphas(transitions(transition_index,1),bit_index) + uncoded_gammas(transition_index, bit_index) + betas(transitions(transition_index,2),bit_index);

Then, you can create an additional set of extrinsic LLRs using the same for loops, but with the line…

if transitions(transition_index,4)==0

If you decide that you want the a posteriori LLRs, you can obtain them by adding the extrinsic LLRs to the a priori LLRs.

Take care, Rob.

May 17th, 2012 at 3:17 pm

Hi Rob,

Don’t I need to modify the alphas, betas or anything else in the BCJR?

Now, I expect the extrinsic outputs that I get from my turbo decoder to be equal to parity1 and parity2 (the encoded bits), but they’re not.

Thanks,

Elnaz

May 17th, 2012 at 4:46 pm

I’m not sure I understand why replacing encoded_gammas with uncoded_gammas will give me extrinsic LLRs for the coded bits. Could you please explain more?

Thanks,

Elnaz

May 17th, 2012 at 5:14 pm

Hi Elnaz,

There is no need to modify the alphas, betas or gammas. In fact, you can see some code that gives extrinsic LLRs for the coded bits at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/

The reason why replacing encoded_gammas with uncoded_gammas will give you extrinsic LLRs for the coded bits is because extrinsic information is obtained by considering all available information, except for the corresponding a priori information. So, we leave out encoded_gammas when we want encoded extrinsic information, and we leave out uncoded_gammas when we want uncoded extrinsic information.

Take care, Rob.

May 17th, 2012 at 5:27 pm

Hi Rob, Am I correct in assuming that sign of encoded extrinsic information (the output of say 1st BCJR decoder) after completing the iterations of Turbo decoder should be equal to parity1 i.e. the coded bits resulted from the first encoder in my 1/3 turbo decoder?

Currently, I’m not changing the turbo decoder at all, only swapping in the new BCJR blocks. Also, as you advised me before, I am adding the systematic output to the extrinsic LLRs of the second decoder and feeding them as apriori_uncoded_llrs to the first BCJR decoder.

If all of the above is right, then why am I not getting the same parity1 coded bits from the turbo decoder?

Thanks,

Elnaz

May 17th, 2012 at 5:30 pm

Hmm, I’m not sure. You should still be iteratively exchanging extrinsic LLRs pertaining to the uncoded bits. So you need to output extrinsic LLRs for both the uncoded and encoded bits. You should find that the sign of the LLRs correlates with the encoded bits, as you say. My recommendation would be to compare your code with that which I linked to in my previous reply…

Take care, Rob.

May 17th, 2012 at 5:43 pm

Oh, in that case, I was doing it wrong. I had basically replaced the old BCJR with a new BCJR that outputs the extrinsic LLRs made from delta2. So, the turbo decoder was iteratively exchanging the new extrinsic LLRs.

But you are saying that the turbo decoder should still be exchanging the old outputs (the extrinsic LLRs for the uncoded bits). Here is where I’m not clear. Suppose I get both outputs, old and new, from each BCJR block. If the turbo decoder should still be exchanging the old outputs, then what should I do with the new ones? Exchange them too? What does the schematic of the new turbo decoder look like?

Thanks a lot,

Elnaz

May 17th, 2012 at 6:18 pm

No, you should only exchange the uncoded extrinsic LLRs. You should not use the encoded extrinsic LLRs until the iterative decoding process has finished. After this, you can provide these LLRs to your synchroniser.

Take care, Rob.

May 21st, 2012 at 9:16 am

Dear Rob,

Thanks for your thorough help. You suggested the code for the trajectory plot to me. I tried the following code. It works for the variable and check node curves but doesn’t work for the trajectory.

clear all; clc;

IA = 0;

IE = 0;

results = [IA,IE];

load newJ.mat; % loading J-function

Es = 2;

dv = 3; dc = 6;R=0.5;Ite_max=100;

snr_L = 10^(Es*0.1);

sig_Ch = sqrt(8 * R * snr_L);

% Variable Node curve

Iav=0:.1:1;

sigmaA = (interp1(J,SIGMA,Iav));

sigmaE = sqrt((dv - 1).*sigmaA.^2 + sig_Ch^2);

Iev=interp1(SIGMA,J,sigmaE);

Iev(isnan(Iev)) = 1;

% CNode EXIT curve

Iac=0:.1:1;

sigmaCn1 = interp1(J,SIGMA,(1-Iac));

sigmaEC = sqrt(dc-1) * sigmaCn1;

Iec = 1 - interp1(SIGMA,J,sigmaEC);

Iec(isnan(Iec)) = 0;

plot(Iav,Iev,'b-','LineWidth',1.6), hold on, grid on

plot(Iec,Iac,'r-','LineWidth',1.6)

% Trajectory

while iteration_index == Ite_max

sig_A = interp1(J,SIGMA,IA);

IE = interp1(SIGMA,J,sig_A);

results = [results; [IA,IE]];

sig_C = interp1(J,SIGMA,IE);

IA = interp1(SIGMA,J,sig_C);

results = [results; [IA,IE]];

end

plot(results(:,1), results(:,2));

I am really stuck with this program.

May 21st, 2012 at 9:31 am

Hi Ideal,

It looks to me like your while loop for the trajectory will never be performed. You should replace it with a for loop, such as…

for iteration_index = 1:100

Take care, Rob.

May 21st, 2012 at 11:26 am

Dear Rob,

I am really very happy. I got that trajectory done. Now it’s quite good. Thanks for your support. Thanks

May 22nd, 2012 at 11:28 am

Dear Rob,

Thanks for all the responses. One more question please

How can we find the area between the two EXIT curves? I want to find the minimum positive area (distance) when the tunnel is open.

Is there any script for finding the minimum area or distance between the two curves please? Thanks

May 22nd, 2012 at 12:33 pm

Hi Ideal,

The code that I have posted at http://users.ecs.soton.ac.uk/rm/resources/matlabexit/ includes a calculation of the area beneath an EXIT curve. If you know the area beneath each EXIT curve, then you can subtract one from the other to find the area within the EXIT tunnel.

Take care, Rob.

May 23rd, 2012 at 7:20 am

Dear Rob,

I implemented that method. I was thinking that it would give a negative area when the tunnel closes (area = variable node curve area - check node curve area), but it still gives me a positive area for a closed tunnel.

Can I find the minimum positive distance between the two curves? I thought that might work, but I don’t know how to find the minimum positive distance between the two curves.

Thanks

May 23rd, 2012 at 8:03 am

Hi Ideal,

A greater variable node area than check node area does not guarantee an open tunnel. This is the reason why some LDPC codes cannot operate close to the channel capacity. To find the smallest gap between the curves, you’ll have to use interpolation to convert one of the EXIT curves into the domain of the other. You can then subtract each data point in one curve from the corresponding point on the other curve.

Take care, Rob.

May 25th, 2012 at 5:47 am

Dear Rob,

I got some of your idea, but not all of it. Can you please elaborate a bit more? How can we find the minimum positive distance between the two curves? I really need it. If possible, please provide me with some example script. Thanks a lot

May 25th, 2012 at 11:31 am

Hi Ideal,

Here is an example for you…

IA_outer = 0:0.1:1;

IE_outer = (0:0.1:1).^2;

IA_inner = 0:0.1:1;

IE_inner = 0.5:0.01:0.6;

IE_outer_new = IA_inner;

IA_outer_new = interp1(IE_outer, IA_outer, IE_outer_new);

plot(IA_inner, IE_inner, '-', IE_outer_new, IA_outer_new, '--')

IE_inner - IA_outer_new

Take care, Rob.

May 29th, 2012 at 12:47 pm

Dear Rob,

Thanks for reply,

I have one simple question. Usually, we implement puncturing in the codeword, for example something like this

Transmitter side (Uncoded Case)

temp = randperm(N); % randomly permuted vector of size N

Np = N - N*p; % length after puncturing with puncturing fraction p

pattern = temp(1:Np); % assignment of pattern

bits = rand(1,N)<0.5; % random binary vector

% Puncturing

punc_bits = bits(pattern);

% Calculating LLR

tx = -2*(punc_bits - 0.5); % (0->1, 1->-1)

y = tx + n;

Lch = 4 * SNR_Linear * y;

In the receiver side

llrs = zeros(size(bits));

llrs(pattern) = Lch;

My question is what about the coded bits, something like below?

code_bits = [p data]; % (p = parity bits) (codeword)

tx_bits = -2*(code_bits - 0.5); % (0->1, 1->-1)

% Puncturing

1. c_punc_bits = tx_bits(pattern); ?

OR

2. c_punc_bits = code_bits(pattern);

tx_bits = -2*(c_punc_bits - 0.5); % (0->1, 1->-1)

Which of the above two ways is right for the coded system? Thanks

Please comment

May 29th, 2012 at 12:54 pm

Hi Ideal,

Those two ways are equivalent - you can use either.
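A quick Python sketch (variable names are illustrative) confirms the equivalence: because the BPSK mapping acts element by element, it commutes with the puncturing selection.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
code_bits = rng.integers(0, 2, N)
pattern = rng.permutation(N)[:12]        # surviving positions after puncturing

bpsk = lambda b: -2.0 * (b - 0.5)        # 0 -> +1, 1 -> -1

# Way 1: map the whole codeword to BPSK, then puncture the symbols.
way1 = bpsk(code_bits)[pattern]

# Way 2: puncture the bits first, then map the survivors to BPSK.
way2 = bpsk(code_bits[pattern])

assert np.array_equal(way1, way2)        # identical, as expected
```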

Take care, Rob.

June 4th, 2012 at 12:29 pm

Dear Rob,

One quick question please. I have defined a puncturing fraction p. Now I want to distribute it into four different ratios whose overall result should be equal to p,

i.e. p = p1 + p2 + p3 + p4. How can I define four variables based on one variable, over a range of values for each variable? I am not sure this is the right question :(

Thanks

June 4th, 2012 at 2:33 pm

Hi Rob,

I have another question that is not related to the decoding here.

I want to implement a convolution of a stream of bits (with zero-padded intervals in between to avoid ISI) with a delayed pulse shape. However, delay is a variable of bit index; I want to implement something like this:

Sigma over k of a(k) * g(t - kT - tau(k)),

where tau(k+1) = tau(k) + noise + some offset;

Do you know how to implement this?

Thanks,

Elnaz

June 6th, 2012 at 6:12 pm

Hi Ideal,

It seems to me that if you want p to be composed of four components, then it should be obtained as their product, not their sum…

p = p1*p2*p3*p4

This is because the overall coding rate of serially concatenated codes is given by the product of the individual coding rates.

Take care, Rob.

June 6th, 2012 at 6:15 pm

Hi Elnaz,

I’m afraid that I’ve never done this before, so I’m not sure how to do it. You may like to look at Matlab’s convolution functions, although I’m not sure that it will let you use a variety of delays…

Take care, Rob.

June 7th, 2012 at 5:09 am

I am running a simulation of the UMTS turbo codes, with frame length 380 and 8 iterations, using max-log-MAP, but the BER results don’t go below 10^-6.

I am using C code. I want this to go down to 10^-8. Any suggestions?

June 7th, 2012 at 9:04 am

Hi Nauman,

If you want your BER results to go down to 10^-6, then you will need to simulate the transmission of at least 10^8 bits. With your frame length, this corresponds to 264k frames…

Take care, Rob.

June 8th, 2012 at 7:39 am

Hi Rob,

I have sent this number of bits. I am only concerned about the error floor of the UMTS turbo codes, and whether I can achieve a 10^-8 BER with the UMTS interleaver.

June 8th, 2012 at 8:37 am

Hi Nauman,

As you can see in my results above, the error floor of the UMTS turbo code is very steep. It may be that you can’t tell the error floor apart from the turbo cliff…

Take care, Rob.

June 11th, 2012 at 6:20 pm

Hi Rob,

I have one question about adding AWGN:

I need to plot my error versus Eb/N0 (dB). I have about 4000 bits which, after coding with a 1/3-rate turbo code, I arrange into a 111x111 matrix.

I calculate the noise variance as:

sigma2 = 1/(2*(10^(EbN0/10)));

r = R + sqrt(sigma2).*randn(size(R,1),size(R,2));

But I’m having second thoughts about whether I have to somehow enter the message length (4000x3 bits, or 4000 bits) into the above equations. Should I?

Thanks,

Elnaz

June 12th, 2012 at 8:15 am

Hi Elnaz,

You should be adding the noise to the 12000 encoded bits. One thing to consider is that you are using purely real valued noise here. If you are using BPSK then this is okay. But otherwise, you should be using complex valued noise…

r = R + sqrt(sigma2)*(randn(size(R))+i*randn(size(R)));

Take care, Rob.

June 12th, 2012 at 4:12 pm

Hi Rob,

Yes, I am adding the noise to all the samples. The problem is with the power of the noise that I’m adding. I’m using a 1/3-rate turbo code, then I multiply my coded bits each with a 2D sinc function and then add the noise.

Does the above formula: sigma2 = 1/(2*(10^(EbN0/10))) still hold here?

Thanks,

Elnaz

June 12th, 2012 at 4:18 pm

Hi Elnaz,

Your equation sigma2 = 1/(2*(10^(EbN0/10))) can be rearranged to give N0 = 2*sigma^2. This implies that your noise will have two dimensions (i.e. it will be complex). If you were using purely real-valued noise, then N0 = sigma^2.
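That rearrangement can be sketched as a small helper (a Python sketch assuming Eb = 1, as in Elnaz’s formula; the function name is my own):

```python
def noise_variance(ebn0_db, complex_noise=True):
    # With Eb = 1, N0 = 1 / 10^(EbN0_dB/10).
    # Complex noise: N0 = 2*sigma^2 (sigma^2 per dimension), so sigma^2 = N0/2,
    # which matches sigma2 = 1/(2*10^(EbN0/10)). Real noise: N0 = sigma^2.
    n0 = 10 ** (-ebn0_db / 10)
    return n0 / 2 if complex_noise else n0
```

At Eb/N0 = 0 dB this gives sigma^2 = 0.5 per dimension for complex noise and sigma^2 = 1 for real noise.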

Take care, Rob.

June 12th, 2012 at 5:17 pm

Shouldn’t I use something in line with: Eb/N0 = Fs/(2 x R x sigma2) ?

where

R = code rate 1/3 in my case

Fs = no. of samples per modulated symbol. Since my system is two-dimensional, I guess I will have to take the number of samples in my sinc(x).sinc(y) function.

Since I’m adding pure real-valued noise then: Eb/N0 = Fs/(R x sigma2) .

Actually, I think my error vs. Eb/N0 plots are shifted to the right compared to the correct plots, as if I’m not adding enough noise power. That’s the reason I’m trying to increase the noise power somehow.

Thanks,

Elnaz

June 13th, 2012 at 3:31 pm

Hi Elnaz,

When I perform a baseband simulation, I use SNR = Es/N0, where Es is the energy per symbol. I then convert to Eb/N0 by using Es = R*log2(M)*Eb, giving Eb/N0 = Es/(N0*R*log2(M)). Here M is the number of constellation points in the modulation scheme.
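In dB, that conversion is just a subtraction of 10*log10(R*log2(M)); a small Python sketch (the function name is my own):

```python
import math

def ebn0_from_esn0(esn0_db, code_rate, M):
    # Eb/N0 = Es/(N0 * R * log2(M)); in dB this is a subtraction.
    return esn0_db - 10 * math.log10(code_rate * math.log2(M))

# Example: a rate-1/3 code with BPSK (M = 2) at Es/N0 = 0 dB.
ebn0 = ebn0_from_esn0(0.0, 1.0 / 3.0, 2)
```

For the rate-1/3 BPSK example, Eb/N0 comes out about 4.77 dB above Es/N0, reflecting that each information bit is spread over three channel symbols.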

Take care, Rob.

June 14th, 2012 at 9:59 pm

Hi Rob,

Yes, right. It depends, I think, on where we’re adding the noise in the system. What I’m doing is somewhat different from standard modulation, and that is the reason I have difficulty applying the usual Eb/N0 concept to my problem.

Actually, I am rearranging my stream of coded bits into a matrix and then convolve this matrix with a 2D sinc function ending up with an image (2D waveform). Therefore, I have an image wherein the coded bits are hidden. I add my noise to this matrix.

I want to know how exactly I should calculate the noise variance in terms of Eb/N0.

Any reference or suggestion would be great.

Thanks,

Elnaz

June 15th, 2012 at 5:04 pm

Hi Elnaz,

I’m afraid that I haven’t come across the approach you are describing here, so I can’t help you with the relationship between Eb/N0 and the noise variance. My suspicion is that N0 = 2*sigma^2, since you are probably using complex noise. The bit I’m not sure about is the value of Eb, or Es…

Take care, Rob.

June 20th, 2012 at 11:05 am

Dear Rob,

In your post above (May 25th, 2012 at 11:31 am), you suggested interpolation to convert one of the EXIT curves into the domain of the other. In this way we can then subtract each data point in one curve from the corresponding point on the other curve, and so find the smallest gap between the curves.

I have already implemented this method and it is working fine, but now I want to write these interpolation steps in a mathematical way and am having difficulty. Please can you give me an example of writing this?

June 20th, 2012 at 3:18 pm

Hi Ideal,

Suppose you have two data points (x1,y1) and (x2,y2). You wish to find the y-coordinate y3 that corresponds to a particular x-coordinate x3. You can interpolate along a line passing through the two known data points using the following steps…

m = (y2-y1)/(x2-x1)

c = y1 - m*x1

y3 = m*x3+c
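In Python, these three steps work out as (an illustrative sketch; the function name is mine):

```python
def interpolate(x1, y1, x2, y2, x3):
    # Straight line through (x1, y1) and (x2, y2), evaluated at x3
    m = (y2 - y1) / (x2 - x1)  # gradient
    c = y1 - m * x1            # intercept
    return m * x3 + c
```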

Take care, Rob.

June 20th, 2012 at 6:28 pm

The EXIT chart graph is plotted as Eb/N0 vs BER. I got the output after 10 trials, with 500 message bits passed. Can I calculate the code rate?

June 21st, 2012 at 7:32 am

Hi Somya,

The code rate of the UMTS turbo code is given by N/(3*N+12), where N is the interleaver length. In the case of UMTS, the interleaver length N should be in the range 40 to 5114 bits.
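In code form (a Python sketch; the function name and range check are mine):

```python
def umts_turbo_code_rate(N):
    # N message bits give 3*N encoded bits plus 12 termination bits
    if not 40 <= N <= 5114:
        raise ValueError("UMTS interleaver length must be in the range 40 to 5114")
    return N / (3 * N + 12)
```

For example, N = 5114 gives a rate of about 0.333, close to the mother code rate of 1/3.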

Take care, Rob.

June 25th, 2012 at 12:24 pm

Hello Rob,

I know you are very competent in turbo codes and MIMO systems. I have read some papers by L. Hanzo and you.

I work on turbo-MIMO matlab code: spatial multiplexing MIMO with MAP (or ZF, MMSE) and turbo-code (PCCC with MAP).

Could you please suggest any help in this field?

June 25th, 2012 at 2:19 pm

Hello Stas,

I’m afraid that I don’t have any Matlab code for spatial multiplexing MIMO, but the Matlab code that I have provided on this page can help you with your turbo-code - it is a PCCC with MAP.

Take care, Rob.

June 27th, 2012 at 1:27 am

Dear Rob,

Thanks for your help. Please one more question.

Do you have any script for plotting “zoom out a portion of plot within the same plot”?

Thanks

June 27th, 2012 at 9:24 am

Hi Ideal,

I’m afraid that I don’t, because I do all my final-version plotting in Gnuplot, rather than Matlab. I’m sure you will be able to find some code for this on Google though.

Take care, Rob.

June 29th, 2012 at 6:08 am

Dear Rob,

Yes I did that plot very well thanks

July 4th, 2012 at 5:00 am

Dear Rob,

You explained to me last time the way of puncturing in MATLAB. I did that and it is working very well. That was the approach for binary channels.

But now I am thinking that if I am using QPSK or 4-PAM, each symbol consists of two bits. I am thinking of applying puncturing pairwise, i.e., puncturing a whole symbol while treating all the bits uniformly, but I don’t know how to implement pairwise puncturing. Please can you give me any idea of how to implement it? Thanks

July 4th, 2012 at 9:34 am

Hi Ideal,

My feeling is that puncturing bits in pairs places an unnecessary constraint on your design and that it may degrade your performance. My recommendation would be to puncture the bits in the way you have already been doing it, then pair the remaining bits up and modulate them.

Take care, Rob.

July 4th, 2012 at 12:02 pm

Dear Rob,

Thanks for your advice. But after puncturing, I need to calculate the LLR for each bit, i.e., after puncturing I need to perform QPSK modulation, then calculate LLR(x1) and LLR(x2) in a symbol and then calculate the EXIT chart? I think your idea is very useful but…

I am still confused. Please can you explain a bit more? Thanks

July 4th, 2012 at 12:51 pm

Hi Ideal,

You just need to get the LLRs from the demodulator, then insert a zero-valued LLR to replace each punctured bit. You should think of the puncturer as being independent from the modulator.

Take care, Rob.

July 5th, 2012 at 12:17 pm

Dear Rob,

I am thinking of first puncturing the code bits, then modulating them into QPSK and transmitting. On the receiver side, I will receive y = x + n, where x is the QPSK symbol and n is the noise. Now, is LLR(y) = 2/sigma * y the right thing? Or do I need to calculate the LLR for the real bits and then for the imaginary bits and combine them into one LLR?

Then I can add the same pattern of punctured, I did on the transmitter side.

Am I right or any sample example will be very helpful please. I want to calculate the EXIT chart of the LDPC code. Thanks

July 5th, 2012 at 6:30 pm

Hi Ideal,

If you use Gray coded QPSK then you can use…

LLR1 = sqrt(2)*real(y)/sigma^2;

LLR2 = sqrt(2)*imag(y)/sigma^2;

I think…
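In Python, that demapper would look like this (a sketch assuming unit-energy Gray-coded QPSK, i.e. constellation points (±1 ± j)/sqrt(2), and per-dimension noise variance sigma^2):

```python
import math

def qpsk_gray_llrs(y, sigma):
    # With Gray-coded QPSK the I and Q components carry one bit each, so the
    # bitwise LLRs decouple into two BPSK-style expressions with amplitude
    # 1/sqrt(2): LLR = 2 * (1/sqrt(2)) * component / sigma^2
    llr1 = math.sqrt(2.0) * y.real / sigma ** 2
    llr2 = math.sqrt(2.0) * y.imag / sigma ** 2
    return llr1, llr2
```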

Take care, Rob.

July 6th, 2012 at 12:41 am

Dear Rob,

Thanks, but I am still confused. I punctured some code bits, then modulated using Gray-mapped QPSK, so now each pair of bits is converted into a symbol. Then I calculate the bitwise LLRs for real bit 1 and imaginary bit 2. It means that I have two LLRs on the receiver side, LLR1 and LLR2. How can I combine the two LLRs into one LLR, so that I can assign zeros to the punctured positions (as I did at the transmitter)?

I am still not understanding the cooperation between bits and symbols. After all, to calculate the extrinsic mutual information, I need one LLR vector (either bit 1, or bit 2, or combined) - that is the question. Thanks for your help please

July 6th, 2012 at 3:26 pm

Hi Ideal,

There is no need for you to “combine two LLRs into one LLR”. Suppose you have 100 bits, then you puncture 40 of them to leave 60 bits. After that you combine two bits per QPSK symbol, to give 30 symbols in total. In the receiver, you can use the code I provided above to separate each QPSK symbol into two LLRs. So you go from 30 symbols to 60 LLRs. After that you can insert 40 zero-valued LLRs to get back to 100 LLRs.
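The 100 -> 60 -> 30 -> 60 -> 100 bookkeeping can be sketched in Python (pure illustration; the random puncturing pattern and the stand-in LLR values are mine):

```python
import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(100)]   # 100 encoded bits
keep = sorted(random.sample(range(100), 60))        # hypothetical pattern keeping 60

punctured = [bits[i] for i in keep]                 # 60 bits -> would map to 30 QPSK symbols

# Receiver: the demodulator returns 60 LLRs (stand-in values with the right
# signs here), then 40 zero-valued LLRs are inserted at the punctured positions
demod_llrs = [1.0 - 2.0 * b for b in punctured]
llrs = [0.0] * 100
for pos, llr in zip(keep, demod_llrs):
    llrs[pos] = llr
```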

Take care, Rob.

July 7th, 2012 at 1:22 pm

Dear Rob,

I am really very thankful to you for your detailed example. So it means that LLR1 contains 30 LLRs and LLR2 contains 30 LLRs too, a total of 60, so LLR = [LLR1, LLR2] gives the 60 LLRs.

Secondly, I need to use QPSK Gray mapping to convert from 60 bits to 30 symbols. Can I use your script for the QPSK Gray mapping scheme, which you provided for the QPSK demapper EXIT chart?

I am very much thankful to you.

July 8th, 2012 at 8:36 am

Hi Ideal,

That’s right. You can use the QPSK code I have put on my website for this job.

Take care, Rob.

July 9th, 2012 at 4:10 am

Dear Rob,

Thanks, I implemented all the steps and now my algorithm is working. But I have one question about the LLR. You wrote that

If I use Gray coded QPSK then I can use…

LLR1 = sqrt(2) * real(y) / sigma^2;

My question is why you write sqrt(2). I was thinking it would be just 2 without the square root, i.e.

LLR1 = 2 * real(y) / sigma^2;

Please can you clarify, or any reference will be fine as well. Thanks

July 9th, 2012 at 8:46 am

Hi Ideal,

This is because QPSK puts half of its power into the I signal and the other half into the Q signal. By contrast, BPSK puts all of its power into the I signal. This is why different coefficients are needed.

Take care, Rob.

July 9th, 2012 at 5:11 pm

Hi Rob,

As you know, I am using a Turbo Decoder which includes your BCJR algo in my application. However, I am getting an inferior result which is traced down to my Turbo Decoder. I have a number of uncertainties which may be the culprit.

First is the format you defined for the trellis structure in your BCJR code. I have translated my trellis (i.e. poly2trellis(3,[7 5],7) in MATLAB) according to your structure. But I feel I can never be 100% sure unless I check it with you. So, could you please have a look at this and confirm its correctness:

% FromState, ToState, UncodedBit, EncodedBit

transitions = [1, 1, 0, 0;
               2, 3, 0, 0;
               3, 4, 0, 1;
               4, 2, 0, 1;
               1, 3, 1, 1;
               2, 1, 1, 1;
               3, 2, 1, 0;
               4, 4, 1, 0];

July 9th, 2012 at 6:00 pm

Hi Elnaz,

Actually, there is a simple way to know if you have programmed the turbo code correctly or not. You should start by drawing the turbo code EXIT chart by using the histogram method. After that, you should do it again, but using the averaging method. If there are no problems with your code, you will get the same result with both methods. If there are problems, the two EXIT charts will look very different to each other.

Take care, Rob.

July 9th, 2012 at 6:08 pm

OK; meanwhile, I’m also suspicious that using convenc, which doesn’t force termination, may be the culprit. So, I need to add zeros to the end of the input bits before convenc, I guess. The question is how many zeros for poly2trellis(3,[7 5],7) to ensure termination?

Thanks,

Elnaz

July 9th, 2012 at 6:45 pm

Hi Rob,

I need to correct my previous comment by changing the question to: What bits should I add to my input stream to ensure encoder termination? Is there any way to find out other than manually drawing the trellis?

Thanks,

Elnaz

July 9th, 2012 at 7:21 pm

Hi Elnaz,

Since your encoder includes two memory elements, you will need two extra bits to terminate it. These bits should be set equal to the feedback bits. If you take a look at the UMTS encoder schematic, you will see what I mean…
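For Elnaz’s poly2trellis(3,[7 5],7) encoder, the feedback polynomial is 7 (octal), i.e. 1 + D + D^2. A Python sketch of why feeding in the feedback bit terminates the trellis (the state convention here is mine, for illustration only):

```python
def terminate_rsc(state):
    # state = (s1, s2) holds the two memory elements of the RSC encoder.
    # During termination the input bit is set equal to the feedback bit,
    # so the bit entering the shift register is feedback XOR input = 0.
    tail = []
    for _ in range(2):  # one step per memory element
        s1, s2 = state
        feedback = s1 ^ s2   # taps of 1 + D + D^2 on the register contents
        tail.append(feedback)
        state = (0, s1)      # a zero is shifted in, flushing the register
    return tail, state       # state ends at (0, 0) whatever it started as
```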

Take care, Rob.

July 9th, 2012 at 7:40 pm

Hi Rob,

Do you think that not using termination can contribute to a noticeable error when 4000 message bits are encoded with a 1/3-rate turbo code?

Elnaz

July 10th, 2012 at 12:43 am

Hi Rob,

Thanks for your comments. I have worked things out and all is well with Gray-mapped QPSK. Now please can you tell me a bit about anti-Gray-mapped QPSK?

I mean, how to map from bits to symbols (anti-Gray QPSK), and what will the LLR expressions be? Thanks a lot for your thorough help.

July 10th, 2012 at 4:20 am

Hi Rob,

I studied the parts in the report which deals with termination and also the way you have implemented it.

Actually, since my application is different from yours, I cannot do exactly what you did. I am using a 1/3-rate turbo encoder. Therefore, I will have two tail bits padded onto the systematic output, and parity 1 and parity 2 will each also be two bits longer than the original stream. These three streams of bits then pass through a channel. The discrepancy between my application and yours is that on the receiver side, I won’t have access to the termination bits of the second encoder to feed the second BCJR. For the first BCJR there is no problem, since the termination bits of the first encoder are already padded onto the systematic output and passed through the channel. So, I will have the channel-corrupted version of the termination bits of the first encoder but not the second one.

What would you suggest?

Thanks,

Elnaz

July 10th, 2012 at 7:54 am

Hi Elnaz,

Getting the termination wrong can give you a few bit errors per frame. So this would give you an error floor at about 10^-3, using a frame length of 4000 bits.

The UMTS turbo code solves the problem you’ve described by transmitting the bits that are padded onto the interleaved bit sequence in order to terminate the second encoder. The alternative is to pad the apriori LLRs with two zero-valued LLRs.

Take care, Rob.

July 10th, 2012 at 7:56 am

Hi Ideal,

If you use anti-Gray mapping (also known as natural mapping) then you can’t use the simple demodulator equations any more. You can see how to implement this using the code that I provided at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-1545

Take care, Rob.

July 10th, 2012 at 2:59 pm

Hi Rob,

I implemented the alternative, i.e. padding the apriori LLRs with two zeros. And my apriori encoded LLRs already include two encoded and channel-corrupted termination bits (the outputs of component_encoder are passed through the channel).

I am using the encoded outputs of the BCJR decoder at each iteration.

Is it correct to use their entire length, or are the last two bits erroneous?

Thanks,

Elnaz

July 11th, 2012 at 6:45 am

Hi Elnaz,

All of the LLRs should be good to use. The last two will pertain to the two termination bits.

Take care, Rob.

July 11th, 2012 at 11:21 am

Dear Rob,

Thanks for the QPSK demapper EXIT chart code. Can you advise me a bit on its implementation for the LDPC EXIT chart? I.e., I want to simulate the variable node and check node of a (3,6) half-rate code. In that case, we will have two a priori LLRs from the check nodes and one channel LLR in order to calculate the variable node extrinsic information. Can you advise me about its implementation like this, please?

July 11th, 2012 at 3:37 pm

Hi Ideal,

As you say, each variable node takes LLRs from its connected check nodes and one LLR from the QPSK demodulator. If you use natural QPSK mapping, there is a benefit to iterating between the demodulator and variable nodes, in addition to between the check and variable nodes. In that case, each variable node can generate one extrinsic LLR to give to the demodulator.

Take care, Rob.

July 11th, 2012 at 5:44 pm

Hi Rob,

I don’t see any performance gain after adding termination. I changed the BCJR decoder line from

betas(:,length(apriori_uncoded_llrs))=0;

back to ….

betas(1,length(apriori_uncoded_llrs))=0;

And, I pad two zeros to apriori uncoded LLRs before feeding them to the decoders.

Also, previously I changed your BCJR code to also output encoded LLRs and I am only using those.

Is there anything I might be missing?

Thanks,

Elnaz

July 12th, 2012 at 3:06 am

Dear Rob,

I want to know, is there a way to test the turbo decoder performance independent of the modulation and channel scheme?

My observation is that in my application the turbo decoder performs worse than expected. The EXIT charts that you advised me to draw, I think, test the encoding, modulation, and one decoder, but not the turbo decoder.

What would you suggest?

Many thanks,

Elnaz

July 12th, 2012 at 5:07 am

Dear Rob,

Thanks for reply,

I think still I can’t understand the operation of iterations between the demodulator and variable node, and also between the variable node and check node. May be I don’t still have a clear overview of these iterations. Any clarification, how these iterations will work in simulation setup will be appreciated please.

July 12th, 2012 at 8:27 am

Hi Ideal,

You can see what I mean in Figure 2 of this paper…

http://gps-tsc.upc.es/comm2/publications/C_2006_ISTC06_Lahuerta_1274.pdf

Here you can see that LLRs are flowing from demodulator to variable node, but also from variable node to demodulator.

Take care, Rob.

July 12th, 2012 at 8:31 am

Hi Elnaz,

Termination doesn’t make any significant difference to the performance in the turbo cliff region of the BER plot. It only really makes a difference in the error floor region. Probably your error floor is at such a low BER that you can’t see it (and its improvement) in your BER plots.

You can test that the turbo decoder is operating properly by plotting decoding trajectories onto the EXIT chart. If there is not a good match between the trajectories and the EXIT functions, then you know something is going wrong.

Take care, Rob.

July 12th, 2012 at 1:56 pm

Hi Rob,

I plotted the EXIT charts and the two are different. The one from the histogram method is more square-like, but the other is more elliptical.

I know this means there is something wrong, but I don’t know how I should further debug the problem.

All I did was replace your encoding and modulation with mine. I don’t have a demodulator, my frame length is about 4000 bits, and I used termination, but the Eb/N0 value was -4, the same as yours.

Can you help me find out what is wrong here? I don’t exactly understand what these charts are measuring.

Thanks,

Elnaz

July 12th, 2012 at 2:33 pm

Hi Elnaz,

The first thing I would do is try to narrow down the problem to either the turbo code or to the demodulator. You can do this by seeing how the mutual information of the LLRs provided by the demodulator vary with SNR. You can plot this using both the averaging and histogram methods. If your demodulator is working, then these plots will match each other, suggesting that the problem is in the turbo code. If the plots don’t match then it suggests that the problem is in your demodulator.

Take care, Rob.

July 12th, 2012 at 3:02 pm

Hi Rob,

But I do not have a demodulator, only a modulator. Did you mean modulator?

Without a demodulator, should I still expect to get similar plots (histogram and averaging) at different SNRs?

By the way, I ran your code to see what I should expect from my code. I’m not even sure if the plots I’m getting are correct. They are not exactly matching either. Is there any way I can send you the figures from your code?

Thank you so much,

Elnaz

July 12th, 2012 at 3:25 pm

Hi Elnaz,

You must have some kind of demodulator - this is the part of your code that converts the received signal into LLRs. The histogram and averaging methods will never give identical results, but the results should be close to each other. You may like to compare the results you get from running my code with the results that I’ve provided at the top of this page. Can you provide a link to a .jpg of your results?

Take care, Rob.

July 12th, 2012 at 4:08 pm

Rob, thank you so much for your help.

I see that by increasing the SNR in your code, the discrepancy between the histogram and averaging charts, as well as between the EXIT functions in each graph, decreases.

This is the graph I get from running my code with averaging method:

http://www.sendspace.com/file/epsckt

And, this is the one with histogram method:

http://www.sendspace.com/file/69eon8

(please use the download link at the bottom of the page)

Elnaz

July 12th, 2012 at 4:10 pm

Hi Elnaz,

These EXIT functions are significantly different and so I suspect that there is a bug somewhere in your code. This could be in your turbo encoder, modulator, demodulator or turbo decoder. You should try to narrow it down…

Take care, Rob.

July 12th, 2012 at 5:12 pm

Dear Rob,

I just want to make sure I understand correctly. When you say “If your demodulator is working, then these plots will match each other, suggesting that the problem is in the turbo code” do you mean that e.g. the histogram graphs of different SNR values should match each other? Or that for each SNR value the histogram graph should match the averaging graph?

What I see when running your code is that by changing the SNR value the graphs change significantly as well, but by increasing the SNR the similarity between the histogram and averaging graphs grows too.

Am I correct?

Elnaz

July 13th, 2012 at 7:19 am

Dear Rob,

Can you please extend your MATLAB code for the EXIT demapper (natural QPSK mapping) to a half-rate regular LDPC EXIT chart (natural QPSK mapping)? I really need it. Please help me in this regard. I will be thankful to you.

July 13th, 2012 at 5:18 pm

Hi Elnaz,

I mean that for each SNR value, the histogram graph should match the averaging graph.

Take care, Rob.

July 13th, 2012 at 5:22 pm

Hi Ideal,

I’m afraid that I don’t have any Matlab code that combines LDPC with QPSK. The QPSK demodulator has an input and an output pertaining to the LDPC-encoded bits - in main_exit.m from QPSKEXIT.zip, the input is called a_a and the output is called a_e. Likewise, the variable node decoder has an input and an output pertaining to the LDPC-encoded bits - in main_exit_vnd.m the input is called c_a and the output is called c_e. It’s just a matter of connecting these inputs to these outputs - you should also use an interleaver and deinterleaver though.

Take care, Rob.

July 14th, 2012 at 3:10 am

Hi Rob,

I have some results. At higher SNR values, e.g. SNR = 2, my graphs almost match. Those graphs that you saw earlier were from SNR = -4.

Then, according to your advice above, this means that there is something wrong with the decoder. However, I replaced my encoder and decoder with your code exactly, only with a different transition matrix, and yet I get almost the same graphs as when using my own encoder and decoder!

At this point, I guess, I’m left without a clue as to what is wrong with my turbo decoding.

What would you suggest?

Thanks,

Elnaz

July 14th, 2012 at 12:48 pm

Dear Rob,

Thanks for the suggestion. I will try to combine the LDPC with QPSK, but how can I add an interleaver and deinterleaver? Is there any Matlab command or function which performs interleaving and deinterleaving? Thanks for your support

July 16th, 2012 at 7:19 am

Dear Rob,

I found the QPSK EXIT script written by you, but I cannot find the main_exit_vnd.m file in your resources. Please can you update this for me? Thanks

July 16th, 2012 at 6:49 pm

Hi Elnaz,

I’m afraid that you will need to track down the bug in your code and fix it. I would suggest starting from my code and gradually converting it into your code, checking that the averaging and histogram methods give the same result after each stage of the conversion. In particular, you may like to double check that your trellis and component encoder correspond to each other. Something you may like to do is rewrite the encoder so that it is trellis-based, rather than shift-register based. This way, you can copy-and-paste the transitions from the decoder to the encoder, allowing you to be sure that they match…

Take care, Rob.

July 16th, 2012 at 6:52 pm

Hi Ideal,

An interleaver and a deinterleaver can be implemented in Matlab as follows…

interleaver = randperm(length(bits));

interleaved_bits = bits(interleaver);

deinterleaved_bits(interleaver) = interleaved_bits;

You can find main_exit_vnd.m at…

http://users.ecs.soton.ac.uk/rm/wp-content/LDPC.zip

Take care, Rob.

July 16th, 2012 at 7:48 pm

Hi Rob,

I’m not clear on one point: why do you think there is a bug in the encoder/decoder part of my code? When I replaced my encoder and decoder with your component_decoder and component_encoder code, I still got almost the same graphs at SNR = -4.

Having this am I correct in assuming that it is the modulation/demodulation part rather than the coding/decoding part?

Elnaz

July 17th, 2012 at 8:37 am

Hi Elnaz,

You are right - this suggests that the problem is in the modulator or demodulator.

Take care, Rob.

July 17th, 2012 at 12:26 pm

Hi Rob,

Thanks. Therefore, my takeaway point would be, correct me if I’m wrong, that depending on the modulation/demodulation used, the histogram and averaging graphs can look different at low SNRs.

Since what I do before turbo decoding (I call it modulation) is not a standard modulation technique but the model of some sort of channel, I guess it is burdening the decoding part and causing this difference at low SNR.

Here is my BER plot. Could you please take a look at it and let me know what you think?

http://www.sendspace.com/file/d7brlw

Thanks,

Elnaz

July 17th, 2012 at 6:23 pm

Hi Elnaz,

Your BER is heading towards zero as the SNR increases, so your system is at least partially working. In my experience, the averaging and histogram methods should agree with each other, even at low SNRs. It sounds to me like your method for converting the received signal into LLRs is not quite right. You may like to use my display_llr_histograms.m Matlab code for checking this. You can see lots of discussion about this in the comments of…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/

Take care, Rob.

July 18th, 2012 at 6:28 am

Hi Rob,

I am trying to combine the QPSK (natural) demapper EXIT with the LDPC EXIT but still cannot get a clue. Please help me a bit more in this regard. Thanks for all your help

July 18th, 2012 at 9:13 am

Dear Rob,

Now I understand the concept of the QPSK demapper plus LDPC EXIT chart, but I am still confused about implementing it in Matlab. I initially have two questions.

1. I do not understand the iterations between the demapper, the variable node decoder and the check node decoder.

2. Do I need to implement the soft demodulation bitwise (symbol-wise), or do I first calculate the channel LLRs as a vector and then apply demapping? And again, I don’t know how to implement the iterations.

Please help me in this regards. Thanks a lot

July 18th, 2012 at 5:14 pm

Hi Ideal,

You can see an example of implementing iterations in main_ber.m, which you can download from this page. When you perform demodulation, you will need a for loop that demodulates each symbol individually.

Take care, Rob.

July 19th, 2012 at 2:44 am

Dear Rob,

I made some kind of flow for the transmitter and receiver for the first iteration.

Transmitter:

code bits –> interleaver –> group/mapping (QPSK) –> adding noise and transmitting.

Receiver:

Demapper input: a_a(zeros at first iteration) + received symbols

outputs: a_e, and then perform the deinterleaving

variable node decoder inputs: a_e + a_a(from check node)

outputs: c_e and then perform interleaving

Now the 2nd iteration starts at the input of the demapper, with a_a = c_e (interleaved) together with the channel signal.

But I am thinking of introducing the soft values at the input of the demapper, as was done by ten Brink - or is the above procedure OK? Thanks

And I think I need a for loop for the above process. Thanks

July 19th, 2012 at 4:36 pm

Hi Ideal,

Your flow looks good to me. I’m not sure what you mean when you say “I am thinking to introduce the soft values at the input of the demapper”. It seems to me that you are already doing this when you say “2nd iteration start at the input of demapper with a_a = c_e(interleaved) “.

As you say, you can use a for loop to perform the iteration. One thing to think about is what decoder activation order you will use within each iteration of your loop. Examples of potential activation orders include:

1) demodulator - variable node - check node

2) demodulator - variable node - check node - variable node

3) demodulator - variable node - demodulator - variable node - check node

4) demodulator - variable node - check node - variable node - check node

and so on…

I would recommend option 1, since it activates each of the decoders equally often…

Take care, Rob.

July 20th, 2012 at 6:46 pm

Hi Rob,

Are you familiar with Cramer-Rao bound analysis?

Thanks,

Elnaz

July 23rd, 2012 at 6:38 am

Dear Rob,

Thanks for all your fruitful suggestions. I appreciate it. I am really interested in calculating the EXIT function of the variable node decoder (VND). I implemented it (QPSK Gray and natural mapping) and hopefully the results are good.

I just did the iterations between the demapper and the variable node decoder as,

1st iteration on the receiver side:

demapper input (channel plus a priori LLRs, L_ma (all zeros)) = L_me (out)

VND input (a priori LLR from check node plus L_da(L_me(Interleaved))) = L_de (out)

2nd iteration

demapper input= same channel + L_ma, where L_ma = L_de(de-interleaved) and same out L_me

VND input (a priori LLR from check node plus L_da(L_me(Interleaved))) = L_de

and so on.

EXIT chart

I can calculate the VND EXIT curve using L_de and the demapper curve using L_me.

Now one question:

I want to implement puncturing in this setup. Please can you give me feedback about the puncturing implementation? Thanks again

July 23rd, 2012 at 4:59 pm

Hi Elnaz,

I’m afraid that I have never used the Cramer-Rao bound…

Take care, Rob.

July 23rd, 2012 at 5:08 pm

Hi Ideal,

The simplest way to implement puncturing is to do it at the same time as interleaving.

You can design the interleaver/puncturer using…

interleaver = randperm(number_of_ldpc_encoded_bits);

interleaver_puncturer = interleaver(1:number_of_bits_after_puncturing);

You can perform puncturing in the transmitter using…

interleaved_punctured_bits = ldpc_encoded_bits(interleaver_puncturer);

You can perform puncturing in the receiver using…

apriori_interleaved_punctured_llrs = extrinsic_ldpc_encoded_llrs(interleaver_puncturer);

You can perform depuncturing in the receiver using…

apriori_ldpc_encoded_llrs = zeros(1,number_of_ldpc_encoded_bits);

apriori_ldpc_encoded_llrs(interleaver_puncturer) = extrinsic_interleaved_punctured_llrs;

Take care, Rob.

July 24th, 2012 at 11:31 am

Dear Rob,

I am really thankful for your responses. One question about the soft demodulation.

In your soft_demodulate.m script, you are converting from a symbol to bitwise extrinsic LLRs (a_e). Now my question is, do we need to take the two variable nodes into account as well? I.e., one symbol consists of two bits (x1 x0), so there is one variable node of the LDPC code for each bit, meaning that for one symbol we need to use two variable nodes connecting to the check nodes. I am confused here.

Secondly, if I want to plot the EXIT curve between the check node and the demapper, do I need to calculate the demapper curve after performing the complete iterations, using the LLR (a_e) values from the last iteration (in your average MI script), so that the demapper EXIT curve shows some slope? Can you tell me the right way please. Thanks again

July 24th, 2012 at 5:45 pm

Hi Ideal,

There is no need to modify the soft demodulator to take the variable nodes into account. Taking care of the variable nodes is the job of the variable node decoder.

There are a few ways of plotting the EXIT charts of your 3-stage concatenated scheme. The way that I would recommend is to treat the LDPC decoder as a black box that the demodulator exchanges LLRs with. You can then draw a single EXIT function for this black box. You can achieve this by using the following process for each value of Ia in the set 0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0:

1) Generate some random bits

2) Encode these bits using the LDPC encoder, in order to generate the encoded bits

3) Use these encoded bits as the input to generate_random_llrs, together with the current value of Ia

4) Give the resultant LLRs to the LDPC decoder and perform 10 decoding iterations between the variable node and check node decoders, then obtain the extrinsic LLRs out of the variable nodes

5) Measure the mutual information Ie of these extrinsic LLRs

You can then plot Ie vs Ia.
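Steps 3 and 5 depend on generating a priori LLRs with a chosen quality and measuring the mutual information of the resulting extrinsic LLRs. A Python sketch of the two ingredients (this mirrors the standard consistent-Gaussian LLR model and the averaging method used by generate_llrs.m and measure_mutual_information_averaging.m on this page; function names are illustrative):

```python
import math
import random

def generate_llrs(bits, sigma, rnd):
    # Consistent Gaussian LLR model: mean (sigma^2 / 2) * x, variance sigma^2,
    # where x = +1 for bit value 0 and x = -1 for bit value 1
    return [sigma * sigma / 2.0 * (1 - 2 * b) + sigma * rnd.gauss(0.0, 1.0)
            for b in bits]

def measure_mi_averaging(bits, llrs):
    # Averaging method: I = 1 - E[log2(1 + exp(-x * L))]
    total = 0.0
    for b, llr in zip(bits, llrs):
        x = 1 - 2 * b
        total += math.log2(1.0 + math.exp(-x * llr))
    return 1.0 - total / len(bits)
```

A small sigma gives mutual information near 0 and a large sigma gives mutual information near 1; sweeping sigma yields the Ia values for the EXIT function.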

Take care, Rob.

July 26th, 2012 at 1:42 am

Dear Rob,

Thanks for the simple example.

One thing please. I want to use your modulate.m and soft_demodulate.m scripts for the 4-PAM constellation, where the 4-PAM symbols = {0, 1/3, 2/3, 1} and the symbol energy I calculate as E_s = (1/M)(0+1/3+2/3+1)d = (1/2)d (optical).

Can I use your scripts by just changing the constellation labels and bit labels, and what do I need to multiply the symbols by for normalisation? I.e.,

constellation_points = [0; 1/3; 2/3; 1]/(which factor rather than sqrt(2)).

Thanks

July 26th, 2012 at 5:19 pm

Hi Ideal,

Yes - it is as simple as changing the constellation points and bit labels. You should normalise by using…

constellation_points = constellation_points/sqrt(mean(abs(constellation_points).^2))
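For the 4-PAM example above, this normalisation works out as follows (Python for illustration):

```python
import math

points = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]               # unipolar 4-PAM levels
mean_energy = sum(p * p for p in points) / len(points)  # = 14/36 = 7/18
points = [p / math.sqrt(mean_energy) for p in points]   # scale to unit mean energy
```

so here the normalisation factor is sqrt(7/18), approximately 0.624, rather than sqrt(2).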

Take care, Rob.

July 27th, 2012 at 1:50 am

Hi Rob,

Your code is very helpful. I need to generate LLR values of the coded bits at the decoder. I tried your code and it didn’t work.

% Calculate encoded extrinsic transition log-confidences. This is similar to
% Equation 2.18 in Liang Li’s nine month report or Equation 4 in the BCJR paper.
deltas2 = zeros(size(transitions,1), length(apriori_encoded_llrs));
for bit_index = 1:length(apriori_encoded_llrs)
    for transition_index = 1:size(transitions,1)
        deltas2(transition_index, bit_index) = alphas(transitions(transition_index,1), bit_index) + uncoded_gammas(transition_index, bit_index) + betas(transitions(transition_index,2), bit_index);
    end
end

% Calculate the encoded extrinsic LLRs. This is similar to Equation 2.19 in
% Liang Li’s nine month report.
extrinsic_encoded_llrs = zeros(1, length(apriori_encoded_llrs));
for bit_index = 1:length(apriori_encoded_llrs)
    prob0 = -inf;
    prob1 = -inf;
    for transition_index = 1:size(transitions,1)
        if transitions(transition_index,4) == 0
            prob0 = jac(prob0, deltas2(transition_index, bit_index));
        else
            prob1 = jac(prob1, deltas2(transition_index, bit_index));
        end
    end
    extrinsic_encoded_llrs(bit_index) = prob0 - prob1;
end

Does it compute the LLRs of (lc+ld+le+lf)? I need just lc.

Thanks,

Ushi

July 27th, 2012 at 1:59 am

Dear Rob,

Thanks for the response.

You also suggested an iterative loop for QPSK, but I am thinking: what if I just generate the random LLRs, give them to the variable node input, then take the extrinsic LLRs out of the variable node and measure the MI of these extrinsic LLRs? In this situation, I don’t see any iterations between the demapper and the variable node. That’s why I was doing the following steps for a (3,6) half-rate code, so dv = 3.

Receiver side:

first iteration demapper:

1) Lm_e = perform soft demodulation with all-zero a priori LLRs (from the VND).

2) Ld_e = input Lm_e to the VND plus all-zero a priori LLRs (incoming from the CND, dv-1 of them).

2nd iteration of the demapper:

1) Lm_e = perform soft demodulation with a priori LLRs (Ld_e from the VND).

2) Ld_e = input Lm_e to the VND plus a priori LLRs (from the CND, this time not all zeros (random)).

This iterative process carries on until the iteration count ends or the MI from the demapper stops improving.

After that I take the final Ld_e from the above iterative process and use it to measure the VND extrinsic MI for each of the IA points, and this is called the VND EXIT chart?

July 27th, 2012 at 6:37 pm

Hello Ushi,

The code you have provided will calculate extrinsic LLRs that pertain to the output bits of the corresponding component encoder. Using the notation from my main_ber.m file, these LLRs would be c_e for the first component decoder and d_e for the second component decoder.

Take care, Rob.

July 27th, 2012 at 6:48 pm

Hi Ideal,

You are close to getting it right, but your variable node has d_v+1 ports - one connected to the demapper and d_v connected to the check nodes. Therefore you should be generating d_v random apriori LLRs and you should be measuring the mutual information of d_v extrinsic LLRs. You currently have d_v-1 LLRs in both cases.

The LLR that is output on a particular one of the variable node's ports should be equal to the sum of the LLRs that are input at the other d_v ports. So,

1) the LLR that is passed to the demapper by the variable node should be equal to the sum of the d_v number of randomly generated LLRs.

2) the LLR that the variable node outputs on a particular one of its check-node-connected ports should be equal to the sum of the LLR provided by the demapper and the d_v-1 randomly generated LLRs that are provided on the other check-node-connected ports.
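As a minimal Matlab sketch of this variable node update (the variable names and values here are illustrative, not taken from any particular file):

```matlab
% Variable node with d_v+1 ports: port 1 connects to the demapper and
% ports 2 to d_v+1 connect to the check nodes (illustrative values)
d_v = 3;
demapper_llr = 1.2;                  % extrinsic LLR provided by the demapper
cnd_llrs = [0.4, -0.7, 2.1];         % d_v apriori LLRs from the check nodes
llrs_in = [demapper_llr, cnd_llrs];  % 1 x (d_v+1) vector of input LLRs

% The extrinsic LLR output on each port is the sum of the LLRs that are
% input on all of the other ports
llrs_out = sum(llrs_in) - llrs_in;
```

Here llrs_out(1) is the LLR passed to the demapper and llrs_out(2:end) are the LLRs passed to the check nodes.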

Take care, Rob.

July 28th, 2012 at 1:59 am

Dear Rob,

Thanks, it is right now. One thing about the soft demodulation: I am wondering whether you are using the same equation (2) as in ten Brink's paper "Iterative demapping for QPSK modulation", Electronics Letters, July 1998?

I think that equation (2) calculates a bitwise LLR value. If soft_demodulate.m is the output of that same equation, then I am wondering where the first additive term is. If not, then what is the possibility of using that equation? I mean, how can we implement it? Thanks for the discussion.

July 30th, 2012 at 5:45 am

Dear Rob,

One more thing about plotting the trajectory for QPSK. I am using the following code. It works well for Gray mapping, but not for natural mapping.

IA = 0;

IE = 0;

results = [IA,IE];

snr_L = 10^(SNR*0.1);

sig_Ch = sqrt(8 * 0.5 * snr_L);

% Trajectory (dv=3,dc=6)

for iteration_index = 1 : 100

sig_A = (interp1(J,SIGMA,IA));

sig_D = sqrt(sig_Ch^2 + 3*(sig_A.^2)); % one channel + 3 apriori

sig_E = sqrt((3 - 1).*sig_A.^2 + sig_D.^2); % VND extrinsic

IE = interp1(SIGMA,J,sig_E);

results = [results; [IA,IE]];

sig_C = interp1(J,SIGMA,(1-IE));

sig_C(sig_C>20) = 20; sig_EC = sqrt((6 - 1).*sig_C.^2); % CND extrinsic (dc=6)
sig_EC(sig_EC>20) = 20; sig_EC(sig_EC<=0) = 0;

IA = 1 - interp1(SIGMA,J,sig_EC);

results = [results; [IA,IE]];

end

plot(results(:,1), results(:,2),'g-.'); hold on

July 31st, 2012 at 6:00 pm

Hi Ideal,

Equation (2) in the paper you mention calculates the aposteriori LLR. By contrast, my soft demodulator calculates the extrinsic LLR, which is equal to the aposteriori LLR minus the apriori LLR. The first additive term is the apriori LLR. Therefore, my soft demodulator is equivalent to the right hand term in equation (2) of that paper.

It looks to me that your trajectory code is using ten Brink’s equations for the VND and CND EXIT functions. These hold for Gray mapping, but not for natural mapping, as you have found. When using natural mapping, you will need to use your simulation to get the VND EXIT function, rather than an equation.

Take care, Rob.

August 1st, 2012 at 4:40 am

Dear Rob,

It means that, in order to calculate the EXIT chart, we can use the extrinsic LLR rather than the aposteriori LLR. So we can just implement equation (2) without the a priori LLR and feed that extrinsic LLR into the LDPC decoder after interleaving?

Or do we still need to feed the aposteriori LLR to the LDPC decoder? If that is the case, then how can we introduce the a priori LLR in simulations? Thanks

August 1st, 2012 at 4:42 pm

Hi Ideal,

Yes, it is the extrinsic LLRs that should be iteratively exchanged with the variable node decoder. The aposteriori LLRs should only be used once the iterative decoding process has finished, in order to provide the final output. If the aposteriori LLRs are used during the iterative decoding process, then this will become a positive-feedback loop and the performance will be severely degraded.

Take care, Rob.

August 3rd, 2012 at 2:45 am

Dear Rob

I see in your script for calculating the demapper EXIT curve that you wrote

N0 = 1/SNR_linear; then in the channel model sigma^2 = N0/2, and in the demodulation N0, according to the pdf of n. But I am not sure where you define the code rate R?

I think sigma^2 = (2*Rm*Rc*EbNo_linear)^-1, where Rm=2 for QPSK and Rc is the code rate, because ten Brink defines the demapper EXIT curve as Ie = J(sqrt(8*Rc*EbNo_linear)) for Gray mapping in his paper "Design of low-density parity-check codes for modulation and detection", equation (19), and plots the results in Fig. 6 for the 1x1 case. It gives a different result from your demapper script. Can you please comment? Thanks

August 4th, 2012 at 2:54 pm

Hi Ideal,

There is a difference between SNR and Eb/N0 - I am using SNR. The relationship between them is Eb/N0 (dB) = SNR (dB) - 10*log10(R*log2(M)), where R is the channel coding rate and M is the number of constellation points, e.g. M=4 in QPSK.
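As a quick sketch of this conversion (the values of R, M and SNR_dB below are just examples):

```matlab
R = 1/3;       % channel coding rate
M = 4;         % number of constellation points, e.g. QPSK
SNR_dB = 2;    % example SNR

EbN0_dB = SNR_dB - 10*log10(R*log2(M));    % SNR to Eb/N0
SNR_dB2 = EbN0_dB + 10*log10(R*log2(M));   % and back again
```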

Take care, Rob.

August 6th, 2012 at 5:38 am

Dear Rob,

Thanks. It means that I can replace N0 with (R*log2(M)*EbNo_linear)^-1 in your soft_demodulate script, in order to express it in terms of Eb/N0. I think I am right now? Thanks again

August 6th, 2012 at 5:45 pm

Hi Ideal,

That looks correct to me.

Take care, Rob.

August 15th, 2012 at 4:10 am

Dear Rob,

Thanks but I have again a basic question please.

Actually, for symbol mappings like QPSK/4-PAM, we write the symbol energy as Es = (1/M)*sum(|x|^2) in the electrical domain and Es = (1/M)*sum(|x|) in the optical domain, and Eb/No = (1/(R*log2(M))) * Es/No. My confusion is that, when we define No, we simply write No = (R*log2(M)*Eb/No)^-1; where is the value of Es?

I was thinking that in BPSK Es = 1, which is why we don't need to write it, but for QPSK/4-PAM, Es will not be 1, or am I wrong? Please can you correct me. Thanks a lot

August 15th, 2012 at 5:12 pm

Hi Ideal,

Typically, people design all of their modulation schemes to have Es = 1. For example, in QPSK we multiply the constellation points by 1/sqrt(2), in order to make Es = 1. In 16QAM, we multiply the constellation points by 1/sqrt(10) to achieve this.
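For example, the scaling can be performed automatically for any constellation, rather than hard-coding the 1/sqrt(2) or 1/sqrt(10) factors:

```matlab
% Normalise a constellation so that Es = mean(|x|^2) = 1
qpsk = [1+1i, 1-1i, -1+1i, -1-1i];
qpsk = qpsk/sqrt(mean(abs(qpsk).^2));       % equivalent to multiplying by 1/sqrt(2)

[re, im] = meshgrid([-3 -1 1 3]);           % square 16QAM grid
qam16 = re(:) + 1i*im(:);
qam16 = qam16/sqrt(mean(abs(qam16).^2));    % equivalent to multiplying by 1/sqrt(10)
```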

Take care, Rob.

August 16th, 2012 at 6:17 am

Dear Rob,

Thanks for answering. I am thinking of doing some derivations for natural mapping, in order to calculate the EXIT chart. Can you give me any advice in this matter please? Thanks

August 16th, 2012 at 4:34 pm

Hi Ideal,

The first step is to calculate the probabilities P(y|x) for each constellation point x, where y is the received signal. After that, you can convert this into a probability for each bit value.
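As a rough Matlab sketch of these two steps for a natural-mapped QPSK constellation over AWGN, assuming Es = 1 and using the Max-Log approximation of the Jacobian logarithm (the values of y and N0 are just examples, and the labelling is an assumption for illustration):

```matlab
constellation = exp(1i*(pi/4 + pi/2*(0:3).'));  % unit-energy QPSK points
labels = [0 0; 0 1; 1 0; 1 1];                  % natural labelling of the points

y  = 0.6 + 0.8i;  % one received symbol
N0 = 0.5;         % noise power spectral density

% P(y|x) for each constellation point x, up to a common scaling factor
log_probs = -abs(y - constellation).^2/N0;

% Convert the symbol probabilities into an LLR for each of the 2 bits
llrs = zeros(1,2);
for bit = 1:2
    p0 = max(log_probs(labels(:,bit)==0));  % Max-Log approximation of
    p1 = max(log_probs(labels(:,bit)==1));  % the Jacobian logarithm
    llrs(bit) = p0 - p1;
end
```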

Take care, Rob.

August 17th, 2012 at 12:26 am

Dear Rob,

That looks quite OK. I think I can calculate the conditional probabilities for each constellation point, assuming the AWGN channel. The second point I didn't understand: why should I need to convert the conditional probabilities into probabilities for each bit? Can you elaborate a bit more please, or give any reference where I can get a start-up? Thanks

August 17th, 2012 at 5:32 pm

Hi Ideal,

You need to calculate probabilities for each bit because you want to obtain LLRs for each bit. References on BICM-ID will give you the mathematics behind all this.

Take care, Rob.

August 20th, 2012 at 3:15 pm

Hi Rob,

I was wondering if there is a known-to-be best performing equalizer the same way as we have LDPC code as the best performing coding strategy?

Basically, I am convolving a matrix of coded bits with a 2-D pulse shape which introduces ISI in both dimensions and I need to use equalizer to mitigate the problem.

Thanks,

Elnaz

August 20th, 2012 at 5:14 pm

Hi Elnaz,

Non-linear equalisers are the best performing. A good example is a turbo equaliser, which uses the BCJR algorithm.

Take care, Rob.

August 20th, 2012 at 9:15 pm

Hi Rob,

I will be doing joint iterative decoding, timing recovery, and equalization. I have done the first two but I don\’t know how should I combine my Turbo-decoder (BCJRs) to include equalizer (Turbo with BCJR) as well. Are there any resources or codes that I can benefit from? What do you suggest?

Thanks,

Elnaz

August 21st, 2012 at 2:20 pm

Dear Rob,

I am still not clear how to start calculating the analytical derivation of EXIT chart for natural QPSK mapping. Please elaborate it bit more. Thanks

August 21st, 2012 at 5:59 pm

Hi Elnaz,

My advice would be to keep the turbo equaliser separate from the other system components. You can then connect the various system components through interleavers and then perform iterative decoding among them all.

Take care, Rob.

August 21st, 2012 at 6:04 pm

Hi Ideal,

I didn’t realise that you are trying to derive an analytic expression for the EXIT function of the QPSK demapper. I’m afraid that I’ve only ever considered the simplest of cases for analytic EXIT functions (namely LDPC variable and check node decoders) - so I don’t know where to get started with this. It looks like this reference may be of use to you though…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4114253&tag=1

Take care, Rob.

August 21st, 2012 at 6:49 pm

Hi Rob,

I know that there are 3 blocks - timing recovery, turbo decoder and equalizer - which should iterate together. What I don't know is how I should exchange the output LLRs between the decoder and the equalizer. Should I exchange extrinsic LLRs or full LLRs? And how?

Also, I am under the impression that a "turbo equalizer with BCJR" is the same as the "turbo decoder" I already have, with the only difference that in the equalizer we output the LLRs for the coded bits instead of the LLRs for the message bits. Am I correct? Or is the equalizer totally something else?

Thanks a lot,

Elnaz

August 22nd, 2012 at 4:34 am

Dear Rob,

This old paper,

read.pudn.com/downloads144/doc/comm/626085/a2.pdf, seems to be doing exactly that. But instead of keeping the turbo equalizer separate, it combines turbo decoder with an equalizer to build a turbo equaliser.

More importantly, my concern is that, since I don't have access to the channel transfer function (my channel is formed when I 2-D convolve a matrix of coded bits with a 2-D pulse shape), how can I use the BCJR in my equalizer, since I don't have the trellis of the channel?

Thanks,

Elnaz

August 22nd, 2012 at 2:42 pm

Hello Dear Rob,

I want to use a general turbo code (which uses M-QAM or M-PSK) in a system where space-time coding is used too. The channel is Rayleigh fading.

Could you please guide me?

Best Regards

ALI

August 22nd, 2012 at 5:51 pm

Hi Ali,

The turbo code that is provided here is compatible with M-ary space-time coding, although I’m afraid that I don’t have any Matlab code for the latter. You just need to build a space-time encoder that accepts bits from the turbo encoder and a space-time decoder that provides corresponding LLRs for the turbo decoder.

Take care, Rob.

August 22nd, 2012 at 6:00 pm

Hi Elnaz,

“how should I exchange the output llrs between the decoder and the equalizer. Should I exchange extrinsic llrs or full llrs? and how?” You should exchange extrinsic LLRs that pertain to the turbo-encoded bit sequence. In order to obtain this, you will need to modify component_decoder.m so that it also outputs extrinsic_encoded_llrs. Also, you will need to obtain extrinsic LLRs for the systematic bits by adding the uncoded_extrinsic_llrs that are provided by both component decoders. Finally, the last three bits in the extrinsic_uncoded_llrs can be extracted to give you extrinsic LLRs for the termination bits.

“Or the equalizer is totally something else?” The equalizer should exploit knowledge about the dispersion in the channel to recover LLRs about the transmitted bits. It can represent the dispersion using a trellis and apply the BCJR algorithm to get the LLRs.

If your receiver does not have knowledge of the channel dispersion then you will not be able to use an equaliser unless you use channel estimation to obtain this knowledge.

Take care, Rob.

August 22nd, 2012 at 8:50 pm

Hi Rob,

I see. Therefore, a turbo equaliser is/can be an equaliser + turbo decoder, combined with the turbo principle.

My question, therefore, boils down to "how can I convert my channel into a trellis?". All my channel does is convolve a matrix of coded bits with a 2-D pulse shape. I have the pulse shape, but I do not know how to convert it into a transfer function in a suitable form that can be translated into a trellis.

Is there a way to do this?

Thanks,

Elnaz

August 23rd, 2012 at 12:21 pm

Hello dear Rob

Thanks for your help. Where is that file? Does it have an adjustable rate?

Best Regards

Ali

August 23rd, 2012 at 5:58 pm

Hi Elnaz,

You need a trellis having M^(T-1) states, where M is the number of constellation points you are using and T is the number of consecutive received symbols that are affected by a particular transmitted symbol. There should be M number of transitions that emerge from each state and M number of transitions that merge into each state.

The trellis I have described is shown in the following paper, where L = T-1…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1267050

Take care, Rob.

August 23rd, 2012 at 6:01 pm

Hi Ali,

You can download my turbo code from…

http://users.ecs.soton.ac.uk/rm/wp-content/matlabturbo.zip

This doesn’t have an adjustable rate, but you could modify it to do this by using puncturing or by repeating some of the encoded bit sequences.

Take care, Rob.

August 24th, 2012 at 7:58 pm

Hi Prof. Rob,

Thanks for your attention

Best Regards

Ali

August 26th, 2012 at 4:05 pm

Hi Prof.Rob

Could you please give me a code that achieves capacity (or gets close to capacity, like an LDPC code)? If Matlab code is available, please give it to me.

Best Regards

Ali

August 26th, 2012 at 7:10 pm

Excuse me, which algorithm does your code follow? log-MAP, max log-MAP or another one?

Thanks

Ali

August 26th, 2012 at 8:43 pm

hi Dear Prof.

Excuse me, I have another question:

Suppose we have two space-time codes. The first code achieves a better capacity than the second one, but in some situations the second code has a better BER. Now, we want to use a strong outer channel code. Can we say that, because the first code has a better capacity at all SNRs, we can use a good channel encoder to prove this claim?

August 28th, 2012 at 6:35 pm

Hi Rob,

why does a turbo code work worse than an LDPC code?

thanks

August 29th, 2012 at 8:32 am

Hi Ali,

I have some Matlab code for drawing the EXIT chart of an LDPC code at…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/#comment-10965

…you can modify this to build an LDPC decoder.

My Matlab code for the turbo decoder can use the Log-MAP, the Approx-Log-MAP (which uses a lookup table to approximate the Jacobian logarithm) and the Max-Log-MAP. You can choose between these by modifying jac.m.
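The three options can be summarised as follows (the lookup table itself lives in jac.m; a and b are just example operands):

```matlab
% The Jacobian logarithm jac(a,b) = ln(e^a + e^b)
a = 1.3; b = 0.4;

exact  = max(a,b) + log(1 + exp(-abs(a-b)));  % Log-MAP
approx = max(a,b);                            % Max-Log-MAP
% The Approx-Log-MAP replaces the log(1 + exp(-|a-b|)) correction term
% with a value read from a small lookup table that is indexed by |a-b|
```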

The space-time code that offers the greatest channel capacity is the one that can give the best performance when combined with a convolutional code.

Take care, Rob.

August 29th, 2012 at 8:39 am

Hi Jack,

The relative advantages and disadvantages of LDPC and turbo codes are a matter of debate. My opinion is that because LDPC codes have longer interleavers than the equivalent turbo codes, the LDPC designer has more design choices to make, yielding a greater opportunity to find a good design.

Take care, Rob.

August 29th, 2012 at 6:37 pm

Hi Rob,

I have a question about how to use your BCJR decoder as an equalizer.

Say the channel model (pulse shape) is a vector of 5 coefficients, which makes a 4-tap delay line. How should I change your transition matrix to include these coefficients in the tap-delay line?

Thanks,

Elnaz

August 30th, 2012 at 6:03 pm

Hi Elnaz,

The transition matrix should have M^T rows. Each row should have one column for the from state, one column for the to state and T columns that list every possible combination of T constellation points. The from state depends on the first T-1 constellation points and the to state depends on the last T-1 constellation points. You also need log2(M) columns to list the bits that are mapped to the last of the T constellation points.
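As a rough sketch of this construction for BPSK (M = 2) and a channel spanning T = 3 symbols; note that the column layout here is an assumption for illustration, not the exact format used in my Matlab files:

```matlab
M = 2; T = 3;
points = [-1 +1];                        % BPSK constellation points
transitions = zeros(M^T, 2 + T + 1);     % [from, to, T symbols, bit]
for row = 1:M^T
    idx  = dec2base(row-1, M, T) - '0';  % T constellation indices, oldest first
    from = polyval(idx(1:T-1), M) + 1;   % state given by the first T-1 symbols
    to   = polyval(idx(2:T),   M) + 1;   % state given by the last T-1 symbols
    transitions(row,:) = [from, to, points(idx+1), idx(T)];
end
```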

Take care, Rob.

August 30th, 2012 at 6:28 pm

Hi Rob,

I know how your code and transition matrix work for the decoding job. What I was asking is how I should change your code to be an equalizer instead of a decoder. Could you please explain which parts of the code should be changed? For example, what are the uncoded_apriori_llrs here, since all the bits pass through the channel and there are no systematic bits to use for this purpose.

Thanks,

Elnaz

August 31st, 2012 at 6:21 pm

Hi Elnaz,

The calculation of alphas, betas, deltas and extrinsic llrs should remain very similar to how they are now. The big change will be in the calculation of the gammas.

Take care, Rob.

September 3rd, 2012 at 2:57 am

Hello dear Rob,

Thank you for your constructive explanations. Excuse me, how can I change your Matlab code for the turbo code to M-ary modulations? You have presented these lines:

% BPSK demodulator

% These labels match those used in Figure 2.11 of Liang Li's nine month report.

a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

c_c = (abs(c_rx+1).^2-abs(c_rx-1).^2)/N0;

d_c = (abs(d_rx+1).^2-abs(d_rx-1).^2)/N0;

e_c = (abs(e_rx+1).^2-abs(e_rx-1).^2)/N0;

f_c = (abs(f_rx+1).^2-abs(f_rx-1).^2)/N0;

What will the new commands be?

Thanks alot

September 3rd, 2012 at 6:12 pm

Hello Ali,

Anything higher than M=2 becomes a lot more complicated than these equations. You can download a soft demodulator for higher order modulation from…

http://users.ecs.soton.ac.uk/rm/wp-content/QPSKEXIT.zip

Take care, Rob.

September 4th, 2012 at 5:11 am

Dear Rob,

Please correct me if I'm wrong. In converting the BCJR decoder into an equalizer, I have my transition matrix for 16 states and the encoded values here are numbers in the 0:16 range (there is no feedback path, so generating the transition matrix is a lot easier and can be done automatically). Therefore, I think, I can no longer do the log(P0/P1) normalization when calculating the encoded gammas the way you've done.

However, I think, I calculate |channel_output - encoded_bit|^2/(2*variance) for each row of my transition matrix separately, and then add those values that correspond to uncoded bit = 0 into prob0 and the rest into prob1, and that should do it. Everything else stays the same. What do you think? Correct?

Elnaz

September 4th, 2012 at 6:41 pm

Hi Elnaz,

Nothing that you have said here seems wrong to me - my suggestion would be to build it and then test it by comparing the EXIT curves obtained using the averaging and histogram methods - these should match each other.

Take care, Rob.

September 5th, 2012 at 3:21 am

Hi Rob,

I have changed:

"if transitions(transition_index, 4)==0

encoded_gammas(transition_index, bit_index) = apriori_encoded_llrs(bit_index);

end"

to

"encoded_gammas(transition_index, bit_index) = -(encoded_input(bit_index)-transitions(transition_index,4))^2/(2*sigma2);"

which is calculated for all the lines in the transitions matrix.

Also, I've changed the transitions matrix according to Figure 4 in the paper you referred to earlier. These are the only changes I've made to convert the BCJR decoder into an equalizer. Also, I've set apriori_uncoded_llrs to all zeros.

Now, I expect that if I pass a stream of bits (-1s and +1s) through the channel (convolve the bits with the E2PR2 response h = [1 4 6 4 1]), then add the noise, and then pass the result through the equalizer, the extrinsic_uncoded_llrs from the equalizer should have the same signs as the original message bits. I think that even before going to BER or EXIT plots, this simple sign check can show whether the equalizer is working.

I don't get the result I expect, i.e. the signs don't match.

Do you know what might be wrong? How can I debug? Do you think what I have done so far is correct?

Thank you,

Elnaz

September 5th, 2012 at 5:21 pm

Hi Elnaz,

Your encoded_gammas needs to be a function of not only the constellation point in transitions(transition_index,4), but also the constellation points that relate to the previous symbol periods.

The reason why the signs may not be matching could be because you have strong dispersion. To do your test properly, you may like to remove dispersion - this will effectively transform your equaliser into the soft demodulator of…

http://users.ecs.soton.ac.uk/rm/wp-content/QPSKEXIT.zip

Take care, Rob.

September 5th, 2012 at 5:36 pm

Dear Rob,

I did not understand the first part. Did you mean that my calculation of encoded_gammas is wrong? I am putting the power in the exponential term as my encoded_gammas for each line in the transitions matrix separately, i.e. I calculate |channel_output - encoded_bit|^2/(2*variance) for each row of my transition matrix separately (channel_output is simply the ISI-channelled output).

In your comment before the last one, you mentioned it was correct. Have I missed something?

Thanks,

Elnaz

September 5th, 2012 at 6:15 pm

Hi Elnaz,

You only have one encoded gamma, not one per column in the transitions matrix. This is because you only have one received symbol.

I’m afraid that I must refer you back to the following paper for a clearer explanation…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1267050

I don’t think that I will be able to explain it any better than the example given in Figure 5 of that paper.

Take care, Rob.

September 6th, 2012 at 4:21 pm

Hi Rob,

I am testing my BCJR equaliser with the channel response E2PR2 (h = [1 4 6 4 1]).

The part that I do not understand is that I have to discard the last 4 extrinsic_uncoded_llrs to get the signs to match. If this is because of the zero-padding that the convolution in MATLAB does, then I would have had to discard the first and last 2 values, rather than the last 4.

I am trying this with termination both on and off and it does not make a difference (I am testing with no encoder, just random BPSK bits). Also, there is no additive noise here.

Also, even after discarding the last four, sometimes the first bits are incorrect. All the rest match.

Do you know why this is so?

Thank you,

Elnaz

September 7th, 2012 at 12:37 pm

Hi Elnaz,

I’m afraid that I don’t know why this is - I’ve never actually programmed a turbo equaliser myself. Sorry for not being much help.

Take care, Rob.

September 7th, 2012 at 6:57 pm

Hi Rob,

It’s alright. You’ve been a great help so far.

Do you know of a reference for BER plot for a BCJR equalizer for EEPR2 response? I’ve searched a lot and I was not able to find any.

Thanks,

Elnaz

September 8th, 2012 at 5:42 pm

Sorry Elnaz, I’m afraid that I don’t know where to find that BER plot. Take care, Rob.

September 9th, 2012 at 10:29 pm

Dear Rob

Please, if possible, can you provide me with the code for the turbo code with DS-CDMA or WCDMA, and the documentation of this work, because it is very important.

Thank you,

Belal

September 10th, 2012 at 7:08 pm

Hello Belal,

I’m afraid that I don’t have any code for CDMA.

Take care, Rob.

September 11th, 2012 at 9:10 am

Dear Dr Rob,

Thank you for all help,

And if possible, can I insert a spreading code (Walsh code) into your UMTS turbo code, because this work is part of the requirements of my master's thesis.

Thank you for all help,

Kind Regard,

Belal

September 11th, 2012 at 5:14 pm

Hi Belal,

It would certainly be possible to combine my turbo code with a Walsh spreading code. However, I’m afraid that I don’t have any Matlab code to do this.

Take care, Rob.

September 14th, 2012 at 12:51 am

Dear Rob,

I have one simple question about the QPSK demapper EXIT chart using the natural mappings (b_1,b_0) –> X(QPSK Symbol). Example

1. I have input bits for the QPSK mapper (b_1,b_0) mapp–> QPSK Symbol (using natural mapping).

2. add noise and transmit.

3. At the receiver I try to find the extrinsic llr of bit b_1 –>e_llr(b_1) which needs a priori information from bit b_0 –>a_llr(b_0).

But I see in your QPSK EXIT chart script that you are generating a random input vector (a), converting it into QPSK symbols and, at the demapper, calculating the extrinsic LLR e_llr(a) with the a priori LLR a_llr(a). I am confused about where I can relate e_llr(b_1) and a_llr(b_0),

and, the other way around, if I want to calculate the extrinsic LLR of bit b_0, e_llr(b_0), I need the a priori LLR of bit b_1, a_llr(b_1); what will the difference be in your script between the two cases?

can you please highlight this concept? Thanks and kind regards

September 14th, 2012 at 4:55 pm

Hi Ideal,

In soft_demodulate.m we have…

function extrinsic_llrs = soft_demodulate(apriori_llrs, rx, channel, N0)

Here apriori_llrs is a vector containing [a_llr(b_0), a_llr(b_1)] and extrinsic_llrs is a vector containing [e_llr(b_0), e_llr(b_1)].

Take care, Rob.

September 15th, 2012 at 11:32 am

Dear Rob,

That's right, and now, if I want to calculate the extrinsic LLR e_llr(b_1) with the a priori LLR a_llr(b_0), how can I separate these vectors, conditioned on each other? I want the output extrinsic LLR e_llr(b_1) given the a priori LLR a_llr(b_0), or the extrinsic LLR e_llr(b_0) given the a priori LLR a_llr(b_1).

Thanks for your responses

September 15th, 2012 at 5:25 pm

Hi Ideal,

If you look closely at soft_demodulate.m, you will see that e_llr(b_1) is calculated based only on a_llr(b_0). Here, a_llr(b_1) does not affect this calculation. This is achieved using the following if statement…

if bit_index2 ~= bit_index

So, my Matlab code is already doing what you are asking for.

Take care, Rob.

September 17th, 2012 at 1:00 am

Dear Rob,

Thanks for that, now I've got it. It means that if I want to see the extrinsic LLR e_llr(b_0), I can extract the odd terms of the extrinsic_llrs vector containing [e_llr(b_0), e_llr(b_1)], and the even terms for e_llr(b_1). Am I right now?

And, as you said, e_llr(b_1) is calculated based only on a_llr(b_0), so in this way I can see the histogram of e_llr(b_1) based on a_llr(b_0) and can calculate the extrinsic mutual information of the extrinsic LLRs.

I am really thankful for all your help. You are so great.

September 17th, 2012 at 5:23 pm

Hi Ideal,

It looks like you’ve got it figured out now.

Take care, Rob.

September 19th, 2012 at 2:13 am

Dear Rob,

Thanks for the responses.

Now I want to compare the histogram of e_llr(b_1) given a_llr(b_0) with the normal pdf and see whether the two fit.

I think that, in order to perform this comparison, I can first calculate the histogram of e_llr(b_1) using Matlab's hist function, then hold the plot, and then, using the normpdf(vector, mean(e_llr(b_1)), 0.5*mean(e_llr(b_1))) Matlab function, I can see whether the two plots fit.

Am I right or give me any suggestion please? Thanks

September 19th, 2012 at 8:05 am

Hi Ideal,

That sounds right to me.
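A sketch of that comparison, using example LLRs in place of e_llr(b_1) and the measured mean and standard deviation (so that the Statistics Toolbox normpdf function is not required):

```matlab
e_llrs = 2 + 2*randn(1,1e5);                   % stand-in for e_llr(b_1)

[counts, centres] = hist(e_llrs, 50);
bin_width = centres(2) - centres(1);
bar(centres, counts/(sum(counts)*bin_width));  % normalise the histogram to a pdf
hold on;

mu = mean(e_llrs); sigma = std(e_llrs);
x = linspace(min(e_llrs), max(e_llrs), 200);
plot(x, exp(-(x-mu).^2/(2*sigma^2))/(sigma*sqrt(2*pi)), 'r');
```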

Take care, Rob.

September 22nd, 2012 at 9:42 am

Hi Dear Prof.

Two weeks ago, I wanted QPSK modulation for a turbo coded system, and you proposed this file:

http://users.ecs.soton.ac.uk/rm/wp-content/QPSKEXIT.zip

How can I see the BER performance of this system?

Thanks for your help

Ali

September 23rd, 2012 at 3:39 pm

Hi Ali,

For this QPSK scheme to fulfil its full potential, you should concatenate it with some kind of soft-in soft-out channel code. You could use the outer code from…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/

This will give you bit-interleaved coded modulation with iterative decoding (BICM-ID).

Take care, Rob.

September 24th, 2012 at 2:22 pm

Hi Rob,

I have some questions:

1. If I have the extrinsic LLRs of the decoder E=(E1,E2,…,En), how do I estimate the pdfs p(E=Ei|x=-1) and p(E=Ei|x=+1) using a histogram of E?

2. How do I determine the range and the bin width of the histogram?

3. After p(E=Ei|x=-1) and p(E=Ei|x=+1) have been calculated, how do I get the mutual information Ie = I(E;X)?

Thank you very much!

Yue

September 24th, 2012 at 2:29 pm

Hi, I have some question.

If I select different bins, will Ie have a largely different value?

If I select smaller bins, will Ie be more accurate?

Thank you!

September 24th, 2012 at 6:08 pm

Hello Yue,

1. You can do this using my Matlab code at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-246

2. My code will do this automatically for you. There is some discussion about how this is done at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-158

3. You can do this using measure_mutual_information_histogram.m, which you can download from above.
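For comparison, the averaging method can be sketched in a couple of lines, assuming equiprobable bits and LLRs defined as ln(P(bit=0)/P(bit=1)); the LLRs below are example consistent Gaussian ones:

```matlab
bits = round(rand(1,1e5));
llrs = (1-2*bits)*2 + 2*randn(size(bits));  % mean +/-2, variance 4 = 2*mean

% I(E;X) is approximated by 1 - E[log2(1 + exp(-(1-2*x).*L))]
MI = 1 - mean(log2(1 + exp(-(1-2*bits).*llrs)));
```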

Take care, Rob.

September 24th, 2012 at 6:10 pm

Hi Tui,

This requires a careful optimisation - if the bin width is too small or too big, the MI will be measured inaccurately. I do it by determining the bin width automatically, as discussed in my comment to Yue, immediately above.

Take care, Rob.

September 25th, 2012 at 9:50 am

Dear Rob,

I want to plot the histogram of the extrinsic LLRs and see whether they are normally distributed or not.

Usually, using your display_llr_histogram() function, I can see the distribution of the two bits' LLRs, but when I want to plot a histogram over this plot, it doesn't plot well.

Please advise me on what I need to do. In the previous comments I told you about it, but I see that it doesn't give me the right results.

Actually, I want a plot like the one your script gives, with the results of the two bits in one LLR vector, but with a histogram comparison. Thanks

September 26th, 2012 at 8:39 am

Dear Rob,

Thank you for your reply.

I still have some questions:

1. I notice that you use numerical integration to calculate I_E. Why don't you multiply by the bin width in the accumulation when calculating I_E in the program measure_mutual_information_histogram.m?

2. If the encoded sequence is an all-zero sequence instead of round(rand(1,n)), how do I use your program measure_mutual_information_histogram.m to calculate Ie given E={E1,E2,…,En}?

Thank you again!

September 26th, 2012 at 12:45 pm

Hi Ideal,

I’m not sure why you want to plot a histogram over the plot that display_llr_histogram() gives you - this plot is already the histogram of the LLRs.

I think that you want to plot the histograms for the two QPSK modulated bits separately. In which case, you need to arrange the LLRs and the bits into two groups and then give each of these to display_llr_histogram() separately.

Take care, Rob.

September 26th, 2012 at 12:59 pm

Hi Tui,

1. I can’t remember the specific reason for this, but I remember there was an important one. Perhaps it is because the bin width gets cancelled out by something else. In any case, the histogram method doesn’t work if we multiply by the bin width…

2. measure_mutual_information_histogram.m assumes that the values of the bits are equiprobable, i.e. that the bit sequence contains an equal number of 0s and 1s. So it doesn’t work when all of the bits are 0s. Here are some versions that work when the bit sequence does not contain an equal number of 0s and 1s…

http://users.ecs.soton.ac.uk/rm/wp-content/measure_MI_averaging_generalised.m

http://users.ecs.soton.ac.uk/rm/wp-content/measure_MI_histogram_generalised.m

Note however, that when all the bits are zero, the MI will come out as zero. You should also note that these methods assume that the LLRs know that the bit values are not equiprobable. i.e. in the absence of any information, the LLRs should have values of ln(p0/p1), rather than values of 0.
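To illustrate the last point:

```matlab
% For non-equiprobable bits, the "no information" LLR is ln(p0/p1)
p0 = 0.9; p1 = 1 - p0;
prior_llr = log(p0/p1);  % positive, because a 0 is more likely than a 1
```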

Take care, Rob.
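For readers following this thread, the averaging method that these scripts build on can be sketched in a few lines. The following is an illustrative Python/NumPy re-implementation, not the downloadable MATLAB code; the function name and parameters are my own, and it assumes the convention LLR = ln(P0/P1).

```python
import numpy as np

# Illustrative sketch (not the downloadable MATLAB code): the averaging
# method for measuring the mutual information of LLRs, assuming the
# convention LLR = ln(P0/P1), so bit 0 maps to x = +1 and bit 1 to x = -1.
def measure_mi_averaging(llrs, bits):
    x = 1.0 - 2.0 * bits
    # I = 1 - E[log2(1 + exp(-x*L))]; logaddexp avoids overflow
    return 1.0 - np.mean(np.logaddexp(0.0, -x * llrs)) / np.log(2.0)

rng = np.random.default_rng(0)
n = 200000
bits = rng.integers(0, 2, n).astype(float)
sigma = 2.0
x = 1.0 - 2.0 * bits
llrs = (sigma**2 / 2.0) * x + sigma * rng.standard_normal(n)  # consistent Gaussian LLRs
print(measure_mi_averaging(llrs, bits))
```

Unlike the histogram method, this averaging estimate requires no binning, which is one reason no bin width appears in it.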

September 27th, 2012 at 7:19 am

Dear Rob,

I have a question about your comment 2 to Tui. Sometimes we can send all-zero codewords, in which case the encoding process can be skipped. According to the symmetric pdf of E, we have that p(E=Ei|x=+1)=p(E=-Ei|x=-1). Therefore, we can estimate p(E=Ei|x=+1) using a histogram, and also get p(E=-Ei|x=-1) as well. Is that okay?

Besides, I want to ask you another question:

1. If the code length is very small, such as 11, I think that it is not accurate to calculate p(E=Ei|x=+1) and p(E=-Ei|x=-1) using a histogram. How can I deal with this?

Thank you very much!

September 28th, 2012 at 9:01 am

Hi Yue,

This will only work if a number of conditions are met:

- your code needs to be linear and symmetrical

- your modulation scheme needs to be symmetrical

In general, I would not recommend this approach. I would always generate random messages, rather than all-zero messages.

If the length of each frame is short, you can simulate a number of frames and then concatenate them together. Then you can use the histogram method on the long concatenated sequence.

Take care, Rob.

September 30th, 2012 at 1:16 am

Dear Rob,

I want to plot the histogram after puncturing symbols and want to see whether, after puncturing, the pdf of the LLRs is still normal. That’s why I was trying to overlay the histogram plot and the display_llr_histogram plot, so that I can confirm whether or not the LLRs are normally distributed after puncturing.

This was the objective behind it. But I am not sure whether I was doing it right or wrong? Thanks

September 30th, 2012 at 10:17 pm

Hi Rob,

Could you please explain how best I can list the error patterns in a certain order, given the generator matrix, for syndrome decoding? Basically, I want to do what “syndtable” in MATLAB does but in a simpler way.

Thanks,

Elnaz

October 1st, 2012 at 8:47 am

Hi Ideal,

I see - you want to compare before and after histograms. I think that this sounds reasonable. My expectation is that if you are using random puncturing, the two histograms will be very similar.

Take care, Rob.

October 1st, 2012 at 8:51 am

Hi Elnaz,

If the generator matrix is not too big, the simple way is to build a list of every possible input into the encoder and then use this to obtain every possible output. Then, for each possible pairing of outputs, you can do an XOR to get the error pattern.

Take care, Rob.
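As a toy illustration of this brute-force approach, the sketch below enumerates the codewords of a small linear code and XORs every pairing. The (7,4) Hamming generator matrix here is an assumption for the example, not part of the original discussion.

```python
import itertools
import numpy as np

# Brute-force error-pattern listing for a small linear block code.
# G is an assumed systematic (7,4) Hamming generator matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0, 1]])
k = G.shape[0]

# Every possible encoder input -> every possible codeword.
codewords = [tuple(int(v) for v in np.mod(np.dot(m, G), 2))
             for m in itertools.product([0, 1], repeat=k)]

# XOR every pairing of codewords to obtain the error patterns between them.
# For a linear code this set is just the codeword set itself.
patterns = {tuple(a ^ b for a, b in zip(c1, c2))
            for c1 in codewords for c2 in codewords}
print(len(codewords), len(patterns))
```

Because the code is linear, the XOR of any two codewords is itself a codeword, which is what makes this enumeration tractable for small generator matrices.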

October 3rd, 2012 at 3:00 am

Hi Rob,

Do you know how can I generate a Hamming code parity check matrix (with parameter m) in systematic format? Basically, I’m trying to do what “hammgen” of MATLAB does in a simpler way.

Thanks,

Elnaz

October 3rd, 2012 at 6:29 am

Hi dear prof.

I want to use the LTE interleaver. How should the MATLAB code be modified to find the BER in this case?

Best regards

October 3rd, 2012 at 5:37 pm

Hi Elnaz,

I’m afraid that I don’t know of any simpler way than hammgen…

Take care, Rob.

October 3rd, 2012 at 5:39 pm

Hi Ali,

All you need to do is replace the mentions of get_UMTS_interleaver in main_ber with get_LTE_interleaver. You can download get_LTE_interleaver.m from…

http://users.ecs.soton.ac.uk/rm/wp-content/get_LTE_interleaver.m

Take care, Rob.

October 3rd, 2012 at 7:18 pm

Hi dear Prof.

What is the range of LTE interleaver?

Are you sure that the other parts of the MATLAB code are the same as those for the UMTS interleaver?

Best Regards

Ali

October 5th, 2012 at 4:50 pm

Hi Ali,

The shortest LTE interleaver is 40 bits, the longest is 6144 bits. The other parts of the LTE turbo code are the same as in UMTS. To see this for yourself, you can take a look in the documentation for the standards in the links at the top of this page.

Take care, Rob.

October 8th, 2012 at 12:14 am

Dear Rob,

How can I use your modulate and soft_demodulate scripts for higher order mapping schemes like 16-QAM or 64-QAM?

and how can they be implemented in simulations? For example, in 16-QAM each symbol has four bits, so how can I adapt your script for calculating the demapper EXIT curve, where you write modulate(a(start:stop)) and a_e(start:stop) = soft_demodulate(.) for QPSK (2 bits per symbol)?

I appreciate your help. Thanks

October 8th, 2012 at 6:06 pm

Hi Ideal,

All you need to do is modify the constellation_points and bit_labels - everything else is automatic.

In main_exit, you just need to change bit_count.

Take care, Rob.

October 11th, 2012 at 1:26 pm

Hi dear Prof.

Excuse me, I don’t know very much about channel coding. Could you please help me to puncture your turbo code to rate 1/2?

Thank you for your previous help.

Best, Ali

October 11th, 2012 at 6:07 pm

Hi Ali,

You can see how to implement puncturing at…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/#comment-8929

Take care, Rob.

October 16th, 2012 at 4:00 am

Dear Rob,

Thanks for all your support. I again have one tricky question, please.

Let’s say I have one demapper curve (y) for a particular SNR, i.e.

y = ax^3 + bx^2 + cx + d, SNR = 1dB, x=0:0.1:1 (a-priori MI), (a,b,c,d=constants).

It means I have three different variables: y, x and SNR. I need to store a large number of demapper equations for a range of SNRs and want to use them somewhere else. How can I use an interpolation function in MATLAB which can cover three parameters, in which x and y are vectors but SNR is a number? I thought I could use interp2 but I am not sure how to use it?

Please do you have any idea about this problem? Thanks a lot

October 16th, 2012 at 6:10 pm

Hi Ideal,

I’m afraid that I’ve never done that sort of thing before, so your guess is as good as mine…

Take care, Rob.

October 17th, 2012 at 5:53 pm

Hi Rob,

I have a question about the PLL with Mueller-Muller estimate. Do you know how it works?

Thanks,

Elnaz

October 18th, 2012 at 2:20 pm

Hi Elnaz,

I’m afraid that I have never worked on PLLs, so I can’t help you with this.

Take care, Rob.

October 22nd, 2012 at 5:25 am

Dear Rob,

Thanks for the fruitful discussion. I have very basic question please.

If I am considering the real valued 4-PAM constellation, i.e.,

constellation_points = [-1; -1/3; 1/3; 1]/sqrt(5/9); and the received signal is represented by y = Px + n; where P is the received signal power and n is AWGN with zero mean and unit variance. In that case, can I set N0 = 1 in your soft_demodulate.m? Thanks

October 22nd, 2012 at 5:48 pm

Hi Ideal,

You should use…

constellation_points = sqrt(P)*[-1; -1/3; 1/3; 1]/sqrt(5/9);

y= x+n;

N0 = 1;

Take care, Rob.

October 23rd, 2012 at 10:56 pm

Dear Rob,

I don’t understand why you wrote sqrt(P);

The symbol energy of the system considering optical to electrical domain is

Es = P^2(5/9) for the constellation_points = [-1; -1/3; 1/3; 1], where P is the optical transmit power. Considering the zero-mean unit-variance noise, the SNR = P^2(5/9);

October 23rd, 2012 at 11:12 pm

Hi Rob,

Could you please explain a stopping criterion for a turbo decoder, or actually for a turbo equalizer, when applied in an iterative receiver scheme?

Thanks,

Elnaz

October 24th, 2012 at 5:00 pm

Hi Rob,

Regarding my previous question, I’m using the sum of LLRs as the stopping criterion. What do you think?

My new question:

I am simulating a turbo equalizer on an E2PR2 channel. I need to somehow convert the LLR outputs of the turbo equalizer back to the channel format. Right now I am taking the hard estimate (sign) of those LLRs and modulating them back again with the channel response, which is not the optimum thing to do. How can I do this properly?

Thank you,

Elnaz

October 24th, 2012 at 5:56 pm

Hi Elnaz,

I normally stop the iterative decoding process when the mutual information of the extrinsic LLRs stops improving.

It would be possible to modify the turbo equaliser so that it not only outputs extrinsic information about the bits, but also extrinsic information about the modulated symbols. This would be better than modulating a hard decision for the bits…

Take care, Rob.

October 24th, 2012 at 5:58 pm

Hi Ideal,

This square root is needed because amplitude is related to the square root of power. Here, P relates to power, while x, y, n and sqrt(P) all relate to amplitude…

Take care, Rob.

October 24th, 2012 at 6:10 pm

Hi Rob,

Could you explain a bit more how should I modify the turbo equalizer to output information on the modulated bits?

Thanks,

Elnaz

October 24th, 2012 at 6:12 pm

Please add this to my previous comment:

Also, how should I change the LLRs into soft channel outputs?

October 25th, 2012 at 4:34 pm

Hi Elnaz,

This is easiest for BPSK, in which case the LLRs for the BPSK modulated symbols are equal to the LLRs for the bits. In the case of a higher order modulation scheme, you just need to output the symbol probabilities that are calculated during the turbo equalisation algorithm.

Take care, Rob.

October 26th, 2012 at 4:08 pm

Hi Rob,

What I meant to ask is: should I follow the same procedure that I used to get the encoded outputs from component_decoder in my BCJR equalizer as well, i.e. add the line: deltas2(transition_index, bit_index) = alphas(transitions(transition_index,1),bit_index) + uncoded_gammas(transition_index, bit_index) + betas(transitions(transition_index,2),bit_index);

And,

prob1 = jac(prob1, deltas2(transition_index,bit_index)); (I don’t think I can do this, since the outputs are no longer 0s and 1s but some integer values)

Then, if correct, will this give me the extrinsic LLRs for the modulated bits?

Then, I can’t convert these LLRs to the normal channel output format simply by dividing them by 2/sigma2, i.e. the approach that works for the decoder but not the equalizer. My channel is an E2PR2 response applied to BPSK symbols. So I’m wondering how I can extract the channel output from the LLR outputs of my turbo equalizer.

Thanks,

Elnaz

October 26th, 2012 at 6:18 pm

Hi Elnaz,

I think that you are on the right track here. For each of the M symbol values (e.g. M = 4 in QPSK), you can use an equation that is similar to the prob1 equation that you have provided. The exponential of these prob values will give you values that are proportional to the symbol probabilities. You just need to normalise these values in order to get the correct probabilities.

Take care, Rob.
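A minimal sketch of that normalisation step (hypothetical variable names; subtracting the maximum before exponentiating is a standard trick to avoid overflow):

```python
import numpy as np

# Convert log-domain symbol metrics (the "prob" values discussed above)
# into normalised symbol probabilities.
def log_metrics_to_probs(log_metrics):
    shifted = log_metrics - np.max(log_metrics)  # avoids overflow in exp
    p = np.exp(shifted)
    return p / np.sum(p)                         # normalise to sum to 1

# Example: four log-domain metrics for an M = 4 modulation scheme.
print(log_metrics_to_probs(np.array([-1.0, -2.0, -3.0, -0.5])))
```

Subtracting the maximum leaves the ratios between the exponentials unchanged, so the normalised probabilities are identical to those of a naive exponentiation.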

October 26th, 2012 at 6:35 pm

Hi Rob,

Do you mean summing over delta2s corresponding to each row in the transition matrix and then for example prob4 = exp(sum of delta2s corresponding to output=4)/sum of delta2s for all output values?

Thanks,

Elnaz

October 28th, 2012 at 9:59 am

Hi Elnaz,

That sounds right to me.

Take care, Rob.

October 29th, 2012 at 2:28 am

Hi Rob,

One additional question:

My simulations run very slowly and, according to the MATLAB profiler, the “conv” and “jac” commands are the major consumers. What can I do to make “jac” run faster?

Thanks,

Elnaz

October 29th, 2012 at 8:45 pm

Hi Elnaz,

If you look into jac.m, you’ll see that it has three possible modes of operation. I would suggest using the second mode, which approximates the exp and log functions using a lookup table.

Take care, Rob.
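The three modes can be sketched as follows. This is an illustrative Python re-implementation of the Jacobian logarithm max*(a,b) = ln(e^a + e^b), not the jac.m on this site; the table step and range are assumptions.

```python
import numpy as np

# Jacobian logarithm max*(a, b) = ln(exp(a) + exp(b)) in three modes:
# 1 = exact, 2 = lookup-table-aided, 3 = max-log approximation.
TABLE_STEP = 0.125
TABLE = np.log1p(np.exp(-np.arange(0.0, 8.0, TABLE_STEP)))  # correction term

def jac(a, b, mode=1):
    big, diff = max(a, b), abs(a - b)
    if mode == 1:
        return big + np.log1p(np.exp(-diff))     # exact
    if mode == 2:
        idx = int(diff / TABLE_STEP)             # table lookup for the correction
        return big + (TABLE[idx] if idx < len(TABLE) else 0.0)
    return big                                   # max-log: drop the correction

print(jac(1.0, 2.0, 1), jac(1.0, 2.0, 2), jac(1.0, 2.0, 3))
```

The correction term ln(1 + e^(-|a-b|)) only depends on |a-b| and decays quickly, which is why a small one-dimensional table is enough.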

November 16th, 2012 at 6:09 pm

Hi Rob,

As you know, I am using your BCJR component_decoder as an equalizer, after making some changes of course. However, I can’t use the code for a long stream of bits, e.g. 32 kbit, in MATLAB. I get an out-of-memory problem and it takes a very long time to equalize, say, 15 kbits. Is there any way I can improve the code or the speed of its execution?

Thanks,

Elnaz

November 19th, 2012 at 10:47 am

Hi Elnaz,

The sliding window technique can be used to reduce the memory requirement of a BCJR decoder. You can read about this technique in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4726112&tag=1

Take care, Rob.

November 20th, 2012 at 7:39 pm

Hi Rob,

Thanks for all your answers, they are really useful and helped me understand much more how you obtained the results.

To my understanding, your results are for a AWGN channel.

Assume we add Rayleigh fading as defined below :

n = 1/sqrt(2)*[randn(1,N) + j*randn(1,N)]; % AWGN, 0 dB variance

h = 1/sqrt(2)*[randn(1,N) + j*randn(1,N)]; % Rayleigh channel

% Channel and noise addition

y = h.*s + 10^(-Eb_N0_dB(ii)/20)*n;

Is there anything we have to change in the BPSK demodulator ?

What about the BCJR decoder ?

Thanks again for your help, very useful.

Regards

November 21st, 2012 at 4:58 pm

Hello Sam,

Only the demodulator needs to be updated. The updated code should look like…

a_c = (abs(a_rx+h).^2-abs(a_rx-h).^2)/N0;

Take care, Rob.
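A self-contained sketch of this set-up (illustrative Python/NumPy, assuming perfect channel knowledge and the convention that bit 0 maps to +1, so that a negative LLR decides bit 1):

```python
import numpy as np

# BPSK over an uncorrelated Rayleigh fading channel, demodulated with
# a_c = (|rx + h|^2 - |rx - h|^2) / N0, as given above.
rng = np.random.default_rng(1)
n, N0 = 10000, 0.5
bits = rng.integers(0, 2, n)
tx = 1.0 - 2.0 * bits                                       # bit 0 -> +1, bit 1 -> -1
h = np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
noise = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
rx = h * tx + noise
llrs = (np.abs(rx + h)**2 - np.abs(rx - h)**2) / N0         # channel LLRs, ln(P0/P1)
hard = (llrs < 0).astype(int)                               # negative LLR -> bit 1
print(np.mean(hard != bits))                                # uncoded BER
```

Note that the demodulator uses the same per-symbol h that the channel applied, i.e. perfect channel state information.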

November 23rd, 2012 at 5:48 am

Hi Rob,

I’ve tried that, and it’s not giving me very accurate results for BER vs Eb/N0. I also changed the frame length to 4000 instead of 40 and the result remained the same.

Actually my curve is going up and down; it’s not a stable one. I wonder why.

As a reference, I get a BER value of 0.01 at 10 dB.

I was wondering if I had to modify something else in the decoder, like the calculation of the gammas, alphas or betas?

Thanks for your help,

Regards

Sam

November 23rd, 2012 at 4:32 pm

Hi Sam,

You don’t need to modify anything else. My suggestion is for you to test each component of your scheme separately. This will help you to determine if the problem is in your encoder, modulator, channel, demodulator, decoder, etc.

You can do this by drawing the EXIT curves for each component using both the averaging and the histogram methods. These two methods will give similar curves if everything is working okay. If not, then this will identify which component has the problem.

Take care, Rob.

November 27th, 2012 at 5:07 pm

Hi Rob,

In order to test it, I have used the same code as yours; I added the Rayleigh fading and modified the BPSK demodulator part as you mentioned:

a_c = (abs(a_rx+h).^2-abs(a_rx-h).^2)/N0;

instead of

a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

But that doesn’t work either.

I am still trying to figure out what isn’t working.

What do you think? The weird part is that without Rayleigh fading, it all works just fine.

Thanks again for your help,

November 27th, 2012 at 5:27 pm

Hello Sam,

You may like to try using the modulator and demodulator that can be downloaded from…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-1545

This code demonstrates how to use the demodulator in the case of a Rayleigh fading channel. You can convert the modulator and demodulator into BPSK by replacing the corresponding lines of code with…

constellation_points = [+1;-1];

bit_labels = [0;1];

Take care, Rob.

November 28th, 2012 at 12:28 am

Dear Rob,

one quick question about the variable node extrinsic mutual information (VNEMI) please.

1. We can calculate the VNEMI using ten Brink’s formula, i.e.,

I_ev = J(sqrt[(dv - 1)*sigma_A^2 + sigma_Ch^2]),

where

sigma_Ch^2 = 4/sigma_n^2 and

sigma_n^2 = 1/(2*log2(M)*(Es/No)) (noise variance, Es=symbol energy)

2. My question is: if we implement QPSK and 4-PAM, what will be the difference in sigma_n^2? We know that the MI of QPSK and 4-PAM are different. Please comment.

November 29th, 2012 at 4:14 pm

Dear Rob,

I am facing the same problem as Sam when trying to demodulate BPSK after a Rayleigh channel. I don’t understand why it can’t just be demodulated as normal, e.g. (rx+1)/2, where rx are the received values after transmission through the Rayleigh and AWGN channel. I hope to gain a better understanding of this. Just like you, Sam, everything works well in AWGN. I hope Rob can shed some light on this matter. Thanks, I think this is a great site!

November 29th, 2012 at 7:10 pm

Hello Ideal,

In this case, you need to use…

I_ev = J(sqrt[(dv - 1)*sigma_A^2 + [J^-1(I_demod)]^2]),

where I_demod is the mutual information of the LLRs provided by the demodulator.

Take care, Rob.
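The J(·) function in this formula can also be evaluated numerically when no closed form is at hand. Below is an illustrative Monte-Carlo sketch (my own, assuming the ln(P0/P1)-style convention and consistent Gaussian LLRs L ~ N(sigma^2/2, sigma^2) for the +1 symbol):

```python
import numpy as np

# Monte-Carlo evaluation of ten Brink's J(sigma): the mutual information
# of consistent Gaussian LLRs L ~ N(sigma^2/2, sigma^2) for the +1 symbol.
def J(sigma, n=200000, seed=0):
    rng = np.random.default_rng(seed)
    llrs = sigma**2 / 2.0 + sigma * rng.standard_normal(n)
    # I = 1 - E[log2(1 + exp(-L))]; logaddexp avoids overflow
    return 1.0 - np.mean(np.logaddexp(0.0, -llrs)) / np.log(2.0)

print(J(0.001), J(10.0))   # near 0 and near 1 respectively
```

J is monotonically increasing in sigma, so J^-1 can be obtained from the same routine by bisection if needed.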

November 29th, 2012 at 7:13 pm

Hello Mcrave,

The simple answer is that because the channel is different, the received signal will be different and so the demodulator needs to be different. A more involved answer is to say that the demodulator needs to use h, in order to undo the phase rotation and fading that is imposed by the Rayleigh channel.

Take care, Rob.

November 29th, 2012 at 11:37 pm

Dear Rob,

Yes, I do need to use I_ev = J(sqrt[(dv - 1)*sigma_A^2 + [J^-1(I_demod)]^2]),

but still I_demod is a function of sig_Ch^2 and the a priori values. So I am asking: what will be the formula for sig_Ch^2 in the case of Gray-mapped QPSK and real 4-PAM modulation schemes?

I think in the case of QPSK we can use sig_Ch^2 = 4/sigma_n^2 = 8 * SNR, but for the case of 4-PAM, what is this relationship? Thanks

November 30th, 2012 at 4:13 am

Thanks Rob!

That clears up my concepts a little. So, in other words, if the transmitted BPSK signal is corrupted by Rayleigh and AWGN channels, at the receiver the first step to perform is equalization with rx_hat=rx./h, then turbo decoding, right? Also, I would just like to confirm that in practice the estimated channel coefficient could be erroneous, as it is an estimate, right?

Thanks so much Rob

December 3rd, 2012 at 9:43 am

Hi Ideal,

As far as I am aware, there is no closed form equation that links I_demod to sig_Ch^2 in the general case. I think that the only thing you can do is simulate the demodulator and measure I_demod.

Take care, Rob.

December 3rd, 2012 at 9:47 am

Hi Mcrave,

That’s right. In practical scenarios, h would be obtained at the receiver using channel estimation. However, this can never be perfect and so a corrupted version of h must be used in practice. This will degrade the performance. In theoretical work however, it is typically acceptable to assume that perfect channel estimation is possible.

Take care, Rob.

December 3rd, 2012 at 2:45 pm

Thanks Rob. That’s very helpful. Another quick question: what is the difference between equalizing with a_rx./h and then taking the real part of the result, compared to what you have done for the demodulator with:

a_c = (abs(a_rx+h).^2-abs(a_rx-h).^2)/N0;

I would be very much thankful if you could point me in the right direction on this.

December 3rd, 2012 at 5:39 pm

Hi Mcrave,

If you divide a_rx by h, then you are not only amplifying the signal part of a_rx, but also amplifying the noise part. So, you would need to use a different value for N0 (although I can’t remember what value you should use)…

Take care, Rob.

December 3rd, 2012 at 5:58 pm

Hi Rob,

I see… I think I have a better idea now. What a_rx/h does is zero forcing. I think that the code provided might be implementing MMSE, which takes into account the noise power. I hope I’m not wrong, but I think N0 is the noise power, N0=2/sigma^2. Thanks for the guidance. Much appreciated!

Thanks again for your help.

December 3rd, 2012 at 6:04 pm

Hi Mcrave,

You are welcome.

Take care, Rob.

December 7th, 2012 at 2:18 am

Hello Rob,

I’ve just finished my number crunching for a Rayleigh channel with perfect channel knowledge with parameters below:

16 state CCSDS Turbo decoder

Modulation scheme:BPSK

No_of_FRAMES=2810;

INTERLEAVER_SIZE=1780;

ITER=5; %Number of iterations

RATE=1/2;

with a different Rayleigh coefficient for each frame, and I have also changed the demodulator to a_c = (abs(a_rx+h).^2-abs(a_rx-h).^2)/N0; to take account of the Rayleigh channel. The results do not seem right, however; it performs much more poorly than expected. At 10 dB, a BER of 10^-2 was obtained, which does not agree with other reported works. Any ideas what could have gone wrong? Again, it all works fine in AWGN.

December 7th, 2012 at 6:09 pm

Hi Mcrave,

It sounds like you are using a quasi-static Rayleigh fading channel - i.e. you use only one value of h for every BPSK transmission in the frame. This will cause some frames to have low BERs, but other frames to have very high BERs, which completely eclipse the frames with low BERs - even when the SNR is 10 dB. My advice would be to try using an uncorrelated Rayleigh fading channel - i.e. you should use a different value of h for each BPSK transmission in a frame.

Take care, Rob.

December 8th, 2012 at 4:17 am

Dear Rob,

That makes sense. I did implement a sort of controlled seed of h values for each frame, but it is a different seed (i.e. a different h for each frame). I think I’ll re-run the simulation and let the randn function run loose without control. Fingers crossed! Thanks again Rob, you’ve been a great help.

December 12th, 2012 at 7:08 pm

Hi Rob,

It turns out that I didn’t transfer the correct noise sigma values for the AWGN.

h*bpsk_tx + 10^(-ebnodb/20)*n;

did the trick. Also, I made sure to use a different h for every frame. I just wanted to thank you for your advice and for contributing to a working simulation. Have a good day!

December 12th, 2012 at 7:09 pm

No problem! Take care, Rob.

December 30th, 2012 at 7:41 am

Hi Rob,

Hope all is going well. I just wanted to check whether my understanding is right on something. According to my simulation, for slow fading channels (i.e., the same h for each symbol in the frame, yet different for each frame), the BER for turbo decoding is worse than for a fast fading channel (a different h for each symbol in the frame). After rechecking everything, I’m quite sure the earlier problem (the bad BER at high SNR, ~10 dB) was because of this, or simply that I would need to do a whole lot more simulation to obtain the correct BER result. From what I have observed, a channel with a magnitude of ~0.1 has a very bad effect on the BER, and this seems to be the reason why I have been obtaining unexpectedly high BERs at high SNR. However, my results for the fast fading channel seem to match the results reported in papers. When do I use fast fading or slow fading channels? Which should I stick to for publishing purposes? Thanks in advance for your advice.

December 30th, 2012 at 8:47 am

Hi Rob,

Another thought: maybe it’s better if I plot the frame error rate instead of the BER? Thanks.

December 30th, 2012 at 1:40 pm

Hi Mcrave,

In general, I would recommend fast fading channels for physical layer work and slow fading channels for network layer work. This is because these are typically the simplest channels that sufficiently challenge these types of work. Of course, you may prefer to use more sophisticated channel models if this is your particular focus. For physical layer work, I typically use BER, because this more easily shows how the frame length affects the performance of an iterative decoding scheme. For network layer work, I recommend FER.

Take care, Rob.

December 31st, 2012 at 3:00 am

Hi Rob,

Thanks again for your advice Much appreciated!

January 10th, 2013 at 6:25 pm

Hi Robert. Is it possible to generate bits (e.g. 1000000 bits) instead of a frame (40 to 5114) in your matlabturbo?

January 10th, 2013 at 8:00 pm

Hi Luke,

It is possible to use any interleaver length, although the UMTS and LTE interleavers only support a limited range of lengths. You can instead use a random interleaver by setting random_interleaver=1. This will allow any interleaver length, such as 1000000 bits.

Take care, Rob.

January 11th, 2013 at 10:02 pm

Hi Rob,

I’m using a BCJR-based turbo equalizer in my application and I have cross interference between different adjacent streams of BPSK message bits. In the receiver I somehow compensate for this interference, but there is a weird behaviour I see in my final BER curve. My BER curve is not always monotonically decreasing with an increasing number of iterations, nor with respect to increasing SNR values. I’m using the sum reliability as my stopping criterion. However, if I use the error rate as my stopping criterion (i.e. if I cheat) to terminate the iterations, I do get monotonically decreasing curves. What do you think is the best terminating criterion to use here?

Let me also add that I think this is related to the high amount of interference I’m dealing with; and, for now, I should stick to my not-so-strong interference cancellation method and try to get smoother BER curves.

January 14th, 2013 at 3:43 pm

Hi Elnaz,

If successive iterations are not improving the BER, then it sounds to me like your equalizer (or some other part of your receiver) is not working properly. My advice would be to see if this is the case by comparing the EXIT curves of the equalizer using both the averaging and histogram methods of measuring the MI. If the curves obtained using these two methods are significantly different, then it suggests that your equalizer is not working properly…

Take care, Rob.

January 25th, 2013 at 8:25 pm

Hi Rob,

I have a question regarding the “error floor” problem. In order to get rid of this problem and have a nice, continuously falling curve, do I increase the length of the stream or the number of averaging trials? Which one is more effective?

Thanks,

Elnaz

January 28th, 2013 at 1:29 pm

Hi Elnaz,

Increasing the number of trials in your simulation will not change the error floor - it will just give you a smoother and more accurate BER plot. Increasing the frame length will reduce the error floor. Another way of reducing the error floor is to use a better interleaver design - a good starting place is replacing a random interleaver with an S-random interleaver…

Take care, Rob.
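A simple rejection-based construction of an S-random interleaver can be sketched as follows (illustrative Python; the restart strategy and parameters are my own, and large S values may need many restarts):

```python
import numpy as np

# S-random interleaver: any two positions i, j with |i - j| <= S must be
# mapped to values satisfying |pi(i) - pi(j)| > S.
def s_random_interleaver(length, S, rng):
    while True:                                   # restart on dead ends
        pool = list(rng.permutation(length))
        pi = []
        for _ in range(length):
            for pos, cand in enumerate(pool):
                # candidate must be far (in value) from the previous S picks
                if all(abs(int(cand) - int(prev)) > S for prev in pi[-S:]):
                    pi.append(pool.pop(pos))
                    break
            else:
                break                             # no valid candidate: restart
        if len(pi) == length:
            return np.array(pi)

rng = np.random.default_rng(3)
pi = s_random_interleaver(40, 3, rng)
print(len(pi))
```

A common rule of thumb is to keep S below roughly sqrt(length/2), since larger spreading factors make the rejection search increasingly likely to dead-end.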

January 31st, 2013 at 8:19 am

Hi Rob,

I’m trying to study the effects of SNR mismatch on turbo codes. In all the literature, the lowest BER happens when there is perfect knowledge of the SNR, i.e. SNR offset = 0. I find that my simulation fits the overall profile, but the problem is that the lowest BER point is at an offset of -6 dB. I think one of the reasons for this is probably the way I calculated the values for sigma etc. Here is a snippet of my program; I hope you can point me in the right direction.

%% SNR is fixed

ebnodb=2;

ebno=10^(ebnodb/10);

N0=1/ebno;

sigma=sqrt(N0);

%% SNR Offsets

START=-8;

INTERVAL=1;

STOP=8;

for OFFSET=START:INTERVAL:STOP

ebno_off=10^((ebnodb+OFFSET)/10);

L_c_off=4*ebno_off;

%% Fast fading channel (COMPLEX)

h_u1=(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN))/sqrt(2); %ENC_OP_LEN is the length of the systematic bits u1, from RSC1.

h_u2=(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN))/sqrt(2);

h_v1=(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN))/sqrt(2);

h_v2=(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN))/sqrt(2);

%% AWGN noise (COMPLEX)

n_u1=sigma/sqrt(2)*(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN));

n_u2=sigma/sqrt(2)*(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN));

n_v1=sigma/sqrt(2)*(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN));

n_v2=sigma/sqrt(2)*(randn(1,ENC_OP_LEN)+1i*randn(1,ENC_OP_LEN));

%% Apply channel effect

rx_sig_u1= h_u1.*bpsk_tx_u1 + n_u1; % bpsk_tx are bpsk modulated bits

rx_sig_u2= h_u2.*bpsk_tx_u2 + n_u2;

rx_sig_v1= h_v1.*bpsk_tx_v1 + n_v1;

rx_sig_v2= h_v2.*bpsk_tx_v2 + n_v2;

%% Fix phase rotation and fading from channel with equalization

u1rx = real(rx_sig_u1.*conj(ho_u1));

u2rx = real(rx_sig_u2.*conj(ho_u2));

v1rx = real(rx_sig_v1.*conj(ho_v1));

v2rx = real(rx_sig_v2.*conj(ho_v2));

Then it goes on to perform turbo decoding. I would appreciate it very much if you have any advice. Thanks!

January 31st, 2013 at 5:06 pm

Hi Mcrave,

I suspect that the mistake is in the lines that come after what you have provided. Your equaliser makes the signal appear to have been transmitted over an AWGN channel, rather than a Rayleigh fading channel. However, the equaliser will change the SNR - I guess that you are just using ebno_off.

My advice would be to use code of the sort…

tx = -2*(bits-0.5);

h = sqrt(1/2)*(randn(size(tx))+i*randn(size(tx)));

n = sqrt(N0/2)*(randn(size(tx))+i*randn(size(tx)));

rx = h.*tx + n;

llrs = (abs(rx+h).^2-abs(rx-h).^2)/estimated_N0;

Take care, Rob.

February 3rd, 2013 at 12:13 am

Hi Rob,

I am using your BCJR algorithm as the equalizer in my application, and the problem is that I see a “dip” in my BER curve at low SNR values.

I have tested one thing: when using the exact Jacobian logarithm in the code I get the “dip” at low SNRs, but when I replace it with the approximation of mode=2 the problem is resolved and the curve is linear: slightly higher error at low SNRs but linearly decreasing with increasing SNR.

Do you know of any justification for why this is happening? I mean, why does the exact definition give me non-standard behaviour at low SNRs but the approximation doesn’t?

Thanks,

Elnaz

February 4th, 2013 at 6:45 pm

Hi Elnaz,

I’m afraid that I’ve never come across anything like that before. The only thing that springs to mind is perhaps one part of your system is assuming LLR=ln(P0/P1), while another part is assuming LLR=ln(P1/P0). I’m afraid that some of the code on my website assumes the former, while some of my code assumes the latter…

To track the bug down, you could try my usual recommendation, which is to plot the EXIT curve for each of your system components using both the averaging and histogram methods. If these don’t match for a particular one of your system components, then that’s where I think you’ll find the bug…

Take care, Rob.

February 9th, 2013 at 12:02 pm

Hi Rob,

Much thanks for your advice earlier. I did as advised but the results are not as expected. All of the mismatches result in 0 errors; only a severely underestimated channel at -8 dB showed a slight error. There are a few things that I would need advice on.

1) The estimated_N0 in this line should come from the ebno_off in my earlier post, right? I.e. estimated_N0 = ebno_off = 10^((ebnodb+OFFSET)/10); this is then applied to

llrs = (abs(rx+h).^2-abs(rx-h).^2)/estimated_N0;

2) I have also read from a few sources now that Lc=4*a*Eb/No*R, where R = rate and a = fading attenuation (a=1 when the channel is Gaussian; I wonder if I need to change this for Rayleigh, and if so, to what value?)

Again, I am hoping you can point me in the right direction. It’s Chinese New Year where I am. Happy Chinese New Year to all!

February 11th, 2013 at 2:47 am

Hi Rob,

I found something that might be useful in point 2). “In case of no fading a = 1. In the case of fading with no channel state information the fading value becomes the expected value a = E[a] of the underlying fading channel. When we have ideal channel state information available at the decoder then a takes the exact fading value.”

I am assuming a non CSI system. So a =E[h], for the codes suggested,

h = sqrt(1/2)*(randn(size(tx))+i*randn(size(tx)));

a=sqrt(1/2)?

Thanks.

February 11th, 2013 at 3:03 am

Think I might be wrong. Rayleigh channels are zero mean. Hmm…

February 11th, 2013 at 6:01 pm

Hi Mcrave,

What you are saying in point 1 looks correct to me.

With regard to point 2, the equation that I provided…

llrs = (abs(rx+h).^2-abs(rx-h).^2)/estimated_N0

…can be rearranged into the form…

llrs=4*real(rx*conj(h))/estimated_N0

I think that this is what you are talking about in point 2. Hopefully, my version of this equation shows you how to use it - although I don’t think that it will change your results…

Take care, Rob.
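The equivalence of the two expressions is easy to confirm numerically (illustrative sketch):

```python
import numpy as np

# Check that (|rx+h|^2 - |rx-h|^2)/N0 equals 4*real(rx*conj(h))/N0:
# expanding the squared magnitudes, |rx+h|^2 - |rx-h|^2 = 4*Re(rx*conj(h)).
rng = np.random.default_rng(2)
rx = rng.standard_normal(5) + 1j * rng.standard_normal(5)
h = rng.standard_normal(5) + 1j * rng.standard_normal(5)
N0 = 0.8
form1 = (np.abs(rx + h)**2 - np.abs(rx - h)**2) / N0
form2 = 4.0 * np.real(rx * np.conj(h)) / N0
print(np.allclose(form1, form2))  # prints True
```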

February 12th, 2013 at 3:06 am

Dear Rob,

Many thanks for your reply. I was able to get the expected results only when I apply

h = randn(size(tx));

n = sqrt(N0)*randn(size(tx));

rx = h.*tx + n;

rx = real(rx.*conj(h))/N0_off;

I am not sure why, though. It is probably because of the modulation type, BPSK, and because most papers focus on the real part of the transmitted symbol? Most papers I have seen so far are all on BPSK.

February 12th, 2013 at 3:55 pm

Hi Mcrave,

In this code, you are using only real noise and fading - there is no imaginary component to your results. This makes things more difficult for your scheme because all of the noise and fading is in the same dimension as your BPSK modulation, namely the real part of the complex numbers. In my version, the noise and fading is shared between the real and imaginary parts, making things easier for the BPSK modulation, since it is immune to imaginary noise and fading. I guess that the results you are trying to reproduce are using only real-valued noise and fading, in which case I think you have got everything correct. However, it is normal practice to use complex-valued noise and fading…

Take care, Rob.

March 14th, 2013 at 5:47 am

Hi Rob,

I have a very basic question regarding BER performance. When no coding is applied we get the Uncoded BER. Then we use FEC code to bring down the BER.

Matlab 2012 has a turbo encoder-decoder demo (rate ~1/5.07). In this demo, when I use the bipolar BPSK scheme, add AWGN and perform hard decisions on the received signal, I get an uncoded BER of around 0.22, yet the turbo code still gives a coded BER of ~10^-5. Is that possible?

Is there any coding technique (with code rate < 1/6) available that can handle an uncoded BER of 0.22?

How much uncoded BER can present coding techniques handle?

Thanks

March 15th, 2013 at 4:07 pm

Hello Mahesh,

Your results do not sound unreasonable to me. Turbo codes can have very steep BER curves, allowing low BERs to be achieved at relatively low SNRs. By contrast, in the absence of coding, the BER plot will have a very gradual gradient, requiring a high SNR in order to achieve a low BER. Turbo codes having long interleavers are near-perfect codes - they can achieve very low BERs when transmitting data at a rate (in bit/s) which approaches the capacity of the channel.

Take care, Rob.

March 15th, 2013 at 5:56 pm

Hi Rob,

When we say that the ultimate Shannon limit is -1.59 dB, it means that no matter what code rate we use, we cannot perform reliable communication below -1.59 dB. Now for the BPSK scheme over an AWGN channel, the uncoded BER at -1.59 dB is ~0.12, meaning that on average 12% of the (hard-decoded) bits are in error. If a turbo code can handle an uncoded BER of 0.22, doesn't that sound spooky? Even in the CCSDS 2012 report, the rate-1/6 turbo code can only go down to -0.2 dB Eb/No, which means it can handle an uncoded BER of only ~0.095.

I am working on a channel coding technique. What should my AWGN channel model be? I have used the BPSK scheme with bipolar mapping (bit 0 is modulated as +1 and bit 1 is modulated as -1). Then I add AWGN using the inbuilt Matlab "awgn" function and set the SNR to 3 (SNR=3, so for the BPSK scheme Eb/No = 0 dB). I make hard decisions at the receiving end and I get an uncoded BER of ~0.0786, which matches the theory.

I thought channel coding was all about the amount of uncoded BER you can handle, and that everything in between is just the process of achieving this goal.

Please correct me if there is anything wrong with my understanding:

i.e. Uncoded BER at -1 dB for BPSK is ~ 0.10

When we encode 100 bits with code rate 1/6 we get 600 coded bits.

When these 600 bits are transmitted at -1 dB Eb/No (or equivalent SNR) the received 600 bits will have ~0.10 * 600 bits in error

We need to correct these errors and bring down the BER to < 10^-5

I really appreciate your help.

Regards,

Mahesh Patel

March 18th, 2013 at 10:37 am

Hi Mahesh,

I suspect that you may be getting confused between SNR and Eb/N0. The relationship between these is…

Eb/N0 [in dB] = SNR [in dB] - 10*log10(eta),

where eta is the number of bits of information that are conveyed in each transmitted symbol. In uncoded BPSK, eta is 1, so Eb/N0 = SNR.

Take care, Rob.
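This relationship can be expressed as a small helper function; a sketch in Python (the function name is illustrative):

```python
import math

def ebn0_from_snr(snr_db, eta):
    """Convert channel SNR (in dB) to Eb/N0 (in dB), where eta is the
    number of information bits conveyed per transmitted symbol."""
    return snr_db - 10 * math.log10(eta)

# Uncoded BPSK conveys eta = 1 bit per symbol, so Eb/N0 equals the SNR.
assert ebn0_from_snr(3.0, 1) == 3.0

# A rate-1/3-coded BPSK scheme conveys eta = 1/3 bits per symbol, so its
# Eb/N0 is about 4.77 dB above the SNR.
assert abs(ebn0_from_snr(0.0, 1.0 / 3.0) - 4.7712) < 1e-3
```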

April 18th, 2013 at 6:31 am

Hi Rob,

I have implemented my own Turbo Encoder and Interleaver. Now I want to use your decoder. But I am completely stuck in the receiver side programs. Your decoder takes LLR inputs and LLR generation program needs mutual information while the mutual information program needs LLR inputs.

So kindly will you please guide me from where will I start??

One more question: component_decoder.m is one constituent decoder. So to implement the complete turbo decoder, will just feeding the output of one decoder into the other work? Or do I need to make other modifications?

Waiting for your kind reply. Thanks in advance.

April 18th, 2013 at 4:33 pm

Hi Deep,

The functions for generating LLRs and measuring mutual information are only useful for drawing EXIT charts, to characterise the operation of the turbo decoder. These functions are not a part of the turbo decoder itself. Instead, your LLRs should be output by your demodulator. If you look in my main_ber.m script, you can see how I get a BPSK demodulator to provide LLRs to my turbo decoder. This script will also show you how to feed the LLRs produced by one component decoder into the other.

Take care, Rob.

April 19th, 2013 at 1:42 pm

Hi Rob,

Thanks a lot. I am unable to understand the code in two different places.

1. a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

Is this the LLR output of the BPSK demodulator? If yes, then how is it in the LLR form? Is it in normal domain or logarithmic domain? Will you kindly please explain a bit.

2. errors = sum((a_p < 0) ~= a);

You have used this line to make a hard decision and count the number of errors. My question is how we will make the hard decision. This line will just calculate the number of errors.

Waiting for your kind reply. Thanks in advance.

April 19th, 2013 at 4:51 pm

Hi Deep,

1. This is the LLR output of the BPSK demodulator. It is in the LLR form - namely ln(P0/P1).

2. The hard decision is obtained by

decoded_bits = a_p < 0;

Take care, Rob.
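In other words, the single Matlab line performs the hard decision and the error count together. A sketch of the two steps in Python (the bit and LLR values are made up purely for illustration):

```python
# LLR convention ln(P0/P1): a positive LLR favours bit 0, a negative one bit 1.
a = [0, 1, 1, 0]             # transmitted bits (illustrative)
a_p = [2.1, -0.4, 0.3, 1.7]  # a posteriori LLRs (illustrative)

# Hard decision: equivalent to Matlab's "decoded_bits = a_p < 0".
decoded_bits = [1 if llr < 0 else 0 for llr in a_p]

# Error count: equivalent to Matlab's "errors = sum((a_p < 0) ~= a)".
errors = sum(d != b for d, b in zip(decoded_bits, a))

# The third LLR (0.3) favours bit 0 although a 1 was transmitted,
# so one error is counted here.
assert errors == 1
```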

April 19th, 2013 at 6:34 pm

Thanks Rob,

One more question: if I want to use my LLRs as ln(P1/P0), will changing the modulator and demodulator be sufficient? Or do I have to make changes in the decoder? Will the alpha, beta, gamma and delta calculations be the same?

April 20th, 2013 at 6:47 am

Dear Rob,

Will you please explain in a bit more detail how

a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

is in the ln(P0/P1) form?

April 22nd, 2013 at 6:10 pm

Hi Deep,

You will need to make changes in the decoder. You need to change

if transitions(transition_index, 3)==0

if transitions(transition_index, 4)==0

extrinsic_uncoded_llrs(bit_index) = prob0-prob1;

to

if transitions(transition_index, 3)==1

if transitions(transition_index, 4)==1

extrinsic_uncoded_llrs(bit_index) = prob1-prob0;

Note that the equation

a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

gives an identical result to

a_c = 4*real(a_rx)/N0;

Perhaps you are more familiar with this version of the equation…

Take care, Rob.

May 6th, 2013 at 12:46 am

Hi Rob,

Thanks for the code. My question is why you use the rate 1/2 for SNR calculations in main_ber.m. To quote your comment, you said "Only the information that is used by the upper decoder is transmitted in this simulation." But it seems like you use the parity bits from both encoders in main_ber.m, as far as I understand. Then the rate would be 1/3, as in the standard UMTS turbo code. Could you please clarify this?

May 6th, 2013 at 1:24 am

Dear Rob,

I need to find the best LDPC code (parity check matrix) for my application. My block length =10 Kbits and rate=1/2 are fixed. Should I use PEG algorithm? Can you guide me to some codes for calculating the parity check matrix? I want to use the H with MATLAB’s LDPC encoder/decoder.

I tested the code you uploaded above, but it takes a long time for a 10 kbit block length. When I feed the resulting H to fec.ldpcenc I get the error: “the last (n-k) columns of the parity-check matrix must be invertible in gf(2).” Can you please explain why?

Thank you,

Elnaz

May 6th, 2013 at 10:09 pm

Hi Rob,

I would like to also know if this code is punctured to get rate 1/2, how should I modify the script to simulate the turbo code with original rate 1/3?

FYI, the reported UMTS BER vs Eb/No results on this website are significantly better than the ones I found here http://www.csee.wvu.edu/~mvalenti/documents/DOWLA-CH12.pdf and in a couple of other papers like this one: http://delivery.acm.org/10.1145/1340000/1330710/a105-iliev.pdf.

Thanks in advance,

Selin

May 7th, 2013 at 8:02 pm

Hi Selin,

This is a rate-1/3 code, as you suspect. Please can you let me know where I say that it is a rate-1/2 code, so that I can fix it!

You can increase the rate to 1/2 by discarding some bits before passing them to the modulator - you can either discard a random selection of bits, or use a particular puncturing pattern. In the receiver, the discarded bits should be replaced using zero-valued LLRs.

Take care, Rob.
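A minimal sketch of this puncturing and depuncturing in Python (the puncturing pattern, bit values and LLR values are all illustrative, not those of any standard):

```python
parity = [1, 0, 1, 1, 0, 1]  # parity bits from one encoder (illustrative)

# Transmitter: discard every other parity bit (a simple puncturing pattern).
punctured = parity[0::2]     # only these bits are modulated and sent

# Receiver: one LLR arrives per surviving bit (illustrative values)...
received_llrs = [3.2, -1.1, 0.8]

# ...and each discarded position is refilled with a zero-valued LLR,
# which expresses no knowledge about that bit.
depunctured_llrs = []
for llr in received_llrs:
    depunctured_llrs.extend([llr, 0.0])

assert punctured == [1, 1, 0]
assert depunctured_llrs == [3.2, 0.0, -1.1, 0.0, 0.8, 0.0]
```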

May 7th, 2013 at 8:06 pm

Hi Elnaz,

I’m afraid that I don’t know where to get a good algorithm for designing LDPC codes - in the past, I have always focused on the LDPC designs used in the standards. I think that Matlab’s decoder generates this error because it can’t work out the generator matrix from the parity check matrix you have provided it with - I’m afraid that I don’t know how you can fix this…

Take care, Rob.

May 7th, 2013 at 10:29 pm

Hi Rob,

Thanks for the reply. In main_ber.m file, you use this equation to calculate noise variance

% Convert from SNR (in dB) to noise power spectral density.

N0 = 1/(10^(SNR/10));

but σ^2 = 1/(2R *Eb/N0) and R = 1/3 in this case. So I think you are using a wrong variance value for your AWGN channel.

Do you happen to know about MATLAB's new built-in Turbo Encoder and Decoder? I get worse performance in terms of BER with MATLAB's own implementation and wondered if you have any knowledge of this.

Thanks,

Selin

May 8th, 2013 at 5:10 pm

Hi Selin,

I agree that σ^2 = 1/(2R *Eb/N0) in the case of BPSK modulation. However, I don’t agree that this is inconsistent with N0 = 1/(10^(SNR/10)). I think that both equations are true.

I guess that you are referring to…

http://www.mathworks.co.uk/help/comm/ref/turboencoder.html

http://www.mathworks.co.uk/help/comm/ref/turbodecoder.html

I’m afraid that I don’t have the latest version of Matlab, so I haven’t been able to try these…

Take care, Rob.
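The consistency of the two equations (with SNR defined as Es/N0 and unit-energy BPSK symbols) can be checked numerically; a sketch in Python, where the SNR value is illustrative:

```python
# With SNR defined as Es/N0 and unit-energy BPSK symbols:
#   N0 = 1/10^(SNR_dB/10)  and  sigma^2 = N0/2 = 1/(2*R*(Eb/N0))
snr_db = 3.0        # illustrative channel SNR (Es/N0) in dB
R = 1.0 / 3.0       # coding rate of the unpunctured turbo code

N0 = 1 / (10 ** (snr_db / 10))  # as in main_ber.m
sigma2 = N0 / 2                 # noise variance per dimension

# Eb/N0 = (Es/N0)/R, since each BPSK symbol carries R information bits.
ebn0_linear = (10 ** (snr_db / 10)) / R

assert abs(sigma2 - 1 / (2 * R * ebn0_linear)) < 1e-12
```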

May 8th, 2013 at 11:50 pm

Hi Rob,

I guess if you are defining SNR as Es/No, which is R*Eb/No, then yes, both equations would be correct: σ^2 = N0/2 = 1/(2*SNR), so N0 = 1/SNR = 1/(10^(SNRdB/10)).

As for MATLAB's implementation, they provide the demo results for the UMTS turbo code on this page http://www.mathworks.com/help/comm/examples/parallel-concatenated-convolutional-coding-turbo-codes.html#commpccc-13

Link for the BER curve:

http://www.mathworks.com/help/comm/examples/commpccc_03.png

These results are clearly worse than those provided here:

http://www.csee.wvu.edu/~mvalenti/documents/DOWLA-CH12.pdf

Do you have an opinion or sources you can refer me to help me figure which one is correct?

Thanks,

Selin

May 9th, 2013 at 12:51 pm

Good evening. Please, can you give me the different code rates (puncturing patterns) of the telephone networks (LTE, UMTS, GPRS, CDMA2000, CDMA, HSDPA, HSUPA, HSPA+…)? I need these code rates urgently.

Thanks in advance

May 9th, 2013 at 5:53 pm

Hi Selin,

That’s right - I’m using SNR as Es/N0.

It looks like the Matlab turbo decoder provides sub-optimal performance for some reason. Both my results and the results in the book you have linked to are superior. If I were you, I would trust that book over Matlab…

Take care, Rob.

May 9th, 2013 at 5:55 pm

Hello Awatef,

I’m afraid that I don’t have any code for these different puncturing patterns - I normally focus on unpunctured codes…

Take care, Rob.

May 9th, 2013 at 6:50 pm

@ ROB

First of all, thank you for your reply. Next, I need to know the different puncturing patterns, not the code - that is, I am doing research on the different puncturing patterns in order to code them later.

Thanks

May 9th, 2013 at 10:00 pm

Hi Rob,

Thanks for taking the time to look at those graphs. I decided not to use MATLAB’s turbo decoder since all the other turbo code performances seem to be matching except for MATLAB’s. Did you publish any of your BER results in a paper so I can refer to them as well? (I’m a PhD student at UCSD and using turbo codes as part of my research)

Awatef,

You might want to have a look at the CML library. They have puncturing methods there. http://www.iterativesolutions.com/download.htm

Also, for optimal puncturing patterns for certain constituent encoders (unfortunately not UMTS, but still gives you an idea), you can refer to this paper:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=848555&tag=1

Cheers,

Selin

May 10th, 2013 at 5:17 pm

A big thank you, Selin.

But the files are .mat files (in Excel) and they no longer open.

Thanks

May 10th, 2013 at 5:23 pm

Hi Selin,

We have published some LTE BER plots in…

http://eprints.soton.ac.uk/271820/

Thank you for offering some help to Awatef. Your French must be better than mine!

Take care, Rob.

May 10th, 2013 at 5:24 pm

Hi Awatef,

I’m afraid that my French is rather rusty and so I’m not sure what you are asking for. I wonder if you can restate the question in English?

Take care, Rob.

May 10th, 2013 at 7:41 pm

A big thank you, Selin.

But the files in the CML Source Code are files with the .mat extension (in Excel), so they no longer open.

Thank you

May 10th, 2013 at 7:42 pm

A big thank you, Rob.

May 10th, 2013 at 8:06 pm

Hi Rob,

Thanks a lot. All the BER curves but Matlab’s are indeed matching.

Awatef,

If you download the cml.1.10.zip, you can see that under cml/mat folder, there are MATLAB files with .m extension. And depending on what you exactly want to simulate, you can just run the CmlSimulate.m with the already existing scenarios by only changing the puncturing pattern as you wish. .mat files are just outputs they simulated before and provided for convenience. Hope that was helpful.

Selin

May 27th, 2013 at 12:42 pm

@ selin

A big thank you for your effort. Please, can you download these codes for me from http://www.pudn.com/search_db.asp?keyword=puncturing+vhdl

Thanks in advance

awatef

June 20th, 2013 at 8:46 am

hello Rob,

I want to know - personally, I am developing a repetition code using the repmat and reshape commands in Matlab. The problem is that I need to use the a priori uncoded values as feedback for the repetition code decoder, but I have no idea how to match them using the repmat and reshape commands. The length of the codeword needs to be considered if I simply feed in the a priori uncoded values.

The BCJR algorithm that you have created is just for systematic values, right? Any suggestions for me?

thank you

June 20th, 2013 at 8:50 am

FYI, I am actually using an irregular repetition code, which has various repetition values.

thanks

June 20th, 2013 at 6:04 pm

Hi Zee,

The input to your repetition decoder is a number of a priori LLRs, which all pertain to repetitions of the same bit. You can calculate an a posteriori LLR by adding all of these a priori LLRs together. You can then calculate an extrinsic LLR that corresponds to each a priori LLR by subtracting the a priori LLR from the a posteriori LLR.

I hope that makes sense.

Take care, Rob.
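A minimal sketch of this combining in Python (the LLR values are illustrative):

```python
# Three a priori LLRs, all pertaining to repetitions of the same bit:
apriori = [1.2, -0.3, 2.0]

# A posteriori LLR: the sum of all the a priori LLRs.
aposteriori = sum(apriori)

# Extrinsic LLR for each repetition: the a posteriori LLR minus that
# repetition's own a priori LLR.
extrinsic = [aposteriori - llr for llr in apriori]

assert abs(aposteriori - 2.9) < 1e-9
assert all(abs(e - x) < 1e-9 for e, x in zip(extrinsic, [1.7, 3.2, 0.9]))
```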

July 3rd, 2013 at 8:22 am

hi Rob,

Thanks for your answer. One more thing regarding the repetition code: can you give me an idea of how to decode the encoded values? Because in a repetition code, no a priori uncoded calculation takes part.

thank you

July 3rd, 2013 at 6:29 pm

Hello Zee,

You can get an aposteriori LLR for the uncoded bit by simply adding together the apriori LLRs for all of the corresponding encoded bits.

Take care, Rob.

July 19th, 2013 at 3:52 am

Dear Rob

Can you provide me with some references about what iteration is and why we need it, please?

thanks

July 19th, 2013 at 10:12 am

Hello Brad,

Iteration was first proposed by Claude Berrou in…

http://engrwww.usask.ca/classes/EE/814/Papers/turbo_icc93.pdf

You can read a tutorial on this at…

http://www2.elo.utfsm.cl/~ipd465/Papers%20y%20apuntes%20varios/Turbo%20codes%202.pdf

Take care, Rob.

August 9th, 2013 at 8:31 am

I am new to this Turbo Coding and need a simple example of the same. Can anyone help me with a simple implementation of Turbo Product Codes?

Thanks.

August 9th, 2013 at 9:16 am

Hello Sanket,

I’m afraid that I don’t have any implementation of turbo product codes. Perhaps somebody else here can help you…

Take care, Rob.

September 17th, 2013 at 9:57 pm

Hi Rob,

I am looking into PR equalization. Basically, I have the input (BPSK bits) and the output of an unknown channel, and I want to equalize the output to a known target. Could you please guide me as to which algorithm/filter I should be using for this? Any resources?

Thanks,

Elnaz

September 19th, 2013 at 9:03 am

Hi Elnaz,

There are many different equalization techniques that can be used, including linear and turbo-based methods. The only ones that I am really familiar with are the turbo-based methods, where a trellis is used to describe the interactions between the transmitted symbols. I would suggest typing “turbo equalization” into Google Scholar…

Take care, Rob.

September 24th, 2013 at 7:06 am

Hi Rob

I have a general question please.

I am using puncturing technique in LDPC code with degree 2 and degree 4 variable nodes and degree 5 check nodes.

When I implement puncturing, my simulation results show that we can get a better threshold (lower SNR) if we puncture more degree-2 variable nodes and fewer degree-4 variable nodes.

My question is: what can be the reason behind this behaviour of getting better results when puncturing more degree-2 variable nodes rather than degree-4 ones?

Everything is OK; I am just asking whether there could be a basic reason related to this special structure of the LDPC code, whether it is always better to puncture more lower-degree variable nodes than higher-degree ones, or whether there could be some other relationship behind it?

Can you please give me a clue/idea thanks.

September 24th, 2013 at 12:36 pm

Hi Ideal,

I suspect that this is because the EXIT curve of the VND is affected by how you do the puncturing. Presumably, puncturing in the way you describe gives a VND EXIT curve that offers a better match with that of the CND, allowing an open EXIT chart tunnel to be created at a lower SNR.

Take care, Rob.

October 29th, 2013 at 8:36 pm

Hi Rob, I have just started reading the report and expect to move on to the Matlab code once done with the report. In Section 2.2.1, page 18, it is mentioned that K=4 is the constraint length and m=3 is the memory of the convolutional code. Are you defining constraint length as the maximum number of output bits in a single output stream that can be affected by any input bit? This would mean K = 1 + m. I wanted to confirm this notion, since different books use different definitions.

Take Care, Goshal

October 30th, 2013 at 3:28 pm

Hi Goshal,

That’s correct. As you point out, different people use “constraint length” to mean different things. I normally use it to mean the length of the generator and feedback polynomials. Note that in a convolutional encoder having more than one generator polynomial, the number of memory elements is typically given by m = n*(K-1), where n is the number of generator polynomials.

Take care, Rob.

November 5th, 2013 at 11:03 am

Hi Rob,

I tried to check your code with the Fig.3 in ten Brink 2001 paper.

I have changed the SNR like:

% Channel SNR in dB

EbNo = 0.8;

SNR = EbNo + 10*log10(0.5); % Choose the SNR

and I do the puncturing algorithm as your advice:

% Encode using a half-rate systematic recursive convolutional code having a single memory element

[encoded1_bits, encoded2_bits] = convolutional_encoder(uncoded_bits);

% Puncturing

c2 = reshape(encoded2_bits,2,length(encoded2_bits)/2);

b = c2(1,:);

% BPSK modulator

tx1 = -2*(encoded1_bits-0.5);

tx2 = -2*(b-0.5);

% Send the two BPSK signals one at a time over an AWGN channel

rx1 = tx1 + sqrt(N0/2)*(randn(1,length(tx1))+i*randn(1,length(tx1)));

rx2 = tx2 + sqrt(N0/2)*(randn(1,length(tx2))+i*randn(1,length(tx2)));

% BPSK demodulator

apriori_encoded1_llrs = (abs(rx1+1).^2-abs(rx1-1).^2)/N0;

apriori_encoded2_llrs = (abs(rx2+1).^2-abs(rx2-1).^2)/N0;

%In the receiver the inverse operation for LLRs is…

c2_tilde = [apriori_encoded2_llrs; zeros(1,length(apriori_encoded2_llrs))];

c_tilde = reshape(c2_tilde,1,numel(c2_tilde));

after that I decode:

% Do the BCJR

[aposteriori_uncoded_llrs, aposteriori_encoded1_llrs, aposteriori_encoded2_llrs] = bcjr_decoder(apriori_uncoded_llrs, apriori_encoded1_llrs,c_tilde);

But the results are not the same as in ten Brink's paper.

I have even tried the (13,15) memory-3 RSC.

I hope you can give me some advice.

November 6th, 2013 at 8:27 am

Dear sir,

I see that my problem is that I forgot to change the code rate after puncturing the parity bits.

So, it should be: SNR = EbNo + 10*log10(2/3);

But the chart still looks different from ten Brink's, sir.

November 6th, 2013 at 6:42 pm

Hi Juan,

You can check if your code has any bugs in it by doing two things:

1) Comparing the EXIT functions drawn using the averaging method with those obtained from the histogram method - these should be the same.

2) Checking that the area beneath your EXIT function matches with C/R/log2(M), where R=2/3 is your coding rate, M=2 is your modulation order and C is the DCMC channel capacity, which can be measured using the code I have at http://users.ecs.soton.ac.uk/rm/resources/matlabcapacity/

If these checks suggest that you have no bugs in your code, then I suspect that your simulation parameters are not the same as those considered by Stephan ten Brink.

Take care, Rob.

November 27th, 2013 at 4:51 am

Hi Rob,

I want to change your BCJR decoder into a MAP equalizer. My question is related to the transition matrix in component_decoder.m. In the case of an ISI channel we don't have systematic information, so transitions(:,3) is to be removed. I was going through the earlier posts and found one similar question and reply, which is included below for your reference.

Rob Says:

August 30th, 2012 at 6:03 pm

The transition matrix should have M^T number of rows. Each row should one column for the from state, one column for the to state and T number of columns that list every possible combination of T constellation points. The from state depends on the first T-1 constellation points and the to state depends on the last T-1 constellation points. You also need log2(M) number of columns to list the bits that are mapped to the last of the T constellation points.

Take care, Rob.

I understand the to-state and from-state columns. Also, we need one more column, which is the encoded (ISI-ed) output. I did not understand your comment "T number of columns that list every possible combination of T constellation points". T is the length of the channel memory, right? Why do we need T columns here?

Also, if we consider single-bit input to the channel then we need only log2(2)=1 column to list the bits responsible for this state transition. Does this mean that we need log2(M) columns for channels where we have symbol input instead of single-bit input?

According to my understanding, the transition matrix should have to-state and from-state columns, log2(M) columns representing the bits responsible for the state transition, and one column for the encoded (ISI-ed) output. Is this correct?

Regards

Goshal

November 27th, 2013 at 6:49 pm

Hi Rob,

I think I should take one step back to put my situation in perspective. I am trying to simulate the “Combined Turbo Equalization and Turbo Decoding” setup proposed by Dan Raphaeli and Yoram Zarai.

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=664220&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel3%2F4234%2F14569%2F00664220

I want to ensure that my constituent encoders start and stop in the all-zero state. (You have taken care of this matter in your simulation.) Now I need to combine a, [c,e], [d,f] serially, which is also simple, followed by baseband BPSK (0 -> +1, 1 -> -1). Let us assume that my channel response is [1 4 6 4 1].

I have following question.

1. To ensure that my channel trellis starts and ends in the all-+1 state, should I pad a, [c,e], [d,f] with +1 bits? Is this the right way? Or should I initialise alphas(:,1)=0 and betas(:,length(apriori_llrs))=0? The latter case leaves us with the possibility that the equalizer trellis can start and end in any state.

How should I go about it?

Regards,

Goshal

November 27th, 2013 at 7:04 pm

Hi Goshal,

As you suggest, you can program this to have log2(M) number of columns, one for each bit that is mapped to each symbol.

You could put some +1 transmissions at the end of your transmission sequence, to guarantee a particular final state. This would correspond to alphas(1,1) = 0 and betas(1,end) = 0.

Alternatively, you can do without the +1 transmissions and use alphas(1,1) = 0 and betas(:,end) = 0.

I think you’ll find that both options give very similar performance…

Take care, Rob.

November 27th, 2013 at 10:12 pm

Dear Rob,

Thank you for your reply.

My system is

binary bits–>Rate(1/3)TE–>Combine–>BPSK–>ISI-Channel+AWGN–>Rx

Since I have BPSK baseband modulation, I can add real noise to the ISI-ed symbols. These ISI-ed symbols belong to {-16,-8,0,8,16} for Channel = [4 8 4]. So, at the receiver end, how do I perform BPSK soft demodulation? Can you please refer me to some material that discusses soft demodulation? I want to modify your component_decoder so that it can be used as a MAP equalizer.

Regards,

Goshal

November 27th, 2013 at 10:14 pm

I meant: how do I perform soft demodulation of this M-PAM received symbol stream?

November 28th, 2013 at 6:01 pm

Hi Goshal,

You can see how to do generalised soft demodulation in the example I have provided at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-1545

Essentially, you can use the fading and ISI in your channel to create a hypothesis of the received signal for each transition in your channel. The corresponding soft information is obtained according to…

symbol_probability = -abs(rx-hypothesis).^2/N0

Take care, Rob.
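A minimal sketch of this hypothesis-based soft demodulation in Python, for a small real-valued alphabet (the received sample, alphabet and N0 are illustrative; a real equalizer would compute one such hypothesis per trellis transition):

```python
N0 = 1.0
rx = 7.1                       # received (noisy) real-valued sample
hypotheses = [-8.0, 0.0, 8.0]  # possible noiseless channel outputs

# Unnormalised log-domain symbol probabilities, one per hypothesis,
# following symbol_probability = -abs(rx - hypothesis)^2 / N0.
log_probs = [-abs(rx - h) ** 2 / N0 for h in hypotheses]

# The most likely hypothesis is the one with the largest log-probability.
best = hypotheses[log_probs.index(max(log_probs))]
assert best == 8.0
```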

December 2nd, 2013 at 3:35 pm

Hi Rob,

I have a question related to the setting up of a-priori information for the BCJR equalizer.

In the main_ber.m file, you have following lines

% BPSK demodulator

% These labels match those used in Figure 2.11 of Liang Li’s nine month report.

a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0;

% We have no a priori information for the uncoded bits in the first decoding iteration.

% This label matches that used in Figure 2.11 of Liang Li’s nine month report.

a_a = zeros(size(a));

% Obtain the uncoded a priori input for component decoder 1.

% These labels match those used in Figure 2.11 of Liang Li’s nine month report.

y_a = [a_a+a_c,e_c];

-abs(a_rx+1).^2/N0 is soft information, i.e. a bit probability, when I compare this expression to the expression symbol_probability = -abs(rx-hypothesis).^2/N0 in your last message. But a_a+a_c is an addition of LLRs, whereas my soft information is not in the form of LLRs. The file component_decoder.m (which I want to modify for my BCJR equalizer) takes apriori_llrs as input. My soft information is not in this format. Am I missing something here? I would appreciate it if you could guide me through this.

December 3rd, 2013 at 3:17 am

Hi Rob,

I am using a simple 1st order PLL using M&M estimate for TED. Do you know how I should choose the optimum PLL gain (step size) for each SNR? I am looking for a systematically optimum way to do this. Could you please let me know what you think?

Thanks,

Elnaz

December 3rd, 2013 at 1:43 pm

Hi Goshal,

I would expect your turbo equaliser to have two inputs:

- A vector of apriori LLRs, which are provided by your channel decoder. These LLRs should pertain to the bits that are input into your modulator.

- A vector of complex received signals, which are provided by your channel.

I would expect it to have one output, namely a vector of extrinsic LLRs, which you can provide to your channel decoder. These LLRs should pertain to the bits that are input into your modulator.

I’m not sure if this answers your question…

Take care, Rob.

December 3rd, 2013 at 1:44 pm

Hi Elnaz,

I’m afraid that I don’t have much experience with PLLs and so I can’t offer you any help with this.

Take care, Rob.

December 3rd, 2013 at 4:15 pm

Hi Rob,

Thank you for your reply.

You suggested that my equalizer should have one input of received_signal_vector coming from the channel. But if I use this directly as input then I will not be performing soft-demodulation. Also the component_decoder takes apriori_llrs as input which is actually the addition of a priori_llrs from channel decoder and the soft_decisions (a_c = (abs(a_rx+1).^2-abs(a_rx-1).^2)/N0) of the received signal.

As you suggested in one of your previous replies that soft demodulation of the received_signal can be performed by

symbol_probability = -abs(rx-hypothesis).^2/N0.

But since my symbols are m-bit I cannot formulate a_c kind of quantities here.

Should I give up on soft demodulation and proceed as you suggested, or is there some way to get around this problem so that I can perform a y_a = a_a + a_c kind of computation before calling the equalizer block? By the way, in this case the equalizer would have only one input, i.e. y_a.

My second question is about soft-demodulation.

ln(P(rx|hypothesis)) = ln(gamma in the normal domain) = gamma in the logarithmic domain

Does this mean that soft demodulation is just a prior step of computing the gammas for the log-MAP algorithm?

I am confused by the y_a = a_a + a_c step in the original main_ber.m. I don't understand how to incorporate this step into my TE setup. I would appreciate it if you could comment on the significance of this step and how it can be performed for the turbo equalizer.

I will be very grateful.

Regards

Goshal

December 4th, 2013 at 5:22 pm

Hi Goshal,

The way I’m imagining the turbo equaliser is that the soft demodulation is performed as an integral part. This means that the apriori_llrs come only from the channel decoder, with no addition of information from the soft demodulator. As you say, soft-demodulation is just a prior step of computing gammas for log-map algo.

Take care, Rob.

December 15th, 2013 at 3:51 pm

Hi Rob

I am trying to adopt turbo code with 16 QAM modulation and Using OFDM system through Selective frequency slow fading channel.

Could you tell me what I should change in your turbo code in order to make it work with my system?

Thanks for your great efforts.

Ali

December 15th, 2013 at 3:53 pm

And how can I make the LLRs at the receiver?

December 16th, 2013 at 6:53 pm

Hi Ali,

There are no changes that need to be made to my turbo code - it doesn’t mind where the input LLRs come from. I’m afraid that I don’t have any Matlab code for OFDM, but you can modify the following code to get 16QAM…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-1545

Take care, Rob.

December 17th, 2013 at 12:01 pm

Thank you Rob

December 21st, 2013 at 3:38 pm

Hi Rob,

I am using the 1/3 turbo encoder, multiplexing the coded streams and sending them over an ISI channel. At the receiver I have implemented a MAP equalizer followed by your turbo decoder. The outputs of the equalizer are the extrinsic LLRs of the multiplexed stream. I split them into a_c, c_c, d_c, e_c, f_c and initialise a_a with zeros to start the turbo decoder. a_a is the a priori information from decoder 2. I perform the iterations within the turbo decoder (just as it happens in your decoder). The additional information which I compute in my decoder is the extrinsic information of the coded streams c, d and the termination bits e, f. I recombine them at the output of the decoder so that they can be sent to the equalizer for the next outer iteration.

My question is regarding the measuring IA for turbo equalization.

In your turbo decoder

IA = measure_mutual_information_averaging(a_a)

where

a_a(interleaver) = b_e;

So we were comparing, in terms of the input stream 'a', how much we have gained by iterating between the two decoders. For the turbo equalization scenario, we have information regarding the multiplexed stream at the input and output of the equalizer. Should I be measuring the IA for the entire multiplexed stream to see how well one outer iteration (an iteration between equalizer and decoder) has performed?

I am confused since we want to decode \\\’a\\\’.Therefore we should be looking at how much we have gained in terms of input stream only instead of measuring information for the entire stream.

What do you think?

Regards

Goshal

December 21st, 2013 at 7:34 pm

Hi Rob,

I have one more question for you.

I am calling an iteration between the equalizer and the turbo decoder an outer iteration, whereas the iterations within the turbo decoder, i.e. between decoder 1 and decoder 2, are inner iterations. Have you come across any literature which addresses the question of optimizing the number of inner iterations for a turbo equalization setup? Intuitively speaking, more inner iterations would imply minimizing the BER using error correction coding, which is fine, but when we test the performance of an equalizer in the turbo equalization scheme we are more interested in seeing the improvement in the system caused by the equalizer. Does this mean that we should use only one inner iteration and more outer iterations?

Goshal

December 23rd, 2013 at 2:37 pm

Hi Goshal,

It sounds like you are interested in plotting an EXIT chart trajectory for your three-stage concatenated scheme. One approach for this would be to do it as you describe, looking only at the MI for the message a. The trouble with this is it is blind to the iteration between the turbo decoder and the equaliser. The alternative would be to come up with a new way to visualise a trajectory for the whole scheme. As far as I’m aware, this hasn’t been done before for the type of concatenation you have. My feeling is that it would require several 3D EXIT charts to visualise the whole scheme. The trouble is that this would probably give a very confusing plot that is difficult to interpret. A third option that I can think of is to treat the turbo decoder as a black box, where LLRs from the equaliser go in and LLRs from the turbo decoder come out. Inside the black box, you would perform lots of turbo decoder iterations. This whole scheme would then look like a serial concatenation between an outer turbo decoder and an inner equaliser. You could then use standard techniques for plotting the EXIT chart and trajectory for this scheme.

My intuition is that it would be better to activate each of the two turbo decoder components more often than you activate the equaliser. This is because the turbo decoder components are relatively strong error correction codes, whereas the equaliser is relatively weak at correcting errors. I’m sure that there are papers on the adaptive iteration of schemes like yours, but I’m afraid that I don’t have any to hand.

Take care, Rob.

December 24th, 2013 at 6:34 pm

Hi Rob,

I am confused about the extrinsic LLRs which we compute using a BCJR-based equalizer.

When we use turbo codes at the transmitter and pass the multiplexed coded stream over an ISI channel, the extrinsic LLRs computed using the BCJR equalizer act as soft channel inputs for the turbo decoder.

I was going through

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=664220&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel3%2F4234%2F14569%2F00664220

and the authors have performed an intermediate step on the extrinsic LLRs coming from the MAP channel decoder, to turn them into equivalent soft channel inputs for the turbo decoder.

I am trying to simulate exactly the same block diagram, but my SPLIT block only performs the demux operation and splits the stream of extrinsic LLRs coming from the equalizer into soft channel information a_e, c_e, d_e, e_e, f_e that can be used by your component decoders.

Can you please explain the difference between the two approaches.

Regards

Goshal

January 2nd, 2014 at 11:12 am

Hi Goshal,

I wouldn’t recommend the approach used in that paper. Their turbo equaliser outputs LLRs. Then, they use a model estimation block to convert these LLRs into what looks like a received BPSK signal. This then allows them to use a turbo decoder that accepts a received BPSK signal. I guess they did it this way because they didn’t know how to make a turbo decoder that accepts the LLRs directly.

My recommendation would be to cut out the middle man. Instead, I would recommend using a turbo decoder like mine, which accepts LLRs, rather than a received BPSK signal. This turbo decoder can then be driven directly from the LLRs provided by the turbo equaliser.

Take care, Rob.

January 26th, 2014 at 11:41 am

Namastey Rob,

I have been working on turbo codes for 4 months and trying to implement them in MATLAB. A few days ago I found this blog and used your code, but I found something wrong in it.

I simply performed encoding, modulation, noise addition (AWGN) and then decoding. When I used Matlab's direct command a_rx = awgn(a_tx,SNR) instead of a_rx = a_tx + sqrt(N0/2)*(randn(size(a_tx))+1i*randn(size(a_tx))), the program doesn't work correctly.

In your code I found that the error count (errors = sum((a_p < 0) ~= a)) is zero every time, i.e. from iteration 1 to iteration 12 the errors are zero. And when I use the awgn command, the program fails to perform the operation.

After going through the program I found that the signs of the entered message bits and the received demodulated message bits are the same (i.e. the signs of vectors a and a_c agree), so there is no need to perform the decoder operations.

So, correct me if I am wrong.

Regards

Rushin

January 27th, 2014 at 3:36 pm

Hi Rushin Shah,

I think I know what is going on here. My code is using complex-valued noise. By contrast, Matlab’s awgn function will use real-valued noise, when a_tx is real valued. Owing to this, there will be a 3 dB difference between the results. To get the same operation as my code, you would need to use…

a_rx = awgn(complex(a_tx),SNR);

Take care, Rob.
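To illustrate the difference Rob describes, here is a small Python/NumPy sketch (an illustration only, not the Matlab code itself) comparing the noise power seen by a real-valued BPSK demodulator under the two models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
snr_db = 3.0
N0 = 10 ** (-snr_db / 10)  # noise power for unit-power BPSK symbols

# Complex-valued noise with total power N0, as in the turbo code scripts
complex_noise = np.sqrt(N0 / 2) * (rng.standard_normal(n)
                                   + 1j * rng.standard_normal(n))

# Real-valued noise with power N0, which is what awgn() adds when a_tx is real
real_noise = np.sqrt(N0) * rng.standard_normal(n)

# A BPSK demodulator only sees the real part, so only half of the complex
# noise power affects it: hence the 3 dB difference between the two results
diff_db = 10 * np.log10(np.var(real_noise) / np.var(complex_noise.real))
print(diff_db)  # close to 3 dB
```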

January 27th, 2014 at 4:05 pm

Hi sir,

Please can you help me implement the same code for a Rayleigh channel? Please tell me the modifications. Do reply as soon as possible.

January 27th, 2014 at 5:53 pm

Thank you so much Rob sir…

January 27th, 2014 at 6:37 pm

Hi Supriya,

You can modify this code to use a Rayleigh fading channel by using code like the following…

a_tx = -2*(a-0.5);

a_n = sqrt(N0/2)*(randn(size(a_tx))+i*randn(size(a_tx)));

a_h = sqrt(1/2)*(randn(size(a_tx))+i*randn(size(a_tx)));

a_rx = a_h.*a_tx + a_n;

a_c = (abs(a_rx+a_h).^2-abs(a_rx-a_h).^2)/N0;

Here, ‘a’ is a bit vector to be BPSK modulated and ‘a_c’ is a vector of corresponding demodulated LLRs.

Take care, Rob.
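As a sanity check on the snippet above, here is an equivalent Python/NumPy sketch (illustrative only) that runs the fading channel and demodulator over random bits; the hard decisions taken from the LLR signs should give an uncoded BER near the theoretical Rayleigh value of roughly 9% at this noise level:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
N0 = 0.5
a = rng.integers(0, 2, n)            # random message bits
a_tx = -2 * (a - 0.5)                # BPSK: bit 0 -> +1, bit 1 -> -1

# Rayleigh fading coefficients and complex AWGN, as in the snippet above
a_h = np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
a_n = np.sqrt(N0 / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
a_rx = a_h * a_tx + a_n

# Demodulated LLRs: |r+h|^2 - |r-h|^2 = 4*Re(r*conj(h)), so this is the
# matched-filter LLR 4*Re(r*conj(h))/N0 in the ln(P0/P1) convention
a_c = (np.abs(a_rx + a_h) ** 2 - np.abs(a_rx - a_h) ** 2) / N0

ber = np.mean((a_c < 0) != a)        # negative LLR decides bit 1
```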

February 4th, 2014 at 5:19 am

hi sir,

Thank you so much. I implemented it as you said and got output, but there is not much difference between the Rayleigh and AWGN channel outputs. I am not getting the curve as it is in your IEEE paper "Low complexity turbo decoder architecture for energy efficient wireless sensor networks". Please help me.

February 4th, 2014 at 4:14 pm

Thanks for doing this job. I changed main_ber.m to be valid for higher modulation alphabets (M-ary QAM).

Do you have any BER-results, with which I can compare to be sure if everything is correct? Thanks in advance.

February 4th, 2014 at 7:03 pm

Hi Supriya,

My suggestion would be to begin by drawing the EXIT charts for your scheme using both the averaging and histogram methods of measuring mutual information. If both methods give similar EXIT charts, then you can be confident that your results are correct.

Take care, Rob.

February 4th, 2014 at 7:04 pm

Hi BER,

I’m afraid that I don’t have any other BER results. However, I suggest that you follow the advice that I just gave to Supriya, in order to gain confidence that your scheme is working correctly.

Take care, Rob.

February 5th, 2014 at 2:48 pm

Hi Rob,

Thanks a lot for you reply.

What about BLER (block error rate). Do you have any curves for it. Even for BPSK, that would be great.

Thanks in advance!

February 5th, 2014 at 4:55 pm

Hi BER,

I’m afraid that I don’t have any BLER curves. It would be easy for you to modify my code and generate these plots though…

Take care, Rob.

February 13th, 2014 at 7:38 am

hi sir,

Thank you so much. I implemented your code for both the Rayleigh and AWGN channels, and I got an SNR of -3.3 dB for a BER of 10^-2 in the Rayleigh channel and

-4.25 dB SNR in the AWGN channel. I don't know whether this is correct or not. Please help me.

And do turbo codes work better with the AWGN channel or the Rayleigh channel? Please do reply.

February 13th, 2014 at 2:04 pm

Hi Sir,

Could you explain why, in the file generate_llrs.m, the function you use is llrs = random*sigma - (bits-0.5)*sigma^2; and not llrs = random*sigma + (bits-0.5)*sigma^2;?

I think the + operation is the correct one; am I wrong?

Thank you

February 14th, 2014 at 11:49 am

Hi Supriya,

The SNRs you have mentioned seem reasonable to me - you can compare your results with the ones I have posted at the top of this page. Turbo codes work well in both types of channel. Compared to non-iterative channel codes, turbo codes work particularly well in fading channels, because they can use the information received during the peaks of the fading envelope to help recover that received during the troughs.

Take care, Rob.

February 14th, 2014 at 11:52 am

Hi Juan,

This depends on how you define an LLR. In the case where LLR = ln(P0/P1) my equation is correct. In the case where LLR = ln(P1/P0) your equation is correct. Different people use these different definitions of an LLR, so it is always important to check which to use. I used to prefer ln(P0/P1), but more recently I have been using ln(P1/P0)…

Take care, Rob.
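A quick way to see that the minus sign matches the ln(P0/P1) convention is to check the conditional means of the generated LLRs. Here is a Python/NumPy sketch of the same formula as generate_llrs.m (an illustration, using a randomly generated bit vector):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sigma = 2.0
bits = rng.integers(0, 2, n)
noise = rng.standard_normal(n)

# generate_llrs.m formula: under LLR = ln(P0/P1), a 0-valued bit should
# produce LLRs with mean +sigma^2/2 and a 1-valued bit mean -sigma^2/2
llrs = noise * sigma - (bits - 0.5) * sigma ** 2

mean0 = llrs[bits == 0].mean()  # close to +2 for sigma = 2
mean1 = llrs[bits == 1].mean()  # close to -2
```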

February 17th, 2014 at 3:04 pm

hi sir,

Thank you so much. Please can you tell me which type of encoding scheme is used in this Matlab code, where the input comes from, and what the input to the encoder is? Please explain the encoding part. Reply as soon as possible; thanks in advance.

February 17th, 2014 at 6:48 pm

Hi Supriya,

This Matlab code is for the UMTS turbo code, as well as the LTE turbo code. The only difference between these two turbo codes is the design of the interleaver. In the simulation, we are using random bits as the input to the encoder. In real life, these bits would come from a higher layer in the communication stack.

Take care, Rob.

February 18th, 2014 at 4:23 am

hi sir,

How many random bits are used as the input to the encoder? And I think puncturing is done in this program - is that so? Thanks for your reply.

February 18th, 2014 at 4:31 am

Hi sir,

Sir, please can you tell me the bandwidth used for both the AWGN and Rayleigh channels? Sorry for disturbing you a lot.

February 18th, 2014 at 5:49 pm

sir,

Please can you tell me how to obtain the constellation diagram for 64-ary iterative polar modulation using Matlab?

February 19th, 2014 at 3:36 pm

Hi Supriya,

The number of bits that are input into the encoder is equal to the value of the frame_length variable. This code does not use any puncturing. This turbo code has a coding rate of R=1/3. The BPSK modulator has a modulation order of M=2. The number of bits per BPSK symbol is therefore given by eta = R*log2(M) = 1/3. When using ideal Nyquist pulse shaping, the bandwidth required is given by B = fb/eta, where fb is the bit rate you choose to use. The bandwidth efficiency is given by fb/B, which is equal to eta. The bandwidth required doesn’t depend on the type of channel (AWGN or Rayleigh).

Take care, Rob.

February 19th, 2014 at 3:42 pm

Hello Sharmila,

You can see the IPM constellation for 64-QAM in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5970983&tag=1

You can implement this by modifying the Matlab code that I have provided at…

http://users.ecs.soton.ac.uk/rm/wp-content/QPSKEXIT.zip

All you need to do is modify the constellation_point vector and the bit_labels matrix in modulate.m and soft_demodulate.m, so that they correspond to IPM modulation, rather than QPSK modulation.

Take care, Rob.

February 27th, 2014 at 3:43 pm

Hi Rob,

Thanks again for this job. Do you have the bcjr algorithm in case of tail biting convolutional codes? Thanks in advance.

February 27th, 2014 at 5:01 pm

Hello BER,

I’m afraid that I don’t have a version of this code for tail biting convolutional codes. However, the modification would be quite easy to implement. All you need to do is extend the forward and backward algorithms by one extra trellis stage, so that they produce a set of alphas and betas that come out of the ends of the trellis. During the next decoding iteration, you can use the alphas that came out of the right-hand end of the trellis to begin the forward recursion that starts from the left-hand end of the trellis. Likewise, you can use the betas that came out of the left-hand end of the trellis to begin the backward recursion that starts from the right-hand end of the trellis.

Take care, Rob.

March 12th, 2014 at 8:11 am

Hello Rob,

Thank you for your sharing. I find the x-label of your simulation result is 'SNR', but usually we use 'Eb/N0' instead. If we use Eb/N0 as the x-label, assuming that the code rate is 1/3, should we move this curve to the right by about 3.7 dB?

Thank you for your reply.

Sincerely Jun Zhang.

March 12th, 2014 at 8:42 am

Hello Sir,

I have done an implementation of the Matlab turbo code with the Max-Log-MAP, but the result is not so good; there exists almost a 1 dB difference compared with your results. For the Max-Log-MAP, I find some papers wrote 1-2Xk and 1-2Yk to calculate gamma(m',m); can I use Xk and Yk directly? I still don't understand the Max-Log-MAP very well.

thanks in advance.

March 12th, 2014 at 9:28 pm

Hi Jun,

The relationship is…

Eb/N0 [dB] = SNR [dB] - 10*log10(eta)

where eta=R*log2(M) is the number of uncoded bits per modulated symbol. In this case, we have a turbo coding rate of R=1/3 and M=2-ary Phase Shift Keying. So we need to move the plot to the right by 4.77 dB…

Eb/N0 [dB] = SNR [dB] + 4.77

I’m afraid that I’m not sure what you mean by “1-2Xk and 1-2Yk”. Can you point me to the papers you are referring to?

Take care, Rob.
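The 4.77 dB figure follows directly from eta; as a tiny Python check of the same arithmetic:

```python
from math import log10, log2

R = 1 / 3          # turbo coding rate
M = 2              # BPSK
eta = R * log2(M)  # uncoded bits per modulated symbol

# Eb/N0 [dB] = SNR [dB] - 10*log10(eta)
offset_db = -10 * log10(eta)
print(offset_db)   # about 4.77 dB
```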

March 13th, 2014 at 6:31 am

Thank you Rob,

I think I was wrong. The paper I referred to may be not very formal; it's my senior's graduation paper. Do you have any classical documents about the Max-Log-MAP? Besides, I think if we remove ln(1+e^(-abs(b-a))) from the BCJR, it becomes similar to the Max-Log-MAP?

Thanks.

March 13th, 2014 at 8:17 am

Hello Mr. Rob,

Regarding my former question about "1-2Xk": I find that Equations 2.14 and 2.15 in Liang Li's nine month report are (1-y(T))*y~ and (1-c(T))*c~. What does (1-y(T)) mean? In the paper I have read, it is 1-2Xk instead of 1-y(T); what is the difference between them? Forgive my poor English in stating this problem.

Thanks.

March 13th, 2014 at 7:01 pm

Hi Jun,

Here is the original paper on the Max-Log-MAP:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=524253&tag=1

As you say, you just need to remove ln(1+e^(-abs(b-a))) from the BCJR to get the Max-Log-MAP.

In Liang Li’s nine month report, he is saying that if the transition corresponds to a zero-valued bit, then the corresponding gamma should be given a value equal to that of the a priori LLR. By contrast, if the transition corresponds to a one-valued bit, then the corresponding gamma should be given a value of zero.

Some people use different versions of this. For example, if the transition corresponds to a zero-valued bit, then the corresponding gamma is given a value equal to LLR/2. By contrast, if the transition corresponds to a one-valued bit, then the corresponding gamma is given a value of -LLR/2.

Also, some people use the definition LLR=ln(P1/P0), rather than LLR=ln(P0/P1). In this case, the values assigned to the gammas of transitions corresponding to zero- and one-valued bits should be swapped.

Basically, different people do things in different ways and it can be difficult to make sure that everything you are doing is consistent with everything else you are doing…

Take care, Rob.
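To make the Log-MAP vs Max-Log-MAP distinction concrete, here is a Python sketch of the Jacobian logarithm that jac.m implements (the function names here are my own):

```python
from math import exp, log1p, log

def jac_exact(a, b):
    """Exact Jacobian logarithm: ln(e^a + e^b)."""
    return max(a, b) + log1p(exp(-abs(a - b)))

def jac_maxlog(a, b):
    """Max-Log-MAP: drop the ln(1+e^(-|a-b|)) correction term."""
    return max(a, b)

# The correction term is largest (ln 2, about 0.69) when the two inputs
# are equal, which is where the Max-Log-MAP loses the most accuracy
worst_case_error = jac_exact(1.0, 1.0) - jac_maxlog(1.0, 1.0)
```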

March 14th, 2014 at 6:08 am

Hello Mr. Rob,

Thank you for your kind explanation. I will check my code and simulate. If I come across some problems, maybe I will seek your help.

Thank you!

March 19th, 2014 at 11:26 am

Dear Rob,

I am trying again with the tail biting. I changed your component decoder by doing the following steps.

1-Assume:

alphas(:,1)=log(1/state_count); % all states equally probable

2-Calculate the alphas by forward recursion

3-Save the alphas of the last trellis stage

ALPHA = alphas(:,end);

4-Assume:

betas(:,length(apriori_uncoded_llrs))=log(1/state_count); % all states equally probable

5-Calculate the betas by backward recursion

6-Save the betas of the first trellis stage

BETA = betas(:,1);

7-Now calculate the alphas and betas again, initializing them using ALPHA and BETA

8-However, by doing so and using tail-biting encoding, the results are not correct

Did I modify component_decoder in a wrong way?

Thanks in advance!

March 19th, 2014 at 7:55 pm

Hi BER,

I’m not sure if you realise that you need to increase the length of the alphas and betas matrices by one, so that they have dimensions of [number of states,number of trellis stages+1]. Did you figure this part out?

Take care, Rob.

March 20th, 2014 at 9:13 am

Hi Rob,

Thanks for your answer. No, I did not.

1-What should the apriori_uncoded_llrs and apriori_encoded_llrs be for this extra trellis stage? Is it the first element repeated at the end?

2-Should gammas be calculated also twice. First for the extended trellis with equal probable initialization and second for the non-extended trellis with the new initialization of alphas and betas?

Thanks in advance, BER.

March 22nd, 2014 at 10:45 am

Hi BER,

Actually, you don’t need extra apriori LLRs or gammas. If you look carefully, you will see that the right-most apriori LLRs and gammas are not used during the alpha calculations in my code. Likewise, the left-most apriori LLRs and gammas are not used during the beta calculations. You just need to extend the alpha and beta recursions by one so that these LLRs and gammas are used.

Take care, Rob.

March 28th, 2014 at 5:27 am

hi sir,

Thank you so much for your reply. I have another question: why do we get negative SNRs for turbo codes? What does this mean?

March 28th, 2014 at 7:09 pm

Hi Supriya,

A negative SNR implies that the noise power is greater than the signal power. Turbo codes are so good at error correction that they can work even in these conditions.

Take care, Rob.

March 29th, 2014 at 3:54 pm

hi sir,

Thank you so much for your reply. Please can you send the code for a BER vs SNR plot for convolutional codes? I just want to see the difference. Thanks in advance.

March 31st, 2014 at 7:51 am

Hello Dr. Rob,

Recently I have been searching for turbo code puncturing papers and methods, because the 1/3 code rate is inefficient. Do you have any suggestions? Source code would be appreciated.

Thanks in advance.

Jun Zhang.

April 1st, 2014 at 6:01 pm

Hi Supriya,

I’m afraid that I don’t have a BER plot for convolutional codes. I’m sure that you can find this in the literature, if you look carefully. Alternatively, you could modify my turbo code, so that it becomes a convolutional code…

Take care, Rob.

April 1st, 2014 at 6:05 pm

Hi Jun,

Here is some Matlab code for randomly puncturing (and interleaving) some bits…

old_bit_count = length(bits);

interleaver = randperm(old_bit_count);

interleaved_bits = bits(interleaver);

punctured_bits = interleaved_bits(1:new_bit_count);

Here is some Matlab code for depuncturing the corresponding LLRs…

interleaved_llrs = [punctured_llrs, zeros(1,old_bit_count-new_bit_count)];

llrs = zeros(1,old_bit_count);

llrs(interleaver) = interleaved_llrs;

Take care, Rob.
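Putting the two snippets together, here is a Python/NumPy round-trip sketch of the same idea (with toy LLRs): surviving positions keep their LLRs after depuncturing, while punctured positions come back as zero-valued, i.e. uninformative, LLRs.

```python
import numpy as np

rng = np.random.default_rng(3)
old_bit_count = 12
new_bit_count = 8
bits = rng.integers(0, 2, old_bit_count)

# Puncturing: interleave randomly, then keep the first new_bit_count bits
interleaver = rng.permutation(old_bit_count)
interleaved_bits = bits[interleaver]
punctured_bits = interleaved_bits[:new_bit_count]

# Toy LLRs with the correct ln(P0/P1) signs for the surviving bits
punctured_llrs = 1.0 - 2.0 * punctured_bits

# Depuncturing: zero LLRs for the punctured positions, then deinterleave
interleaved_llrs = np.concatenate(
    [punctured_llrs, np.zeros(old_bit_count - new_bit_count)])
llrs = np.zeros(old_bit_count)
llrs[interleaver] = interleaved_llrs
```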

April 18th, 2014 at 3:09 pm

Hi Rob,

The performance of a turbo-principle based receiver improves with iterations. This implies that the reliability of the soft information that is exchanged between the component decoders (in the case of turbo codes) or the component equalizer and decoder (in the case of turbo equalization) improves with iterations.

How do we measure the reliability of soft information? Is it the size abs(llr(n)) of the soft information, for example llr(n) = 0.5 at iteration 4 and llr(n) = 0.9 at iteration 8? Or is it defined by the relative change in the size of llr(n) when it is exchanged between component blocks?

As I mentioned in one of my previous posts, the values of the LLRs at the output of my MMSE soft-output equalizer are very high. When I test my setup for a relatively low-ISI channel, the BER curves show the normal behavior, but in the case of a strong-ISI channel the BER curves are almost flat. I understand that this is a clear indication that my setup is not suitable for strong ISI, but I also wanted to settle this confusion about what we actually mean by the "reliability of an LLR".

Thankyou

April 18th, 2014 at 3:23 pm

I would like to add one more comment here.

To see the size of the LLRs, I changed the while statement in your code

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

from

while chance < chances && iteration_index <= iteration_count

to

while iteration_index <= iteration_count

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

By doing this, I forced the receiver to perform 8 iterations for each frame.

Range of extrinsic_information after each complete iteration was given by

max(a_a)

min(a_a).

-2<range<2 @ low snr

-500<range<500 @ snr=10 and above

Although the absolute size of the extrinsic information was high at high SNRs, exchanging it among the decoder blocks still improved the performance. This makes me think that the reliability is some function of the relative change in the size of the LLRs, instead of being abs(llr).

Is this correct?

If yes, then instead of looking at abs(llr) at my equalizer block, I should try to see how the relative size of these LLRs changes when they are exchanged between the component blocks.

April 20th, 2014 at 8:29 am

Hi Goshal,

We can measure the reliability of an LLR vector by measuring its mutual information - this is a scalar in the range 0 to 1, where 0 implies no reliability and 1 implies perfect reliability. I recommend two methods for measuring the mutual information, namely the averaging and the histogram methods - you can download code for these from this page.

I would not recommend looking at the min or max LLR values, because these depend on only one LLR within the vector. Mutual information gives a reliability measure that is much more representative of the whole vector. Mutual information also has lots of theoretical motivation, owing to its link to channel capacity and coding rate, for example.

Take care, Rob.
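For reference, the averaging method Rob mentions can be sketched in a few lines of Python/NumPy (an illustration of the idea, assuming the ln(P0/P1) convention; measure_mutual_information_averaging.m is the authoritative version):

```python
import numpy as np

def mi_averaging(llrs, bits):
    """Averaging method: I ~ 1 - E[log2(1 + exp(-x*L))], where x = +1 for a
    0-valued bit and x = -1 for a 1-valued bit (LLR = ln(P0/P1))."""
    x = 1.0 - 2.0 * bits
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-x * llrs)))

rng = np.random.default_rng(4)
n = 100_000
bits = rng.integers(0, 2, n)

mi = {}
for sigma in (1.0, 3.0):
    # Gaussian-distributed a priori LLRs, as generate_llrs.m produces
    llrs = sigma * rng.standard_normal(n) - (bits - 0.5) * sigma ** 2
    mi[sigma] = mi_averaging(llrs, bits)
# Larger sigma means more reliable LLRs, so mi[3.0] exceeds mi[1.0]
```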

May 6th, 2014 at 8:17 pm

Dear Rob,

For an ISI channel, I am using your Turbo encoder/decoder along with my equalizer in receiver. So this makes it a serial turbo setup.

After performing ECC, I simply make a single stream by X = cat(2,a,c,d,e,f) and pass it over my ISI channel. At the receiver I first perform equalization and send the extrinsic information to your turbo decoder. I simply separate a_c, c_c, d_c, e_c, f_c from the extrinsic information and initialize a_a = 0. Now, after performing a fixed number of iterations in the decoder, I need to re-mux the soft information so that it can be used by the SISO equalizer.

I am confused about this re-mux part. I understand that the soft information for the tail bits e and f is

e_e = y_e(length(a):end)

f_e = z_e(length(a):end)

but what about c_e and d_e.

If

a_e = y_e(1:length(a)) is c_e

b_e = z_e(1:length(a)) is d_e

then where is a_e?

In short, how should I do this re-mux step?

Regards

Goshal

May 6th, 2014 at 8:31 pm

Dear Rob,

I want to use your BCJR as an equalizer for my channel (which I have done before), but this time the input to the channel is a vector of bits instead of one bit. I want to do joint detection of multiple streams of bits. How should I change your code, and what will be different?

Thank you,

Elnaz

May 6th, 2014 at 8:41 pm

Hi Rob,

My last question sounds confusing (even to myself). The job of the decoder is to use systematic/non-systematic information to compute a reliable estimate of the original data 'a'. ECC adds redundancy which is used by the decoder. The decoder updates only the information related to the original data and not any other redundant data.

Does that mean that the re-mux step will only update the information related to 'a' and the rest will remain the same?

Goshal

May 6th, 2014 at 11:23 pm

Dear Rob,

I looked at my previous questions and your replies. It's resolved.

Thankyou.

Goshal

May 7th, 2014 at 4:40 pm

Hi Elnaz,

I’m not sure what you mean by a vector of bits and one bit, when referring to the input to the channel. An equaliser is used when the channel imposes inter-symbol interference, which implies that the input to the channel is already a vector of symbols, which will interfere with each other - this vector represents a frame of symbols, which are transmitted one by one in the time domain. Perhaps your “vector of bits” is across another domain, such as frequency or space - are you using MIMO or OFDM?

Take care, Rob.

May 7th, 2014 at 4:41 pm

Hi Goshal,

I’m glad that you figured this out.

Take care, Rob.

May 7th, 2014 at 4:55 pm

Hi Rob,

By vector (joint detection) I mean that I want to equalize multiple signals at the same time. So, at each point in time, instead of having one symbol/bit going through the equalizer (detector), I have a vector of symbols/bits going in.

The way I am thinking, the BCJR should be changed to deal with a non-binary alphabet. I think computation-wise it will be prohibitively complex, but right now I am only aiming for a correct implementation.

Thanks,

Elnaz

May 8th, 2014 at 5:15 pm

Hi Elnaz,

It sounds like you are talking about turbo equalization for QPSK, 8PSK, 16QAM signals, rather than BPSK signals. If so, then you will need to increase the number of states to M^L, where M is the number of constellation points and L is the number of previous symbols that interfere with the current one. You will also need M number of transitions from each state. Each transition will need to consider log2(M) number of apriori and extrinsic LLRs. Apart from that, the turbo equalizer should be similar to the one you already have…

Take care, Rob.

May 9th, 2014 at 8:40 pm

Dear Rob,

Actually, it is not QPSK, 8PSK, or anything of the sort. My channel is a separable 2D channel meaning that I channel my BPSK bits two times: first in the time domain and then in the space domain. The first channel is just a standard interference channel and the second one (another interference channel) is for example a 3-by-5 matrix which combines the set of 5 signals into 3 signals. Therefore, I begin with 5 independent time-domain-channeled signals and then I channel them in the space domain getting a set of 3 signals which are supposed to be the input to the BCJR detector.

This way, in my transition matrix I have 32 inputs (0 to 31) instead of just 0’s and 1’s. Without going too much into details, I wanted to check this with you please:

The uncoded_gammas, in my case, are just the apriori_uncoded_log probabilities (since we’re not binary but we’re 5-ary I do log probability instead of llrs). Therefore, for all the branches of the trellis (all the rows in the transitions matrix) which share the same input, uncoded_gammas will be equal.

Am I correct up to this point?

And, the calculations of alphas, betas, and deltas will not change a bit.

But, the extrinsic_uncoded_log_probabilities will be calculated separately for each of the 32 inputs. Am I correct?

In my thinking, these two are the only changes (except for the trellis of course)I will need to make to turn your BCJR into what I need?

Thank you so much,

Elnaz

May 12th, 2014 at 7:34 am

Hi Elnaz,

You say that “the second one (another interference channel) is for example a 3-by-5 matrix which combines the set of 5 signals into 3 signals”. Am I right in thinking that in addition to the space-domain interference between the five transmitted signals, there is also some time-domain interference between consecutive sets of five transmitted signals? If not, then there is no need to use a trellis or a BCJR decoder.

I would suggest that you should use five binary apriori LLRs, rather than a single 32-ary apriori log probability. This is because your outer code will probably work on the basis of LLRs.

The gamma of each transition will be a combination of 5 components that depend on the corresponding apriori LLRs, as well as a component that depends on the received signal.

You would have 32 transitions from each state. The number of states will be 32 * the amount of memory in the channel.

You will end up with five extrinsic LLRs.

Take care, Rob.

May 12th, 2014 at 4:38 pm

Hi Rob,

Yes, the first channel operates in the time domain and it is the same for all 5 signals. After these 5 signals are channeled in time then they get combined through the second channel (space channel) and I’ll have a set of 3 signals.

And, yes I begin with a set of 5 apriori llrs, and then, inside my BCJR I convert them into a set of 32 apriori log probabilities. I use them in calculating uncoded_gammas. So, basically, all uncoded_gammas corresponding to an input (e.g. 00001) will be apriori log probability of that input. Am I correct so far?

The number of states is 32^2 (I have two delay taps in my time channel).

Am I correct? Above, you mentioned 32*the amount of memory. But, I think it is 32 ^ the amount of memory.

The encoded_gammas are calculated this way:

encoded_gammas(transition_index, bit_index) = -norm(encoded_input(:,bit_index)-transitions(transition_index,4)’)^2/(2*sigma2);

My encoded_input is a 3-by-1 vector at each time (bit_index).

Lastly, to calculate extrinsic_uncoded_probs, you do it once for prob0 and once for prob1. I do it for 32 probabilities, so my extrinsic_uncoded_probs is a 32-by-1 vector which is supposed to give the extrinsic probabilities for each of the 5-bit-long input words. I guess I can do as you said as well, to end up with 5 LLRs. BUT, the problem now is that I get all of my extrinsic log probs to be -3.5e6 for the entire length of the signals?

May 13th, 2014 at 2:48 pm

Hi Elnaz,

This all seems correct to me - as you say, it should be 32 ^ the amount of memory.

I’m afraid that I’m not sure why you are getting -3.5e6 for all of your extrinsic LLRs - I would suggest working back through the algorithm to see where it starts behaving weirdly.

By the way, when you have a high number of bits per transition (like the 5 bits per transition that you have here), you can get a much lower complexity by calculating the a posteriori LLRs and then subtracting the a priori LLRs, to get the extrinsic LLRs. This is instead of calculating the extrinsic LLRs directly, which incurs a much higher complexity.

Take care, Rob.

May 13th, 2014 at 4:25 pm

Hi Rob,

In order to reduce the complexity, do you mean to include uncoded_gammas into deltas and then everything else remains the same? If so, then the extrinsic_uncoded_probs will actually be the aposteriori probs when I subtract the apriori probs. How does this reduce the complexity?

Thanks,

Elnaz

May 13th, 2014 at 4:39 pm

Dear Rob, I have to edit my previous comment. This is the corrected version:

“Hi Rob,

In order to reduce the complexity, do you mean to include uncoded_gammas into calculating deltas and then everything else remains the same? If so, then the extrinsic_uncoded_probs in the code will actually be the aposteriori probs. How does this reduce the complexity?

Thanks,

Elnaz”

May 15th, 2014 at 6:37 am

Hi Elnaz,

That’s right. By including all of the apriori LLRs in the delta calculations, the result is aposteriori LLRs. This reduces complexity because you only need one set of deltas, rather than five sets, each excluding a different one of the apriori LLRs.

Take care, Rob.

May 15th, 2014 at 4:56 pm

Dear Rob,

I don’t understand when you say :”you only need one set of deltas, rather than five sets, each excluding a different one of the apriori LLRs.”

I do not have five sets of deltas. The way I do it is that I begin with 5 a priori LLRs and then, inside my BCJR code, I convert them to 32 log probabilities and use them to calculate uncoded_gammas. So, for each line of the transition matrix I have an uncoded_gamma (well, actually, for those lines with the same input, the corresponding uncoded_gammas are the same).

Now, in order to change the code from directly producing extrinsic llrs to aposteriori llrs, I combine uncoded_gammas with encoded_gammas into only one gammas like this:

for transition_index = 1:size(transitions,1)

gammas(transition_index, bit_index) = -norm(encoded_input(:,bit_index)-transitions(transition_index,4:6)’)^2/(2*sigma2)+apriori_uncoded_probs(transitions(transition_index,3)+1,bit_index);

end

Then I use this “gammas” everywhere i.e. in calculating alphas, betas, and deltas.

At the end I get aposteriori llrs from which I subtract the apriori llrs to get the extrinsic llrs.

Am I making a mistake somewhere?

This only saves me one calculation of gammas, i.e. instead of computing both uncoded and encoded versions I only have one gammas. Am I correct?

Thanks,

Elnaz

May 16th, 2014 at 9:09 am

Hi Elnaz,

What you are describing sounds correct to me - it is the “only one set of deltas” approach that I am referring to. There is a more complex way of doing things that would produce five sets of deltas, in order to produce the extrinsic LLRs directly. Since the complexity of that approach would be much higher than that of your approach, I think that you should not consider it.

Take care, Rob.

May 29th, 2014 at 11:23 pm

Dear Rob,

I have a question regarding E2PR2 channel response the discrete form of which we write as: [1 4 6 4 1]. I need to work backwards and find its corresponding transition response p(t) such that E2PR2 = p(t) - p(t-T) where T is the bit period. Do we have a closed form formula for p(t) and maybe its discrete form?

Thanks,

Elnaz

May 30th, 2014 at 4:15 pm

Hi Elnaz,

I’m afraid that I haven’t done this sort of thing before and so I can’t offer you much help with this.

Sorry, Rob.

May 30th, 2014 at 6:17 pm

Hi Rob,

Another question please: if we have an infinitely long channel response, how do we convolve our stream of symbols with that response? How can we efficiently code that up in MATLAB?

Thanks,

Elnaz

June 1st, 2014 at 6:18 pm

Hi Elnaz,

A recursive convolutional encoder is an example of something else that also has an infinite impulse response. In analogy with this, I suspect that you can model your channel as a recursive shift register. You would need to figure out the correct weights for the shift register taps though…
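As a sketch of this idea, Matlab's built-in filter function implements exactly such a recursive shift register, so a channel with an infinite impulse response could be modelled along these lines (the tap weights below are illustrative, not taken from any particular channel):

```matlab
% Model an infinitely long channel response as a recursive (IIR) filter.
% filter(b, a, x) implements y(n) = b(1)*x(n) - a(2)*y(n-1) - ... when
% a(1) = 1, so the feedback taps in a give an infinite impulse response.
b = 1;                            % feedforward tap (illustrative)
a = [1, -0.5];                    % feedback tap -> impulse response 0.5.^n
x = 2*randi([0 1], 1, 100) - 1;   % a frame of BPSK symbols
y = filter(b, a, x);              % noiseless channel output
```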

Take care, Rob.

June 9th, 2014 at 11:13 am

Hi Rob,

Could you please explain how to change the standard BCJR to include Pattern Dependent Noise Prediction (PDNP)?

Many Thanks,

Elnaz

June 9th, 2014 at 4:30 pm

Hi Elnaz,

I’m afraid that I haven’t worked on PDNP, so I can’t give you a very specific answer. However, I expect that you just need to describe the situation using a trellis and determine a probability for each transition based on the received signal. Besides that, I think that the BCJR would work in a very similar way to a turbo equaliser or trellis coded demodulator.

Take care, Rob.

June 10th, 2014 at 7:42 pm

Hi Rob,

I think I am back to square one. I am trying to understand how your component_decoder computes EXTRINSIC INFORMATION directly. I have tried searching for it, but all I have found so far is the usual BCJR that first computes A-Posteriori Information and then computes the extrinsic information by performing a subtraction. Can you provide a reference here?

Thanks,

Goshal

June 11th, 2014 at 2:13 am

Hi Rob,

My first confusion is the way you are computing gammas. In the conventional BCJR, conditional probabilities are used to compute the gammas associated with each transition, but you are using a priori LLRs. Can you give a reference for this approach?

I was going through the previous posts and found your reply to this question

—————————

Rob Says:

April 25th, 2012 at 11:05 am

Hi Elnaz,

Equation 2.14 is saying that if a transition is labelled with a zero, then the corresponding gamma should take on the value of the corresponding a priori LLR. If the transition is labelled with a one, then the corresponding gamma should take on the value of zero.

This approach has a slightly lower complexity than the one originally proposed for the Log-MAP. This says that if a transition is labelled with a zero, then the corresponding gamma should take on the value of the corresponding a priori LLR divided by 2. If the transition is labelled with a one, then the corresponding gamma should take on the value of the corresponding a priori LLR divided by -2.

—————————

But I still don’t get this. A reference would be very helpful here.

Thanks,

Goshal

June 11th, 2014 at 3:52 pm

Hi Goshal,

I would recommend looking at “The turbo principle - Tutorial introduction and state of the art” by J. Hagenauer. Here, the notation u_k*L(u_k)/2 is used to mean “if a transition is labelled with a zero, then the corresponding gamma should take on the value of the corresponding a priori LLR divided by 2. If the transition is labelled with a one, then the corresponding gamma should take on the value of the corresponding a priori LLR divided by -2.”
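The two conventions give the same LLRs because, for each bit, they differ only by a constant of L(u_k)/2 added to the gammas of every transition, and any such common offset cancels when the final LLR subtraction is performed. A quick sketch of the two conventions for a single a priori LLR:

```matlab
La = 1.7;                 % an example a priori LLR for one bit
gA = [La, 0];             % this code's convention: [bit=0, bit=1]
gB = [La/2, -La/2];       % Hagenauer's u_k*L(u_k)/2 convention
% Both conventions preserve the same difference between the two
% hypotheses, which is all that survives in the final LLR.
assert(abs((gA(1)-gA(2)) - (gB(1)-gB(2))) < 1e-12);
```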

Take care, Rob.

June 11th, 2014 at 6:11 pm

Hi Rob,

Thank you for the reference.

I understand how BCJR works and computes Symbol-by-Symbol A-Posteriori Probabilities. But I don’t understand how we can use the same trellis (modified BCJR) to compute the Symbol-by-Symbol Extrinsic-Information. This is where my problem is.

In my understanding, since Log MAP, max-log MAP are low complexity implementation alternatives to MAP, one can use these for implementing both conventional BCJR (Symbol-by-Symbol A-Posteriori Probabilities) and modified BCJR (Symbol-by-Symbol Extrinsic-Information). Is this correct?

Regards,

Goshal

June 13th, 2014 at 5:07 pm

Hi Goshal,

We can get a posteriori LLRs by forming the deltas using gammas that consider all a priori LLRs. By contrast, we can get extrinsic LLRs by forming the deltas using gammas that consider all a priori LLRs except for the one corresponding to the extrinsic LLR we want. We can do this for both Log-MAP and Max-Log-MAP.

Take care, Rob.

July 10th, 2014 at 9:09 pm

Hi Rob,

Although your comments in previous posts suggest that you have not implemented a BCJR-based MAP equalizer, your input about the possible implementation was very helpful. So I thought of asking you how the BER curves for a simple setup should look.

I have considered an uncoded stream of BPSK data passing over an ISI channel with AWGN noise. My receiver passes this data through the MAP equalizer only once. Nothing is iterative in this setup. And the BER is computed with hard estimates.

I do observe an overall waterfall in the BER curve, but it's bumpy and not very smooth. I have the impression that this is because of the non-iterative nature of the receiver. If I perform coding at the transmitter and have Turbo-Equalization (the MAP equalizer communicating iteratively with the decoder) then this bumpiness should go away. But for a single pass it will stay there.

Kindly comment on this assumption.

Regards,

Goshal

July 11th, 2014 at 4:23 pm

Hi Goshal,

My first guess would be that bumpiness in the BER curve is because your simulation has not been run for long enough. If you average over more frames at each SNR, then I expect that you would get a smoother curve…

Take care, Rob.

July 12th, 2014 at 4:57 pm

Hi Rob,

I am using your simulation setup

%======================================

while bit_count < 1000 || error_counts(iteration_count) BPSK -> ISI-channel -> BCJR-equalizer -> data-hat (hard estimate)

end

%======================================

Therefore, I have not preallocated the number of frames that will be simulated to achieve a smooth BER curve. My channel memory is 2 and I am keeping a frame length of 100.

Should I change the error criterion to get smoother curves?

Moreover, this simulation setup is only an exercise. Since I want to use this BCJR equalizer in my own simulation, I wanted to ensure that it has no bugs, and therefore decided to do this dummy simulation to get an idea about the performance of the equalizer.

Since from my bumpy BER curve I can see that the BER reduces dramatically as the SNR improves, am I correct in assuming that the BCJR equalizer is working fine?

Also, a smoother curve can be achieved if:

1. More data is simulated (as you suggested)

2. ECC is included followed by Turbo-Equalization (as I said in my original post)

Your comments will be appreciated.

Regards,

Goshal

July 13th, 2014 at 3:54 am

Hi Rob,

I’ve been using your BCJR decoder as an equalizer. My question is about the correct way to feed the input and treat the output of this equalizer.

The way I do it, which I found out by trial and error, is that I first generate a random stream of BPSK bits, then pad the beginning and the end of the stream each with two -1s. Next, I convolve the resulting stream with my channel h=[1 4 6 4 1], but I only take the central part of the convolution output, which has the same length as my padded stream (i.e. in MATLAB I do conv(padded_stream, h, 'same')), then I feed this to the equalizer. At the output I should discard the last 4 bits (4 LLRs) to have the sign of the extrinsic_uncoded_llrs match my original (not padded) message.

I do not have the insight to understand why this works, or whether I should do it another way to get the best out of my BCJR equalizer.

Let me also add that I assume zero initial state and zero final state in the code.

Could you please advise?

Thanks,

Elnaz

July 14th, 2014 at 5:02 pm

Hi Goshal,

The number of errors that you need to observe in order to get a smooth BER plot depends on how many random things there are in your simulation. For example, lots of interesting combinations can occur in simulations with random messages, interleavers, noise and fading - long simulations are needed to represent all of these. These days, I recommend running the simulation at each SNR value until 100 frames have been observed having at least one bit error. If your BER curve is dropping quickly and has no horizontal error floor, then I suspect there are no bugs in your code. You can gain extra confidence by using the averaging and histogram methods to compare the EXIT functions and trajectory.
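This stopping rule might be sketched as follows (simulate_one_frame, snr and frame_length are placeholders for your own simulation):

```matlab
% Run each SNR point until 100 frames containing at least one bit error
% have been observed, then compute the BER.
frame_error_count = 0;
bit_error_count = 0;
bit_count = 0;
while frame_error_count < 100
    errors = simulate_one_frame(snr);  % hypothetical: bit errors in one frame
    bit_error_count = bit_error_count + errors;
    bit_count = bit_count + frame_length;
    if errors > 0
        frame_error_count = frame_error_count + 1;
    end
end
ber = bit_error_count / bit_count;
```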

Take care, Rob.

July 14th, 2014 at 5:08 pm

Hi Elnaz,

I suspect that the -1 is the best padding to use because your BPSK modulator maps zero-valued bits to symbols of -1. This makes sense to me because I guess that your trellis assumes a first state of 0.

On the other hand, I would guess that padding is not essential for getting things working. When you don’t use padding, do you get bit errors at the start of the frame, or at the end of the frame, or both? I’m thinking that there may be something to do with the way that your trellis is terminated, which causes this. My suggestion would be to not use trellis termination at either end of the trellis…

Take care, Rob.

July 19th, 2014 at 8:05 pm

Hi Rob,

I am using a BCJR-based MAP equalizer. For my application I am doing self-iterations, i.e. the extrinsic output of the equalizer is fed back as a priori information at the equalizer input. Although, for my problem, the BER reduces by using this self-iterative mechanism, a closer look at the bit indices shows that for certain bits this self-iterative process introduces errors.

I am storing the extrinsic information obtained at the end of each iteration and then performing hard detection to compute the BER. At times the hard estimate of a bit at iteration=i is correct, but the hard estimate for the same bit at iteration>i is incorrect.

What do you think of this? Is it something that is caused by the self-iterating equalizer, or is it something related to iterative algorithms in general? Did you observe anything like this when performing iterative decoding of your turbo codes?

I look forward to hearing from you.

Sincerely,

Goshal

July 21st, 2014 at 3:58 pm

Hi Goshal,

It doesn’t make sense to feed the extrinsic LLRs from an iterative receiver block back to itself - this causes positive feedback and will give a bad BER. Some colleagues of mine did some work on self-concatenated convolutional codes, but these include an interleaver in the feedback mechanism, corresponding to an interleaver in their encoder - I don’t think that you have this interleaver…

Take care, Rob.

July 21st, 2014 at 5:28 pm

Hi Rob,

You got it right… I don’t have an interleaver in my feedback path. This is the usual mechanism that is found in the research literature related to my problem definition. I do agree with your comment about the creation of positive feedback, in the absence of an interleaver, when using extrinsic information as a priori information. The consequence of this issue has been well addressed in the turbo decoding literature.

Since I need to figure out an argument which can justify this particular approach for my research problem, can you comment on my previous question, i.e. in an iterative decoding algorithm:

1. What is the possibility of getting a particular bit in error in an iteration>i given that it would have been decoded correctly if the iterations would have stopped at iteration=i?

2. If such a possibility exists, what does it say about the decoding algorithm?

Thank you for your time.

Sincerely,

Goshal

July 21st, 2014 at 8:38 pm

Hi Rob,

This brings me to one more point. It is a well-known fact that the performance of turbo codes depends upon the length of the interleaver: the longer the frame length, the better the performance. Is it right to assume that if the given frame length is short, then the presence or absence of an interleaver will not have a significant effect on the BER of the iterative decoding algorithm, and therefore that feeding back extrinsic information directly as a priori information, with or without an interleaver, will not make much difference?

I am asking this because right now I am keeping the frame length short - less than 100 bits, to be precise. This is the usual frame length that is assumed in the related research literature.

Given these constraints, I am wondering if applying iterative algorithms will yield gains similar to those achieved for communication problems where frame lengths are increased to see the effect of the iterative receiver.

Regards,

Goshal

July 22nd, 2014 at 4:54 pm

Hi Goshal,

I have seen the effect that you describe in short block length codes, namely doing more iterations makes the decoding decision worse. In the EXIT chart, this is manifested as trajectories that start moving towards the bottom-left corner, rather than towards the top-right corner. My feeling is that this is explained by the fact that the turbo principle is not perfect - it only approximates optimal decoding and only really when the frame length is long.

Another way of thinking of it could be that the iterative decoder has become fixed on the wrong decoding decision and that doing more iterations just reinforces that wrong decision.

My feeling is that the interleaver becomes more important for short frame lengths, rather than less important as you suggest. I suspect that the difference between the performance of short codes having good and bad interleaver designs is greater than the difference between long codes having good and bad interleaver designs. My suggestion would be to try to rearrange your system, so that it has an interleaver in the feedback path.

Take care, Rob.

July 24th, 2014 at 3:53 pm

Hi Rob,

In response to my self-iterative setup which feeds back extrinsic information back to APP block as a-priori information, you made a comment that it will create a positive feedback and will result in poor BER performance.

Since extrinsic information is the NEW information which we learn and which was missing from the a priori information, why would this result in positive feedback? From the literature on turbo decoding I learned that feeding back the full LLR results in positive feedback, not the extrinsic information. I do understand that we usually have more than a single block in a conventional turbo decoding setup and my setup has only one APP block, but it is still the extrinsic information that goes back.

Why do you think that this new and improved information is not a good candidate for the a priori information of the next iteration?

Regards,

Goshal

July 24th, 2014 at 4:42 pm

Hi Goshal,

The setup that you describe creates positive feedback because this new information has come from the decoder you are feeding it back to. In the next iteration, it will generate this new information again and have it reinforced by the feedback you have just given it.

Take care, Rob.

November 17th, 2014 at 7:11 am

I want some Matlab code for encoding and decoding binary data using a turbo code.

November 17th, 2014 at 9:35 am

Hello P.Jana,

Encoding and decoding binary data using a turbo code is exactly what the Matlab code that I have provided on this page does.

Take care, Rob.

December 18th, 2014 at 7:03 pm

Hello sir, I am looking for Matlab code that can perform dynamic radio management in LTE.

December 19th, 2014 at 7:22 am

dynamic radio RESOURCE MANAGEMENT IN LTE

December 19th, 2014 at 12:54 pm

Hello Mohammed,

I’m afraid that I don’t have any Matlab code for radio resource management in LTE.

Take care, Rob.

January 29th, 2015 at 1:30 pm

Hi, Rob,

The code is useful!

I have met some problems and maybe you know the root cause:

1. I cannot use your code to get the results on this page. As you know, there is no codeword length of 50, 500 or 5000 bits in the LTE/UMTS definition. Your figure may have a typo;

2. By setting the codeword length to 40 bits, I tested the BER and BLER (one block is one codeword). In general we expect BLER ~= 40*BER when the BER is low and the bit errors are independent. However, the result from the code does not match this equation. My tests show that the BLER is only a few to 10 times the BER. I also recorded the number of error bits in each codeword; the results show that an erroneous codeword contains several error bits instead of 1. Why does this happen? Does the turbo encoder introduce some error dependency among the bits in the detection results?

January 29th, 2015 at 7:36 pm

Hi Yang,

1. It is true that there is no 50, 500 or 5000 bit interleaver in the LTE standard, but the UMTS standard does support these interleaver lengths. The above results were obtained using the UMTS interleaver.

2. The bit errors in a turbo code are not independent of each other. If one error occurs, then it is likely to be accompanied by other bit errors. Simple equations can be used to approximately convert BER to BLER. However, to make this conversion accurately, you need to have a very detailed model of the distance properties of the turbo code. In practice, it is simpler to just rerun the simulations and record the BLER observed.
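For example, the simple conversion that assumes independent bit errors would be sketched as follows; it tends to mispredict the BLER of a turbo code precisely because its bit errors are clustered:

```matlab
N = 40;                          % block length in bits
ber = 1e-4;                      % a measured bit error rate
bler_indep = 1 - (1 - ber)^N;    % approximately N*ber when ber is small
% A turbo code's clustered errors make the true BLER deviate from this.
```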

Take care, Rob.

March 16th, 2015 at 5:47 pm

Hi Rob,

I need to implement an iterative (turbo) receiver that decodes Serially Concatenated Convolutional Code. Right now to understand, I am considering transmission over AWGN channel with identical encoders separated by an interleaver .

Can you please point out the important modifications to component_decoder.m that I will need to consider for this system. I understand that the first decoder (working on the inner code) has access to the received signal (soft decisions) as well as to the a priori LLRs, but I am not sure how to implement the second decoder (working on the outer code). This 2nd decoder has access to a priori LLRs (the extrinsic LLRs coming from the 1st decoder) only. What modifications do I need to make in component_decoder.m to make this work?

Thank you

March 16th, 2015 at 7:39 pm

Hi Goshal,

You will not need to make any modifications if you use the code that I have provided at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/

The bcjr_decoder.m that I have provided there will accept a priori information about both the uncoded and encoded bits. It will also output extrinsic information about both sets of bits. main_outer.m and main_inner.m show how you can use bcjr_decoder.m for the outer and inner codes, respectively.

Take care, Rob.

March 17th, 2015 at 8:43 pm

Hi Rob,

Thank you for your reply. Since I understood the working of your turbo decoder at … http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

I used two turbo encoders back to back, separated by a random interleaver. I generated the input for encoder 2 by concatenating a1,c1,d1,e1,f1 generated by encoder 1 (the digit indicates the encoder; 1: outer encoder, 2: inner encoder). On the decoder side I used component_decoder.m to generate extrinsic LLRs for a2_e and used them as soft decisions for a1_c,c1_1,d1_c,e1_c,f1_c. Also, I initialised a1_a=0. For my outer decoder, I generated extrinsic LLRs for both the coded and uncoded bits. I concatenated them and added them to a2_c while keeping a2_a=0.

Does this sound right to you? Should I add the extrinsic LLRs coming from the outer decoder to the soft decisions (eq. 2.11) for the systematic part only? Also, can you comment on how the curves should look when compared to your setup.

Thank you.

March 17th, 2015 at 9:18 pm

Hi Rob,

Can you please comment on following.

I think I need to generate effective observations before using the outer decoder. By this I mean computing (sigma^2)/2.*a2_e and then splitting the resulting sequence to form a1_c,c1_1,d1_c,e1_c,f1_c for the outer decoder.

In other words, I cannot use the extrinsic LLRs as soft decisions for the outer decoder.

(Ref: Digital Communication by John R. Barry, Edward A. Lee, David G. Messerschmitt- Page 622 )

Thank you

March 17th, 2015 at 11:32 pm

Hi Rob,

I am working on a problem that can be summarized as follows.

I have an ISI+AWGN channel whose input is coming from a finite-set of non-binary alphabets. I need to implement a MAP-based channel-detector that can compute the extrinsic llrs for the given received signal.

How can I modify your code to make it work for non-binary input?

Do we have a notion of soft decisions in the non-binary case as well? If yes, then how can we compute them?

March 18th, 2015 at 6:26 pm

Hi Goshal,

Let’s suppose that you have a message frame comprising 100 bits. The outer turbo encoder will give an encoded frame comprising 312 bits. You can then interleave this and provide it as an input to the inner turbo encoder. This will then give an encoded frame comprising 948 bits.

In the receiver, the demodulator will provide 948 LLRs. You can provide these to the inner turbo decoder, in order to obtain 312 extrinsic LLRs. You can then deinterleave these and provide them to the outer turbo decoder as apriori LLRs. Following this, you may like to run another iteration of the inner turbo decoder. In order to do this, you need to modify the outer turbo decoder to provide 312 extrinsic LLRs. These extrinsic LLRs can then be interleaved to obtain 312 apriori LLRs. The inner turbo decoder needs to be modified so that it can accept these apriori LLRs. If you can figure out how to make these modifications, then you should be able to iterate between the two turbo decoders.

Has this answered your question about turbo codes?

With regard to your question about ISI, you will need to implement a turbo equaliser. You can find some discussion that I have had with other people about these among the comments on these webpages.

It is possible to build non-binary turbo decoders, but I’m afraid that I don’t have any Matlab code for this. In this case, the BCJR equations for generating the extrinsic LLRs need to be replaced with equations similar to those used to generate the alphas and betas, in order to provide extrinsic log symbol probabilities.

Take care, Rob.

March 19th, 2015 at 4:31 pm

Hi Rob,

Your replies and previous posts are quite helpful. After giving it some thought, it seems to me that a non-binary turbo decoder will be relatively easy to implement in the probabilistic domain. So, to make a start, I began converting your component_decoder.m from the log-BCJR implementation to the normal BCJR.

I did the following changes:

1. For computing gammas

uncoded_gammas(transition_index, bit_index) = a_prob_systematic(bit_index)* exp(-(systematic_rx(bit_index)-(-2*(transitions(transition_index,3)-0.5)))^2/N0);

encoded_gammas(transition_index, bit_index) = a_prob_coded(bit_index)* exp(-(coded_rx(bit_index)-(-2*(transitions(transition_index,4)-0.5)))^2/N0);

where systematic_rx = [a_rx,e_rx], coded_rx = c_rx, and a_prob_systematic, a_prob_coded correspond to a-priori probabilities for respective sequences.

2. For Alphas & Betas, initialized alphas(1,1)=1, betas(1,length(systematic_rx))=1

3. everywhere,

A. jac(.,.) was replaced by simple addition.

B. addition was replaced by multiplication.

4. initialized prob0=prob1=0

5. Also I did scaling by a constant factor while computing alphas, betas wherever it was required

Finally to check, I computed

extrinsic_uncoded_llrs(bit_index) = log(prob0/prob1);

and compared to the ones computed from your component_decoder.m

They don’t match. Can you suggest a way to fix this?

Thank you

March 19th, 2015 at 7:42 pm

Hi Goshal,

There is nothing in your code above that looks obviously wrong to me. However, there is a big problem with using the BCJR instead of the Log-BCJR. In the BCJR, some alphas and betas can become very small - so small that double floating point precision numbers are not sufficient to maintain accuracy, when the frame length is long. It might be that you have this problem - I would suggest starting with very small frame lengths, when making the comparison with the Log-BCJR. Overall, I wouldn’t recommend using the BCJR instead of the Log-BCJR for your long-term plans.
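If the linear-domain BCJR is kept anyway, one common remedy is to normalise the alphas (and betas) at each trellis stage, since any common scale factor cancels when the final ratio prob0/prob1 is formed. A sketch of the forward recursion with this normalisation (frame_length is assumed given):

```matlab
% Sketch: normalise the forward metrics at every stage so that they
% never underflow. The same can be done for the betas.
for bit_index = 2:frame_length
    % ... compute the unnormalised alphas(:, bit_index) from the
    %     previous alphas and the gammas, as usual ...
    alphas(:, bit_index) = alphas(:, bit_index) / sum(alphas(:, bit_index));
end
```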

Take care, Rob.

March 20th, 2015 at 1:32 pm

In the component decoder, the uncoded and coded gammas are computed in the following way:

for transition_index = 1:size(transitions,1)

if transitions(transition_index, 3)==0

uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index);

end

end

Why is there no branch for transitions(transition_index, 3)==1? I mean, why is the computation not done in this way:

if transitions(transition_index, 3)==0

uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index);

else

uncoded_gammas(transition_index, bit_index) = -apriori_uncoded_llrs(bit_index);

end

March 20th, 2015 at 8:27 pm

Hi yang,

What you are suggesting would need a slight modification before it would work…

if transitions(transition_index, 3)==0

uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index)/2;

else

uncoded_gammas(transition_index, bit_index) = -apriori_uncoded_llrs(bit_index)/2;

end

You may like to try this - you should find that it gives identical LLRs to my code. I prefer my way though, because it has a slightly lower complexity.

Take care, Rob.

March 22nd, 2015 at 8:03 pm

Hi Rob,

I know that you have not implemented non-binary turbo-decoder but I am sure you will be able to comment on my statements based on your intuition.

I am using an example of non-binary turbo codes in last chapter of the following book

http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470518197.html

The LLRs of an event 'u' being an element of Z4 can be expressed as

Lx_u = ln(P(u=x)/P(u=0)) where x=1,2,3

Right now I am trying to evaluate these LLRs for a given AWGN channel by keeping the noise variance very small.

My issue is that, for certain indices, L2_u and L3_u are both evaluated as Inf at the same time. For example

input = u_rx+noise = 2.93, N0 = 0.01,

soft_decision = log(exp(((input-0)^2-(input-x)^2)/N0)) = Inf, for x=2,3

Why are these non-binary LLRs not unique, as they were in the binary case? Can you please comment on this, as I am doing this non-binary stuff for the first time and am not really sure how to proceed.

Regards,

Goshal

March 22nd, 2015 at 8:21 pm

Correction in last comment,

input = u+noise = 2.93

March 23rd, 2015 at 12:36 am

Hi Rob,

I fixed the issue of LLR by replacing

soft_decision = log(exp(((input-0)^2-(input-x)^2)/N0))

by

soft_decision = x*(2*input-x)/N0;

Now in the high SNR regime, the soft decisions have the right signs and magnitudes. I mention magnitudes because if x=2 was transmitted then L2_u>0, as we expect it to be, but also |L2_u|>|L1_u| and |L2_u|>|L3_u|.
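The simplification works because the log and exp cancel, leaving ((input-0)^2 - (input-x)^2)/N0 = (2*input*x - x^2)/N0 = x*(2*input-x)/N0, which avoids evaluating the overflowing exp. A quick numerical check at moderate values:

```matlab
input = 0.93;  N0 = 0.5;  x = 2;
direct     = log(exp(((input-0)^2 - (input-x)^2)/N0));
simplified = x*(2*input - x)/N0;
assert(abs(direct - simplified) < 1e-9);
```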

Now I am working on the component_decoder.

% FromState, ToState, UncodedBit, EncodedBit

transitions = [1, 1, 0, 0;

2, 2, 0, 1;

3, 3, 0, 2;

4, 4, 0, 3;

1, 2, 1, 2;

2, 3, 1, 3;

3, 4, 1, 0;

4, 1, 1, 1;

1, 3, 2, 0;

2, 4, 2, 1;

3, 1, 2, 2;

4, 2, 2, 3;

1, 4, 3, 2;

2, 1, 3, 3;

3, 2, 3, 0;

4, 3, 3, 1];

How should I initialize uncoded_gammas? My apriori_uncoded_llrs is a matrix having #rows=length(input_sequence) and #cols=3. These columns contain L1_u, L2_u, and L3_u respectively.

I thought of computing uncoded_gammas separately for each of the pairs (0,1), (0,2), (0,3) by following the same logic that you used for your case.

for x=1,2,3

for bit_index = 1:length(apriori_uncoded_llrs(:,x))

for transition_index = 1:size(transitions,1)

if transitions(transition_index, 3)==0

uncoded_gammas_x(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,1);

end

if transitions(transition_index, 3)==x

uncoded_gammas_1(transition_index, bit_index) = 0;

end

end

end

I am confused about how I should initialize the rows in "uncoded_gammas_x" that correspond to neither 0 nor x for a given pair (0,x).

Should I create three different uncoded_gammas, one for each separate pair (0,1), (0,2), (0,3), or should I generate only one "uncoded_gammas"? In the latter case my uncoded_gammas would have the last 12 entries equal to zero.

Regards,

Goshal

March 24th, 2015 at 4:49 pm

Hi Goshal,

I would suggest that you need something like…

for bit_index = 1:length(apriori_uncoded_llrs(:,x))

for transition_index = 1:size(transitions,1)

if transitions(transition_index, 3)==0

uncoded_gammas(transition_index, bit_index) = 0;

end

if transitions(transition_index, 3)==1

uncoded_gammas(transition_index, bit_index) = L1_u;

end

if transitions(transition_index, 3)==2

uncoded_gammas(transition_index, bit_index) = L2_u;

end

if transitions(transition_index, 3)==3

uncoded_gammas(transition_index, bit_index) = L3_u;

end

end

end

Take care, Rob.

March 24th, 2015 at 8:18 pm

Hi Rob,

Thank you for your reply.

I have one question about the normal BCJR implementation. On March 19th, 2015 at 4:31 pm, I wrote down the changes that I made to your component_decoder.m to convert it to the conventional BCJR algorithm. Let us ignore, for the time being, the implementation issues that are related to this kind of implementation.

Well, my question is about the prob0 and prob1 that I am computing in the following manner:

probabilities =zeros(length(received_sequence),2);

for bit_index = 1:length(received_sequence)

prob0=0;

prob1=0;

for transition_index = 1:size(transitions,1)

if transitions(transition_index,3)==0

prob0 = prob0 + deltas(transition_index,bit_index);

else

prob1 = prob1 + deltas(transition_index,bit_index);

end

end

extrinsic_uncoded_llrs(bit_index) = log(prob0/prob1);

end

These extrinsic_uncoded_llrs(bit_index) = log(prob0/prob1) turn out to be exactly equal to the ones computed by your component_decoder.m. My point of confusion is that when I look at prob0 and prob1 individually, they sometimes come out to be greater than 1. Since these are probabilities, 0<prob0,prob1<1 should be true, which is certainly not the case here.

But since the extrinsic_uncoded_llrs are correct, I can always use the following logic to find these probabilities:

extrinsic_uncoded_llrs = log(P(0)/P(1)) = log( P(0)/(1-P(0)) )

P(0) = 1/(exp(-extrinsic_uncoded_llrs)+1)

Does this mean that prob0 (computed in code) is not the TRUE probability … P(0)? If it is not, then what exactly is it?

Please excuse my questions, since they are hopping between the normal BCJR and the log-BCJR implementations…

Regards,

Goshal

March 24th, 2015 at 8:37 pm

Hi Rob,

Please take this as an additional comment to my last question.

Another way of saying the same things is that

prob0 + prob1 ~=1

for a given bit_index, but when I take log(prob0/prob1), I get the correct LLR for that bit_index.

Regards,

Goshal

March 25th, 2015 at 5:29 pm

Hi Goshal,

I suspect that this is because there is a normalisation constant that is being ignored when you do the soft demodulation. As a result, the probabilities produced by the soft demodulator do not add up to 1. However, I suspect that the BCJR algorithm does not care whether the probabilities are normalised or not - more specifically, when converting to an extrinsic LLR, prob1 is divided by prob0 and any normalisation constant cancels out anyway.
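
As a quick illustration of this point (a sketch with made-up numbers, not taken from the code above), any common scale factor in prob0 and prob1 cancels when forming the LLR, and a true probability can always be recovered from the LLR afterwards:

```matlab
% Unnormalised 'probabilities', e.g. as accumulated from the deltas
prob0 = 3.2;
prob1 = 0.8;

% Any common scale factor c cancels in the LLR, since
% log((c*prob0)/(c*prob1)) = log(prob0/prob1)
llr = log(prob0/prob1);

% The true probability P(0) can then be recovered from the LLR,
% which is equivalent to normalising prob0 by (prob0+prob1)
P0 = 1/(1+exp(-llr));   % equals prob0/(prob0+prob1)
```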

Take care, Rob.

March 31st, 2015 at 5:06 pm

Hi Rob,

Thank you for your reply. I did verify what you said in your last comment. Scaling gammas, alphas, and betas causes this problem.

I now have another issue about which I wanted to ask you. Actually I am implementing the following paper

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1211193

This is my second block, which takes apriori_llrs for non-binary symbols as inputs and outputs extrinsic_llrs for binary data. I am using a convoluted method (eq. 12 in the above-referenced paper) to do this. The problem is that when the input apriori_llrs (corresponding to the non-binary data) are really big, the extrinsic_llrs corresponding to the binary data turn out to be NaN.

I have observed that the results have been reported for the low SNR region, and I suspect that this might be the very reason behind this choice.

Would you please comment on this?

Thank you

April 1st, 2015 at 3:49 pm

Hi Goshal,

I’m wondering why your apriori_llrs have values that are so big as to cause this kind of problem. The only time I’ve experienced this kind of problem is when the LLRs have values of plus or minus infinity (for example when using doping). A simple solution is to clip the values of all apriori LLRs to plus or minus 100. Before doing this, I would suggest identifying the exact point in your code where you first get NaN values - there may be a bug there.
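
A minimal sketch of this clipping (assuming the LLRs are stored in a vector called apriori_llrs) might look like:

```matlab
% Clip the a priori LLRs to the range [-100, +100], so that values of
% plus or minus infinity cannot propagate into NaNs inside the decoder
clip = 100;
apriori_llrs = max(min(apriori_llrs, clip), -clip);
```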

Take care, Rob.

April 4th, 2015 at 8:04 pm

Hi Rob,

I have a question regarding the BER vs SNR curve. For my turbo-like setup, the errors decrease as the iterations increase, which is what we expect to see in iterative algorithms. The problem is that for some SNRs, the error decreases for the first couple of iterations and then increases for the remaining iterations. Is this because of some bug in my code? I observed this for a particular value of SNR and re-ran the simulations to check, and got similar results. What do you think about this?

Regards,

Goshal

April 9th, 2015 at 4:37 pm

Hi Goshal,

I have seen this behaviour before, but I have never had a good explanation for it. I wonder if it is because initially, the decoder can’t decide between the correct decoding and an incorrect one - if you stop iterating there then you get a medium BER. If you continue iterating then the decoder sometimes fixes on the incorrect decoding and so the BER goes up.

Alternatively, it may be that the LLRs become so high in value that the simulation starts to suffer from numerical overflow issues - but this would surprise me because LLRs don’t tend to have very high values unless there are bugs in the code.

Take care, Rob.

April 16th, 2015 at 5:36 pm

Hi Rob,

This question is related to Partial Response Maximum Likelihood Detection. I saw some discussion on this in previous posts here so thought of asking you.

Well, I have designed my equalizer according to the MMSE criterion, which minimizes the energy of the error between the equalizer output and the target output.

I also have code for the ML detector. I have tested the performance of my detector (BER vs SNR) for a given channel and it is working alright.

The problem is that when I concatenate my equalizer with the ML detector, the BER does not reduce as I increase the channel SNR. In PRML, the detector is designed for a shorter target, which is fine because the job of the equalizer is to reduce the length of the channel.

Can you suggest something that I should try?

Regards,

Goshal

April 21st, 2015 at 7:30 pm

Hi Goshal,

If you are intending to have iterative decoding between the equaliser and the ML detector, then I would recommend using both the averaging and histogram methods to measure the mutual information of the extrinsic LLRs - if there are any bugs in your code, the two methods will give different measurements. By doing this separately for your equaliser and ML detector, you can determine which one has the bug.
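
A sketch of this debugging check, assuming that the averaging method takes only the extrinsic LLRs while the histogram method also takes the corresponding transmitted bits (as in the functions listed at the top of this page):

```matlab
% Measure the mutual information of one decoder's extrinsic LLRs
% using both methods - a large disagreement suggests a bug
I_averaging = measure_mutual_information_averaging(extrinsic_llrs);
I_histogram = measure_mutual_information_histogram(extrinsic_llrs, bits);

if abs(I_averaging - I_histogram) > 0.01
    disp('The two measurements disagree - suspect a bug in this decoder.');
end
```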

Take care, Rob.

April 23rd, 2015 at 7:55 am

Hi Rob,

I have to implement a Decision Feedback Equalizer, and I need the soft decoded bits for the whole codeword in order to feed the next iteration step of equalization.

Is there a way to obtain them for a turbo code?

Thanks,

Stefano

April 23rd, 2015 at 5:04 pm

Hi Stefano,

Yes, a turbo decoder can be modified to provide extrinsic LLRs about the turbo-encoded bits. You can see how to obtain extrinsic encoded LLRs from the BCJR decoder at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/

You can obtain extrinsic LLRs for the systematic bits by subtracting the apriori LLRs provided by the channel from the aposteriori LLRs obtained for the decoded bits.

Take care, Rob.

April 24th, 2015 at 4:23 am

Hi Rob,

When I am running the main_ber code, it takes too long to fetch results if I give the SNR stop as ‘-inf’, but I know that giving -inf will produce a smooth curve. How long does it take for the code to execute if I give -inf?

April 25th, 2015 at 3:59 pm

Hi Rob,

I am trying to analyze my code, since my error increases as I iterate between the two detectors. You mentioned clipping high LLR values in your answer to my question. Can you please specify the range beyond which an LLR is really high? You mentioned clipping values higher than 100. Does this mean that if the LLRs are in the [-100,100] range, then they are OK?

Regards,

Goshal

April 25th, 2015 at 6:24 pm

Hi Rob,

For the analysis of my MAP equalizer, I tested the following two scenarios at high SNR.

Test Case 1: Zero a priori information.

This corresponds to the first iteration in any turbo-like setup. The output LLRs are almost perfect, giving zero errors. This is what we would expect our trellis-based equalizer/detector to do in the high SNR region.

Test Case 2: Perfect a priori information.

Here, the output LLRs are not perfect. In fact, I am observing a considerable number of errors if I make hard decisions on them.

What should I conclude about my code in this situation? Does this have anything to do with my strong ISI channel, or should I be looking for a bug here?

Goshal

April 26th, 2015 at 7:45 pm

Hi Rob,

Since I am trying to implement a non-binary MAP detector, the size of my transition matrix gets really big, even for a short channel. For analysis purposes I thought of creating some test cases, and I need your input.

Right now I am thinking of using perfect a priori information for the entire range of SNRs. What do you say? In my understanding, perfect a priori information should supersede any other information (like the received data, whether noisy or noiseless) and we should always get zero errors at the detector output.

With this understanding, if I don't get zero errors, should I conclude that there is a bug in my code?

Regards,

Goshal

April 27th, 2015 at 4:55 pm

Hi Kathy,

I would suggest running the code for as long as you have time for, then killing the simulation. If the SNR reached is not as high as you had hoped, then you can change the while loop in main_ber.m to make the code run faster.

Take care, Rob.

April 27th, 2015 at 5:02 pm

Hi Goshal,

As you say, I would consider LLRs having magnitudes of above 100 to be very high. I would not expect an equalizer to provide perfect extrinsic LLRs in response to being provided with perfect a priori LLRs - as in a demodulator, I would not expect the EXIT function of an equalizer to reach the (1,1) point of the EXIT chart. Your results sound correct to me. You can insert an accumulator as a middle code, in order to reshape the equalizer/accumulator EXIT function so that it reaches the (1,1) point.

Take care, Rob.

April 27th, 2015 at 5:35 pm

Hi Rob,

Thank you so much for your reply. I want to clarify one more point.

Let us consider a MAP equalizer. If this equalizer performs better with no a priori information than in the scenario where the LLRs are correct for certain symbols and incorrect for the rest, what should be the take-home message?

The problem I am facing right now is that my LLRs are in the range [-100,100]. Some of them correspond to the correct symbol that was transmitted and some are incorrect. When I feed these LLRs to my MAP equalizer, the error tends to increase.

I did some analysis and set the wrong LLRs to zero. Now my equalizer has access to LLRs that are either correct or zero. As expected, the BER reduced.

But this cannot be done in a simulation, so I am confused about why my equalizer is so sensitive to incorrect LLRs. The error does not reduce as I increase the iterations. I am not using any error correction coding in my setup - it is just the two MAP equalizers that are iterating.

One other thing: when I replace the MAP equalizer with a Linear MMSE Equalizer (an exact implementation from the following paper)

http://www2.ensc.sfu.ca/people/faculty/cavers/ENSC805/readings/50comm05-tuchler.pdf

the magnitude of the LLRs that I get from the linear filter reduces dramatically, compared to what I get from the MAP implementation. Is this correct?

Thank you for your time.

Regards,

Goshal

May 4th, 2015 at 7:09 pm

Hi Goshal,

It sounds to me that there might be a bug in either your equaliser or the code that it is iterating with. I would recommend plotting their EXIT functions using both the averaging and histogram methods - if these methods don't agree for one of your codes, then that suggests that that code has a bug.

I’m afraid that I’m not sure what to expect for the magnitude of the LLRs in the linear and non-linear equalisers. I would expect those given by the non-linear equaliser to be stronger, but I have no experience in how much stronger…

Take care, Rob.

May 13th, 2015 at 8:42 pm

Hi,

If I take the SNR range as -7:1:1 in the main_ber program, graphs pop up but the process never ends - it always shows busy. Can you tell me a way to terminate the program without killing it, so that I can get good graphs?

Thanks

May 16th, 2015 at 10:12 am

Hi Kathy,

You can make the simulation run quicker by reducing the numbers in the following line…

while bit_count < 1000 || error_counts(iteration_count) < 10

Note however that this will cause the resultant BER plots to be less smooth. The other thing that I would suggest would be to set iteration_count to a lower value when simulating the higher SNRs.

Take care, Rob.

May 21st, 2015 at 11:15 pm

Hi Rob,

Do you have an implementation of soft-output Viterbi detector?

Thank you,

Elnaz

May 27th, 2015 at 10:27 am

Hi Elnaz,

I’m afraid that I don’t have an implementation of the SOVA. I have always used the BCJR and Log-BCJR.

Take care, Rob.

June 2nd, 2015 at 5:25 pm

Hi Rob,

Can you guide me on how to implement a BCJR detector with a delay parameter? My ISI channel has its maximum coefficient at a time other than t=0. How should I modify my detector so that it makes decisions with a delay? Since I want to experiment with a couple of channels, each of which has a peak at a different time, I want my detector to adjust this delay parameter on its own. Can you please guide me through this?

Regards,

Goshal

June 3rd, 2015 at 9:27 am

Hi Goshal,

The trellis used for turbo equalisation is the same, regardless of what the tap coefficients are. It doesn't matter which tap has the highest coefficient - you just need to use the coefficients in the calculation of the gammas for the transitions in the trellis.

Take care, Rob.

June 3rd, 2015 at 3:00 pm

Hi Rob,

Thank you for your reply. Let me repose my question with an example. Let us consider two channels, H1 = [1 1 4] and H2 = [1 4 1] (time increasing from left to right). For H1, h0 = 4. For H2, h0 = 1. I understand that for both channels the memory is 2, so the trellis remains the same, as you mentioned in your reply. Coming to the detection process, for H1 our usual decision-making scheme works, since h0 is dominant here. But in the case of H2, we should introduce a delay into our decision process, because the dominant coefficient here is at t = -1.

What do you think about this?

Regards,

Goshal

June 4th, 2015 at 2:38 am

Hi Rob,

Please add this to my previous question… I understand that the gammas for each transition in the trellis are a function of the channel response, so computing the gammas, alphas and betas is done as explained in the algorithm.

What confuses me is that at each trellis stage, the BCJR algorithm computes the APP of the input bit at that stage. If the channel response does not have its maximum at h0, then the output of the finite state machine is dominated by the input bit that occurred at some other time instant. But since we are computing the APP for the input bit at the current time instant, the performance would be inferior. Ideally, the maximum value of the channel response should be assigned as h0. We can always introduce a delay parameter in the case of decision feedback equalizers for non-causal channels, but how do we achieve this in the case of a MAP equalizer?

Goshal

June 5th, 2015 at 3:21 am

Hi Rob,

I have been trying to find right way to pose my question and this is another attempt…please bear with me.

Let the input to a finite state machine (an ISI channel with memory mue) at time tk be xk. We can compute APP(xk) using the BCJR algorithm, which computes the alphas, betas and gammas for the given FSM.

My question is: for a given FSM, can we compute APP(x_(k-n)) using the BCJR algorithm, where 1<=n<=mue?

Is there any modification of BCJR algorithm that does this? Is this even possible?

Regards,

Goshal

June 5th, 2015 at 4:53 pm

Hi Goshal,

Your question from today seems to be asking if it is possible to introduce a delay into the BCJR decoder - I suspect that this is possible, although it may need a trellis having more states. I guess it is equivalent to increasing the memory of the system, but using coefficients of zero for the additional taps.

However, I don’t think that you need to introduce a delay in order to deal with the second channel. Each transition represents a particular combination of what was transmitted during the last three symbol periods. The gamma given to each transition is a function of how much the most recently received symbol looks like the ISI of these three symbol periods. All of this is true regardless of which tap is dominant - the only thing that changes is the calculation of the gammas.
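
As an illustration, a log-domain branch metric for one trellis transition of a turbo equaliser might be sketched as follows (the variable names here are hypothetical, not taken from component_decoder.m, and the scaling assumes complex AWGN with noise power spectral density N0):

```matlab
% h holds the channel taps and 'symbols' holds the combination of the
% most recent transmissions that this trellis transition hypothesises
expected = sum(h .* symbols);                % noiseless ISI output
gamma = -abs(received - expected)^2 / N0;    % log-domain branch metric
```

The same calculation applies whichever tap is dominant - only the values in h change.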

Take care, Rob.

June 5th, 2015 at 7:22 pm

Hi Rob,

I would like to understand both modification that you have suggested in your last reply.

—————————-

Introducing delay into the turbo decoder: if we artificially increase the channel memory (with coefficients of zero for the added taps), how does this correspond to adding delay? Are we introducing these dummy taps between h0, h1, …, hN (channel memory = mue = N)? Can you explain a bit more about this? Are there algorithms out there that do this? I searched and found that most of the material is about how to reduce the channel memory, but you are suggesting increasing mue (which is fine), as long as I am able to compute APP(x_(k-n)). Any reference would be much appreciated.

————————-

gammas for H2:

“The gamma given to each transition is a function of how much the most recently received symbol looks like the ISI of these three symbol periods.”

The gamma computation for each branch in the given stage of the trellis does exactly this. Each transition represents one possible combination of the last m=3 symbols. So why do you suggest that the gamma computations will be different?

—————————

Also, if I try to bring together my last posts, I think I would be right in saying that for the channel H2=[1 4 1] I am trying to compute APP(x_(k-1)). Thus, if we can come up with a strategy that can take into account any arbitrary delay 1<=n<=mue, then we might not have to look into the second suggestion, which mentions modifications to the gamma computations. But please correct me if I am mixing things up here.

Regards,

Goshal

June 8th, 2015 at 6:13 pm

Hi Rob,

I have noticed that the initialization of the alphas and betas has a considerable effect on the BER vs SNR performance. In the research literature, this piece of information is usually not mentioned.

When a padded signal is transmitted over an ISI channel, the first and last 'n' bits are known, so it makes perfect sense to use this information for initialization purposes. Can you please comment on the convention that is followed for turbo-equalization setups?

Goshal

June 15th, 2015 at 5:09 am

Hi Rob,

Thanks a lot for your code on the EXIT charts of concatenated decoders. I am using parallel concatenated decoders (LDPC and LDGM) on a multipath fading underwater acoustic channel with a lognormal distribution (channel gain). By using your code as a reference and after making some modifications, I get EXIT charts which converge at around 5 dB. First, is this a reasonable result? Second, since your generate_llrs() is for the AWGN channel, do I need to make any changes to the generate_llrs() function for a fading, lognormally distributed channel? And how do you get the numbers for sigma?

Thanks in advance for your help.

June 16th, 2015 at 2:40 pm

Hi Goshal,

—————————-

I’m afraid that I don’t have any references for how to introduce delay for the turbo equalizer. In fact, I don’t think there is any good motivation for introducing delay - we normally want to reduce the delay, as you say. The only reason I mentioned it is because you seemed to be asking about it. My suspicion is that this is not the best direction for you to take.

—————————-

My explanation of the gammas was describing the same thing that you are talking about - so I’m not suggesting that the calculation of the gammas will be different.

—————————-

I don’t think that you are trying to calculate APP(x_(k-1)) - I think that you really want to calculate APP(x_(k)). I think that you have become confused by the fact that the first tap is not the strongest - I'm saying that this doesn't make any difference - the turbo equaliser still works in the same way as usual. You may like to start by considering the case where the first tap is the strongest and get this working. You should then see that if you change the strengths of the taps (and give the receiver this channel knowledge), the system still works.

Take care, Rob.

June 16th, 2015 at 2:43 pm

Hi Goshal,

One option would be to simply initialise all alphas and betas to zero. I suspect that you can get a slightly better BER by doing something clever to initialise the alphas at the left hand edge of the trellis - this would accommodate the knowledge that there will be no ISI for the first transmission, since there was no transmission before that. However, I’m not sure how to achieve this…
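
A sketch of this log-domain initialisation (the variable names are illustrative): if the trellis is known to start in state 1, that state gets a log-probability of 0 and all others -inf, while an unknown final state can be given a common constant that later cancels in the LLRs.

```matlab
% Alphas: known starting state
alphas = -inf(state_count, bit_count+1);
alphas(1,1) = 0;                % log(1) for the known first state

% Betas: unknown final state, so make all states equiprobable
betas = -inf(state_count, bit_count+1);
betas(:,end) = 0;               % any common constant cancels in the LLRs
```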

Take care, Rob.

June 16th, 2015 at 2:49 pm

Hello Zafar,

I’m afraid that I can’t tell you if 5 dB is reasonable or not. This is because I don’t have any experience or intuition to draw upon for the underwater acoustic channel with lognormal distribution. If you are able to quantify the capacity of this channel, then you could compare the 5 dB with the SNR at which the capacity of the channel becomes equal to the throughput of your scheme. I would guess that these should be no more than a few dB apart.

I’m assuming that you are only using the generate_llrs function for generating the a priori LLRs that would normally be supplied by the concatenated decoder. If so, then there should be no problem associated with this. There was a paper called “Extrinsic information transfer functions: Model and erasure channel properties” which shows that the distribution assumed for the a priori LLRs doesn’t make much difference to the EXIT functions.

Take care, Rob.

June 23rd, 2015 at 9:33 pm

Hello Rob,

I’m running simulations where I’m applying a timing offset to the channel response. For example, if my original channel is [1 0.5], then applying a constant timing offset of 0.2 to the original channel produces a shifted channel which is much longer than the original. This is a problem, since my trellis grows a lot in terms of the memory needed and so on. If I only take the two original entries of the shifted channel, in an effort to keep the complexity of the detector constant, then I face a loss of about 1 dB in BER performance. I know I can shift the observations instead of the channel response and keep the channel response fixed, but because of some other problem I have to look into the possibility of shifting the channel.

Do you see a way out of this, to deal with the complexity issue?

Thank you, Elnaz

June 29th, 2015 at 12:45 pm

Hi Elnaz,

The only things that I can suggest for reducing the complexity of the trellis are the T-BCJR and M-BCJR algorithms. These prune unlikely transitions from the trellis before performing the BCJR, at the cost of a slight BER performance degradation.

Take care, Rob.

July 20th, 2015 at 8:26 pm

Hi Rob,

I need to confirm one idea and I need your input .

Consider a length-5 ISI channel response given by h=[h_0 h_1 h_2 h_3 h_4]. Now consider two possible channels, h1 and h2, that both have length 5 and the same magnitude response, but where h1 is a minimum-phase response. In a minimum-phase response, the maximal energy is concentrated around h0.

A minimum-phase response is preferred for decision feedback equalizers, because they throw away energy. On the other hand, MLSD uses all of the energy in the response, and the distribution of energy does not matter.

Why is this the case?

Does this mean that if we apply MLSE (a Viterbi detector) to both these channels, the BER obtained for h1 will be the same as the BER obtained for h2?

Regards,

Goshal

July 20th, 2015 at 11:21 pm

Rob,

Let me put it this way:

h1=[1 4 3 2]; h2=[4 3 2 1]. The noise is AWGN. Assume that both have the same magnitude response, and let h2 be the minimum-phase response.

The Viterbi algorithm is used for sequence detection. Can we say that BER2 < BER1?

Goshal

July 21st, 2015 at 3:23 pm

I am very interested in the Jacobian logarithm in the max-log algorithm for mode==1. Are there any papers that describe how to decide the optimum segmentation? Thanks a lot.

Yang

July 23rd, 2015 at 4:46 pm

Hi Goshal,

I would guess that the BER would be the same for h1 and h2, since the taps have equal powers, albeit different orders. But I’m afraid that I don’t know for sure - I think that at this point, you have done more work on turbo equalisation than I have…

Take care, Rob.

July 23rd, 2015 at 4:49 pm

Hi Yang,

I have found that the lookup table approximation of the Jacobian logarithm correction factor is not very sensitive to the segmentation. However, in Figure 2 of http://eprints.soton.ac.uk/271618/9/Fixed-Point.PDF, we considered the design of the lookup table for the case where fixed-point arithmetic is used. I have seen other papers that suggest segmentations for floating-point arithmetic, but I’m afraid that I can’t remember which ones…
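
For reference, the exact Jacobian logarithm and its max-log approximation can be sketched as below. The correction term log(1+exp(-|a-b|)) is what the lookup table approximates; it only ever takes values between 0 and log(2), which is part of the reason why the segmentation is not critical.

```matlab
% Exact Jacobian logarithm: log(exp(a)+exp(b))
jac_exact = @(a,b) max(a,b) + log(1+exp(-abs(a-b)));

% Max-log approximation, which simply drops the correction term
jac_maxlog = @(a,b) max(a,b);
```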

Take care, Rob.

July 28th, 2015 at 6:02 am

Sir,

I am a beginner in turbo codes. I haven't seen such good BER performance before, i.e., approximately 10^(-5) at 0 dB. Why is it like that? How did you get such good performance?

Thanks and regards

Geethu S S

July 28th, 2015 at 8:19 am

Hello Geethu,

My BER plots above are plotted against SNR. I suspect that you are comparing them with BER plots that are plotted against Eb/N0. The conversion is Eb/N0 [dB] = SNR [dB] - 10*log10(R*log2(M)), where R is the coding rate of the turbo code (1/3 for UMTS/LTE) and M is the number of constellation points in the modulation scheme (2 for BPSK).
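
As a sketch of this conversion for the plots above:

```matlab
% Convert SNR to Eb/N0 for the rate R = 1/3 turbo code with BPSK (M = 2)
R = 1/3;
M = 2;
snr_dB = 0;
ebno_dB = snr_dB - 10*log10(R*log2(M));   % approximately 4.77 dB here
```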

Take care, Rob.

July 28th, 2015 at 3:39 pm

Sir,

Thank you so much.

Regards

Geethu S. S

July 29th, 2015 at 12:55 am

Hi Rob,

In this webpage you show the BER plots for the 5000-bit, 500-bit and 50-bit UMTS turbo code. I want to know where in the Matlab code I can specify/change this parameter, i.e., 5000-bit, 500-bit or 50-bit.

July 31st, 2015 at 11:54 am

Hi Umar,

This is the frame_length parameter of the main_ber function.

Take care, Rob.

September 16th, 2015 at 2:04 am

Hi Rob

It took me a long time to read all of your comments and questions - it is really very useful and I found some answers to my questions. But I need to check with you that I have understood correctly.

1- To build a ternary turbo code, I will just edit the following in main_ber.m:

TPSK1=cosd(0 )+sqrt(-1)*sind(0 );

TPSK2=cosd(120)+sqrt(-1)*sind(120);

TPSK3=cosd(240)+sqrt(-1)*sind(240);

a_c = [(abs(a_rx-TPSK1).^2-abs(a_rx-TPSK2).^2)/N0; (abs(a_rx-TPSK1).^2-abs(a_rx-TPSK3).^2)/N0];

The same goes for c_c to f_c.

2- In component_decoder.m:

for bit_index = 1:length(apriori_uncoded_llrs)

for transition_index = 1:size(transitions,1)

if transitions(transition_index, 4)==1

encoded_gammas(transition_index, bit_index) = apriori_encoded_llrs(1,bit_index);

end

if transitions(transition_index, 4)==2

encoded_gammas(transition_index, bit_index) = apriori_encoded_llrs(2,bit_index);

end

end

end

The same goes for the uncoded_gammas.

3-extrinsic_uncoded_llrs

for bit_index = 1:length(apriori_uncoded_llrs)

prob0=-inf;

prob1=-inf;

prob2=-inf;

for transition_index = 1:size(transitions,1)

if transitions(transition_index,3)==0

prob0 = jac(prob0, deltas(transition_index,bit_index));

elseif transitions(transition_index,3)==1

prob1 = jac(prob1, deltas(transition_index,bit_index));

else

prob2 = jac(prob2, deltas(transition_index,bit_index));

end

end

4-finally

a- I do not know how to do it:

extrinsic_uncoded_llrs(bit_index) = prob0-prob1

or extrinsic_uncoded_llrs(bit_index) = prob0-prob2? Or both of them?

b- errors = sum((a_p < 0) ~= a);

How do I decide between the symbols 0, 1 and 2, when the line above can only give 0 or 1?

thanks so much

makram

September 17th, 2015 at 3:47 am

Hi Rob,

I made it work - thanks to everyone who comments on your page, and thank you very much for your awesome code.

Makram

September 17th, 2015 at 8:09 pm

Hi Makram,

That’s great.

Take care, Rob.

October 14th, 2015 at 10:51 am

Hello,

How do I design wireless channel coding schemes and wireless transceivers via Monte Carlo simulation in Matlab/Simulink?

October 15th, 2015 at 10:09 am

Hi B. Metin,

I would suggest taking a look at the Matlab code I have provided on this webpage - main_ber.m is a Monte Carlo simulation.

Take care, Rob.

October 17th, 2015 at 2:22 am

Hello sir

Can you please suggest the changes needed to use 8PSK modulation? I have one more doubt regarding the soft demodulation of BPSK - why are we using soft demodulation? And can we use pskmod and pskdemod for that purpose?

shibi

October 19th, 2015 at 2:33 pm

Hi Shibi,

You can convert to 8PSK by using the soft demodulator code that I have provided at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-1545

… You just need to change the bits and constellation points provided at the top of modulate.m and soft_demodulate.m

pskdemod only does hard demodulation. Turbo decoders require soft demodulation, because they operate on the basis of the probability information provided by soft demodulators.
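
For BPSK, a soft demodulator can be sketched in the distance-difference style used elsewhere in this thread (here x0 and x1 are the constellation points for bit values 0 and 1, and the scaling assumes complex AWGN with noise power spectral density N0):

```matlab
% Soft demodulation of BPSK: LLR = log(P(bit=0|rx)/P(bit=1|rx))
x0 = +1;
x1 = -1;
llrs = (abs(rx - x1).^2 - abs(rx - x0).^2) / N0;
```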

Take care, Rob.

November 10th, 2015 at 7:23 pm

Hi Rob,

Do you have any OFDM code for power allocation?

November 13th, 2015 at 5:46 pm

Hello Nuryusof,

I’m afraid that I don’t have any code for power allocation.

Take care, Rob.

November 29th, 2015 at 8:26 pm

Dear Rob,

I intend to implement an iterative receiver for my project and would like to start from your code. Before I make any modification to your code, I want to understand it thoroughly.

I have a couple of questions in this regard.

Your code implements the modified Log-BCJR. The term "modified" implies that it takes in soft information and computes the extrinsic LLRs directly. The term "Log" implies that this is done by remaining in the log domain, so that all the multiplications in the normal domain become additions in the log domain, thereby saving on computation.

My first question is about the soft demodulation that you are performing prior to the decoder. I have the impression that it is a matter of choice whether one goes for hard demodulation or soft demodulation. Although this choice will eventually affect the BER performance, it is by no means ESSENTIAL for implementing the modified Log-BCJR. Is this correct?

Thanks,

~N

November 30th, 2015 at 2:08 pm

Hi Nassi,

In order for the Log-BCJR algorithm to work well, you need to provide it with soft information. Therefore, you need soft demodulation. If you wanted to use hard demodulation, then there would be no point in using the Log-BCJR algorithm - you may as well just use the Viterbi algorithm.

Take care, Rob.

November 30th, 2015 at 4:17 pm

Dear Rob,

I appreciate your reply. I was hoping to find a reference for the Log-BCJR in the nine-month report for supplementary reading, but couldn't find one.

The standard books on the subject explain the original BCJR algorithm, which I understand perfectly. The explanation in these books considers two inputs to the APP block: 1. the received signal and 2. the a priori probability. I understand that the original BCJR computes the a posteriori probability.

I will be grateful if you can provide a reference to paper/book that explains Log-BCJR. This can help me in switching from original BCJR to Log-BCJR.

Thanks,

~N

November 30th, 2015 at 4:53 pm

Hi Nassi,

This algorithm is also called the Log-MAP. The reference that you are looking for is…

Robertson, P.; Villebrun, E.; Hoeher, P., “A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain,” in Communications, 1995. ICC ‘95 Seattle, ‘Gateway to Globalization’, 1995 IEEE International Conference on , vol.2, no., pp.1009-1013 vol.2, 18-22 Jun 1995

doi: 10.1109/ICC.1995.524253

Take care, Rob.

November 30th, 2015 at 9:41 pm

Dear Rob,

Thank you for providing the reference. Does your code implement this paper exactly? I know that you are computing extrinsic LLRs whereas the authors are computing a posteriori LLRs, but is the rest the same? I am asking because I was hoping to see the soft demodulation module in that paper, but it seems that is not the case. The authors denote the noisy versions of the encoder outputs x_s, x_p as y_s, y_p, and use these y's to compute the gammas. There is no hint of soft demodulation being performed in this paper.

Can you please explain this? I am sorry about repeating my question, but I am finding it really hard to see the connection.

Secondly, which part of your code needs modification if one wants to compute a posteriori LLRs instead of extrinsic LLRs?

Thanks,

~N

December 1st, 2015 at 9:07 pm

Dear Rob,

I thought about the soft demodulation and came up with an alternative explanation for it, which I would like to share with you to find out what you think of it.

Your code computes the soft-demodulated symbols. These are then used in component_decoder as apriori_uncoded_llrs and apriori_encoded_llrs. In your component_decoder, you compute the uncoded_gammas and encoded_gammas for the state transitions corresponding to a '0' bit (ref: equations 2.14 & 2.15 in the nine-month report), and you leave the remaining gammas uncalculated. This is only possible because you have first performed soft demodulation.

In my test decoder, I used the noisy uncoded and encoded bits. I computed the uncoded_gammas and encoded_gammas for each state transition of the trellis by taking log(branch_metric), where branch_metric is the product of two terms: 1. the a priori probability of the uncoded bit that makes the transition possible and 2. the conditional pdf p(y|x), where y is the noisy version of x. The rest of the code remains the same.

If we do this for both the uncoded and encoded gammas, the extrinsic LLRs computed using your component_decoder and my test decoder are extremely close. The difference is of the order of 10^(-14), which I believe is due to floating-point rounding.

Therefore, in my view, soft-demodulation is a step towards the implementation of Equations 2.14 & 2.15.
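The equivalence described here can be checked numerically. For BPSK over an AWGN channel, the difference between the two branch metrics log p(y|x=+1) and log p(y|x=-1), computed directly from the channel output, equals the soft-demodulated LLR L = 2y/sigma^2 — so the two formulations differ only by a bit-independent constant, which cancels in the BCJR. A minimal Python sketch (the sigma^2 and y values below are made up purely for illustration):

```python
import math

# BPSK over AWGN: bit 0 -> x = +1, bit 1 -> x = -1 (LLR convention ln(P0/P1)).
sigma2 = 0.5   # assumed noise variance, for illustration only
y = 0.37       # one made-up noisy channel observation

def log_pdf(y, x, sigma2):
    # log of the Gaussian conditional pdf p(y|x)
    return -(y - x) ** 2 / (2 * sigma2) - 0.5 * math.log(2 * math.pi * sigma2)

# Soft-demodulated LLR for BPSK over AWGN
L = 2 * y / sigma2

# Difference of the branch metrics computed directly from the channel output
direct_difference = log_pdf(y, +1, sigma2) - log_pdf(y, -1, sigma2)
# Difference of the branch metrics computed from the LLR (+L/2 vs -L/2)
llr_difference = (L / 2) - (-L / 2)

assert abs(direct_difference - llr_difference) < 1e-12
```

Since only differences of gammas enter the final LLR computation, the constant term of log p(y|x) never matters, which is why leaving some gammas uncalculated is safe.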

I understand that your code is more elegant, but since the textbook version of the BCJR algorithm computes gammas for each state transition, I think this alternative explanation might help in smoothing the transition between the algorithms.

What do you think?

Thanks,

~N

December 2nd, 2015 at 12:46 pm

Hi Nassi,

The a posteriori LLRs can be obtained by adding the a priori LLRs to the extrinsic LLRs. Apart from that, my implementation will give you identical results to the algorithm in that paper. There may be some small subtle differences that affect the complexity of the algorithm, but the outputs will be the same, if given the same inputs. For literature on the soft demodulator, you should search for Bit Interleaved Coded Modulation with Iterative Decoding (BICM-ID).

My code considers the BCJR decoder to be independent of and separate from the soft demodulator. Owing to this, my soft demodulator outputs LLRs that are no more special than any other LLRs in the system. My BCJR decoder takes LLRs as inputs and doesn’t care where they have come from, be it the soft demodulator or otherwise.

Take care, Rob.

December 2nd, 2015 at 7:59 pm

Dear Rob,

Thank you for clarifying these points. I appreciate your effort.

Since we have a priori information for the uncoded data, the a posteriori LLRs can be obtained by adding the uncoded extrinsic LLRs and the uncoded a priori LLRs, as you suggested.

For the encoded stream, how can we compute the extrinsic llrs?

Actually, I need to implement a turbo decoding architecture for serially concatenated turbo codes. Since I don't have any systematic data stream in that case, I need the extrinsic LLRs for the encoded stream.

What modifications should I make to compute this quantity?

Thanks,

~N

December 2nd, 2015 at 8:28 pm

Dear Rob,

I was looking at previous posts and found your replies regarding computing the extrinsic LLRs for the encoded stream.

I have copied one response below for your reference

%————————————————————–

Rob Says:

June 16th, 2011 at 9:26 am

Hi José,

This is not too difficult to achieve. You just need to modify component_decoder.m so that it has an additional output called extrinsic_encoded_llrs. You can generate this using these lines of code

% Calculate encoded extrinsic transition log-confidences. This is similar to
% Equation 2.18 in Liang Li's nine month report or Equation 4 in the BCJR paper.
deltas2 = zeros(size(transitions,1),length(apriori_encoded_llrs));
for bit_index = 1:length(apriori_encoded_llrs)
    for transition_index = 1:size(transitions,1)
        deltas2(transition_index, bit_index) = alphas(transitions(transition_index,1),bit_index) + uncoded_gammas(transition_index, bit_index) + betas(transitions(transition_index,2),bit_index);
    end
end

% Calculate the encoded extrinsic LLRs. This is similar to Equation 2.19 in
% Liang Li's nine month report.
extrinsic_encoded_llrs = zeros(1,length(apriori_encoded_llrs));
for bit_index = 1:length(apriori_encoded_llrs)
    prob0 = -inf;
    prob1 = -inf;
    for transition_index = 1:size(transitions,1)
        if transitions(transition_index,4)==0
            prob0 = jac(prob0, deltas2(transition_index,bit_index));
        else
            prob1 = jac(prob1, deltas2(transition_index,bit_index));
        end
    end
    extrinsic_encoded_llrs(bit_index) = prob0-prob1;
end

%————————————————————–

You have suggested using uncoded_gammas for computing deltas2. This is possible when the code is systematic, but we don't have systematic (uncoded) bits in serially concatenated codes. In that case, how will we compute deltas2?

Thanks,

~N

December 7th, 2015 at 4:28 pm

Dear Rob,

I am trying to see how the code rate affects the BER vs SNR curves for an SCCC, and I want to include a rate-1 code. Since with a rate-1 code I do not have access to systematic bits at the receiver, we cannot generate the extrinsic encoded LLRs directly, as we could when systematic bits are present.

I think the only way to compute the extrinsic encoded LLRs is to first compute the a posteriori encoded LLRs and then subtract the a priori encoded LLRs. Does this sound correct?

Thanks,

~N

December 17th, 2015 at 9:34 am

Hi Nassi,

Even if you don’t have systematic bits, you may still have uncoded apriori LLRs. These might come from a concatenated code, for example. If not, then you can just set the uncoded apriori LLRs to zero and the code you have mentioned above will work fine.

You can compute encoded extrinsic LLRs directly, without having to subtract the encoded apriori LLRs from the encoded aposteriori LLRs. The code you have posted above will do this for you.

Take care, Rob.

December 21st, 2015 at 6:21 pm

Dear Rob,

I have a general question regarding convolutional codes. What does it mean when they say that "different encoders can lead to the same code" in the context of convolutional codes? I don't understand the term "same code". Is it related to the code rate, or does it take something more for two encoders to produce the "same code"?

Since it has been shown that turbo codes built from recursive convolutional codes perform better than those built from non-recursive ones, I am exploring why recursive codes are better.

Assume that I have a non-recursive rate-1 inner code and have no control over it; how can I improve the BER vs SNR performance in this situation?

Thank you,

~N

December 22nd, 2015 at 11:10 am

Hi Nassi,

As an example, a recursive convolutional code having the generator 1/(1+D) is the same as one having the generator (1+D)/(1+D^2) - both of these codes will produce identical encoded bit sequences, when provided with the same message bit sequence. I can tell that they are the same because…

1/(1+D) = (1+D)/((1+D)(1+D)) = (1+D)/(1+2D+D^2) = (1+D)/(1+D^2)

Here the 2D is equal to zero because all sums are performed modulo 2. More specifically, 2D is D+D - any bit added to itself in modulo 2 arithmetic gives zero.
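This equivalence can also be verified by simulation. The sketch below (Python, written for this reply rather than taken from the Matlab package) implements both generators as shift registers, starting from all-zero states, and checks that they produce identical output sequences:

```python
import random

def encode_accumulator(bits):
    # 1/(1+D): the output is the running modulo-2 sum of the inputs,
    # since y[n] = u[n] XOR y[n-1]
    out, y = [], 0
    for u in bits:
        y ^= u
        out.append(y)
    return out

def encode_recursive(bits):
    # (1+D)/(1+D^2): feedback a[n] = u[n] XOR a[n-2],
    # feedforward y[n] = a[n] XOR a[n-1]
    out, a1, a2 = [], 0, 0   # a1 holds a[n-1], a2 holds a[n-2]
    for u in bits:
        a = u ^ a2
        out.append(a ^ a1)
        a2, a1 = a1, a
    return out

random.seed(0)
message = [random.randint(0, 1) for _ in range(1000)]
assert encode_accumulator(message) == encode_recursive(message)
```

For an impulse input [1, 0, 0, 0] both encoders output the all-ones sequence [1, 1, 1, 1], as expected for 1/(1+D).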

Recursive codes are better than non-recursive codes in turbo codes because recursive codes have EXIT functions that reach the (1,1) point in the EXIT chart, which is where a low BER can be obtained. Non-recursive codes cause the iterative decoding convergence to stall before the (1,1) point is reached, leading to an elevated error floor. If you are forced to use a non-recursive code, then you can reduce the error floor by finding the code that has an EXIT chart tunnel that closes as close to the (1,1) point as possible.

Take care, Rob.

December 22nd, 2015 at 5:15 pm

Dear Rob,

Thank you for the nice explanation. Taking the discussion further regarding the non-recursive inner code, I have the following point to mention:

Some researchers have suggested using a "precoder" before a non-recursive inner code. For decoding purposes, the inner APP module is then designed for the combination of the precoder and the original non-recursive code, which makes the inner code recursive. They have shown, with simulations, that at high SNRs a system with a precoder performs better than one without, given that the right precoder is used to begin with.

How should one search for a precoder in this situation? What are your thoughts on this?

Thank you,

~N

December 22nd, 2015 at 5:17 pm

Dear Rob,

I have another question, regarding changing the code rate. How can I increase the rate of your 1/3 turbo code? Can you tell me something about how to do puncturing to achieve this?

Thank you,

~N

December 23rd, 2015 at 10:32 am

Hello Nassi,

The precoder you have referred to is called different things by different people - it is also known as an intermediate code, a Unity Rate Convolutional (URC) code and an accumulator. I have written a paper on this topic…

http://eprints.soton.ac.uk/267642/

In that paper I proposed a block-based intermediate code, but the convolutional intermediate code is the much more popular way of doing things.

Take care, Rob.

December 23rd, 2015 at 1:22 pm

Hi Nassi,

You can download the LTE puncturer from…

http://users.ecs.soton.ac.uk/rm/wp-content/get_LTE_puncturer.m

You can see how to use this code in the comments at the top of the file.

Take care, Rob.

December 23rd, 2015 at 5:24 pm

Dear Rob,

As I mentioned earlier, I want to see the effect of a precoder on a serially concatenated code. I will give you a brief description of my simulation setup (I will add the precoder to this code later) and then tell you about my problem.

Transmitter: My outer encoder is a recursive systematic rate-1/2 code; I am using your component_encoder for this purpose. Using the same nomenclature as in Li's nine-month report, I have 'a', 'c' and 'e' available at the output of my outer encoder. I multiplex them into a vector g=cat(1,p,a,c,e,p), where p=zeros(1,length(inner_code_memory)). The padding ensures that the trellis for the inner code starts and ends at state=000. 'g' is the input to my non-recursive inner encoder, whose polynomial description is [1 0 1 1]. The output 'h' of the inner encoder is passed over an AWGN channel with known noise variance.

Receiver: The inner decoder, an APP module matched to the inner encoder, computes the extrinsic information g_e. During turbo_iteration=1, g_a = zeros(size(g)). This information is split into a_c, c_c and e_c, and a_a=zeros(size(a)). Then I use your component_decoder as the outer decoder to compute a_e, c_e and e_e. I multiplex them again to form g_a, adding the necessary information for the vector 'p', and send g_a back to the inner decoder.

Please note that a_a is initialised to the zero vector in every turbo_iteration. Moreover, I am running only 3 turbo decoding iterations at present.

While running my simulation, I made the following observations:

1. In the low SNR region, i.e. -5 to 0 dB, the BER for iteration 1 is not necessarily higher than that for iterations 2 and 3. In other words, the curves criss-cross each other in a random fashion.

Is this normal behaviour for a turbo decoder in the low SNR region? Or should we always have BER_iteration_i >= BER_iteration_i+1?

2. I was looking at the total number of bits simulated and the errors per iteration when I noticed that at SNR=8 (I am not sure if this happened for other SNR values or not), errors_iteration_1=0, errors_iteration_2=4 and errors_iteration_3=14. Then, as more bits were simulated, BER_iteration_1 < BER_iteration_2 < BER_iteration_3.

I am wondering if this is an indication of a bug in my code. Why was the error count for iterations 2 and 3 equal to 4 and 14 respectively, while it was 0 for iteration 1?

If it is a bug, how should I proceed with debugging my code?

Thank you,

~N

December 24th, 2015 at 9:23 pm

Dear Rob,

I noticed that, for a few bits, the magnitude of the extrinsic LLRs at low SNR does not always increase as I iterate. For your reference, I define the LLR as ln(P0/P1).

For example, for bit=0 and at SNR=0, extrinsic_llrs for three turbo iterations are

iteration=1 , extrinsic_llrs = 0.1626

iteration=2 , extrinsic_llrs = 0.1625

iteration=3 , extrinsic_llrs = 0.1605

I thought that, ideally, the magnitude of these values should increase (or at most remain unchanged) with iterative decoding. To verify this, I looked at your code and compared the a_e obtained from decoder1 and decoder2. Even there, this is not always the case. However, for a given SNR, the BER obtained from your code decreases with the iterations and there is no crossover of the BER curves, even at low SNRs.

Can you please verify that:

magnitude( extrinsic_llr@iteration(i+1) ) >=magnitude( extrinsic_llr@iteration(i) )

Thank you,

~N

December 27th, 2015 at 7:47 pm

Hi Nassi,

It sounds to me like there is a bug in your code - provided that you are measuring the BER over a sufficiently high number of frames, it should go down in each successive iteration. I would recommend debugging your code using the following process:

1) draw the EXIT function of the outer code using both the averaging and histogram methods of measuring mutual information. If the two methods produce different plots, then this suggests that there is a bug in your outer encoder and/or your outer decoder. Here, the outer code should include everything that you have described above - namely g=cat(1,p,a,c,e,p). Also, if the area beneath the inverted outer EXIT function does not equal the outer coding rate, then this suggests that there is a bug in your outer code.

2) draw the EXIT function of the inner code using both the averaging and histogram methods of measuring mutual information. If the two methods produce different plots, then this suggests that there is a bug in your inner encoder and/or your inner decoder. Assuming that your inner coding rate is 1, if the area beneath the inner EXIT function does not match with C/log2(M), then this suggests that there is a bug in your inner code. Here, C is the DCMC capacity of the channel and M is the number of constellation points in your modulator.

3) plot the iterative decoding trajectories for the combination of your inner and outer code. If the trajectories do not match the EXIT functions, then this suggests that there is a bug in your iterative decoding process and/or in your interleaver.

4) Compare the SNR where the EXIT chart tunnel becomes open with the SNR where the BER turbo cliff begins. If these are different, then this suggests that there is a bug in your BER simulation.
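For steps 1 and 2, the averaging method of measuring mutual information amounts to the formula I ≈ 1 - (1/N) Σ log2(1 + exp(-(1-2x_n) L_n)) for LLRs defined as ln(P0/P1). The following Python sketch illustrates this idea (it mirrors the standard formula, not necessarily the exact contents of measure_mutual_information_averaging.m), applied to Gaussian a priori LLRs satisfying the consistency condition:

```python
import math, random

def measure_mi_averaging(llrs, bits):
    # Averaging-method estimate of the mutual information between LLRs
    # and their bits, for the LLR convention ln(P(bit=0)/P(bit=1))
    total = 0.0
    for L, x in zip(llrs, bits):
        # (1-2x) maps bit 0 -> +1 and bit 1 -> -1, so a confident,
        # correct LLR contributes almost nothing to the penalty term
        total += math.log2(1.0 + math.exp(-(1 - 2 * x) * L))
    return 1.0 - total / len(llrs)

# Gaussian a priori LLRs with standard deviation sigma and mean
# +/- sigma^2/2, which satisfies the consistency condition
random.seed(1)
sigma = 2.0
bits = [random.randint(0, 1) for _ in range(20000)]
llrs = [random.gauss((1 - 2 * x) * sigma ** 2 / 2, sigma) for x in bits]
mi = measure_mi_averaging(llrs, bits)   # roughly J(sigma), about 0.5 here
```

Comparing this estimate against the histogram method on the same LLRs is exactly the consistency check described in steps 1 and 2.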

Take care, Rob.

December 27th, 2015 at 7:50 pm

Hi Nassi,

Whether the statement

magnitude( extrinsic_llr@iteration(i+1) ) >=magnitude( extrinsic_llr@iteration(i) )

is true or not depends on how many extrinsic LLRs you are considering a time. If you are considering only a small number, then random chance can cause the magnitudes (and equivalently, the MIs) to go up or down, from iteration to iteration. If you are considering long frames, or lots of frames together, then you should find that the magnitudes (and hence the MIs) only go up (or at least remain the same), from iteration to iteration.

Take care, Rob.

December 28th, 2015 at 4:54 pm

Dear Rob,

You wrote:

1) draw the EXIT function of the outer code using both the averaging and histogram methods of measuring mutual information. If the two methods produce different plots, then this suggests that there is a bug in your outer encoder and/or your outer decoder. Here, the outer code should include everything that you have described above - namely g=cat(1,p,a,c,e,p). Also, if the area beneath the inverted outer EXIT function does not equal to the outer coding rate, then this suggests that there is a bug in your outer code.

Please correct me if I am wrong. To plot the EXIT chart for the outer code and decoder, do you mean that I should remove the inner code, transmit g=cat(1,p,a,c,e,p) over the AWGN channel and then decode it using the component decoder? But then how will I iterate at the receiver?

Can you add some more explanation? I have not plotted an EXIT chart before, so I need some help.

Thank you,

~N

December 29th, 2015 at 8:54 pm

Hi Nassi,

When drawing the EXIT function for an outer code, you don’t have a channel at all. You also don’t have the interleaver, the inner code, the modulator or any iterative decoding. I would suggest getting started using my code at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/

You can start by running main_outer.m

Take care, Rob.

December 30th, 2015 at 6:48 pm

Hi Rob,

I started with your main_outer.m and made changes to include Li's recursive systematic code (for the time being, I am generating only a, c and e from your component_encoder.m). Then I used your component_decoder to generate the uncoded_extrinsic_llrs and encoded_extrinsic_llrs.

For this recursive code, the actual code rate is bit_count/(2*bit_count+6). This value approaches 0.5 as bit_count is made arbitrarily large.

For this encoder/decoder, the two methods produce similar curves, but the area under both curves comes out at around 0.45 for bit_count=10000, whereas the actual code rate is 10000/20006=0.4998.

What should I deduce from this? Since I am using your component_encoder.m and component_decoder.m, I want to believe that there is no bug in these two blocks, but the area under the curve is not equal to the code rate, which is confusing me. What do you think?

Thank you,

~N

December 30th, 2015 at 8:29 pm

Hi Rob,

Also, can you explain how main_exit.m differs from main_outer.m, and why you have different code for generate_llrs?

Thank you,

~N

December 31st, 2015 at 2:30 pm

Hi Nassi,

I would expect the area to be a bit closer to 0.4998 than this. I would suggest simplifying the outer code by removing the termination bits, then trying again - but you must remember to unterminate the trellis at the right-hand end: all of the beta values should be initialised to 0, rather than only one of them being zero and the others being -inf.

main_exit.m is for a parallel concatenation, while main_outer.m is for a serial concatenation - main_exit.m is more like main_inner.m than main_outer.m. The two versions of generate_llrs should be interchangeable - from memory, I think that the difference is that one of the versions guarantees the requested MI, while the other generates something close to the requested MI.

Take care, Rob.

December 31st, 2015 at 6:17 pm

Hi Rob,

I removed the termination bits and unterminated the trellis on the right by setting beta(:,length(uncoded_priori_llrs))=0.

The following data is from the EXIT charts that I plotted for bit_count=10000:

        Histogram method    Averaging method
------------------------------------------------
Area        0.49927            0.49964

The two EXIT charts also look similar, except around the (0,0) and (1,1) points. Also, the two curves have the same value of Ia at Ie=0.2, 0.5 and 0.8.

So far I know that if the two curves are similar and the area under them is close to the code rate, this implies that the encoder/decoder are working fine - but what else can we learn from these charts? In short, I don't know how to interpret them.

Can you please help me with this? Also, can you let me know where I should post my charts online to get your feedback? Is there any particular website that you use for this purpose?

Thank you,

~N

January 2nd, 2016 at 7:51 am

Hi Nassi,

These results sound good to me. I think that you can be confident that there are no bugs in this version of the code. I think that you should try reintroducing the termination now. The main way to learn from your EXIT charts is to compare different plots. But since your two plots are now very similar, there is nothing left for you to learn. I don’t have a particular preference for image hosting. Feel free to include a URL in your reply, if you like.

Take care, Rob.

January 2nd, 2016 at 9:42 pm

Hi Rob,

Thank you for your feedback.

Please correct me if I am wrong.

I am using your component_encoder to generate a,c,and e. For decoding purpose I am using your component_decoder.

For computing the extrinsic_uncoded_llrs, the trellis should be terminated, i.e. betas(1,length(apriori_uncoded_llrs))=0.

Whereas to compute the extrinsic_encoded_llrs with your component_decoder, I should set betas(:,length(apriori_uncoded_llrs))=0, i.e. the trellis is not terminated.

When I made this change, the two EXIT charts and the areas under their curves match the code rate.

Does this sound right to you?

Thank you,

~N

January 3rd, 2016 at 12:36 am

Hi Rob,

Please ignore my last comment. I looked more closely and realised that the trellis is terminated in both cases, so no change in the betas initialisation is needed.

This is also confirmed by the charts plotted using the averaging and histogram methods. The area under the curve is close to the code rate as well.

~N

January 3rd, 2016 at 8:16 am

Hi Nassi,

That’s great.

Take care, Rob.

January 3rd, 2016 at 6:29 pm

Hi Rob,

You have been a great help. Thank you for your quick response.

Now I am testing my inner rate-1 code. I replaced your convolutional_encoder.m with my inner encoder in main_inner.m.

My inner encoder has a memory of 3, i.e. h = [h0 h1 h2 h3] = [1 0 1 1]. For this polynomial, the two charts match, but the area under both curves is close to 0.3 at SNR=-6 dB.

What do you think? What should we expect to see in our EXIT charts when the code rate is 1?

Thank you,

~N

January 3rd, 2016 at 6:39 pm

Hi Rob,

You can take a look at these charts at

http://wikisend.com/download/198700/EXIT_Inner_Code.fig

~N

January 4th, 2016 at 1:38 pm

Hi Nassi,

Assuming that you are using an AWGN channel and BPSK modulation, then an area of 0.3 sounds correct for -6 dB. This is because the DCMC capacity is about 0.3 in this case, as shown in the first plot at…

http://users.ecs.soton.ac.uk/rm/resources/matlabcapacity/
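As a sanity check on that number, the BPSK DCMC capacity can be estimated by Monte-Carlo simulation, since for BPSK it equals the mutual information of the channel LLRs. A Python sketch (assuming the convention SNR = Es/N0 with real Gaussian noise of variance N0/2; other normalisations would shift the result):

```python
import math, random

random.seed(2)
snr_db = -6.0
N0 = 10 ** (-snr_db / 10)            # Es = 1, so N0 = 1/SNR
sigma = math.sqrt(N0 / 2)            # real AWGN with variance N0/2

trials = 200000
total = 0.0
for _ in range(trials):
    x = random.choice((+1, -1))      # BPSK symbol
    y = x + random.gauss(0.0, sigma) # noisy channel output
    L = 2.0 * y / sigma ** 2         # channel LLR, ln(P(+1)/P(-1)) convention
    total += math.log2(1.0 + math.exp(-x * L))
capacity = 1.0 - total / trials      # roughly 0.29 bits per channel use
```

The unconstrained Shannon capacity of the same real channel, 0.5*log2(1 + 2*Es/N0) ≈ 0.293 at -6 dB, gives an upper bound that this estimate sits just below.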

However, this polynomial is not a good choice for an inner code, for two reasons:

1) Its EXIT function starts from the (0,0) point in the EXIT chart, which means that iterative decoding won’t be able to get started.

2) Its EXIT function doesn’t get to the (1,1) point in the EXIT chart, which means that even if iterative decoding could get started, it wouldn’t reach a low BER.

You can remove both problems by using a recursive URC, having a generator polynomial of the form [1, 0, 0, …, 0, 0] and a feedback polynomial of the form [1, ?, ?, …, ?, 1]. For example, a generator of [1, 0, 0] and a feedback of [1, 1, 1].
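That example URC, generator [1, 0, 0] with feedback [1, 1, 1] (i.e. 1/(1+D+D^2) in D notation), can be sketched as a shift register in Python; this is an illustrative implementation written for this reply, not code from the package:

```python
def urc_encode(bits):
    # Rate-1 URC: feedback a[n] = u[n] XOR a[n-1] XOR a[n-2] (taps [1,1,1]),
    # output y[n] = a[n] (generator [1,0,0] taps only the feedback node)
    a1, a2 = 0, 0   # a1 holds a[n-1], a2 holds a[n-2]
    out = []
    for u in bits:
        a = u ^ a1 ^ a2
        out.append(a)
        a2, a1 = a1, a
    return out

# Rate 1: one output bit per input bit. An impulse input excites the
# infinite, period-3 impulse response of 1/(1+D+D^2).
print(urc_encode([1, 0, 0, 0, 0, 0]))   # -> [1, 1, 0, 1, 1, 0]
```

The infinite impulse response produced by the feedback is exactly what makes the code recursive, and hence what lets its EXIT function reach the (1,1) point.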

Take care, Rob.

January 4th, 2016 at 10:38 pm

Hi Rob,

Thank you for your response. Since I wanted to write code for the turbo decoding of Serially Concatenated Convolutional Codes (SCCCs), I made a random choice of inner code polynomial. I mentioned in one of my previous comments that my inner code has to be a non-recursive rate-1 code, so I chose it without paying much attention to its characteristics.

I am glad that I made this choice, as I was unaware of the two points that you made. Can you elaborate on your first comment regarding the chart starting from (0,0)? What does it mean that iterative decoding won't be able to get started?

From the inner decoder, in iteration=1, we do compute the extrinsic_uncoded_llrs corresponding to the input of the inner encoder. As these are later used by the outer decoder, I was assuming that iterative decoding was in progress. Can you make some additional comments on this point?

In the meantime, I have compiled some Matlab code for the SCCC. I have uploaded its block diagram at the following link.

http://wikisend.com/download/301750/SCCC_no_interleaver.pdf

So far I have not added an interleaver between the two coders, and I am using your component_encoder as the RSC. I am not sure if I am making hard decisions on the correct term for computing my BER. Can you please take a look?

I plan to take the following course of action:

Step 1. Add interleaver between Inner Code and Outer Code.

Step 2. Replace Inner_Coder RSC with rate 1 non-recursive code.

Step 3. Add precoder before Inner_Code to see if this new system performs better than one in Step 2.

Thank you.

~N

January 5th, 2016 at 12:25 am

Hi Rob,

I have another question.

You mentioned that for BPSK over an AWGN channel, the DCMC capacity is around 0.3 at SNR=-6 dB.

My question is this: if I replace my inner code with a better code and keep BPSK+AWGN as it is, I should still get an area of 0.3, because this is a property of the channel. But with the choice of a better code, the starting point of the EXIT chart will not be (0,0).

Is this correct?

~N

January 5th, 2016 at 1:35 pm

Hi Nassi,

Actually, I’m not sure why your inner EXIT function is starting from the (0,0) point - I had thought that this could only happen to recursive inner codes, but I might be mistaken. You may like to double check that you are using main_inner.m to draw this EXIT function. Alternatively, you may like to try using other generator polynomials.

The problem with having an inner EXIT function that starts from the (0,0) point is that irrespective of how strong your channel LLRs are, the inner decoder will output extrinsic LLRs that have an MI of 0. When these LLRs are given to the outer decoder as a priori LLRs, it will also generate extrinsic LLRs having an MI of 0. In each iteration, the extrinsic LLRs will never have an MI of above 0. In other words, the iterative decoding process never gets started.

You should definitely include an interleaver between the outer and inner codes - this is essential for helping the iterative decoding process proceed smoothly.

I can see in your schematic that your inner code has a coding rate of 1/2. Normally, inner codes should have a coding rate of 1. In other words, they should be non-systematic. If you make your inner code non-systematic (as you suggest in step 2), then your schematic would become much simpler. You could make it even simpler by avoiding termination to begin with. Note that the rate 1 inner code *is* a pre-coder. There would be no need for an additional pre-coder, as you suggest in step 3.

Note that the area beneath the EXIT function of a rate-1 inner code depends only on the channel capacity and the position of the constellation points. It does not depend on the design of the rate-1 code or the bit labelling of the constellation points. However, these things do affect the shape of the EXIT function.

Take care, Rob.

January 5th, 2016 at 4:52 pm

Hi Rob,

Since I was considering a rate-1 non-recursive inner code with three memory elements, I considered all possible encoder polynomials and plotted the corresponding EXIT charts for the inner code using main_inner.m. You can take a look at them at

http://wikisend.com/download/450522/EXIT_Inner_Code_Four_Polynomial.PNG

All of them start from (0,0), so that is one issue where your comments would be very helpful.

Secondly, you mentioned that for an SCCC, a non-systematic convolutional code is usually used as the inner code. Moreover, you stated that a rate-1 inner code *is* a precoder.

Since I will be sticking to a non-recursive rate-1 convolutional code in step 2 (mentioned in my comment of Jan 4), I will be adding a precoder in step 3, so that combining the precoder with the non-recursive rate-1 convolutional code gives a recursive rate-1 convolutional code as the inner code.

A precoder always uses feedback - is this correct? Because with feedback, a precoder is a recursive rate-1 convolutional code.

Thank you,

~N

January 6th, 2016 at 12:00 pm

Hi Nassi,

You are right - it does make sense to have a recursive pre-coder in between your outer code and your non-recursive inner code.

I think that there is something wrong with your inner EXIT function simulation. I would expect at least one of these EXIT functions to start above (0,0) - in fact, I think that all of them should. I would suggest starting again with my main_inner.m and simply modifying my encoder and decoder functions to use a non-recursive design. That should give you an EXIT function that starts above (0,0). After that, you can gradually modify the code, testing after each step, until you get the design you want.

Take care, Rob.

January 6th, 2016 at 9:27 pm

Hi Rob,

I did a small experiment. I took your main_inner.m and performed four simulations to draw EXIT charts.

Simulation 1. main_inner.m is your code without any modification. This code is an RSC.

Simulation 2. Removed the systematic bits from the encoder. The decoder computes the aposteriori-uncoded-llrs using only the encoded bits.

Simulation 3. Defined a new generator polynomial. This is a rate-1/2 systematic, non-recursive code with generator polynomial [1, 1+D].

Simulation 4. Removed the systematic bits, i.e. the generator polynomial is [1+D].

The EXIT charts for the above four are uploaded at the following link (starting from the top left and going in the counter-clockwise direction).

https://www.sendspace.com/file/gk5m79

I noticed that when the inner code is a rate-1, non-recursive code, the aposteriori-uncoded-llrs are almost zero after the first iteration. This confirms what you said in your last comment about the iterative process not getting started.

I am not able to figure out what is wrong with my decoder. I changed my transitions matrix, in bcjr_decoder.m, to

% FromState, ToState, UncodedBit, Encoded1Bit, Encoded2Bit
transitions = [1, 1, 0, 0, 0;
               1, 2, 1, 1, 1;
               2, 2, 1, 1, 0;
               2, 1, 0, 0, 1];

and did not take into account column 4, i.e. the column corresponding to the Encoded1 bit (the systematic bit), when computing the gammas in bcjr_decoder.m.

Why is MAP decoding failing for this encoder?

What do you think.

Thank you,

~N

January 7th, 2016 at 12:15 am

Hi Nassi,

All of your results look good to me. I think that you have identified something that I did not previously know - namely, that an inner non-systematic non-recursive code has an EXIT function that starts from (0,0). Does your application allow you to use a systematic non-recursive code for your inner code? Or do you have to use a rate-1 code? If it is the latter, then you could try starting with a rate-1/2 systematic non-recursive code and then puncturing some of the systematic and some of the parity bits, such that you achieve an inner coding rate of 1.

Take care, Rob.

January 7th, 2016 at 4:09 pm

Hi Rob,

I realised that an ISI channel can be considered to be a non-systematic non-recursive convolutional code over the reals. In this paper,

M. Tüchler, R. Koetter and A. C. Singer, "Turbo equalization: principles and new results," IEEE Transactions on Communications, vol. 50, no. 5, pp. 754-767, May 2002,

the EXIT charts of different equalizer implementations are drawn in Fig. 9, and none of them starts from (0,0).

Is it possible that a non-systematic non-recursive rate-1 inner convolutional code over the binary field has an EXIT chart starting from (0,0), whereas that of a non-systematic non-recursive rate-1 inner convolutional code over the reals has a non-zero starting point?

It might be possible, since for SCCCs it has been mentioned specifically that the inner code should be recursive. This particular choice eliminates the (0,0) starting point, as we saw in Simulations 2 and 3.

Secondly, can you explain how to compute the transitions matrix for a code that is the concatenation of a precoder and a rate-1 non-systematic non-recursive convolutional code?

Thank you,

~N

January 8th, 2016 at 8:20 am

Hi Nassi,

I understand now why your inner code must be non-recursive (and non-systematic) - it is because your inner code is an equaliser. I think that you might be right that a binary versus real-valued input from the channel makes the difference between whether the EXIT function starts from (0,0) or not. My thinking is that in an ISI channel, one tap is stronger than the others and so it dominates the decision and allows some information to get through. By contrast, if the channel is somehow binary, then each tap has equal strength and they will each tend to cancel out the information that the other taps provide. This is just a guess, but it fits with my previous understanding. You may like to investigate this further.

You can combine the shift register of the pre-coder with that of the ISI channel by using D notation. For example, a 4-state precoder may have the generator 1/(1+D+D^2). Suppose that you use BPSK modulation for a simple 2-tap ISI channel having the polynomial (1+D). The result is an overall polynomial of (1+D)*(1/(1+D+D^2)) = (1+D)/(1+D+D^2). I'm not sure how the polynomials would combine if you were using a higher-order modulation scheme. You may need to use a non-binary precoder in this case. Alternatively, you could separate the precoder from the modulator by an interleaver and then iterate between separate precoder and equaliser decoders.

Take care, Rob.

January 8th, 2016 at 7:59 pm

HI Rob,

Yes my inner code is an ISI channel. You can take a look at my block diagram at

https://www.sendspace.com/file/zky6aa

where the ISI channel is a finite-length channel with real taps. Since the precoder is a recursive convolutional code in GF(2), I am still confused about how I should compute the transitions matrix.

You gave the example of a 4-state precoder 1/(1+D+D^2), where the addition is implemented in GF(2), i.e. mod-2 addition. If I consider a 2-tap ISI channel 1+D, then the overall polynomial is (1+D)/(1+D+D^2). The addition in the numerator is over the reals, whereas the addition in the denominator is in GF(2). This is why I am confused - I don't know how to handle this.

In this paper

McPheters, L. L.; McLaughlin, S. W.; Hirsch, E. C., "Turbo codes for PR4 and EPR4 magnetic recording," in Signals, Systems & Computers, 1998. Conference Record of the Thirty-Second Asilomar Conference on, vol. 2, pp. 1778-1782, 1-4 Nov. 1998,

doi: 10.1109/ACSSC.1998.751630

the authors have considered the same block diagram as mine and have considered a precoded PR4 channel as the inner code. Fig. 3 is a trellis representation of this channel. I don't understand how they arrived at this. Can you please take a look?

Thank you

~N

January 9th, 2016 at 5:58 am

Hi Nassi,

Ah yes - I didn’t think of the confusion that arises owing to the different types of addition in the precoder and in the ISI channel. I’m not sure how to resolve this. I can see that they have solved this problem in the paper you have referenced, but I’m afraid that I don’t have time to dig into how they have done it. I would suggest starting with separate iterative decoding blocks for the precoder and the equaliser. After you get that working, I think that you will have learned enough to figure out how to combine these two steps…

Take care, Rob.

January 11th, 2016 at 6:18 pm

Hi Rob,

I really appreciate the way you take out time to reply to the questions posted by so many people. I made some progress in the right direction and want to thank you for your help and guidance.

Thank you,

~N

January 15th, 2016 at 5:04 pm

Dear Rob,

Well, my system employs a rate-1/3 turbo code, from Li’s report, and the multiplexed coded bits are transmitted over an ISI channel. At the receiver I have a turbo-equalization setup, where the MAP equalizer exchanges soft information with the decoder, and the decoder is itself a turbo decoder.

Do you know how to optimize the iterations for turbo decoding and turbo equalization? What I mean is: how many times should the two component decoders iterate with each other before sending the information for the entire stream back to the MAP equalizer?

Thank you,

~N

January 15th, 2016 at 10:26 pm

Dear Rob,

I have another question. Since it has been shown in the research literature that turbo equalization of a precoded channel results in improved receiver performance in terms of BER vs SNR, I was wondering whether we should expect similar gains when the turbo code is replaced with an LDPC code?

What do you think?

Thank you,

~N

January 16th, 2016 at 6:12 am

Hi Nassi,

The turbo decoder provides strong error correction, whereas the equalizer provides weak error correction. Depending on how many taps there are in the channel, the complexity of the equalizer’s trellis may be higher than that of the turbo decoder’s trellis. Owing to these issues, I would recommend performing several iterations within the turbo decoder for each iteration with the equalizer. We wrote a paper that discusses further techniques for optimising this (you can imagine that our UEC outer code is removed and that our QPSK demodulator is replaced with your equalizer)…

http://eprints.soton.ac.uk/375712/

An LDPC decoder also provides strong error correction, like a turbo decoder. By using irregular coding techniques and sophisticated interleaver design techniques, you can achieve near-capacity and/or low error floor BERs using either LDPC or turbo codes. One thing to keep in mind however is that your equaliser is based on trellis decoding, like the turbo decoder. By contrast, LDPC decoding uses factor graphs and belief propagation. There’s no technical reason why one is better than another, but the turbo code is conceptually closer to your equaliser, which may make it feel more appropriate. One tangible benefit of this is that the same hardware could be used for processing the trellises of the equaliser and the turbo decoder, for example.

Take care, Rob.

January 19th, 2016 at 5:02 pm

Dear Rob,

I found a bug in the gamma computations of my MAP equalizer code while testing my turbo equalizer at high SNR.

For my channel (modulator + ISI_Channel), the fourth column of the transitions matrix corresponds to the ISIed_channel_output. Given a noisy observation ‘rx’ from the channel at time=k, I am computing the gamma for each row of the transitions matrix by

gammas(transition_index,k) = log(p(rx|X=x)) + log(P(X=x))

where x ∈ {0,1}, depending upon the input bit to my channel block.

The corresponding matlab code is:

for bit_index = 1:length(apriori_uncoded_llrs)

for transition_index = 1:size(transitions,1)

if transitions(transition_index,3)==0

gammas(transition_index,bit_index) = -log(1+exp(-apriori_uncoded_llrs(bit_index))) - abs(rx(bit_index)-transitions(transition_index,4))^2/N0;

else

gammas(transition_index,bit_index) = -log(1+exp(apriori_uncoded_llrs(bit_index))) - abs(rx(bit_index)-transitions(transition_index,4))^2/N0;

end

end

end

where I am defining llr = log(P(0)/P(1)).

The problem is with the first part of the summation, where I am computing log(P(0)) or log(P(1)) from the given apriori_uncoded_llrs. For very extreme values of the llrs, these computations return NaN entries, which is causing all the problems. How should I fix this issue?

Right now I can only think of limiting the dynamic range of the llrs. I can find a range in which log(P(0)) and log(P(1)) computed using the above code give real numbers.

I want to ask if this is how it is done in practice.

Thank you,

~N

January 19th, 2016 at 9:16 pm

Hi Rob,

To debug my code, I started experimenting with your main_ber for turbo codes.

I initialized the a priori llrs with perfect information, i.e. a_a = inf.*a_tx. When I did this, the gammas matrix had entries from the set {-inf, 0, +inf}. When these gammas were used to compute alphas, the second term passed to the jac(.,.) function, i.e.

alphas(transitions(transition_index,1),bit_index-1) + uncoded_gammas(transition_index, bit_index-1) + encoded_gammas(transition_index, bit_index-1)

turns out to be NaN. The only case where this does not happen is when all three terms in the above summation are either +inf or -inf.

I understand that if we have perfect information there is no point in performing decoding, but how do I handle this situation when the llrs tend to get large as the iterative process continues? I would like to mention that I have not yet included early termination in my matlab code. This means that my receiver performs a fixed number of iterations of turbo equalization and turbo decoding.

From this experiment I have the feeling that a dynamic range of llrs that does not include -inf and +inf needs to be specified for the iterative process.

What do you think about this?

Thank you

~N

January 20th, 2016 at 1:05 pm

Hi Nassi,

A simple option is to use 100 instead of infinity. This is typically very robust and imposes no performance degradation.

It is possible to write the BCJR code so that it can cope with infinities - the main thing is to avoid the subtraction of apriori information from aposteriori information, since infinity minus infinity is NaN. However, this requires quite a lot of effort, it typically increases the computational complexity of the algorithm (because some tricks for reducing the complexity are not compatible with infinities) and it doesn’t increase the performance of the decoder.
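As a sketch of the clipping approach (Python with numpy, assuming the llr = ln(P0/P1) convention used above; the names are illustrative):

```python
import numpy as np

LLR_CLIP = 100.0  # finite stand-in for infinity, as suggested above

def log_p0_p1(llr):
    """Return (log P(0), log P(1)) from llr = ln(P(0)/P(1)) without NaNs.
    log P(0) = -log(1+exp(-llr)) = -logaddexp(0, -llr), which stays
    numerically stable even for very large |llr|."""
    llr = np.clip(llr, -LLR_CLIP, LLR_CLIP)
    return -np.logaddexp(0.0, -llr), -np.logaddexp(0.0, llr)
```

Clipping before every gamma calculation keeps all terms finite, so no inf - inf subtraction can occur later in the BCJR recursions.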

Take care, Rob.

January 27th, 2016 at 12:21 am

Hi Rob,

I am trying to debug my turbo equalization setup. I plotted EXIT charts to check the validity of my detector and decoder and found them to be fine. When I combine these two blocks in an iterative manner and compute the BER, I am not sure whether it is correct or not. As I mentioned in my previous postings, my BER oscillates at a few SNRs and that is causing confusion.

If I replace my ISI channel with an AWGN channel, I know where the turbo cliff occurs for my turbo code. In my understanding, a BER that decreases monotonically with iterations in this region implies that the code is correct.

Can I use this information to test my turbo-equalization setup? My real issue is that I don't know the SNR range in which I should be testing my code.

One option is to run my code over a range of SNRs and see how the BER vs SNR plot comes out. But this approach has two problems:

1. This can take a very long time.

2. On average, BER vs SNR curves are smooth. But I have observed that for a few frames the error oscillates with iterations, which eventually smooths out as more frames are simulated. Therefore, averaging obscures this phenomenon.

How can I ensure that my code is bug free?

~N

January 29th, 2016 at 9:35 pm

Hi Nassi,

If the EXIT functions of the two decoders pass the averaging vs histogram test, then the next test is to plot the trajectories in the EXIT charts, to see if they match the EXIT functions. You can see an example in main_traj.m above. The best SNRs to consider are where the tunnel between the two EXIT functions becomes small, but not closed - for long interleaver lengths, this should give you a low BER.
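For reference, the averaging method mentioned above can be sketched in a few lines of Python (numpy assumed; this mirrors the idea of measure_mutual_information_averaging.m rather than reproducing it exactly, and uses the llr = ln(P0/P1) convention):

```python
import numpy as np

def measure_mi_averaging(llrs, bits):
    """Averaging-based estimate of the mutual information of some LLRs.
    With llr = ln(P(0)/P(1)), flip signs so every LLR points towards its
    true bit, then average 1 - log2(1 + exp(-llr))."""
    signed = llrs * (1 - 2*bits)                              # bits are 0/1
    return np.mean(1.0 - np.logaddexp(0.0, -signed) / np.log(2.0))

# Example: consistent Gaussian a priori LLRs of standard deviation sigma
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 100000)
sigma = 3.0
llrs = (1 - 2*bits) * (sigma**2 / 2 + sigma * rng.standard_normal(bits.size))
```

Comparing this value against the histogram method on the same LLRs is the consistency check discussed above.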

Take care, Rob.

February 1st, 2016 at 12:46 am

Hi Rob,

I looked at previous posts to find what can be done about a BER that does not decrease with increasing iterations and found this:

Rob Says:

December 27th, 2015 at 7:50 pm

Hi Nassi,

Whether the statement

magnitude( extrinsic_llr@iteration(i+1) ) >=magnitude( extrinsic_llr@iteration(i) )

is true or not depends on how many extrinsic LLRs you are considering at a time. If you are considering only a small number, then random chance can cause the magnitudes (and equivalently, the MIs) to go up or down from iteration to iteration. If you are considering long frames, or lots of frames together, then you should find that the magnitudes (and hence the MIs) only go up (or at least remain the same) from iteration to iteration.

Take care, Rob.

What I understood from this post is that a bug-free BCJR decoder/detector should perform two tasks:

1. At time \’k\’, if apriori_llr(k) is correct then the

abs(extrinsic_llr(k))>abs(apriori_llr(k))

2. At time \’m\’, if apriori_llr(m) is in error then the

abs(extrinsic_llr(m))<abs(apriori_llr(m))

Only in this way we can expect the over all error to go down as we iterate. This should be true regardless of whether BCJR is being used for detection or decoding purposes.

At moderately high snr, I generated a test_frame for which I was getting BER(i+1)>BER(i). I used h (input to cascade of modulator and channel) to compute the error after detection by

error_detector(i) = sum((h_p < 0) ~= h);

where h_p = h_e + h_a. h_e is the extrinsic information generated by the detector using the received stream and the apriori llr h_a.

For this frame, at iteration i=1

error_detector(1)=0.

I now use h_e to compute the extrinsic information of the decoder. What I have noticed is that although the decoder is getting correct apriori information (coming from h_e), the extrinsic information that it is computing is not correct, i.e. error_decoder(1)>0.

I am using your component_decoder.m for decoding purposes. In the next iteration,

error_detector(2)=0

error_decoder(2)>0

This means that the decoder is introducing errors that were removed by the detector.

How should I interpret these observations?

Thank you

~N

February 1st, 2016 at 8:16 pm

Hi Nassi,

It is not clear to me if k and m are bit indices, or iteration indices. If they are bit indices, then I don’t think that you can expect statements 1 and 2 to be always true. The LLR for a particular bit may change sign several times on the path towards the final decision. You can only expect the LLRs to improve *on average*.

My feeling is still that EXIT charts are the best way to debug iterative decoders. If the EXIT functions pass the averaging vs histogram test, then the next test is trajectories vs EXIT functions. If there is not a good match between the trajectories and the EXIT functions, then it suggests that there is a bug in the interaction between the two decoders. There are two typical things that go wrong:

- using the wrong interleaver design or interleaving when you should be deinterleaving and vice versa

- using LLR=ln(P0/P1) in one decoder and LLR=ln(P1/P0) in the other decoder - you can fix this by multiplying the LLRs by -1 as they pass through the interleaver and deinterleaver.
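The second fix can be sketched as follows (Python, with illustrative names): negate the LLRs as they pass through the interleaver and deinterleaver to convert between the two conventions.

```python
import numpy as np

def interleave(llrs, pi, flip_sign=False):
    """Permute extrinsic LLRs; optionally negate them to convert between
    the ln(P0/P1) and ln(P1/P0) conventions of the two decoders."""
    out = llrs[pi]
    return -out if flip_sign else out

def deinterleave(llrs, pi, flip_sign=False):
    """Invert the permutation (and optionally the sign convention)."""
    out = np.empty_like(llrs)
    out[pi] = llrs
    return -out if flip_sign else out
```

Applying the sign flip in both directions leaves a round trip unchanged, so the two decoders each see LLRs in their own convention.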

Take care, Rob.

February 2nd, 2016 at 4:44 pm

Hi Rob,

Here is the link to my trajectory plot

http://wikisend.com/download/808098/02022016_TE_Trajectory.PNG

I used main_outer.m to draw the EXIT chart for the code, using both the histogram and averaging methods for plotting. I modified main_inner.m to include my ISI channel and the corresponding MAP detector.

The channel's EXIT function starts from 0.6 at SNR = 5dB and so does the trajectory.

But I am not sure about the rest, since the EXIT chart for the code reaches the (1,1) point but that is not true for the trajectory. Can you please take a look and comment?

Thanks,

~N

February 2nd, 2016 at 9:58 pm

Hi Nassi,

This trajectory looks like it matches well with these EXIT functions. In order for the trajectory to reach (1,1), you need both of your inner and outer EXIT functions to reach (1,1). However, only your outer EXIT function is reaching (1,1), explaining why your trajectory is getting stuck - precoding will fix this, as we have discussed before (although I would recommend fixing your current bug before including precoding). Do you get similar plots when measuring the MIs for the trajectories using both the averaging and histogram methods? If so, then I think that your iterative decoding is working - I would suggest that the bug affecting the BERs may be in the decoding of your outer code in this case.

Take care, Rob.

February 3rd, 2016 at 4:34 pm

Hi Rob,

Thank you for explaining the trajectory behavior. Here is the plot using both methods:

http://wikisend.com/download/871066/02032016_System1_SNR5_FL10000.pdf

These look similar to me, but I would like to get your input.

To debug my system, I simulated the simplest case. I used your component_encoder.m only once to get the vectors a, c and e. I muxed them and passed them (without doing any kind of interleaving) over the ISI channel after BPSK modulation. At the receiver end, I implemented the MAP equalizer/detector and demuxed the extrinsic llrs, calling them a_c, c_c and e_c for decoding purposes. Please note that I initialized a_a = 0.

Since I am using only one encoder, there is no iterative (turbo) decoding going on in my decoder. I use the decoder once and send the computed extrinsic llrs a_e, c_e and e_e back to the channel detector after the mux operation.

Since I am using your component_encoder.m and component_decoder.m without any modifications, I can't say that these two blocks have any bugs. I checked my component_channel.m and component_detector.m again and found nothing wrong with them either.

As the EXIT chart and trajectory tests are coming out fine, it is really hard to find what is wrong that is causing this BER issue.

Any suggestions?

Thanks,

~N

February 10th, 2016 at 11:43 am

Hi Nassi,

These trajectories look sufficiently similar to me - I think that your iterative decoding is working correctly.

You should be using an interleaver after the turbo encoder, so that there is an interleaver and deinterleaver between your turbo decoder and your equaliser. I suspect that this might be causing the problem.

Take care, Rob.

February 20th, 2016 at 8:54 pm

Hi Rob,

I have a couple of questions:

1. How do we find the free distance of a turbo code? I came across this report “Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations”

http://ipnpr.jpl.nasa.gov/progress_report/42-122/122B.pdf

In section II, a turbo code structure is shown that comprises identical encoders. In section III, the authors mention that the free distances of the component encoders are 5, 2 and 2. The part that I don't understand is why the component encoders have different free distances when all have identical structure.

Moreover, how do we find the free distance of serially concatenated convolutional codes?

2. My second question is about the role of the INTERLEAVER in parallel concatenated convolutional codes (PCCC) (turbo codes) and serially concatenated convolutional codes (SCCC).

In my understanding, for PCCC the interleaver scrambles the input stream so that the parity bits from the component encoders are randomized. This ensures that the probability of getting low-weight codewords simultaneously at the outputs of the two encoders is very low. This increases the free distance of the turbo code. At the receiver side, the interleaver takes care of the burst errors caused by the individual MAP decoders.

What exactly is the role of the interleaver in the case of SCCC? Is it there to take care of burst errors only? Or is it more than that?

Thank you,

~N

February 25th, 2016 at 11:43 am

Hi Nassi,

1. It is not straightforward to find the free distance of a turbo code. It depends very much on the design of the interleaver. Brute force methods have very high complexity, but can reveal the full distance spectrum of the turbo code (this quantifies how many pairings of possible turbo code outputs are separated by each possible Hamming distance). Some reduced-complexity efforts have concentrated on only identifying the free distance, or on identifying bounds on the free distance. Perhaps the component encoders have different free distances because the systematic bits are included with only one of them? The paper that I recommend for free distances of parallel and serial concatenations is “Coding theorems for ‘turbo-like’ codes”. This paper treats the interleaver as having a random design in every frame, rather than a fixed design. This allows the ‘average’ free distance to be computed, without the complication of the interleaver design.

There are a number of roles to the interleaver - the design of an interleaver must do well at all of them:

- deal with burst errors, as you say

- increase the free distance

- mix up the LLRs so that adjacent LLRs do not depend on each other, as is assumed by the component decoders

- make the inputs to parallel concatenated codes different to each other.

Take care, Rob.

March 7th, 2016 at 5:32 pm

Hi Rob,

I want to raise the error floor of your rate 1/3 turbo code. For this I need to puncture the parity bits. I understand that if rate1 > rate2, then for a given SNR, BER(code with rate2) < BER(code with rate1). I have a couple of questions in this regard:

1. For a given rate 1/n code, is it possible to achieve any rate through puncturing? For example, with your rate 1/3 code, can I achieve 2/3, 4/5, …, m/n for any m < n?

2. Consider two turbo codes TC1 and TC2 that have the same rate but differ in the number of memory elements in the encoder, i.e. #memory(TC1) < #memory(TC2). I puncture both of them to achieve the same rate m/n and send them over an AWGN channel. Intuitively, for a given SNR, BER(punctured TC2) < BER(punctured TC1). Also TC1 (the code with fewer memory elements in its encoder) should have a higher error floor at a lower SNR compared to the error floor of TC2. Is this intuition correct?

3. I want to change the rate of your turbo code, using a fixed puncturing pattern to achieve this. How can I modify your code, both encoder and decoder, to do this? Which matlab commands will do the job if I want a code rate of 2/3?

Thank you,

~N

March 7th, 2016 at 5:37 pm

Dear Rob,

I noticed a typo in bullet 2. I am rewriting it:

2. Consider two turbo codes TC1 and TC2 that have the same rate but differ in the number of memory elements in the encoder, i.e. #memory(TC1) < #memory(TC2). I puncture both of them to achieve the same rate m/n and send them over an AWGN channel.

Now, intuitively, for a given SNR, BER(punctured TC2) < BER(punctured TC1). Also TC1 (the code with fewer memory elements in its encoder) should have a higher error floor at a lower SNR compared to the error floor of TC2. This should be true for both punctured and non-punctured scenarios.

Is this intuition correct?

Is this intuition correct?

~N

March 8th, 2016 at 9:05 pm

Hi Nassi,

1. You can use puncturing to achieve any coding rate greater than the starting coding rate. The LTE puncturer supports this, as you can see in my code from…

http://users.ecs.soton.ac.uk/rm/wp-content/get_LTE_puncturer.m

2. Increasing the memory of the turbo code potentially allows a better distance spectrum to be achieved, provided that a good interleaver design can be found. If so, then the error floor will be improved. Note though that increasing the memory of the turbo code may change the shape of the EXIT functions and may change the turbo cliff threshold SNR.

3. I would suggest starting with my LTE puncturer code, as I have linked to above.
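To give a feel for fixed-pattern puncturing, here is a toy Python sketch (not the LTE pattern from get_LTE_puncturer.m - the pattern and names here are just for illustration) that turns a rate-1/3 stream into rate 1/2 by keeping every systematic bit and alternating the two parity streams, with the punctured positions depunctured to zero-valued LLRs at the receiver:

```python
import numpy as np

def puncture(systematic, parity1, parity2):
    """Rate 1/3 -> 1/2: keep all systematic bits, alternate the two parity
    streams (a common toy pattern, not the exact LTE one)."""
    parity = np.where(np.arange(len(parity1)) % 2 == 0, parity1, parity2)
    return np.concatenate([systematic, parity])

def depuncture_llrs(llrs, K):
    """Split received LLRs back into three streams; punctured positions get
    LLR = 0, i.e. no channel information."""
    sys_llrs = llrs[:K]
    parity_llrs = llrs[K:]
    p1 = np.zeros(K)
    p2 = np.zeros(K)
    p1[0::2] = parity_llrs[0::2]      # even positions came from parity1
    p2[1::2] = parity_llrs[1::2]      # odd positions came from parity2
    return sys_llrs, p1, p2
```

The zero LLRs at punctured positions simply contribute nothing to the gammas, so the BCJR decoders need no other modification.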

Take care, Rob.

March 8th, 2016 at 11:16 pm

Hi Rob,

Thank you for your reply. Actually I want to raise the error floor of the encoding scheme to see the effect of the precoder. One could say that I am looking for a bad encoding scheme, so that I can see what gains I should expect from the precoder.

Since the codes in the published work are all about improving the coding gain and interleaver gain, I thought that puncturing might help me.

I simulated the BER vs SNR for the rate 1/3 convolutional code (54,64,74) (in octal) and then punctured it to make a rate 1/2 code. The curve for the punctured code is merely a shifted version of that of the original rate 1/3 code. So far I have simulated 10^6 bits and do not see an error floor region.

So what do I need to do to raise the error floor? Is puncturing not an answer to this? Or should I change the encoder polynomials so that dmin is reduced? Can you suggest any convolutional code that has an error floor region at a BER of 10^-4 or 10^-5?

Thank you

~N

March 8th, 2016 at 11:35 pm

Hi Rob,

I forgot to mention that the BER vs SNR for the convolutional code (54,64,74) is over an AWGN channel and I am using a Viterbi decoder to count the errors. So nothing fancy is going on here.

This is what I plan to do:

1. First establish the turbo cliff and error floor region of my encoding scheme. I will use Viterbi decoding in this step.

2. Introduce the ISI channel and use turbo equalization at the receiver to see the effect of a non-recursive rate-1 inner code, i.e. the ISI channel.

3. Cascade the precoder with the ISI channel and then see what happens to the error floor compared to the one achieved in step 2.

I need to start with a not-so-good encoding scheme that has a high error floor. This will reduce the simulation time.

Thank you

~N

March 9th, 2016 at 9:01 pm

Hi Nassi,

Error floors are caused by two effects and I think that you might be confused between them:

1) A high error floor will occur if the iterative decoding scheme has an EXIT chart tunnel that does not allow convergence to the (1,1) point, no matter how high the SNR is. For example, this can be observed in the serial concatenation of a convolutional decoder with an equaliser, since the equaliser does not have an EXIT function that reaches the (1,1) point.

2) If the iterative decoding scheme does allow convergence to the (1,1) point, then the error floor will be low and will depend on the free distance of the code. This is the case for the LTE turbo code, since both of its component decoders have EXIT functions that reach the (1,1) point. This will also be the case for a serial concatenation of a turbo decoder with an equaliser.

A precoder may be employed to solve the first type of error floor. I think you may be thinking of this…

There is one other thing that you could do to make the turbo code have the first type of error floor - you could replace its recursive convolutional codes with non-recursive ones. However, this would damage the performance without offering any advantage. At least by replacing the turbo code with a convolutional code you would benefit from a reduced complexity.

Take care, Rob.

March 9th, 2016 at 9:48 pm

Hi Rob,

Since the error floor depends on two factors, 1) dmin and 2) the multiplicity of codewords that have dmin (also known as the interleaver gain), I was thinking of changing the generator polynomials of codes found in the literature to raise the error floor. I think replacing the recursive code with a non-recursive one will increase the multiplicity factor, which will cause the error floor to rise. So this also sounds like a valid option.

Also I plan to replace the turbo code with a high-rate convolutional code to reduce the complexity at the receiver end. Right now I don't have an iterative receiver. So it is the dmin that needs to be reduced to raise the error floor.

I am simulating a toy code. I encode my bit stream using the CC (54,64,74) (in octal), multiplex the 3 streams of parity bits, modulate and send them over an AWGN channel. The decoding will be done only once, after which a hard decision will be made.

My question is what changes I need to make to your component decoder. Notice that the multiplexed stream has no systematic bits. So far I have changed your transitions matrix. I understand that there will be no uncoded_gamma computation. Should I compute 3 extrinsic information vectors separately, one for each parity bit, make hard decisions on each of them separately and take a majority vote to decide whether the bit was 0 or 1?

Thank you,

~N

March 10th, 2016 at 4:49 pm

Hi Rob,

You can find the details of the code at this link:

http://www.mathworks.com/matlabcentral/fileexchange/25859-convolutional-encoder-decoder-of-rate-1-n-codes

I am using this setup and trying to come up with the MAP decoder for this code. The author has used Viterbi decoding for his simulations. Intuitively, MAP decoding with hard decisions and Viterbi decoding should give similar curves.

One more point: the author has taken the code rate into account when computing Eb/N0. This is different from the way you define SNR in your code. Is it because you send the streams over the channel without any multiplexing?

If I follow your code and define SNR your way, send the three parity streams separately over AWGN, use three decoders (one for each stream), quantize the extrinsic llrs for each parity stream and at time=k take a majority vote amongst p1_k, p2_k and p3_k to decide the bit at time=k, then where would the curve for this scheme stand with respect to the one obtained by the author?

~N

March 11th, 2016 at 1:03 am

Hi Rob,

I realized that separate decoders were the wrong approach. I worked on my component_decoder and generated some curves (linked below):

http://wikisend.com/download/739324/Vit_Map.PNG

With one-shot decoding and zero apriori information, MAP and Viterbi decoding should match. I think these curves confirm this.

Now if I puncture, these curves should move to the right. But what about the error floor? It will not rise with puncturing. Is this correct?

~N

March 12th, 2016 at 11:51 pm

Hi Rob,

I understand how to compute the extrinsic_encoded_llrs (required for the turbo equalization setup) when the code rate is 1/2. But how can we compute this for a rate 1/n code with n > 2? To be specific, I need to compute the extrinsic_encoded_llrs for the convolutional code (54,64,74)

http://www.mathworks.com/matlabcentral/fileexchange/25859-convolutional-encoder-decoder-of-rate-1-n-codes

There is no systematic information here. Also note that the encoder generates 3 parity streams p1, p2 and p3. I will explain my attempt with respect to p1.

I computed uncoded_gammas and encoded_gammas. For the uncoded_gammas I used the soft information of p2 and p3, whereas for the encoded_gammas I used p1. I computed alphas and betas the way you do in your component_decoder. For computing the deltas I used only the uncoded_gammas.

My code is uploaded at

http://wikisend.com/download/927416/component_decoder_apriori.m

When I use this in my turbo equalization setup, the BER vs SNR makes no sense. So I suspect that there is something wrong with the way I am computing this part. Can you please take a look and make some suggestions?

Thank you,

~N

March 13th, 2016 at 10:54 am

Hi Nassi,

In my Matlab code, I combine the systematic LLRs with the apriori message LLRs, before providing them to the BCJR decoder. This is as opposed to providing the systematic LLRs to the BCJR decoder separately from the apriori message LLRs. If you are replacing a turbo decoder with a convolutional decoder, then you won’t have any apriori LLRs. However, you may still have the systematic LLRs, in which case you can provide these to the uncoded apriori input of the BCJR decoder - this means that you will still have the uncoded_gamma calculation.

Rather than using a majority vote, it would be better to let the BCJR combine the information from the various sources and then produce an aposteriori uncoded output - you can then take a hard decision on this. This will give much better BER performance than using three separate decoders and taking a majority vote at the end.

In my code, I am using SNR = Es/N0, where Es is the energy per symbol. Here, Es = Eb*R*log2(M), where R is the coding rate and M is the number of constellation points.
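As a sketch, the conversion between the two SNR definitions is a one-liner in Python (the function name is mine), using the relation Eb = Es/(R*log2(M)):

```python
import math

def esn0_to_ebn0_db(esn0_db, rate, M):
    """Convert SNR = Es/N0 (dB) to Eb/N0 (dB) via Eb = Es/(R*log2(M))."""
    return esn0_db - 10.0 * math.log10(rate * math.log2(M))
```

For example, for a rate-1/3 code with BPSK (M = 2), Eb/N0 sits about 4.77 dB above Es/N0.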

I’m not entirely sure how puncturing would affect your error floor - it depends on whether your error floor is caused by an inability to reach the (1,1) point in the EXIT chart, or if it is caused by the free distance properties of the code. In the latter case, it depends on how the puncturing affects the free distance properties - I’m afraid that I don’t have any intuition for this, other than that puncturing will damage the free distance properties.

For the case where you have more than one set of encoded bits (e.g. R = 1/3), you will have more than one set of encoded extrinsic LLRs. Each additional encoded bit corresponds to an additional column in the transitions matrix and an additional set of calculations, based on a different set of encoded_gamma. To be specific, I think that your transitions matrix should have the following columns:

- from state

- to state

- uncoded bit

- encoded bit 1

- encoded bit 2

- encoded bit 3

Here is how to compute the BCJR algorithm:

- Compute uncoded_gamma

- Compute encoded_gamma1

- Compute encoded_gamma2

- Compute encoded_gamma3

- Compute alpha as a function of all gammas

- Compute beta as a function of all gammas

- Compute uncoded_delta as a function of all gammas except for uncoded_gamma

- Compute encoded_delta1 as a function of all gammas except for encoded_gamma1

- Compute encoded_delta2 as a function of all gammas except for encoded_gamma2

- Compute encoded_delta3 as a function of all gammas except for encoded_gamma3

- Compute uncoded_extrinsic_llrs as a function of uncoded_delta

- Compute encoded_extrinsic_llrs1 as a function of encoded_delta1

- Compute encoded_extrinsic_llrs2 as a function of encoded_delta2

- Compute encoded_extrinsic_llrs3 as a function of encoded_delta3
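The transitions matrix with the six columns listed above can be generated mechanically. Here is a Python sketch for the non-recursive rate-1/3 code (54,64,74) discussed earlier, assuming the poly2trellis-style convention in which the most significant generator bit taps the current input bit (treat this as one plausible convention, not the only one):

```python
import numpy as np

# Assumed generators (octal) for the non-recursive rate-1/3 code discussed
# above, with constraint length 6 (memory 5).
GENERATORS_OCT = ('54', '64', '74')
K = 6                                 # constraint length
MEMORY = K - 1

def build_transitions():
    """Build a transitions matrix with one row per (state, input) pair and
    columns: from-state, to-state, uncoded bit, encoded bits 1..3."""
    gens = [int(g, 8) for g in GENERATORS_OCT]
    rows = []
    for state in range(2**MEMORY):        # state = previous MEMORY input bits
        for u in range(2):
            register = (u << MEMORY) | state   # [u_k, u_{k-1}, ..., u_{k-5}]
            outputs = [bin(g & register).count('1') % 2 for g in gens]
            next_state = register >> 1         # shift the oldest bit out
            rows.append([state, next_state, u] + outputs)
    return np.array(rows)

transitions = build_transitions()
```

Each of the 32 states then has exactly two outgoing transitions, and the three encoded-bit columns feed the encoded_gamma1..3 calculations in the steps above.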

Take care, Rob.

March 14th, 2016 at 5:31 pm

Hi Rob,

I understood your comments with one exception. It is the first paragraph, where you said: “If you are replacing a turbo decoder with a convolutional decoder, then you won’t have any apriori LLRs. However, you may still have the systematic LLRs, in which case you can provide these to the uncoded apriori input of the BCJR decoder - this means that you will still have the uncoded_gamma calculation.”

I understand that I don’t have apriori LLRs for my decoder. But how can I have systematic LLRs? This particular convolutional code (54,64,74) has no systematic information. The three encoders generate three parity streams p1, p2 and p3. Assuming that we have an AWGN channel, I can perform soft demodulation to get p1_c, p2_c and p3_c. These go into apriori_encoded_llrs and are used for computing the three sets of encoded_gammas.

As opposed to your code, where the msg stream ‘a’ is explicitly transmitted and a_c is computed, no systematic information is transmitted with this (54,64,74) code. That is why I don’t understand what goes into the uncoded_gammas in this case.

Therefore computing the three encoded_extrinsic_llrs is not obvious to me here.

Thank you,

~N

March 16th, 2016 at 12:50 pm

Hi Nassi,

Sorry, I was assuming that you are using a systematic convolutional encoder. If you are using a non-systematic convolutional encoder, then you won’t have any apriori uncoded LLRs, as you say. You can just set this input to your BCJR decoder to be a vector full of zeros.

Take care, Rob.

March 18th, 2016 at 12:33 am

Hi Rob,

Thank you for clarifying this point. I wanted to learn how to plot the free distance asymptote of a convolutional code. For this I considered the example of the (2,1,14) convolutional code mentioned in Liang's report.

I used following matlab commands:

trellis = poly2trellis(15,[56721 61731]);
n = 4;
spec = distspec(trellis,n); % returns the first four entries of the distance spectrum

From the spec structure I got:

dfree: 18
weight: [187 0 1034 0]
event: [33 0 136 0]

spec.weight is the total number of information bit errors in the error events enumerated in spec.event.

spec.event lists the number of error events for each distance between spec.dfree and spec.dfree+n-1.

Does this mean that Nd=33 (the number of error events at distance=dfree=18) and Wd=187? I am borrowing this notation from the paper titled "A distance spectrum interpretation of Turbo Codes".

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=556666

This paper uses the very same (2,1,14) code to plot the free distance asymptote. They use W0_free = 137 for their computation. How did they come up with this number? From the description, I thought that the entries in spec.event enumerate the number of events that result in the distances enumerated in spec.dist.

For example, for distance=dfree=18, I would have set W0_free=33.

Can you please help me with this problem?

Thank you

~N

March 18th, 2016 at 4:36 pm

Hi Rob,

Please ignore my last comment. Generator polynomial [56721 61731] was the wrong choice.

However, when I use the generator polynomial [4762 7542] (dfree=14) in Matlab,

trellis = poly2trellis(11,[4762 7542])

??? Error using ==> poly2trellis

Generator describes code of less than specified constraint length.

Can you let me know why I am getting this problem?

Thank you

~N

March 18th, 2016 at 11:57 pm

Hi Rob,

Can you please give the generator polynomial of the (2,1,14) convolutional code for which the free distance asymptote is plotted in Fig 2.3 of Liang's report?

One needs to know dfree and Nfree (the multiplicity of dfree codewords) to plot this curve:

Pfree = Nfree*Q(sqrt(dfree*EbN0))

Fig 4 in paper titled “A distance spectrum interpretation of Turbo Codes”

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=556666

gives Pfree = 137*Q(sqrt(18*EbN0)),

but once I plot it in Matlab, this does not match the reported curve. Can you please clarify this point? I will be grateful.

Thank you

~N

March 21st, 2016 at 12:54 pm

Hi Nassi,

Figure 2.3 from Li Liang’s report comes from “Trellis and Turbo Coding” by Christian B. Schlegel, Lance C. Perez. I expect that you will be able to find the generator polynomials in there.

I think that you are using the poly2trellis function incorrectly. The second input should be a matrix, rather than a vector.

In the Pfree equation, note that Eb/N0 should not be provided in dB, e.g. if Eb/N0 = 3 dB, then the linear value of Eb/N0 is approximately 2.

Take care, Rob.
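Rob's dB-conversion point can be sketched numerically. The following Python snippet evaluates the Pfree equation quoted in the question above, with Q() implemented via erfc; dfree = 18 and the multiplicity 137 are the values discussed in this thread, and the function name is illustrative rather than part of the Matlab package:

```python
import math

def q_func(x):
    # Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def free_distance_asymptote(eb_n0_db, dfree, nfree):
    # Convert Eb/N0 from dB to a linear ratio BEFORE using it in the formula,
    # as Rob notes above: 3 dB corresponds to a linear ratio of ~2.
    eb_n0 = 10.0 ** (eb_n0_db / 10.0)
    return nfree * q_func(math.sqrt(dfree * eb_n0))

# Example: the asymptote quoted in the thread, Pfree = 137*Q(sqrt(18*EbN0))
p = free_distance_asymptote(3.0, 18, 137)
```

Forgetting the dB conversion stretches the curve horizontally by a factor of ten, which is one common reason a plotted asymptote fails to match a published figure.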

March 21st, 2016 at 9:58 pm

Hi Rob,

Thank you for your reply.

Regarding your comment on poly2trellis(.), I checked matlab documentation

http://www.mathworks.com/help/comm/ref/poly2trellis.html?s_tid=srchtitle

For a rate k/n non-recursive code

trellis = poly2trellis(ConstraintLength,CodeGenerator)

ConstraintLength: 1xk vector

CodeGenerator: kxn matrix

Since I am looking at (2,1,L) codes, k=1. Therefore ConstraintLength=L is a scalar and CodeGenerator is a vector instead of a matrix.

Thank you for providing the reference for the (2,1,14) code. The generator polynomial is indeed [56721 61713].

To find the distance spectrum, I did the following in Matlab:

trellis = poly2trellis(15,[56721 61713])

This returns a valid trellis structure. To find the distance spectrum,

spec = distspec(trellis,1)

returns dfree=18, weight=187, event=33.

The paper that I mentioned, "A distance spectrum interpretation of Turbo Codes", used a multiplicity of 137 for plotting the free distance asymptote. Since Fig 2.3 in Liang's report is an exact replica of Fig 4 in the above mentioned paper, I thought you could correct me where I am making a mistake. I get the same slope for my free distance asymptote, since dmin=18. It's just that using the factor of 33 (that I obtained from the distspec(.) command) lowers the asymptote compared to the one obtained when 137 is used, as in Fig 2.3 of the report.

Thank you

~N

March 22nd, 2016 at 9:21 am

Hi Nassi,

It sounds like you have figured this out - well done!

Take care, Rob.

March 24th, 2016 at 5:42 pm

Hi Rob,

Thank you for being so patient with my questions. I am very thankful to you for helping me with plotting EXIT charts. They are such a big help.

I want to compute BER estimates from my EXIT charts. This paper

S. ten Brink, “Convergence behavior of iteratively decoded parallel concatenated codes”, IEEE Trans. Commun., vol. 49, no. 10, pp. 1727-1737, Oct. 2001

http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=957394

talks about it.

I was wondering if you could help me with plotting BER contours on EXIT charts? I am looking for something similar to Fig. 9 in the above mentioned paper.

Thank you

~N

March 26th, 2016 at 8:25 pm

Hi Nassi,

I’m afraid that I have never tried to produce plots similar to those contour plots - I have always just plotted the BER using Monte Carlo simulation.

Along those contours the mutual information of the a posteriori LLRs is constant. The a posteriori LLRs are obtained by adding the a priori LLRs to the extrinsic LLRs. The MI of the a priori LLRs is plotted on one axis of the EXIT chart, while the MI of the extrinsic LLRs is plotted on the other axis. The MI of an LLR vector that is obtained as the sum of two LLR vectors having MIs of I_1 and I_2 is given by

I_sum = J(\sqrt{[J^{-1}(I_1)]^2+[J^{-1}(I_2)]^2}),

where the J() and J^{-1}() functions are defined in the appendix of…

Ten Brink, S., Kramer, G., & Ashikhmin, A. (2004). Design of low-density parity-check codes for modulation and detection. IEEE Trans. Commun., 52(4), 670–678. doi:10.1109/TCOMM.2004.826370

Take care, Rob.
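The I_sum formula above can be evaluated numerically. Below is a Python sketch using the closed-form approximation of J() and its inverse with the constants H1, H2, H3 that, to my understanding, come from the approximation in the appendix of the ten Brink, Kramer and Ashikhmin paper cited above; treat this as an illustrative sketch of that approximation rather than an exact evaluation of J():

```python
import math

# Approximation constants (assumed from ten Brink, Kramer & Ashikhmin, 2004)
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def j_func(sigma):
    # Approximate mutual information of consistent Gaussian LLRs
    # with standard deviation sigma
    if sigma <= 0.0:
        return 0.0
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def j_inv(mi):
    # Algebraic inverse of j_func: LLR standard deviation giving MI = mi
    if mi <= 0.0:
        return 0.0
    return (-(1.0 / H1) * math.log2(1.0 - mi ** (1.0 / H3))) ** (1.0 / (2.0 * H2))

def mi_of_sum(i1, i2):
    # MI of the sum of two independent consistent-Gaussian LLR vectors,
    # i.e. I_sum = J(sqrt(J^-1(I_1)^2 + J^-1(I_2)^2))
    return j_func(math.sqrt(j_inv(i1) ** 2 + j_inv(i2) ** 2))
```

Along a BER contour on the EXIT chart, mi_of_sum(I_apriori, I_extrinsic) would be held constant, which matches Rob's description of the a posteriori LLRs being the sum of the a priori and extrinsic LLRs.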

March 27th, 2016 at 5:42 pm

Hi Rob,

For the computer implementation of the MI, I used the approximation mentioned in the appendix of Ten Brink, S., Kramer, G., & Ashikhmin, A. (2004). Design of low-density parity-check codes for modulation and detection. IEEE Trans. Commun., 52(4), 670–678. doi:10.1109/TCOMM.2004.826370, to obtain the MI vs sigma curve.

The mapping between the parameters sigma and MI is also depicted in Fig. 7 of "Turbo equalization: principles and new results" by M. Tuchler, R. Koetter and A. C. Singer, IEEE Transactions on Communications. In this paper, the authors did not mention the numerical method they used to obtain this mapping.

There is a significant difference between the Tuchler curve and the one I obtained from ten Brink's approximation. Can you please comment on this?

Thank you,

~N

March 28th, 2016 at 2:00 pm

Hi Rob,

You can take a look at curves here

http://wikisend.com/download/645696/Mapping_MIvsSigma.PNG

Tuchler's curve is more stretched out in the vertical dimension. For example, at MI=0.8,

sigma_tuchler = 10

sigma_ten = 3.2

Since these sigmas go into estimating the probability of error, Pb, from the EXIT chart,

Pb = 0.5 erfc(sqrt(sigma_a+sigma_e))

(ref: Ten Brink, 2001, Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes, eq(31))

I am not sure how this difference will affect these estimates.

Thank you,

~N

March 28th, 2016 at 3:16 pm

Hi Nassi,

Your plot of Stephan ten Brink’s curve has negative values for MI, which doesn’t seem right to me. I suspect that you may have a typo in your Matlab code for this. It is ten Brink’s curve that I have always used and I have never had reason to suspect that it is inaccurate.

Take care, Rob.

March 30th, 2016 at 8:52 pm

Hi Rob,

I fixed that typo. Now I am getting the right curve.

I want to ask about plotting an EXIT chart for a special scenario. As you know, I am performing turbo equalization (TE). For my application, my channel detector (SISO module) happens to have two sets of apriori information, instead of the single apriori vector used in conventional TE setups.

One apriori information vector comes from the decoder (as in a conventional TE setup), while the second comes from another source. This is unique to my application. For computing the gammas in my channel_detector, I use a product of two modified softmax(.) functions, one for each apriori-information vector.

Now my question is: given this scenario, how should I plot the EXIT function for my channel?

I am guessing that:

1. Since the channel detector has access to more information, because of the two sets of apriori information obtained from different sources, the extrinsic_llrs coming from the channel detector would be more reliable. In terms of the EXIT function, this means that the curve will be higher than the one obtained for a conventional channel detector that has access to only the one set of apriori information coming from the decoder.

But I am not sure about the slope here. In other words, will the curve for this case be a vertically shifted version of the curve that I obtain from your inner.m code, which uses one apriori-information vector, or will it be something different?

Can you guide me on how I should approach drawing the EXIT chart for my application?

Thank you,

~N

March 31st, 2016 at 9:36 am

Hi Nassi,

I would expect your EXIT curve to be improved by the additional information, as you suggest. The key to drawing the EXIT chart is the interleaver/deinterleaver pair that passes the iteratively-exchanged LLRs. You should generate apriori LLRs as if they are provided by the interleaver/deinterleaver and you should measure the MI of the extrinsic LLRs that are provided to the interleaver/deinterleaver.

Take care, Rob.
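Rob's suggestion of generating a priori LLRs as if they came from the interleaver/deinterleaver can be sketched in Python. This is a hedged, minimal stand-in for generate_llrs.m: the consistent-Gaussian model (mean ±sigma²/2, variance sigma²) and the sign convention that positive LLRs favour a zero-valued bit are assumptions of this sketch, not a transcription of the Matlab file:

```python
import random

def generate_apriori_llrs(bits, sigma, rng=random):
    # Consistent Gaussian a priori LLR model commonly used for EXIT charts:
    # llr ~ N(+sigma^2/2, sigma^2) for bit 0 and N(-sigma^2/2, sigma^2) for
    # bit 1, so that llr approximates ln(P(b=0)/P(b=1)).
    mu = sigma * sigma / 2.0
    return [mu * (1 - 2 * b) + sigma * rng.gauss(0.0, 1.0) for b in bits]
```

The EXIT function of the detector is then traced by sweeping sigma (equivalently the a priori MI) over its range, feeding these LLRs in where the deinterleaver output would normally arrive, and measuring the MI of the extrinsic LLRs that come back out.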

April 4th, 2016 at 9:46 am

Hi Rob,

I am now trying to re-implement the paper "Differential Turbo Coded Modulation with APP Channel Estimation" by Sheryl L. Howard and Christian Schlegel. I would like to use this in an OFDM system.

As a first step, I have done the encoding part in Matlab. But the decoding looks complicated to me. Could you suggest how to start? The inner differential 8PSK decoder uses the BCJR algorithm. I am considering an AWGN channel, so I am not considering channel estimation at this stage.

April 4th, 2016 at 4:32 pm

Hi Rob,

To better explain the structure of my receiver, I have uploaded a block diagram of my receiver at the following link.

http://wikisend.com/download/699064/Rob_April4.png

LLRs_1 act as one set of apriori information. These are not updated in the iterative process. The other set is LLRs_2. These are updated in the iterative process, which has an interleaver/deinterleaver module.

With this block diagram, how can I plot the EXIT chart for the iterative process?

Thank you,

~N

April 5th, 2016 at 8:55 am

Hi Nassi,

The EXIT chart should plot the MI of LLRs_2. The MI of LLRs_1 will be a parameter that affects the EXIT function of your channel detector - just like how the SNR of a channel affects the EXIT function of an inner decoder.

Take care, Rob.

April 5th, 2016 at 2:16 pm

Hello Shruti,

I found the following paper to be very useful for implementing SISO DPSK demodulation…

Hoeher, P., & Lodge, J. (1999). Turbo DPSK: Iterative differential PSK demodulation and channel decoding. IEEE Trans. Commun., 47(6), 837–843.

I know that Alastair Burr has a good paper on how to use this for channel estimation.

I’m afraid that I can’t find the Matlab code that I wrote for Turbo DPSK.

Take care, Rob.

April 6th, 2016 at 4:28 pm

Hi Rob,

Thank you so much for your kind answer. I have done the encoding part and I am able to get the symbols, but I do not know the trellis to be plotted. Also, I am unaware of the BCJR algorithm for a rate-2/3 code.

I do not need channel estimation at this first stage, as I am considering an AWGN channel. I am planning to implement this along with OFDM. But I am totally stuck on the decoding part. Thank you so much Rob.

Regards, Shruti

April 7th, 2016 at 1:56 pm

Hi Shruti,

My advice would be to start without OFDM. Instead, I would concentrate on reproducing the simplest results in the Turbo DPSK paper.

Take care, Rob.

April 7th, 2016 at 3:55 pm

Hi Rob,

Thanks for the reply. My main confusion is how to create the trellis table and then how to use the BCJR algorithm for 8PSK symbols.

Do you have any suggestions for this? The inner decoder is a D8-PSK APP soft-decision decoder. This uses the BCJR (forward-backward) algorithm operating on the 8-state trellis of the differential 8PSK code. The outer decoder is an APP soft-decision decoder. Sorry for bothering you, and thank you.

Regards,

Shruthi

April 8th, 2016 at 4:17 am

Hi Rob,

I need some troubleshooting suggestions for drawing the trajectory corresponding to my channel detector.

I am getting different curves with the histogram and averaging methods, which indicates an error.

I was hoping that testing at high SNR would be informative, but this was not the case, as the extrinsic_uncoded_llrs match the uncoded_bits - so it is like getting zero errors. One thing that I have noticed is that the magnitude of the LLRs does not increase with SNR. Should I worry about this?

Can you give any pointers that might be useful?

Also, I need to compute log(a+b+c). Can you tell me how I can do this using jac(.)?

I looked into the report, and it was mentioned there that in the case of more than two arguments, jac(.) needs to be applied to successive pairs. Does this mean

jac(a, jac(b,c))

Thank you

~N

April 10th, 2016 at 5:50 pm

Hi Rob,

Here is the trajectory plot for my channel

http://wikisend.com/download/233156/Channel_trajectory.png

How should I go about fixing it?

Thank you

~N

April 11th, 2016 at 8:36 pm

Hi Shruti,

I'm afraid that I don't have any more that I can offer you on this. The Turbo DPSK paper that I pointed you to describes how to implement this better than I could in these comments.

Take care, Rob.

April 11th, 2016 at 9:02 pm

Hi Nassi,

Yes - jac(a,b,c) = jac(a,jac(b,c)), as you say.

The magnitude of the channel LLRs should increase with SNR. I would expect this to knock on to increases in the values of the extrinsic LLRs…

Your mismatch between the averaging and histogram results suggests to me that there is a bug in your code, but I'm afraid that I can't suggest where it might be. I would suggest comparing the averaging and histogram MI for LLRs at various locations and times during the iterative decoding process - this may help you track down the problem.

Take care, Rob.
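The pairwise application that Rob confirms here can be sketched in Python. This is a hedged stand-in for the jac.m function, using the exact form of the Jacobian logarithm (the helper names are illustrative):

```python
import math

def jac(a, b):
    # Exact Jacobian logarithm: log(exp(a) + exp(b)), computed stably as
    # max(a,b) + log(1 + exp(-|a-b|)).
    if a == float('-inf'):
        return b
    if b == float('-inf'):
        return a
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def jac_n(values):
    # log(sum(exp(v))) for any number of arguments, applied to successive
    # pairs, e.g. jac_n([a, b, c]) == jac(a, jac(b, c)).
    result = float('-inf')
    for v in values:
        result = jac(result, v)
    return result
```

Starting the accumulation from -inf (the log-domain zero) is what makes the pairwise form work for any number of arguments.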

April 13th, 2016 at 10:43 pm

Hi Rob,

The EXIT chart trajectory for my channel detector is still problematic. I will give you an overview of what I have done.

I have uploaded my block diagram here

https://www.sendspace.com/file/av9evo

I have included dummy channel responses to make things clear.

Note that for ISI_2, 'y' is the uncoded input. Therefore, at the receiver, Detector2 computes the extrinsic_uncoded_llrs of y. Since there are four possible values that 'y' can take, extrinsic_uncoded_llrs is a kx4 matrix, with one column for each possible input.

I have defined llr(y) as:

llr(y=p) = log(Pr(y=p)/Pr(y=-1.5)), where p=1.5,0.5,-0.5,-1.5

The apriori_uncoded_llr for Detector 2 is a zero matrix of dimension kx4 which corresponds to equally likely inputs.

For ISI_1, x is the input and y is its output. The job of Detector1 is to compute the extrinsic_uncoded_llrs corresponding to x. Since x is binary, the output of Detector1 is a kx1 matrix.

It takes apriori_uncoded_llrs (coming from the decoder) and apriori_encoded_llrs (coming from Detector2).

Since soft information is exchanged between Detector1 and the DECODER, I wanted to plot the trajectory for Detector1. But as you have seen before, the curves are different.

In the conventional setup, the detector receives a noisy version of 'y'. But here the detector (Detector1) sees the likelihood ratios. Can this cause a mismatch between the two curves?

Regards,

~N

April 15th, 2016 at 3:16 am

Hi Rob,

Please treat it as a separate post.

In component_decoder, we compute

extrinsic_uncoded_llrs(bit_index) = prob0 - prob1    (eq a)

Since extrinsic_uncoded_llrs = log(P0/P1), if I rearrange I can compute P1 = 1/(exp(extrinsic_uncoded_llrs)+1). If I now take the log of both sides, this becomes

log(P1) = -log(exp(extrinsic_uncoded_llrs)+1)    (eq b)

When I compare the two values log(P1) and prob1, they do not match. This is strange, as I am only rearranging eq-a to get eq-b. Can you please comment?

~N

April 15th, 2016 at 3:44 am

Hi Rob,

Does this mean that prob0 ~= log(P(x=0)) in a strict sense? Since

extrinsic_uncoded_llrs = prob0 - prob1

prob0 and prob1 can have any values, as long as the difference between them remains equal to extrinsic_uncoded_llrs. However, not all of these values will be meaningful when viewed as log-probabilities.

Is this the reason why prob0 ~= log(P(x=0))?

~N

April 15th, 2016 at 4:20 am

Hi Rob,

I have not been able to figure out yet where the problem lies. Since Detector1 takes input from Detector2, I should first check whether Detector2 is error-free. Since testing at high SNRs does not guarantee error-free code, EXIT chart analysis is required. But I don't understand how to generate the apriori_uncoded_llrs for the non-binary case.

Can you suggest a method for plotting the EXIT chart trajectory for Detector2? Looking at the extrinsic_uncoded_llrs (corresponding to y) generated by Detector2 has not been helpful in debugging my code.

Code information:

I am using log of softmax(.) for generating uncoded_gammas in Detector2.

apriori_uncoded_llrs is a kx4 matrix.

uncoded_gammas = zeros(size(transitions,1), size(apriori_uncoded_llrs,1));
for bit_index = 1:size(apriori_uncoded_llrs,1)
    for transition_index = 1:size(transitions,1)
        for uncoded_bit_index = 1:length(uncoded_bit)
            if transitions(transition_index, 3) == 1.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,1) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            elseif transitions(transition_index, 3) == 0.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,2) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            elseif transitions(transition_index, 3) == -0.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,3) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            elseif transitions(transition_index, 3) == -1.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,4) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            end
        end
    end
end

where logsumexp(.) is taken from http://www.mathworks.com/matlabcentral/fileexchange/26184-em-algorithm-for-gaussian-mixture-model–em-gmm-/content/EmGm/logsumexp.m and is an alternative to jac(a, jac(b,c)).

Since I initialized apriori_uncoded_llrs = zeros(k,4), these lines play no role in my current code. But they will eventually, once the EXIT chart trajectory is plotted for this detector.

The alphas, betas and deltas computations come from component_decoder.

In the end, I am computing the extrinsic_uncoded_llrs like this:

extrinsic_uncoded_llrs = zeros(size(apriori_uncoded_llrs,1), 4);
for bit_index = 1:length(size(apriori_uncoded_llrs,1))
    prob0 = -inf;
    prob1 = -inf;
    prob2 = -inf;
    prob3 = -inf;
    for transition_index = 1:size(transitions,1)
        if transitions(transition_index,3) == +1.5
            prob0 = jac(prob0, deltas(transition_index,bit_index));
        end
        if transitions(transition_index,3) == +0.5
            prob1 = jac(prob1, deltas(transition_index,bit_index));
        end
        if transitions(transition_index,3) == -0.5
            prob2 = jac(prob2, deltas(transition_index,bit_index));
        end
        if transitions(transition_index,3) == -1.5
            prob3 = jac(prob3, deltas(transition_index,bit_index));
        end
    end
    extrinsic_uncoded_llrs(bit_index,1) = prob0 - prob3;
    extrinsic_uncoded_llrs(bit_index,2) = prob1 - prob3;
    extrinsic_uncoded_llrs(bit_index,3) = prob2 - prob3;
    extrinsic_uncoded_llrs(bit_index,4) = 0;
end

April 18th, 2016 at 8:25 pm

Hi Rob,

Thank you for your shared resource.

I have a simple question. In order to simulate the BER performance of QPSK, can I still use this encoder and decoder, and just modify the modulation and soft-demodulation parts of the code? I mean, do I need to change anything in the function m-files component_encoder and component_decoder?

I tried to check all the comments above to find the answer, but no luck.

Thank you very much.

Regards

Cliff

April 19th, 2016 at 4:55 pm

Hi Rob,

For detector2,

extrinsic_uncoded_llrs = LogDomain_Detector2(rx, apriori_uncoded_llrs, N0)
% apriori_uncoded_llrs: Kx4 matrix
% apriori_uncoded_llrs(k,1): apriori llr of channel input = +1.5 at index k
% apriori_uncoded_llrs(k,2): apriori llr of channel input = -0.5 at index k
% apriori_uncoded_llrs(k,3): apriori llr of channel input = +0.5 at index k
% apriori_uncoded_llrs(k,4): apriori llr of channel input = -1.5 at index k
% where k = 1:K
% rx: Kx1
% FromState, ToState, ChannelInput, ChannelOutput
transitions = [1, 1, +1.5, +2.25;
               2, 3, +0.5, +0.25;
               3, 1, +1.5, +1.75;
               4, 3, +0.5, -0.25;
               1, 2, -0.5, +0.25;
               2, 4, -1.5, -1.75;
               3, 2, -0.5, -0.25;
               4, 4, -1.5, -2.25];
uncoded_bit = [1.5 -0.5 0.5 -1.5];

%-------------------------- Gamma Computation ---------------------------
uncoded_gammas = zeros(size(transitions,1), length(rx));
for bit_index = 1:length(rx)
    for transition_index = 1:size(transitions,1)
        for uncoded_bit_index = 1:length(uncoded_bit)
            if transitions(transition_index, 3) == 1.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,1) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            elseif transitions(transition_index, 3) == 0.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,2) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            elseif transitions(transition_index, 3) == -0.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,3) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            elseif transitions(transition_index, 3) == -1.5
                uncoded_gammas(transition_index, bit_index) = apriori_uncoded_llrs(bit_index,4) - ...
                    logsumexp(apriori_uncoded_llrs(bit_index,:));
            end
        end
    end
end

encoded_gammas = zeros(size(transitions,1), length(rx));
for bit_index = 1:length(rx)
    for transition_index = 1:size(transitions,1)
        encoded_gammas(transition_index, bit_index) = ...
            -(rx(bit_index) - transitions(transition_index,4))^2/N0;
    end
end

I am computing the alphas, betas and deltas using exactly the same lines as in component_decoder.m.

I modified the lines for computing the extrinsic_uncoded_llrs:

extrinsic_uncoded_llrs = zeros(size(apriori_uncoded_llrs,1), 4);
for bit_index = 1:length(size(apriori_uncoded_llrs,1))
    prob0 = -inf;
    prob1 = -inf;
    prob2 = -inf;
    prob3 = -inf;
    for transition_index = 1:size(transitions,1)
        if transitions(transition_index,3) == +1.5
            prob0 = jac(prob0, deltas(transition_index,bit_index));
        end
        if transitions(transition_index,3) == +0.5
            prob1 = jac(prob1, deltas(transition_index,bit_index));
        end
        if transitions(transition_index,3) == -0.5
            prob2 = jac(prob2, deltas(transition_index,bit_index));
        end
        if transitions(transition_index,3) == -1.5
            prob3 = jac(prob3, deltas(transition_index,bit_index));
        end
    end
    extrinsic_uncoded_llrs(bit_index,1) = prob0 - prob3;
    extrinsic_uncoded_llrs(bit_index,2) = prob1 - prob3;
    extrinsic_uncoded_llrs(bit_index,3) = prob2 - prob3;
    extrinsic_uncoded_llrs(bit_index,4) = 0;
end

Since I don't know how to draw the EXIT chart trajectory for this detector, I have no way to verify whether this code is bug-free.

Can you please comment on this issue? Right now I can't tell whether the problem lies with detector1 or with detector2.

Thank you

~N

April 25th, 2016 at 10:13 am

Hi Nassi,

There is a particular aspect of your approach that I think is very interesting - your detector 2 outputs a kx4 matrix of extrinsic_uncoded_llrs. I suspect that this has the advantage of preserving all information that is known to detector 2. However, the analysis of this would require non-binary EXIT charts (see the work of Joerg Kliewer and Soon Xin Ng) - this analysis is tricky and the extra dimensionality is difficult to model, which typically results in giving only an approximate analysis. I suspect that you are using binary EXIT charts, which are causing the mismatch. Another problem with non-binary EXIT charts is that the histogram method becomes impractical - only the averaging method is easy to compute. This prevents making a comparison between results obtained using the averaging and histogram methods for debugging purposes.

You can also see some discussion of non-binary EXIT charts in our fully-parallel turbo code paper…

http://eprints.soton.ac.uk/383511/

I can think of two other approaches that you could have used:

- Detector 2 could output binary LLRs. However, this would lose information. This information could be restored by iterating between detectors 1 and 2, but this would only work well if you had an interleaver between them (I'm guessing that your scenario does not allow you to do that).

- You could merge detector 1 and detector 2 into a single big detector. I’m not sure if this would increase the complexity relative to your current approach. However, I think that it would be easier to analyse, since it would avoid non-binary LLRs. This would have been the approach that I would have picked - I think that I would recommend it for your next step. I guess that you can merge ISI 1 and ISI 2 by taking the convolution (or similar) of their impulse responses.

With the log(P1) thing - are you sure that you have normalised P1 before taking its log? i.e. P1′ = P1/(P1+P0).

Take care, Rob.
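Rob's normalisation point can be illustrated with a minimal Python sketch (the helper names are hypothetical; prob0 and prob1 stand for the unnormalised log-domain quantities produced inside a BCJR decoder). Subtracting the log-sum normalises the probabilities so that they add up to 1, while leaving the LLR unchanged:

```python
import math

def logjac(a, b):
    # log(exp(a) + exp(b)) computed stably
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def normalised_log_probs(prob0, prob1):
    # prob0, prob1 are UNNORMALISED log-domain values; subtracting their
    # log-sum implements P0' = P0/(P0+P1) and P1' = P1/(P0+P1) in the
    # log domain.
    norm = logjac(prob0, prob1)
    return prob0 - norm, prob1 - norm
```

This is why comparing prob1 directly against log(P1) fails in the question above: the LLR prob0 - prob1 is invariant to the normalisation, but the individual terms only become true log-probabilities after the log-sum is subtracted.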

April 25th, 2016 at 10:14 am

Hi Cliff,

You just need to change modulator and soft_demodulator, as you suggest.

Take care, Rob.

April 25th, 2016 at 3:35 pm

Hi Rob,

You are right in suggesting that I should combine the two channels to make one bigger channel. This would require only one detector at the receiver, which would take the channel output as the input to a BCJR equalizer, along with the apriori_uncoded_llrs from the decoder, and would output binary extrinsic LLRs for the channel bits. Drawing the EXIT chart trajectory for this system is straightforward, and I have done this. It passes the LLR consistency test, as well as the MI-by-histogram vs MI-by-averaging test. I did this to create the ground truth against which I can check the performance of my cascade system.

However, my application *requires* me to take a cascade approach. The purpose of Detector2 is to generate the extrinsic LLRs corresponding to the output of Detector1. These LLRs are a kx4 matrix.

Detector1 takes in these LLRs as apriori_encoded_llrs, along with the apriori_uncoded_llrs coming from the decoder, to generate binary extrinsic_uncoded_llrs.

I looked at the Q&A section of the MATLAB Code for EXIT Charts page and found some useful information. I will refer to it below.

—————————————————————————–

Rob Says:

May 21st, 2009 at 9:49 am

Hmmm, when the histogram and averaging methods give different results it often (but not always) means that your decoder is producing inappropriate LLRs. In other words, the LLRs are lying by either expressing too much confidence, or too little. The averaging method assumes that the LLRs are not lying, while the histogram method avoids this assumption by checking the LLRs against the transmitted bits. You can see if your LLRs are lying by using this Matlab function…

http://users.ecs.soton.ac.uk/rm/wp-content/display_llr_histograms.m

This gives you two plots, one of the histograms and one showing the relationship between the values the LLRs have and the values they should have. This should be a diagonal line, like the one you get using the code…

bits = round(rand(1,1000000));

llrs = generate_llrs(bits, 0.5);

display_llr_histograms(llrs,bits);

Hope this helps, Rob.

—————————————————————————–

The following plots compare the performance of the CombinedChannel approach with the CascadeChannel approach. (I have used the same set of bits and noise for generating these plots.) Frame length = 10000, SNR = 0 dB.

http://wikisend.com/download/509532/ConsistencyTest.PNG

As you can see, both systems pass the consistency test. However, for the cascade channel, the MI curves do not match.

—————————————————————————–

Rob Says:

September 19th, 2010 at 1:36 pm

Hi Devan,

You are correct, if the LLRs satisfy the consistency condition then you can use the averaging method to measure their mutual information. Otherwise, you should use the histogram method.

You can tell if your LLRs satisfy the consistency condition by using this Matlab function…

http://users.ecs.soton.ac.uk/rm/wp-content/display_llr_histograms.m

……….

Hope this helps, Rob.

—————————————————————————–

So does that mean that my simulation of the CascadeChannel is correct? And that I should be using the averaging method for plotting the EXIT chart?

—————————————————————————–

Rob Says:

September 14th, 2010 at 8:17 am

Hi Michael,

The histogram and averaging methods make different assumptions, so sometimes one method will be more accurate, sometimes the other method will be more accurate. The histogram method works well when you input a long vector of LLRs into it - maybe 10000, 100000 or 1000000. The averaging method works well when all the components of your scheme are optimal. I suspect that there are no problems with your simulation - you can probably get the two EXIT functions to match better by running a longer simulation and inputting more LLRs at a time into the histogram method.

Hope this helps, Rob.

—————————————————————————–

I run into *out of memory* issues for frame lengths greater than 10,000, so increasing the frame length is not an option. With all these limitations and curves in mind, could you comment on my CascadeChannel simulation?

Should I use the consistency test as a litmus test for checking my code, instead of the MI test?

I apologize for this long comment, but I have been stuck on this bug-identification issue for the last couple of weeks and it is driving me crazy.

I really appreciate your help. You have been very kind throughout all this time.

Thank you

~N

April 25th, 2016 at 4:10 pm

Hi Rob,

This is a question that I am not clear about. Again, I will make reference to one of the posts I found on the EXIT Charts page.

———————————————————————

mahesh Says:

October 6th, 2009 at 8:43 am

hi rob,

Thanks for your reply. I am using BCJR as a decoder for a partial response channel, so should I use the output of the channel as LLRs? Do you have an example file to illustrate how to use the BCJR decoder?

Rob Says:

October 6th, 2009 at 9:11 am

Hello again Mahesh,

The file main_inner.m provided above gives an example of how to obtain LLRs from the output of a channel and input them into the BCJR decoder.

Hope this helps, Rob.

———————————————————————

I guess you are referring to the following lines in main_inner.m:

% BPSK demodulator
apriori_encoded1_llrs = (abs(rx1+1).^2-abs(rx1-1).^2)/N0;
apriori_encoded2_llrs = (abs(rx2+1).^2-abs(rx2-1).^2)/N0;

I don't understand how to extend these lines when the channel introduces ISI. For an AWGN channel, which only adds noise to the input stream and does nothing to the size of the input alphabet, computing the LLRs is straightforward.

But this is not true for an ISI channel. For a single-tap ISI channel like [1 0.5], the output alphabet is [1.5 0.5 -0.5 -1.5]. How can we generate LLRs directly for this channel? Can you write the lines of Matlab code that will generate the LLRs for this channel?

~N

April 25th, 2016 at 4:41 pm

Hi Rob,

I did not understand what you meant by

*****With the log(P1) thing - are you sure that you have normalised P1 before taking its log. i.e. P1′ = P1/(P1+P0).*****

In the above lines, is P0 = Prob(X=0), or is it P0 = log(Prob(X=0)/Prob(X=1))? Because in the latter case P1 = 0 and so P1′ = 0.

Can you clarify this point?

April 28th, 2016 at 6:35 pm

Hi Rob,

I have a question related to the transition matrix that I am using for Detector2.

http://wikisend.com/download/322386/main_tex3.pdf

and here is the MI plot obtained by both:

http://wikisend.com/download/875676/4.jpg

Which transition matrix should I consider for computing the extrinsic_llrs with Detector2?

Since the overall system is a cascade, I think that using the shorter transition matrix makes more sense.

~N

May 3rd, 2016 at 9:02 am

Hi Nassi,

Passing the consistency test should mean passing the averaging vs histogram test - both tests are testing the same thing; they just present the information in a different way. I am not sure that you are correct when you say that your code passes one of the tests but not the other.

I will respond to your other questions later.

Take care, Rob.

May 6th, 2016 at 8:55 am

Hi Nassi,

With regard to…

apriori_encoded1_llrs = (abs(rx1+1).^2-abs(rx1-1).^2)/N0;

apriori_encoded2_llrs = (abs(rx2+1).^2-abs(rx2-1).^2)/N0;

The technique that I would recommend is using a turbo equaliser, which uses a trellis to consider not only the current received symbol, but also the previous interfering symbols. You can see what I mean in…

https://hal-institut-telecom.archives-ouvertes.fr/file/index/docid/703645/filename/ett95.pdf

http://www2.elo.utfsm.cl/~ipd465/Papers%20y%20apuntes%20varios/Turbo%20codes.pdf
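For the two-tap example in the question above, symbol-by-symbol LLRs can also be formed by marginalising over the interfering bit. This is a Python sketch (not the Matlab used elsewhere on this page) and it ignores the memory between symbols that the turbo equaliser's trellis exploits, so it is inferior to the trellis approach; the sign convention matches the quoted BPSK formula, with positive LLRs favouring x=+1:

```python
import numpy as np

def isi_symbol_llrs(rx, h, N0):
    """Symbol-wise LLRs ln(P(x_n=+1|r_n)/P(x_n=-1|r_n)) for a real
    two-tap ISI channel r_n = h[0]*x_n + h[1]*x_{n-1} + noise, found
    by marginalising over the unknown interfering bit x_{n-1}.
    Ignores inter-symbol memory, unlike a trellis-based equaliser."""
    llrs = np.empty(len(rx))
    for n, r in enumerate(rx):
        def loglik(xn):
            # Gaussian log-likelihoods for each x_{n-1} hypothesis,
            # combined with a max-normalised log-sum-exp for stability
            terms = [-(r - h[0]*xn - h[1]*xp)**2 / N0 for xp in (+1, -1)]
            m = max(terms)
            return m + np.log(sum(np.exp(t - m) for t in terms))
        llrs[n] = loglik(+1) - loglik(-1)
    return llrs
```

With h = [1, 0], the second tap contributes nothing and this collapses to the AWGN formula 4*rx/N0, i.e. the same result as the quoted main_inner.m lines.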

With regard to P1′ = P1/(P1+P0), here P1 and P0 are probabilities, rather than LLRs. The conversion from LLRs to probabilities doesn’t always produce a P0 and a P1 that add up to give 1. You can fix this by normalising the probabilities using this equation.
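The normalisation P1′ = P1/(P1+P0) is a one-liner; a Python sketch of the same arithmetic, where p0 and p1 are possibly unnormalised probabilities:

```python
def normalise_pair(p0, p1):
    """Rescale two non-negative values so that they sum to 1,
    i.e. P1' = P1/(P1+P0) and P0' = P0/(P1+P0)."""
    total = p0 + p1
    return p0 / total, p1 / total
```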

I’m afraid that your wikisend links are not working, so I don’t know about the transition matrix.

Take care, Rob.

May 23rd, 2016 at 6:17 pm

Hi Rob,

I am still working on same problem of comparing the performance of cascaded channels with a conventional channel.

I have a question related to the noise variance computation when MAP (BCJR) detector is implemented at the receiver. In SNR definition,

SNR = 10log(signal power/noise power)

The signal power is the product Ea*Eh, where Ea = average symbol energy and Eh = squared 2-norm of the channel impulse response. Correct? We should be using the average energy per symbol, instead of the average energy per bit, for computing the noise variance.

Can you please comment on the following computation for the two channels:

dBs = SNR; %in dB scale

SETUP#1:

=================================

Alphabet = [+1,-1]

Channel =[h0 h1 h2]=[1 1 0.25]

Avg Energy Per Symbol = Ea = 1

Avg Energy per Bit = Eb = Ea

Eh = 1^2+1^2+0.25^2 = 2.0625

Signal Energy = Ea*Eh = 2.0625

noise variance = N1 = Signal Energy/10^(dBs/10) = 2.0625/10^(dBs/10);

SETUP#2:

================================

Alphabet = [+1.5,+0.5,-0.5,-1.5]

Channel =[h0 h1 h2]=[1 0.5]

Avg Energy Per Symbol = Ea = 1.25

Avg Energy per Bit = Eb = Ea/2=0.625

Eh = 1^2+0.5^2 = 1.25

Signal Energy = Ea*Eh = (1.25)(1.25)

noise variance = N2 = (1.25)(1.25)/10^(dBs/10);
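The two setups can be checked with a small helper (a Python sketch of the same arithmetic; at 0 dB it reproduces the signal energies 2.0625 and 1.5625 from the setups above):

```python
import numpy as np

def noise_variance(alphabet, h, snr_db):
    """Noise variance for a given SNR in dB, taking the signal power
    as Ea*Eh: the average symbol energy of the alphabet times the
    channel energy sum(h_i^2)."""
    Ea = np.mean(np.abs(alphabet)**2)   # average symbol energy
    Eh = np.sum(np.abs(h)**2)           # squared 2-norm of the channel
    return Ea * Eh / 10**(snr_db / 10)

# Setup 1: BPSK over h = [1, 1, 0.25]
N1 = noise_variance([+1, -1], [1, 1, 0.25], snr_db=3)

# Setup 2: 4-ary alphabet over h = [1, 0.5]
N2 = noise_variance([+1.5, +0.5, -0.5, -1.5], [1, 0.5], snr_db=3)
```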

Thank you,

~N

May 25th, 2016 at 6:23 pm

Hi Rob,

I have another question. I understand that the “true probabilities” of the bits are not important for turbo-like processing, but I was wondering if it is possible to get them?

I modified your decoder to see if I can compute y0=log(Prob(x=0)) and y1=log(Prob(x=1)).

You can find my code at

main:

http://wikisend.com/download/241166/upload_main.m

modified_decoder: http://wikisend.com/download/702286/decoder_probabilistic.m

your decoder:

http://wikisend.com/download/341668/decoder_softdemodulation.m

I am getting the same extrinsic_uncoded_llrs from both decoders. I used this fact as a check.

Now the probability part. In principle, exp(y0)+exp(y1) = 1, but that is not the case. A scaling factor is required to satisfy this condition.

What I don’t understand is why we don’t get true probabilities, given that I have included the factor -log(sqrt(2*pi*sigma^2)) when computing the respective gammas. Is it impossible? Can you please comment on this?

Thank you,

~N

June 2nd, 2016 at 7:58 am

Hi Nassi,

This relates to Bayes theorem. In order for the demodulator to compute the true probability, it needs some extra information about the probability distribution of the received signal. When we convert to LLRs, the terms involving this probability distribution get cancelled out, eliminating the requirement for the demodulator to estimate this distribution. As you say however, you can get the true probabilities by converting the LLRs and then normalising them, so that the probabilities add up to 1.
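The normalisation can be done entirely in the log domain, so the unknown Bayes denominator never has to be computed explicitly. A Python sketch, where y0 and y1 are the unnormalised log-probabilities from the question above:

```python
import numpy as np

def normalise_log_probs(y0, y1):
    """Turn unnormalised log-probabilities y0 = log(c*P0) and
    y1 = log(c*P1) into probabilities that sum to 1. The unknown
    scaling c (the Bayes denominator) cancels in the subtraction."""
    m = max(y0, y1)                                   # for stability
    z = m + np.log(np.exp(y0 - m) + np.exp(y1 - m))   # = log(c*(P0+P1))
    return np.exp(y0 - z), np.exp(y1 - z)
```

Note that the LLR y0 - y1 is unaffected by the normalisation, which is why it never matters for the turbo-like processing itself.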

Take care, Rob.

June 2nd, 2016 at 3:50 pm

Hi Nassi,

I can’t spot anything wrong with your SNR calculations, but I am not 100% confident. If I were you I would try to confirm that they are correct by comparing with some results available in the literature.

Take care, Rob.

June 5th, 2016 at 2:39 pm

Hi Rob,

I want to modify your component_decoder.m to handle “correlated” inputs. I was going through previous posts and found

%==============================================

swap Says:

September 20th, 2011 at 3:48 pm

Dear Rob

Can you please tell me something about turbo codes - why is it necessary that the information fed into the encoder should be as uncorrelated as possible?

What if the inputs are correlated - apart from turbo decoding, what does it affect? Can you please explain it?

thanks

swap

————————————————————————————–

Rob Says:

September 20th, 2011 at 4:33 pm

Hi Swap,

If the occurrence of two events A and B is correlated then their joint probability is given by P(A,B) = P(A|B)*P(B). If they are uncorrelated then the joint probability is given by P(A,B) = P(A)*P(B). The second situation is much easier to work with because we only need one P(A) value for each outcome of A. By contrast, the first situation requires a P(A|B) value for each possible combination of outcomes of A and B. To save on complexity, the BCJR assumes that P(A,B) = P(A)*P(B), i.e. that the LLRs are uncorrelated. If they are correlated, then this assumption is false and the performance of the BCJR suffers.

Take care, Rob.

%==============================================
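The point in that quoted reply can be seen with a tiny numeric example (Python; the 80%-agreement joint distribution is made up purely for illustration):

```python
import numpy as np

# Joint distribution of two correlated bits A and B that agree 80% of
# the time. Rows index A, columns index B.
joint = np.array([[0.4, 0.1],   # P(A=0,B=0), P(A=0,B=1)
                  [0.1, 0.4]])  # P(A=1,B=0), P(A=1,B=1)

pA = joint.sum(axis=1)          # marginal P(A) = [0.5, 0.5]
pB = joint.sum(axis=0)          # marginal P(B) = [0.5, 0.5]

# What the independence assumption P(A,B) = P(A)*P(B) would predict:
independent = np.outer(pA, pB)  # 0.25 in every cell
# The true joint is 0.4 / 0.1, so the assumption P(A,B) = P(A)*P(B)
# is wrong in every cell - this is the mismatch that degrades the BCJR
# when its inputs are correlated.
```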

How should I do this?

Thank you

~N

June 5th, 2016 at 3:31 pm

Hi Rob,

Take a look at this cascaded system realization

http://wikisend.com/download/295158/2.jpg

The first FSM has uncorrelated inputs x_k, so the corresponding MAP detector implementation is straightforward. However, the 2nd FSM with y_k is where my problem lies.

As you can see, in the transition matrix for this FSM, if y_(k-1)=1.5, then y_k can be either 1.5 or -0.5. If I use this transition matrix for FSM2 and iterate between the two detectors, the extrinsic_llrs corresponding to the y’s grow in magnitude with each iteration. This does not seem right to me.

If I assume independent, uncorrelated inputs, the FULL transition matrix for FSM2 has 16 rows

http://wikisend.com/download/328380/3.jpg

If I use the FULL transition matrix to implement the MAP detector for FSM2, the magnitudes of the extrinsic_llrs for the y’s do not show the growth problem observed previously.

For channels with short memory, MAP detectors using the independence assumption seem to work, but as I increase the length of the channel, this approach breaks down.

In my view, using the SHORT transition matrix is the right thing to do. But then the decoder should somehow be modified to incorporate correlated inputs.

Can you comment on this issue? I would be very grateful.

Thank you

~N

June 11th, 2016 at 4:29 pm

Hi Nassi,

Correlated inputs can be handled by designing a trellis to consider the correlation. This is what happens in turbo equalisation. The equations for each transition in the trellis can use conditional probabilities based on the state that it is coming from.

This is a confusing issue because there are two types of correlation:

1) correlation that is accommodated by the design of the trellis (e.g. caused by ISI in a turbo equaliser)

2) correlation that is mitigated by the interleaver (e.g. in an iterative decoding process)

The first type of correlation is useful, because it helps with the decoding, although it increases the complexity of the trellis. The second type of correlation would be bad, if it wasn’t mitigated by the interleaver - it would degrade the decoding.

Take care, Rob.

June 30th, 2016 at 9:49 pm

Hi Rob,

I am implementing the non-binary turbo code from Section 8.4 of Non-Binary Error Control Coding for Wireless Communication and Data Storage, by Rolando Antonio Carrasco and Dr Martin Johnston (http://onlinelibrary.wiley.com/book/10.1002/9780470740415). This will need some major modifications to your code, but I think I can do it, thanks to all the help that I have got from you and the previous posts. However, I have no idea how to plot an EXIT chart for this setup.

All the EXIT charts that I have seen so far are based on a binary alphabet, so I am finding it very difficult to extend them to other alphabets. How should I modify generate_llrs.m and the other m-files to draw EXIT charts?

My ultimate objective is to plot the trajectory for a turbo equalization setup in which non-binary alphabets are transmitted over an ISI channel.

Can you please help me with that.

Thank you,

~N

July 5th, 2016 at 6:43 pm

Hi Nassi,

This is the same thing that Ahmed is asking about in his message near the bottom of…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit

You may like to search through all the discussions on that page for symbol-based EXIT charts. In particular, you may like to look at the papers that were co-authored by Joerg Kliewer and Soon Xin Ng.

Note that if you are looking at a duo-binary turbo code, then you can still use binary EXIT charts. You only need symbol-based EXIT charts if the symbols cannot be separated into distinct bits.

Take care, Rob.

October 9th, 2016 at 11:47 pm

Hi Rob,

My question is about implementing a MAP equalizer. Suppose that the ISI channel is h=[h0 h1 h2 h3 h4 h5]. If max(h)=h0 and the noise is independent, then an optimal MAP equalizer can be implemented with the BCJR algorithm.

Let us now suppose that max(h)=h2. At time n, y(n)=<[x(n) x(n-1) x(n-2) x(n-3) x(n-4)],[h0 h1 h2 h3 h4]> and r(n)=y(n)+w(n).

In normal mode, the MAP equalizer will compute the LLR corresponding to x(n), but I want to compute LLRs for the bits x(n-2) that are convolved with h2, since these are the ones that make the maximum contribution to the state transitions in the BCJR algorithm. This can be interpreted as introducing a “DELAY” into the BCJR algorithm.

For this I added a 5th column to the transition matrix, containing the 3rd bit of the state definition. So if FromState=[0 0 1 0] and ToState=[1 0 0 1], the 3rd column of the transition matrix contains 1, since that is the input bit that caused the state transition, and the 5th column contains the bit 0, as this corresponds to the bit x(n-2) in the inner-product expression mentioned above.

Then my component_equalizer is

%——————————————————————————–

delay = find(Channel==max(Channel));
RX = rx(delay:end);
AP_UC_llrs = apriori_uncoded_llrs(1:length(apriori_uncoded_llrs)-(delay-1));

uncoded_gammas_delay = zeros(size(transitions_long,1), length(apriori_uncoded_llrs));
for bit_index = 1:length(AP_UC_llrs)
    for transition_index = 1:size(transitions_long,1)
        if transitions_long(transition_index,5) == 0
            uncoded_gammas_delay(transition_index,bit_index) = -log(1+exp(-AP_UC_llrs(bit_index))); % log(P(x=0))
        else
            uncoded_gammas_delay(transition_index,bit_index) = -log(1+exp(+AP_UC_llrs(bit_index))); % log(P(x=1))
        end
    end
end

for bit_index = 1:length(AP_UC_llrs)
    for transition_index = 1:size(transitions_long,1)
        encoded_gammas_delay(transition_index,bit_index) = -abs((RX(bit_index)-transitions_long(transition_index,4))^2)/N0;
    end
end

% (No change to the alpha, beta and delta computations. Note the initial
% conditions for the alphas and betas.)
alphas_delay(:,1) = 0;             % first state uncertain
betas_delay(:,length(RX)) = 0;     % last state uncertain

extrinsic_uncoded_llrs_delay = zeros(length(AP_UC_llrs),1);
for bit_index = 1:length(RX)
    prob0_delay = -inf; prob1_delay = -inf;
    for transition_index = 1:size(transitions_long,1)
        if transitions_long(transition_index,5) == 0
            prob0_delay = jac(prob0_delay, deltas_delay(transition_index,bit_index));
        end
        if transitions_long(transition_index,5) == 1
            prob1_delay = jac(prob1_delay, deltas_delay(transition_index,bit_index));
        end
    end
    extrinsic_uncoded_llrs_delay(bit_index,1) = prob0_delay - prob1_delay;
end

%——————————————————————————–

I did the following tests to check the validity of this idea:

1. I plotted the EXIT chart and I get the same result with the histogram and averaging methods. This suggests that the code is error-free.

2. I plotted the EXIT chart for the MAP equalizer both with and without the delay. In this case I am getting exactly the same curves. When I looked into the extrinsic_llrs from the two codes, I found them to be exactly the same.

My questions are:

1. Why am I getting exactly the same extrinsic LLRs from codes that use different conditions and a priori LLRs to compute them?

2. Is it even possible to introduce a delay like this? If yes, then please comment on the changes that I made.

Thank you,

~N

October 29th, 2016 at 5:52 pm

Hi Nassi,

My feeling is that it is unnecessary to introduce a delay. The Log-BCJR approach does not require the first tap to be the strongest. It will work perfectly happily even if any other tap is the strongest. You just have to use the correct coefficients in your calculations of the gammas. My suspicion is that introducing a delay to the Log-BCJR makes no difference because it is unnecessary - it makes no difference to the capabilities of the Log-BCJR, so it makes no difference to the results.

Take care, Rob.

December 21st, 2016 at 9:48 pm

Hi Rob !

I am working on resource allocation in LTE. I do not have much understanding of it, and am facing trouble allocating resources using a greedy algorithm. I need Matlab code for resource allocation.

Kindly help me out.

thanks

January 23rd, 2017 at 9:23 am

Hi Ammara,

I’m afraid that I don’t have any Matlab code for resource allocation, so I can’t help you with this.

Take care, Rob.

February 27th, 2017 at 8:52 am

Hi Rob,

I’m working on applying turbo codes to space-time block codes. Your decoder supports soft inputs and gives hard outputs. But since I use ML detection after the STBC, is there any way to obtain a hard-in-hard-out turbo decoder?

Thanks!

February 27th, 2017 at 10:50 am

Hi Oliver,

You can convert your bits to inputs for my code by just replacing them with LLRs having the value +10 or -10. But you will get much better error correction if you can convert your code to give a soft output.
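That conversion is one line. A Python sketch, assuming the convention LLR = ln(P(bit=0)/P(bit=1)) used in the gamma calculations quoted earlier on this page, so a bit value of 0 maps to a large positive LLR:

```python
import numpy as np

def bits_to_llrs(bits, magnitude=10.0):
    """Convert hard bit decisions into confident LLRs for a soft-input
    decoder. With LLR = ln(P(bit=0)/P(bit=1)), bit 0 maps to
    +magnitude and bit 1 maps to -magnitude."""
    bits = np.asarray(bits)
    return magnitude * (1 - 2 * bits)
```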

Take care, Rob.

March 5th, 2017 at 4:18 pm

Hi Rob,

I am testing a simple system. I am transmitting a vector x of uncoded binary bits {+1,-1} over an ISI channel h=[1 alpha] with AWGN. At the receiver I have a trellis-based implementation of a MAP detector, and use a threshold decision device to compute x_hat. Also, SNR = 10 log((1+alpha^2)/N0).

For a fixed SNR, I change alpha=0.1:0.1:1 and see that BER increases as alpha increases.

My question is: why is this happening? I understand that as we increase alpha we are increasing the interference, but we are also increasing the signal power, i.e. 1+alpha^2. So this should be helpful and result in a lower BER, but I am getting the exact opposite.

Is there something that I am missing here?

Thank you,

~N

March 6th, 2017 at 4:09 pm

Hi Nassi,

I think that if you can iterate between your turbo equaliser and a channel decoder, then you will be able to converge to a better BER than if you didn’t have the ISI. But I think this only works for iterative decoding. This is similar to Gray mapping vs natural mapping in QAM - natural mapping gives better BER, but only if you iterate with an inner code. You can investigate this using EXIT charts.

Take care, Rob.

March 6th, 2017 at 4:47 pm

Hi Rob,

I am afraid I did not understand your comment. I am only using one block in my receiver, and that is the channel detector. You may call it a channel decoder as well. I did not get the turbo equalizer part. I am treating the ISI channel as a convolutional code and using a trellis-based MAP detector to estimate the input bits.

Also, I am not trying to compare the ISI vs no-ISI case. I understand that the performance of a system with an ISI channel is bounded by the performance of the same system in an AWGN channel - we cannot do any better than that. My objective is to see how the BER changes when we increase the interference, so I am not changing the length of the channel. My toy channel is just a two-tap channel h=[h0 h1]=[1 alpha]. The only change is made to the h1 coefficient.

Did you mean that, for a non-iterative receiver similar to my setup, the BER will not reduce as the interference increases?

Also, can you suggest some test inputs to check my detector? So far I have been using: 1) comparing EXIT charts via the histogram and averaging methods - if I get matching plots, I assume the code is fine; 2) computing the BER of the detector at very high SNRs for randomly generated binary sequences - I expect to get zero errors in the high-SNR region.

Thank you,

~N

March 16th, 2017 at 10:40 pm

Hi Rob,

This question is about the “MAXIMUM LIKELIHOOD BOUND” for ISI channels. (Ref: “Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference,” IEEE Transactions on Information Theory, vol. 18, no. 3, pp. 363-378, May 1972.)

For my ISI channel h=[1 alpha], where 0<alpha<=1, the performance of maximum likelihood sequence detector is bounded by Pe=K.Q(dmin/sigma_n). Pe is probability of symbol error and K is some constant that is not very important here.

For h=[1 alpha], dmin^2=1+alpha^2. SNR=10log(Ea*Eh/sigma_n^2) and Eh=sum(hi^2) for i=1:2. Here in this case Eh=1+alpha^2. Ea is avg. energy of data alphabet and is 1 for alphabet=[+1,-1]. Therefore sigma_n^2 = (Ea*Eh)*(10^-(SNR/10)).

When I compute (dmin/sigma_n), it comes out to be a constant, 10^(SNR/20). This implies that Pe is independent of the channel response. Therefore, for a given SNR, Pe is constant for all 0<alpha<=1.
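A quick check of the cancellation (Python sketch; the (1+alpha^2) factors cancel between dmin^2 and sigma_n^2, leaving the ratio independent of alpha):

```python
import numpy as np

def dmin_over_sigma(alpha, snr_db):
    """Ratio dmin/sigma_n for the two-tap channel h = [1, alpha] with
    BPSK (Ea = 1), using dmin^2 = 1 + alpha^2 and
    sigma_n^2 = Ea*Eh*10^(-SNR/10), where Eh = 1 + alpha^2."""
    dmin2 = 1 + alpha**2
    sigma2 = (1 + alpha**2) * 10**(-snr_db / 10)
    return np.sqrt(dmin2 / sigma2)   # the (1 + alpha^2) factors cancel

# For every alpha this returns 10^(snr_db/20): a constant that depends
# only on the SNR, not on how the channel energy is distributed.
```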

I am not sure if this is correct or not. What is confusing to me is the fact that this bound is independent of channel response. This means Pe will be same regardless of alpha. Why is that the case? Am I missing something here? Can you please take a look and comment.

~N

March 20th, 2017 at 8:48 am

Hi Nassi,

This seems strange to me too. Unless the Eb/N0 is adjusted to consider the additional power delivered by the second tap in the channel impulse response. If so, then the equation above suggests that the SER depends only on the amount of power delivered, not how it is distributed between the first and second tap - this makes more sense to me.

Take care, Rob.

March 20th, 2017 at 11:27 pm

Hi Rob,

That’s correct. Since Pe = Q(sqrt(SNR/Constant)) and the SNR definition absorbs Eh, it makes sense that Pe will be constant at a given SNR for different channels. If I fix a target BER and plot the noise variance N0 vs alpha, then it will be a decreasing curve from left to right.

One more question: I am comparing the performance of my detector against the theoretical ML bound for an ISI channel. My BER vs SNR curve is way to the left of the ML bound, suggesting that my detector is performing better than the ML detector. Since that’s not possible, I am guessing that there is some issue with my simulation setup. What should I do to fix this issue? (Please note that the detector passes the EXIT chart test.)

Thank you for your time.

~N

April 12th, 2017 at 7:38 am

Hi Nassi,

If your result is to the left of the ML bound and if your EXIT charts look good, then I would suggest that there is a problem in the way that you are calculating Eb/N0. Perhaps you are comparing your results for a more favourable (higher SNR) channel with the ML results for a less favourable (lower SNR) channel.

Take care, Rob.

April 27th, 2017 at 9:45 pm

Hi Rob,

Thank you for your reply.

I have a quick question regarding path pruning in trellis. I will use your decoder to phrase it.

Assume that ‘x’, a binary sequence of length N, goes into the encoder, and ‘y’, a sequence of length N, is the encoded sequence. This is further perturbed by AWGN noise to make ‘r’. Let us assume that at some 1<k<N, we 1) receive a noise-free sample, i.e. r(k)=y(k), AND 2) are also told x(k) by some genie.

Based on this, I use this information to initialize the apriori_uncoded_llrs: if x(k)=0, apriori_uncoded_llrs(k)=G0, and if x(k)=1, apriori_uncoded_llrs(k)=-G0 (for some reasonable choice of G0).

Now, instead of searching over an N-dimensional space, my decoder should be searching an (N-1)-dimensional space, using this a priori information about x(k).

What modification do I need to make to decoder.m to implement this reduced-space search?

Note that I gave an example where the size of the set {k} is only one. This was just an example; the set {k} can have any size, and can be as large as N-1 (in which case only one input bit in ‘x’ is unknown).

Thank you,

~N

May 5th, 2017 at 9:46 pm

Hi Nassi,

I think you are saying that you have perfect knowledge of only one of the N information bits and of the corresponding one of the encoded bits. A simple way to implement this is to do as you suggest - use a high magnitude for the corresponding LLRs. As a step further, you could remove any transitions from the trellis that do not agree with these known bit values - this way, you would save on complexity as well. This is fairly easy to implement - when you calculate alphas and betas, you just need to exclude the removed transitions from the maxstar computations - these are computed over fewer transitions.
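A sketch of the pruning step (Python; the four-column transition tuples loosely mirror the transitions table in component_decoder.m, but this particular table and helper are illustrative, not taken from the code on this page):

```python
def prune_transitions(transitions, known_uncoded_bit=None, known_encoded_bit=None):
    """Keep only the trellis transitions that agree with perfectly
    known bit values at one trellis stage. Each transition is
    (from_state, to_state, uncoded_bit, encoded_bit). The alpha/beta
    recursions at that stage then run their maxstar over the surviving
    rows only, saving complexity."""
    kept = []
    for (fs, ts, ub, eb) in transitions:
        if known_uncoded_bit is not None and ub != known_uncoded_bit:
            continue  # disagrees with the genie-given information bit
        if known_encoded_bit is not None and eb != known_encoded_bit:
            continue  # disagrees with the noise-free encoded bit
        kept.append((fs, ts, ub, eb))
    return kept
```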

Take care, Rob.

June 13th, 2017 at 9:17 am

How do I implement a downlink scheduling algorithm in Matlab?

June 13th, 2017 at 9:49 pm

Hello Satheesh,

I’m afraid that I don’t have any Matlab code for scheduling, so I can’t help you with this.

Take care, Rob.

June 16th, 2017 at 5:07 pm

Hi Rob,

I need your input in following matter.

I have a collection of channel responses that have equal length and the same channel energy, but different minimum distances.

I implemented a SISO detector and am plotting its performance here for a fixed SNR.

http://wikisend.com/download/919128/RC_ML_Performance.jpg

As you can see, as minimum distance increases (along x-axis), the ML performance improves. However the performance of my detector does not follow the same trend.

My detector does pass the histogram/avg MI test, so now I am wondering what I should do next.

I would appreciate if you can give some input.

Thank you,

~N

June 18th, 2017 at 8:11 pm

Hi Nassi,

That is very interesting - I’m afraid that I don’t have a good idea of what might be going on. If your detector is passing the histogram/averaging test, then this indicates that your LLRs are self-consistent. My suggestion would be to plot the MI of each set of extrinsic LLRs in each iteration against the same x axis. If you get the same u shape - then it suggests that your detector is giving LLRs that are self-consistent, but that your detector is not exploiting all available information. There may be something about your detection that means that it does not exploit the available information very well when the minimum distance is high.

You may also like to plot the same plot for even higher minimum distances - it looks like your turbo detector BER is about to start decreasing, at the right-hand side of your plot. I have seen this kind of thing before when the detector works on the basis of probabilities, rather than LLRs. Even double-precision floating point numbers do not have enough precision to implement turbo detection processes using probabilities. LLRs give much better numerical stability.
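The numerical-stability point can be demonstrated in two lines (Python sketch):

```python
import numpy as np

# Multiplying many modest probabilities underflows double precision...
probs = np.full(200, 1e-3)
product = np.prod(probs)         # 1e-600 underflows to exactly 0.0

# ...whereas accumulating log-probabilities stays well within range,
# which is why LLR/log-domain processing is preferred in turbo detection.
log_sum = np.sum(np.log(probs))  # about -1381.6, perfectly representable
```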

Take care, Rob.

June 20th, 2017 at 4:47 pm

Hi Rob,

Thank you for your comments. I was finding it hard to work with LLRs because I have non-binary inputs. With further testing, I found that my detector does not pass the hist/avg test, so that explains why its performance does not follow the ML curve.

Since my application is a bit different and does not fall into conventional framework of communication theory, I am trying a couple of things to modify my detector.

I am experimenting with making my detector work for a “super trellis”. Can you suggest what changes I need to make to component_decoder.m to make it work when the trellis is like the one below?

Let me explain how I am making this super trellis.

Let us consider all binary combinations of length 3

x1 x2 x3

0 0 0

1 0 0

0 1 0

1 1 0

0 0 1

1 0 1

0 1 1

1 1 1

Now x2 is associated with the FromState (FS) and x3 is associated with the ToState (TS). Also, the Input (IP) is the x3 in a particular binary combination. My channel response is a 3×1 vector [1 1 1]. Therefore, with this info, the resulting trellis is (I am showing the binary combinations on the left to clarify):

BinaryCom     Trellis
x1 x2 x3      FS TS IP OP
0  0  0        1  1  0 +3
1  0  0        1  1  0 +1
0  1  0        2  1  0 +1
1  1  0        2  1  0 -1
0  0  1        1  2  1 +1
1  0  1        1  2  1 -1
0  1  1        2  2  1 -1
1  1  1        2  2  1 -3

Although x1 contributes to the channel output, it has no effect on the state definition. Therefore we have multiple paths between the initial and final states, and each path will have a different path metric.

So, for a trellis that is defined like this, what changes do I need to make to component_decoder.m? I am asking because, without any changes, the decoder/detector does not pass the hist/avg test.

Thank you

~N

June 21st, 2017 at 4:24 pm

Hi Rob,

I made some modifications to component_decoder.m to make it work for the super-trellis that I mentioned in my last post. I modified the encoded_gamma computations. Since there are two possible paths between each state transition (because x1 can be either 0 or 1), my encoded_gamma computation involves selecting the max of the gammas for the two possible paths.

The rest of component_decoder.m remains the same.

Now here is what I am getting for my test input when noise variance is very low.

http://wikisend.com/download/540134/LLR_SuperTrellis.jpg

I have a couple of observations.

1) The extrinsic LLRs do not change much as the a priori information improves. This shows that the a priori LLRs have almost no impact on the extrinsic LLRs.

2) The extrinsic LLRs for input=-1 are much lower than those for input=+1.

What do you think?

Thank you

~N

June 29th, 2017 at 9:49 am

Hi Nassi,

This is interesting. One thing that I would note is that a soft-in soft-out demapper for Gray-coded 16QAM has an EXIT function that is nearly horizontal - it has a slight upwards gradient, which implies that the extrinsic MI increases only slightly with the apriori MI. Perhaps you are observing something similar to this.

On the other hand, I can see that you have many more positive LLRs than negative LLRs, even when the apriori MI is high. Unless you have non-equiprobable bit values, i.e. Pr(bit = 0) != Pr(bit = 1), then I would suggest that something has gone wrong. This would be detected by a histogram vs averaging test - I would suggest this as the next step, if you haven’t done it already.

Take care, Rob.

July 4th, 2017 at 2:06 am

Hi Rob,

Here is an exit chart for my modified-MAP equalizer for three different ISI channels

http://wikisend.com/download/448580/ExitChat_3Channels_3Sigmas.PNG

All these channels have the same length and equal energy. I am plotting the EXIT chart for each of them at three different noise variances: 0.1, 0.5 and 1.

I have two observations here.

1. For a fixed noise variance, all three channels have almost overlapping curves obtained via the hist and avg methods. Based on this observation, what can we say about the minimum distances of these ISI channels? Is it right to conclude that they have similar minimum distances?

2. When the noise variance is increased from 0.1 to 1, the hist and avg curves start to move away from each other. This behaviour is consistent for all three test channels. Does it mean that there is a bug in my code?

I am more concerned about my second observation. Note that these EXIT charts are for the modified MAP detector operating on the reduced state-transition matrix with the modified gamma computation, as I mentioned in my previous post.

What do you think?

Thankyou

~N

July 8th, 2017 at 10:38 pm

Hi Nassi,

Let me comment on your two observations:

1. This might be because the MI depends only on the energy delivered by the channel, rather than on the distribution of that energy across the taps. I recall that you mentioned this observation in one of your previous messages above.

2. A mismatch between averaging and histogram can indicate a bug, or it can indicate a missed opportunity to give better performance. The latter happens when using the Max-Log-MAP or the min-sum algorithms, instead of the Log-MAP or sum-product algorithms, for example. Since you are getting increasing functions and a consistent gap between averaging and histogram, I would suggest that you have a missed opportunity (e.g. an over approximation in your calculations), rather than a bug in your code. Because your averaging curves are below your histogram curves, it seems that something is causing your LLRs to have lower amplitudes than they deserve to have.

Take care, Rob.

September 19th, 2017 at 10:27 pm

Hi Rob,

I understand that for a soft-output detector you suggest using EXIT charts via the hist/avg method. But what about a hard-output detector? I have implemented a local maximum likelihood (ML) detector that makes a hard decision about the current bit by considering a local neighborhood of that bit.

I have a series of N channels indexed by n=1,2,…,N. (The minimum distance increases as n goes from 1 to N; therefore, the ML performance vs n gets better for a fixed noise variance.)

The problem I have is that the BER vs n curve of my local ML detector turns out to be U-shaped. I think this is wrong, but I don’t know how to find the bug in my code.

I would appreciate your comments

Thank you

~N

September 25th, 2017 at 8:04 am

Hi Nassi,

That is an interesting question - I haven’t come across a way of checking the self-consistency of hard decisions, in the same way that the self-consistency of soft-decisions can be checked by comparing the averaging and histogram methods. A ‘u’ shaped curve does sound incorrect to me - all I can suggest is breaking the problem down into smaller parts and testing them individually.

Take care, Rob.