## Matlab EXIT charts

This Matlab code draws EXIT charts for convolutional codes employed as outer, middle and inner codes. Functions are provided to perform encoding and BCJR decoding. Furthermore, functions are provided for generating a priori LLRs having a particular mutual information, as well as for measuring the mutual information of some LLRs.

**main_outer.m** draws an EXIT chart for a convolutional code used as an outer code.

**main_middle.m** draws a pair of 3D EXIT charts for a convolutional code used as a middle code.

**main_inner.m** draws an EXIT chart for a convolutional code used as an inner code.

**convolutional_encoder.m** provides an encoder function for a convolutional code.

**bcjr_decoder.m** provides a soft-in soft-out decoder function for a convolutional code.

**jac.m** provides a function for performing the exact, lookup-table-aided and approximate Jacobian logarithms.

**generate_llrs.m** provides a function for generating Gaussian distributed a priori LLRs.

**measure_mutual_information_histogram.m** measures the mutual information of some LLRs using the histogram method.

**measure_mutual_information_averaging.m** measures the mutual information of some LLRs using the averaging method.

You can download the Matlab code from http://users.ecs.soton.ac.uk/rm/wp-content/exit.zip

Copyright © 2008 Robert G. Maunder. This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

September 8th, 2008 at 11:30 am

I would like to learn about EXIT charts, so could you give the example code to me, please?

September 8th, 2008 at 11:36 am

The example code can be downloaded from http://users.ecs.soton.ac.uk/rm/wp-content/exit.zip

February 20th, 2009 at 2:52 pm

Is there any explanation or documentation for the EXIT code?

Some of the code is hard to understand, such as the M function “generate_llrs”.

In the function “generate_llrs”, where does the line “sigma = (-1.0/0.3037*log(1.0-mutual_information^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935));” come from, and why numbers such as 0.3037? With some documentation the code would be easier to understand.

Thank you very much!

February 23rd, 2009 at 10:21 am

I didn’t get around to writing any documentation I’m afraid. The generate_llrs function uses a Gaussian distributed random number generator to provide a sequence of LLRs. A mean of +0.5*sigma^2 will be used for the 0-valued bits and a mean of -0.5*sigma^2 will be used for the 1-valued bits. In both cases, a standard deviation of sigma is used. The mutual information of the resultant LLRs is an increasing function of sigma. The line of code you refer to models this function and selects a value for sigma based upon the desired mutual information. Hope this helps, Rob.
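As an illustration, the procedure described above can be sketched in Python (variable names are my own and this is only a translation sketch; the released code is in Matlab, and the constants come from the sigma = J^-1(I) approximation quoted in the question):

```python
import numpy as np

def generate_llrs(bits, mutual_information):
    """Gaussian-distributed a priori LLRs with the requested mutual
    information, using the sigma = J^-1(I) approximation quoted above."""
    sigma = (-1.0 / 0.3037
             * np.log2(1.0 - mutual_information ** (1.0 / 1.1064))
             ) ** (1.0 / (2.0 * 0.8935))
    bits = np.asarray(bits)
    # mean +0.5*sigma^2 for 0-valued bits, -0.5*sigma^2 for 1-valued bits
    means = 0.5 * sigma ** 2 * (1 - 2 * bits)
    return means + sigma * np.random.randn(bits.size)
```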

April 8th, 2009 at 8:00 am

Are there any matlab code for turbo encoder and its decoder based on BCJR and iterative decoding?

Thanks.

April 8th, 2009 at 8:40 am

Hi Zeina,

You can get Matlab code for a turbo encoder and decoder from http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

Hope this helps, Rob.

April 17th, 2009 at 11:51 pm

Hello Rob,

I can’t figure out how the averaging method for mutual information works. Can you explain more or give me a reference please?

Thanks

April 20th, 2009 at 8:28 am

The equation for the averaging method uses the average of the entropy of the probabilities indicated by the LLRs. This equation is given just below Figure 4 in…

http://scholar.google.co.uk/scholar?hl=en&lr=&cluster=9622901384089680641

April 27th, 2009 at 10:52 am

Dear Rob,

For the function “generate_llrs”, “sigma = (-1.0/0.3037*log(1.0-mutual_information^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935));”

Is it applicable only to BPSK?

Many thanks!!

April 27th, 2009 at 10:53 am

Hi Rob,

Thanks for the Histogram script.

Would you please let me know why, in the definition of the bin width, you used 3.49*sqrt(variance) and then the exponent -1/3?

Do you have any proof that this definition is the best/optimum?

Thanks

April 27th, 2009 at 6:29 pm

Hello Kwang,

This applies in general. The generate_llrs function gives Gaussian distributed LLRs. Even though you may observe a different LLR distribution in practice, this typically doesn’t matter, since EXIT charts are not very sensitive to the LLR distribution.

Hope this helps, Rob.

April 27th, 2009 at 6:34 pm

Hello Zamkofa,

These values come from the reference…

Scott, D. 1979.

On optimal and data-based histograms.

Biometrika, 66:605-610.

You can find out more about optimal histogram bin widths at…

http://www.fmrib.ox.ac.uk/analysis/techrep/tr00mj2/tr00mj2/node24.html
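As a sketch, Scott’s rule for the bin width of a histogram of N samples is h = 3.49*std*N^(-1/3) (a Python illustration; the released Matlab script averages the widths it obtains separately for the 0-bit and 1-bit LLR populations):

```python
import numpy as np

def scott_bin_width(samples):
    """Scott's rule for histogram bin width: h = 3.49 * std * N^(-1/3)."""
    samples = np.asarray(samples, dtype=float)
    return 3.49 * samples.std() * samples.size ** (-1.0 / 3.0)
```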

Take care, Rob.

April 27th, 2009 at 9:54 pm

Dear Rob,

Many thanks for your reply. I have one more question. Is any modification to the generate_llrs function needed for a flat Rayleigh fading channel?

Many thanks again for your help.

April 28th, 2009 at 1:58 pm

Dear Rob,

Thank you very much.

Your reply is very helpful.

I enjoyed this bin explanation.

Many thanks

April 28th, 2009 at 7:38 pm

Hello again Kwang,

No modification is needed for Rayleigh fading channels - I use this function with Rayleigh fading channels all the time.

Take care, Rob.

May 8th, 2009 at 8:55 am

Dear Rob,

How can I get the numerical approximation of I(A;X)=J(sigma), or sigma=J_inv(I(A;X)), in ten Brink’s paper, which appears in your code in the form

“sigma = (-1.0/0.3037*log(1.0-mutual_information^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935))”

thanks

May 8th, 2009 at 9:48 am

Hi Terence,

I’m afraid that I can’t remember where that expression came from! It is an approximation for sigma_A = J^-1(I_A), where I_A = J(sigma_A) = 1-g_A(sigma_A) and g_A(sigma_A) is plotted in Figure 3 of…

http://scholar.google.co.uk/scholar?hl=en&lr=&cluster=13457646656598220553&um=1&ie=UTF-8&ei=n_gDStXbPNm4jAeZnIzQBA&sa=X&oi=science_links&resnum=1&ct=sl-allversions

To see how accurate the approximation is, you may like to compare the plot of g_A(sigma_A) in Figure 3 of this paper with the plot you get using the Matlab code…

sigma_A=[0:0.01:7];

I_A = (1.0-2.0.^(-0.3037*sigma_A.^(2*0.8935))).^1.1064;

g_A=1-I_A;

semilogy(sigma_A,g_A);

Hope this helps, Rob.

May 8th, 2009 at 1:11 pm

PS, another method for approximating the I_A = J(sigma_A) function is provided in the appendix of…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1291808

May 9th, 2009 at 5:38 am

Dear Rob,

I appreciate you reply, the references are very useful. Thanks a lot!

Terence

May 21st, 2009 at 8:51 am

Dear Rob,

Thanks for the well-written code.

I am trying to plot the EXIT curve for a soft-in soft-out sphere decoder. The histogram method of finding the mutual information works well, but I observed that the results of the averaging method do not match those of the histogram method.

The channel model used is y = sqrt(1/Nt)*H*s + sqrt(1/SNR)*n, where Nt is the number of transmit antennas (equal to the number of receive antennas), H is the channel matrix and s is the symbol vector.

Kindly help me out.

Many thanks, Sham

May 21st, 2009 at 9:49 am

Hmmm, when the histogram and averaging methods give different results it often (but not always) means that your decoder is producing inappropriate LLRs. In other words, the LLRs are lying by either expressing too much confidence, or too little. The averaging method assumes that the LLRs are not lying, while the histogram method avoids this assumption by checking the LLRs against the transmitted bits. You can see if your LLRs are lying by using this Matlab function…

http://users.ecs.soton.ac.uk/rm/wp-content/display_llr_histograms.m

This gives you two plots, one of the histograms and one showing the relationship between the values the LLRs have and the values they should have. This should be a diagonal line, like the one you get using the code…

bits = round(rand(1,1000000));

llrs = generate_llrs(bits, 0.5);

display_llr_histograms(llrs,bits);

Hope this helps, Rob.

May 22nd, 2009 at 4:37 pm

Dear Rob,

Thanks for your suggestion. I will try it out.

sham

October 6th, 2009 at 12:48 am

Hi Rob,

In the bcjr.m file, the required inputs are apriori_uncoded_llrs and apriori_encoded1_llrs. Do we need to calculate them manually, or should we use generate_llrs.m?

October 6th, 2009 at 8:28 am

Hello Mahesh,

You can use generate_llrs.m to generate some random LLRs, or you could interleave the extrinsic LLRs that are output by a concatenated decoder or demodulator.

Hope this helps, Rob.

October 6th, 2009 at 8:43 am

hi rob,

Thanks for your reply. I am using the BCJR as a decoder for a partial response channel, so should I use the output of the channel as LLRs? Do you have an example file that illustrates how to use the BCJR decoder?

October 6th, 2009 at 9:11 am

Hello again Mahesh,

The file main_inner.m provided above gives an example of how to obtain LLRs from the output of a channel and input them into the BCJR decoder.

Hope this helps, Rob.

November 5th, 2009 at 10:51 am

Hi Rob

I have been working on BICM-ID. Now, I would like to learn about EXIT charts. It would be great if you could suggest some good papers or tutorials on EXIT charts. I have the original letter published by ten Brink but it is not that easy to follow.

Thanks.

Chintan Shah

November 5th, 2009 at 10:55 am

Hi Chintan,

Joachim Hagenauer wrote a really good tutorial paper, you can get it from…

http://scholar.google.co.uk/scholar?cluster=9622901384089680641&hl=en

Take care, Rob.

November 5th, 2009 at 11:08 am

Thanks Rob

August 3rd, 2010 at 5:58 am

Hi Rob,

If I’m not mistaken, your example is for serially concatenated codes, right?

Can it be applied to parallel concatenated codes?

Hope to hear from you.

Annena

August 3rd, 2010 at 7:32 am

Hi Annena,

You are correct, main_outer, main_middle and main_inner draw the EXIT functions for a CC used as the outer, middle and inner code in a serial concatenation, respectively. However, the simulation for drawing the EXIT function of a CC used in a parallel concatenation is the same as main_inner. Alternatively, you can take a look at the simulation I wrote explicitly for a parallel concatenation. It’s called main_exit and you can get it from…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

Hope this helps, Rob.

August 11th, 2010 at 2:58 am

Hi again Rob,

Thank you so much for your explanation.

One more question: in turbo codes, we usually simulate a large number of bits to get better performance. In frame-based transmission, what is a suitable value for the frame length? Should we increase the frame length to 10k, increase the number of frames, or both? I understand it will consume computer running time, but if I use the values in your example (frame length = 40, number of frames = 1000), is that enough?

Annena.

August 11th, 2010 at 7:27 am

Dear Rob,

I found your references very good. I am new to this field; can you please explain a little how to plot the EXIT curve for the demapper input and output in Matlab, assuming a simple AWGN channel? If possible please provide a simple Matlab code example. Thanks

August 11th, 2010 at 8:43 am

Hello Annena,

In the UMTS turbo code, the input frame length has to be between 40 and 5114 bits. I always think that an uncoded frame length of the order of 100 bits is ’short’, a length of the order of 1000 bits is ‘medium’, 10000 bits is ‘long’, 100000 is ‘huge’ and 1000000 is ‘impractical’. The thing to keep in mind is that the receiver cannot start decoding until it has received the entire frame. If the frame is very long and takes a long time to transmit, then the receiver has to wait a long time before it can start decoding. This delay imposes a latency upon communication schemes that would otherwise be real-time. The effect on duplex voice communication for example would be a delay between the time when you finish talking and the time when your friend on the other end of the line appears to start talking.

Despite all this, the frame length typically has no effect on the EXIT function (although it does have an effect on the EXIT bands of http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/ ). However, the smoothness of the EXIT function will depend upon the total number of bits you simulate. For this reason, you can get a smoother EXIT function by increasing either the frame length or the number of frames. As I recall, simulating 1000 frames, each comprising 40 bits should give you a fairly smooth EXIT function.

Hope this helps, Rob.

August 11th, 2010 at 9:50 am

Hi Ideal,

You can get some Matlab code for plotting the EXIT function of a demapper from http://users.ecs.soton.ac.uk/rm/wp-content/QPSKEXIT.zip . This draws the EXIT function for QPSK using natural mapping. If you would like to use a different modulation scheme, all you need to do is change constellation_points and bit_labels in modulate.m and soft_demodulate.m, as well as bit_count in main_exit.m.

Hope this helps, Rob.

August 12th, 2010 at 1:39 am

Dear Rob,

I am really thankful for your reply. It looks fine to me. If possible, can you please tell me how I can mathematically analyse the QPSKEXIT code? I mean, I want to do a mathematical analysis so that I can understand the code very well. I hope you can provide me with a good reference for such an analysis.

Again I am thankful to you for your help. Thanks

August 12th, 2010 at 5:43 am

Dear Rob,

Thanks again for your reference code. It would be even more helpful if you could point me to a mathematical analysis of the whole program so that I can understand it very well. A mathematical derivation would make it easier for me. Please tell me about some good examples of such analysis.

Thanks again for your help.

August 12th, 2010 at 8:27 am

Hi Ideal,

You can read about the mathematics of soft demodulation in these papers:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=682910

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1025496

My Matlab code works with the logarithm of probabilities, rather than with the probabilities themselves. This improves the precision of the calculations and actually simplifies them. Therefore, the multiplication of probabilities in equation 7 of the second paper is replaced with the addition of log-probabilities. Also, the summation in equation 7 is replaced with the Jacobian logarithm. In my code, ln[P(y_t|x_t)] is calculated in the line:

symbol_probabilities = -abs(rx-channel*constellation_points).^2/N0;

This can be derived from the PDF of the Gaussian distributed noise.
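In more detail, the complex Gaussian noise PDF p(y|x) ~ exp(-|y - h*x|^2/N0) gives ln p(y|x) = -|y - h*x|^2/N0 up to an additive constant, which is what the quoted line computes. A Python sketch of the same calculation (names are illustrative):

```python
import numpy as np

def symbol_log_probs(rx, channel, constellation_points, N0):
    """ln P(y|x), up to a constant, for each candidate constellation point x:
    the Gaussian noise PDF p(y|x) ~ exp(-|y - channel*x|^2 / N0) gives
    ln p(y|x) = -|y - channel*x|^2 / N0 + const."""
    return -np.abs(rx - channel * np.asarray(constellation_points)) ** 2 / N0
```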

You can read about the EXIT functions of soft demodulation in this paper:

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1258849

Hope this helps, Rob.

August 13th, 2010 at 7:41 am

Thanks Rob for your great help,

I understand the working of the EXIT curve for the demapper, but I need to ask one question.

We know that I(A;X) = J(sigma) and sigma = J^-1(I(A;X)).

My confusion is that, to calculate sigma, we take the inverse of the mutual information between the a priori LLRs and the bits; but to generate the a priori LLRs we also need sigma. So how can we obtain sigma for the a priori LLRs in order to calculate the mutual information? Can you briefly explain, or just send me the Matlab program that generates the a priori LLRs and then finds sigma by sigma = J^-1(I(A;X))?

Hoping for a favourable response, thanks.

August 13th, 2010 at 9:12 am

Hi Ideal,

I’m afraid I’m not quite sure what you’re getting at. In these simulations the function I=J(sigma) is not needed at all. The function sigma=J^-1(I) is used to generate a priori LLRs. More specifically, the generate_llrs function has a parameter called mutual_information. The value of this parameter is used as the parameter of the function sigma=J^-1(I), in order to obtain a value for sigma. This is then used as the standard deviation, when generating a priori LLRs having a Gaussian distribution. You can see all of this in the code that can be downloaded from above.

Take care, Rob.

August 15th, 2010 at 2:39 am

Thanks Rob,

Please tell me one thing about sigma = J^-1(mutual information): how can we calculate the mutual information without the a priori LLRs, since mutual information = I(X; a priori LLRs)? I am really confused that, to get the mutual information between the a priori LLRs and the bits, we need sigma, yet sigma = J^-1(mutual information).

How can we connect these two inter-related quantities? Sorry for these questions, but I am really confused. Thanks

August 16th, 2010 at 8:08 am

Hi Ideal,

The function sigma=J^-1(mutual information) is used in the generate_llrs function. Basically, we are asking this function to generate some random LLRs that have a mutual information equal to the value that we specify. The function sigma=J^-1(mutual information) is required to work out the standard deviation of the Gaussian distribution that corresponds to the mutual information we have asked for.

Hope this helps, Rob.

August 16th, 2010 at 1:09 pm

Thanks for the fruitful comments. I appreciate your feedback.

August 20th, 2010 at 12:55 am

Dear Rob,

Do you have Matlab code for an LDPC EXIT curve, or for the EXIT curve of on-off keying modulation in optical communication? Thanks

August 20th, 2010 at 8:21 am

Hi Ideal,

I’m afraid I don’t have any code for LDPC EXIT functions - although I do intend to write some when I get around to it. You can simulate on-off keying by downloading http://users.ecs.soton.ac.uk/rm/wp-content/QPSKEXIT.zip and changing the constellation points and bit labels in modulate.m and soft_demodulate.m to…

constellation_points = [0;1];

bit_labels = [0;1];

Take care, Rob.

August 23rd, 2010 at 2:29 am

Dear Rob,

Thanks. Do you have code for capacity calculation for binary pulse position modulation (PPM) or M-ary PPM, or any other help in this regard?

Thanks

August 23rd, 2010 at 8:56 am

Hi Ideal,

I’m afraid that I don’t have any of that stuff.

Take care, Rob.

September 7th, 2010 at 4:43 am

Hi Rob,

In your main_exit.m file, the output from component_decoder.m, which is y_e, is measured as the extrinsic information of the decoder. As I understand it, after the MAP algorithm we get L_map, and L_extr is then given as

L_extr = L_map - L_Appr - L_c*y, where y is the received systematic bits.

My question is: do we measure the mutual information using L_extr or L_map?

September 7th, 2010 at 8:07 am

Hi Annena,

I think this question refers to the UMTS turbo code that is available at…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

You measure it using the extrinsic information L_extr. My component_decoder.m outputs this directly, rather than outputting the a posteriori information L_map. For this reason, there is no need to use the calculation L_extr= L_map -L_Appr-L_c*y.

Hope this helps, Rob.

September 14th, 2010 at 7:51 am

Hi Rob,

In my simulation, the results of histrogram method and average method are not same. The line of the histrogram method is little higher than that of the average method. I want to know which method is more accurate, or there are some mistakes in my programme,because I rewrote your programme by C language.

Thank you very much.

Best regards.

September 14th, 2010 at 8:17 am

Hi Michael,

The histogram and averaging methods make different assumptions, so sometimes one method will be more accurate, sometimes the other method will be more accurate. The histogram method works well when you input a long vector of LLRs into it - maybe 10000, 100000 or 1000000. The averaging method works well when all the components of your scheme are optimal. I suspect that there are no problems with your simulation - you can probably get the two EXIT functions to match better by running a longer simulation and inputting more LLRs at a time into the histogram method.

Hope this helps, Rob.

September 15th, 2010 at 5:15 am

Hi Rob,

Thank you for your answer. You said “The averaging method works well when all the components of your scheme are optimal”; I have a question about this. Under what circumstances can the components be regarded as optimal? When the decoding methods of the inner and outer decoders are optimal, when the channel state information is known at the receiver, or some other condition?

Thank you very much.

Best regards.

September 15th, 2010 at 8:09 am

Hi Michael,

You’ve got it. Examples of when a decoder will be sub-optimal are when the channel estimation is imperfect or when a low-complexity approximation of the decoder is used, such as the min-sum or the max-log-map. If you are using the optimal decoding algorithm and all of your channel state information is correct then the averaging method of measuring mutual information is appropriate.

Take care, Rob.

September 17th, 2010 at 3:49 pm

Dear Prof,

On May 21st, 2009, Sham asked about an inconsistency between the averaging and histogram methods with regard to a sphere decoder, in which he was using a full H matrix instead of the single-tap h used by Stephan ten Brink and Hagenauer in their papers.

In the papers they prove that L_a is consistent Gaussian (eq. 11 of the Hagenauer paper you are referring to), that is, the variance equals 2x the mean and the distribution is Gaussian. Eq. 18 of his paper, which is the basis for the averaging method, works only if eq. 11 is true. Otherwise you have to use formulas 12 and 18 of ten Brink (you need the histogram technique to do this). Do you agree?

If so, the question I have is: once you use a non-single-tap H (as with ISI or a time-varying H), how can you be sure that the distribution of L_a is Gaussian with variance = 2*mean? (I can believe that it’s Gaussian, but how do we prove variance = 2*mean?)

regards

devan

September 19th, 2010 at 1:36 pm

Hi Devan,

You are correct, if the LLRs satisfy the consistency condition then you can use the averaging method to measure their mutual information. Otherwise, you should use the histogram method.

You can tell if your LLRs satisfy the consistency condition by using this Matlab function…

http://users.ecs.soton.ac.uk/rm/wp-content/display_llr_histograms.m

This gives you two plots, one of the histograms and one showing the relationship between the values the LLRs have and the values they should have. This should be a diagonal line, like the one you get using the code…

bits = round(rand(1,1000000));

llrs = generate_llrs(bits, 0.5);

display_llr_histograms(llrs,bits);

The more LLRs you input into this function, the higher the resolution of the histograms will be and the better it will work.
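The consistency condition discussed above (conditional variance equal to twice the magnitude of the conditional mean) can also be checked numerically. A minimal sketch, assuming Gaussian LLRs with means +/-0.5*sigma^2 and standard deviation sigma as in generate_llrs:

```python
import numpy as np

# Empirical check of the consistency condition var = 2*|mean| for
# Gaussian LLRs with means +/-0.5*sigma^2 and standard deviation sigma.
sigma = 2.0
bits = np.random.randint(0, 2, 200000)
llrs = 0.5 * sigma ** 2 * (1 - 2 * bits) + sigma * np.random.randn(bits.size)

mean0 = llrs[bits == 0].mean()            # close to +0.5*sigma^2 = 2
var0 = llrs[bits == 0].var()              # close to sigma^2 = 4
consistency_gap = abs(var0 - 2.0 * mean0) # small for consistent LLRs
```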

Hope this helps, Rob.

September 20th, 2010 at 10:43 am

Thanks Prof. I’ll try that.

I plan to use EXIT chart analysis to study the convergence behaviour of a turbo equalizer when the channel varies in time (Rayleigh) and frequency. It is hard to work out the probability density functions of the equalizer output LLRs in this case, and hard to believe that they will be consistent Gaussian distributions. The dilemma I have is that the whole EXIT chart literature makes the consistent-Gaussian assumption, with only a passing mention that, even when it does not hold, the method is found to work fine.

Now the question is: what are the scenarios in which EXIT chart analysis is not applicable? For example, if my channel gain is non-unity, or if I estimate the channel on the fly? Can you please suggest a reference on this?

warm regards

devan

September 20th, 2010 at 11:45 am

Hmmm, in my experience, the distribution of the LLRs is not very important - all that matters is that the LLRs satisfy the consistency condition. I would say that EXIT chart analysis is applicable whenever you use iterative decoding. However, the EXIT function may not always accurately predict the path of the trajectories. This depends on how you model the LLRs. I’m afraid I can’t think of any references about when EXIT charts fail.

Hope this helps, Rob.

September 20th, 2010 at 4:06 pm

Thanks for your informative comments.

warm regards

devan

September 27th, 2010 at 7:40 pm

Hi,

I would like to learn about the zigzag-hadamard (ZH) code based on EXIT chart. So could you give the example code to me, please?

Best

September 28th, 2010 at 8:12 am

Hi Salim,

I’m afraid I don’t have any Matlab code for the ZH.

Sorry about that, Rob.

October 17th, 2010 at 12:53 am

Dear Rob,

Can we use the averaging mutual information function to find the EXIT chart of a BPSK demapper? If so, please can you provide me with a reference? Thanks

October 17th, 2010 at 1:27 pm

Hi Ideal,

You can, but since the BPSK demapper is unable to exploit any a priori information, its EXIT function is a horizontal line. I’m afraid I can’t find any references that show this.

Take care, Rob.

October 17th, 2010 at 11:54 pm

The averaging method for the mutual information means Ie = 1 - (1/N)*sum(Hb(p(x))), where

p(x=0) = exp(llr(x))/(1 + exp(llr(x))) and p(x=1) = exp(-llr(x))/(1 + exp(-llr(x))),

and Hb = -p(x=0)*log2(p(x=0)) - p(x=1)*log2(p(x=1)).

Am I right, and can I use this to find the demapper EXIT curve for any mapping scheme?

October 18th, 2010 at 8:19 am

Hi Ideal,

Yep - you can use the averaging method to find the EXIT curve for any mapping scheme.
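For reference, the formulas in the question above translate into a short Python sketch (this is only an illustration; the released measure_mutual_information_averaging.m is the authoritative Matlab version):

```python
import numpy as np

def measure_mi_averaging(llrs):
    """Averaging method: I = 1 - mean of the binary entropies Hb(p(x=0)),
    where p(x=0) is the probability implied by each LLR."""
    L = np.clip(np.asarray(llrs, dtype=float), -30.0, 30.0)  # avoid overflow
    p0 = 1.0 / (1.0 + np.exp(-L))       # p(x=0) = e^L / (1 + e^L)
    hb = -p0 * np.log2(p0) - (1.0 - p0) * np.log2(1.0 - p0)
    return 1.0 - hb.mean()
```

Uninformative LLRs (all zero) give a mutual information of 0, while very confident LLRs give a value close to 1.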

Take care, Rob.

October 20th, 2010 at 12:16 am

Dear Rob,

Thanks for the reply. Please can you suggest a reference for the histogram method, because I want to understand the basics of this method. Thanks

October 20th, 2010 at 8:28 am

Hi Ideal,

The histogram method is equation 19 in…

http://scholar.google.co.uk/scholar?cluster=15764897033271139190&hl=en&as_sdt=2000

Hope this helps, Rob.

November 1st, 2010 at 6:20 am

Dear Rob, thanks for the reply.

I have two questions.

1. I am using your Matlab code for the mutual information by the histogram method. I am getting a slightly different answer from the averaging function: at the point when the a priori mutual information is 0.8, the extrinsic mutual information becomes one, i.e. (Ia,Ie) = (0.8,1) for the OOK scheme. I don’t know why.

2. I also don’t understand one line in your code, i.e.

bin_width = 0.5*(3.49*sqrt(llr_0_variance)*(llr_0_noninfinite_count^(-1.0/3.0)) + 3.49*sqrt(llr_1_variance)*(llr_1_noninfinite_count^(-1.0/3.0)));

I mean, how did you define it? Thanks for your help

November 1st, 2010 at 10:25 am

Hi Ideal,

1) This does not surprise me - the averaging and the histogram methods will typically give different results. This is because they make different assumptions. The averaging method assumes that the LLRs satisfy the consistency condition. The histogram method assumes that you have provided it with a sufficiently high number of LLRs.

2) This line is trying to determine the best bin width to use for the histogram. The constants in this equation come from the reference…

Scott, D. 1979.

On optimal and data-based histograms.

Biometrika, 66:605-610.

You can find out more about optimal histogram bin widths at…

http://www.fmrib.ox.ac.uk/analysis/techrep/tr00mj2/tr00mj2/node24.html

Take care, Rob.

November 2nd, 2010 at 12:35 am

Dear Rob,

I really appreciate your response. So the averaging method is applicable if the LLRs satisfy the consistency condition, and if the distributions of the LLRs for bit 0 and bit 1 do not match this then we can’t really use the averaging method. Am I right?

In that case, is it better to use the histogram method with a large number of LLRs? Am I right?

If not, can you please suggest how to work with the case of a non-symmetric distribution?

November 2nd, 2010 at 11:09 am

Hello again,

That’s right. Please note that both methods assume that 0s and 1s occur equally often in the transmitted bit sequence, but I think that is probably the case for you.

Take care, Rob.

November 11th, 2010 at 9:09 am

Hi Rob,

can I ask you one question?

Usually we calculate the LLR as, for example, BPSK: LLR = (2/sigma^2)*received signal; and OOK: LLR = (2*received signal - 1)/(2*sigma^2).

Can you please tell me how we can derive the LLR for a suboptimum system, or is it straightforward? Thanks

November 11th, 2010 at 12:15 pm

Hi Ideal,

I’m not sure what you mean by a suboptimal system. I always obtain the LLRs in a manner similar to the one you describe.

Take care, Rob.

December 20th, 2010 at 5:42 am

Hi Rob,

I want to calculate the performance of the system when the channel state information is unknown at the receiver. Can you please refer me to any helpful document in this regard, which calculates the LLR without CSI, with a simple derivation and explanation? Thanks

December 20th, 2010 at 9:34 am

Hi Ideal,

If you want to have unknown channel state information at the receiver, then you will need to use a non-coherent modulation scheme, such as Differential Phase Shift Keying. Here is a reference that I found on the soft-demodulation of DPSK…

http://scholar.google.co.uk/scholar?cluster=11051789109371871926&hl=en&as_sdt=2000

Take care, Rob.

January 24th, 2011 at 7:44 pm

Hi Rob,

Thank you for your great programs. They are a big help for me in learning how EXIT charts work. However, I still don’t understand how to apply EXIT charts to higher-order modulation. Since the mapping bits in higher-order modulated symbols are unequally protected, the LLRs may not satisfy the consistency condition (please correct me if I am wrong). For example, in ten Brink’s paper “Design of low density parity check codes for modulation and detection”, he explained how to compute sigma_ch for BPSK in equation (3), but didn’t explain how to get it for higher-order modulation. As I understand it, the deviation of the LLRs is the most important factor in getting the EXIT chart, is that right?

Can you please help me with that or provide any reference? Many thanks.

January 25th, 2011 at 9:37 am

Hello Thuy,

You are quite right, the equation provided by ten Brink is only valid for BPSK. In fact, for higher order demodulators it is necessary to use a different set of equations to obtain the LLRs. Furthermore, higher order demodulators can accept a priori LLRs and engage in iterations with the decoder. You can read about this in the following papers:

http://scholar.google.co.uk/scholar?cluster=12245801284537151734&hl=en&as_sdt=0,5

http://scholar.google.co.uk/scholar?cluster=5800185907046683278&hl=en&as_sdt=0,5

http://ieeexplore.ieee.org/search/srchabstract.jsp?tp=&arnumber=765396&queryText%3DBit-Interleaved+Coded+Modulation+with+Iterative+Decoding%26openedRefinements%3D*%26searchField%3DSearch+All

Here’s some Matlab code that I wrote to perform modulation and soft demodulation for arbitrary QAM/PSK/ASK constellations…

http://users.ecs.soton.ac.uk/rm/wp-content/modulate.m

http://users.ecs.soton.ac.uk/rm/wp-content/soft_demodulate.m

All you need to do is change the constellation_points and bit_labels matrices to match your constellation diagram.

Hope this helps, Rob.

January 26th, 2011 at 11:41 pm

Hi Rob,

Thank you very much for your quick reply. However, can you please explain to me the difference between generate_llrs in QPSKEXIT and generate_llrs in EXIT (BPSK)? I don’t see a difference in terms of mutual information between the BPSK and QPSK cases. I also think that the a priori LLRs need to be fed from the decoder, so why do you generate them randomly?

Sorry for my silly questions

Thanks. Thuy

January 27th, 2011 at 9:40 am

Hello again,

generate_llrs should be the same regardless of what you are using it for - all it does is generate Gaussian-distributed random LLRs having a particular mutual information. Random LLRs should only be used when drawing an EXIT chart. As you say, the a priori LLRs come from the channel decoder in a real scheme.

Take care, Rob.

March 20th, 2011 at 3:11 pm

Hi,

The code is very easy to read and understand. I found that the CC is a (3,2) code. Then I ran the program and got an EXIT chart which unfortunately does not agree with the curve in figure 3 of “Stephan ten Brink, Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes, IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 49, NO. 10, OCTOBER 2001”. I mean the IE is much larger than it is in Figure 3, even though I set Eb/N0 to be the same.

Did I make a mistake somewhere?

March 21st, 2011 at 10:56 am

Hello Pop,

As you say, my code is a recursive systematic convolutional code, having the code polynomials (G_r,G) = (3,2). However, my code has a coding rate of 1/2. By contrast, Stephan ten Brink’s code has a coding rate of 2/3 because he punctured half of the parity bits. You should be able to get the same plot as Stephan ten Brink by puncturing some of the parity bits and by converting from SNR to Eb/N0 according to Eb/N0 = SNR - 10*log10(0.5).

I hope this helps, Rob.
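The puncturing idea can be sketched in a few lines of Python (the puncturing pattern here is a hypothetical example, not necessarily the one ten Brink used):

```python
import numpy as np

# Hypothetical example: puncture a rate-1/2 systematic code to rate 2/3 by
# transmitting every systematic bit but only every other parity bit.
systematic = np.array([1, 0, 1, 1, 0, 0])
parity = np.array([0, 1, 1, 0, 1, 0])

punctured_parity = parity[::2]  # keep parity bits at positions 0, 2, 4
rate = len(systematic) / (len(systematic) + len(punctured_parity))  # 6/9 = 2/3

# At the receiver, the punctured positions are filled with zero-valued
# (uninformative) LLRs before decoding.
received_parity_llrs = np.array([2.3, -1.1, 0.8])  # hypothetical demodulator output
depunctured = np.zeros(len(parity))
depunctured[::2] = received_parity_llrs
```

The zero-valued LLRs express that the decoder has no information about the bits that were never transmitted.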

April 22nd, 2011 at 3:30 am

Hi Rob,

I'm using your measure_mutual_information_histogram.m file in my system. I noticed that in your simulation, the extrinsic value is IE(x, LLR), where x is a binary bit (0,1). If I want to input x as (+1, -1), with LLR being the LLR value rather than a demodulated BPSK value, which part of your program should I change?

April 26th, 2011 at 10:20 am

Hi Mila,

You can achieve this without modifying the function. More specifically, you can use

measure_mutual_information_histogram(LLRs, (1-x)/2);

I hope this helps, Rob.

May 12th, 2011 at 7:34 am

Hi Rob,

Thank you for the feedback.

I want to know why your EXIT performance is so good at low SNR. If I run main_inner.m at SNR = 2 dB, for example, the EXIT chart is almost flat at 1. Another thing: does your BCJR algorithm use Log-MAP or Max-Log-MAP? I just want to confirm with you.

regards,

Mila.

May 12th, 2011 at 9:26 am

Hi Mila,

Perhaps you are getting confused between Es/N0 and Eb/N0. My simulation uses SNR=Es/N0, which depends on the energy per symbol, rather than the energy per bit. You can convert from one to the other according to

Eb/N0 = Es/N0 - 10*log10(R*log2(M))

where R is the coding rate of the channel code and M is the number of constellation points used by the modulator.

By default, my code uses the log-MAP. However, you can change it to use the Max-Log-MAP by setting mode=2 in jac.m.

Take care, Rob.
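This conversion can be sketched in a couple of lines of Python:

```python
import math

def esn0_to_ebn0(esn0_dB, R, M):
    """Eb/N0 [dB] = Es/N0 [dB] - 10*log10(R * log2(M)), where R is the coding
    rate and M is the number of constellation points."""
    return esn0_dB - 10.0 * math.log10(R * math.log2(M))

# e.g. a rate-1/2 code with BPSK (M = 2): Eb/N0 sits about 3 dB above Es/N0,
# while a rate-1/2 code with 16-QAM (M = 16) gives Eb/N0 about 3 dB below Es/N0.
```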

May 22nd, 2011 at 1:40 am

Hi,

I want to know, in case I want to apply a coding rate of 1, where the encoder only transmits the encoded bits, how can I modify bcjr_decoder.m?

Anne

May 23rd, 2011 at 8:30 am

Hi Anne,

You just need to delete all instances of apriori_encoded1_llrs and aposteriori_encoded1_llrs, as well as any blocks of code that operate on these vectors.

Hope this helps, Rob.

June 6th, 2011 at 11:43 pm

Hi Rob,

I have read all the questions and comments. I really appreciate that you provide so much informative material to us. Now I am trying to reproduce a result based on the paper "Achieving near capacity on a multiple-antenna channel" by S. ten Brink.

However, I have a problem: the curve of the list sphere decoder that I plot in the EXIT chart using the averaging method drops down a bit. In other words, the curve cannot keep going up. There must be something wrong. I have run a lot of simulations, but the results are always the same.

Do you have any idea about that? Do I need to set some thresholds on the LSD output LLRs?

Thanks a lot. I am looking forward to your response

June 9th, 2011 at 10:29 am

Hi Alex,

It sounds like there is a bug in your simulation. There are two ways of confirming this:

1. You can compare the EXIT function that is obtained using the histogram method of measuring mutual information with that obtained using the averaging method. If the two EXIT functions are significantly different, then you know there is a bug in your simulation.

2. You can see if the extrinsic LLRs produced by your inner decoder are telling lies using this Matlab function…

http://users.ecs.soton.ac.uk/rm/wp-content/display_llr_histograms.m

This gives you two plots, one of the LLR histograms and one showing the relationship between the values the LLRs have and the values they should have. You know that you have a problem if the second plot is not a diagonal line, like the one you get using the code…

bits = round(rand(1,1000000));

llrs = generate_llrs(bits, 0.5);

display_llr_histograms(llrs,bits);

Of course, I’ve got no idea what the bug might be. A good way to track it down would be to manually look at the extrinsic LLR values that are produced by your inner decoder when it is provided with high-quality a priori LLRs and when the SNR is high.

Take care, Rob.
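The two measurement methods that this check relies on can be sketched in Python as follows (these are illustrative reimplementations of the ideas, not the actual measure_mutual_information_*.m files):

```python
import numpy as np

def mi_averaging(llrs, bits):
    """Averaging method: I ~ 1 - E[log2(1 + exp(-(2b-1)*L))].
    Only valid when the LLRs satisfy the consistency condition."""
    signs = 2.0 * np.asarray(bits) - 1.0
    return 1.0 - np.mean(np.logaddexp(0.0, -signs * llrs)) / np.log(2.0)

def mi_histogram(llrs, bits, nbins=100):
    """Histogram method: estimate p(L|b) for b = 0, 1 and evaluate the
    mutual information integral numerically."""
    edges = np.linspace(llrs.min(), llrs.max(), nbins + 1)
    p0, _ = np.histogram(llrs[bits == 0], bins=edges, density=True)
    p1, _ = np.histogram(llrs[bits == 1], bins=edges, density=True)
    widths = np.diff(edges)
    p = 0.5 * (p0 + p1)  # marginal PDF of the LLRs
    terms = np.zeros_like(p)
    for pb in (p0, p1):
        m = (pb > 0) & (p > 0)
        terms[m] += 0.5 * pb[m] * np.log2(pb[m] / p[m])
    return np.sum(terms * widths)
```

When the decoder is bug-free and the LLRs are consistent, the two functions should agree closely; a large discrepancy is the symptom Rob describes.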

July 4th, 2011 at 10:35 am

Hello,

I have a problem in downloading some of documents from google scholar,

Can anyone tell me why,

for exemple: http://scholar.google.co.uk/scholar?hl=en&lr=&cluster=9554164565061926428

Thanks

July 4th, 2011 at 10:49 am

Hello Rob,

I have another question for you, but now it's about the main_middle.m code. I am trying to use it to reproduce two-dimensional curves rather than 3D ones. But unfortunately I get a surprising result, even though all I changed is the last part, which plots the figures:

surf(IA_uncoded,IA_encoded,IE_encoded) --> plot(IA_encoded,IE_encoded)

July 4th, 2011 at 11:57 am

Hello Sonia,

Here is an IEEE Xplore link to that paper…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1291808

Take care, Rob.

July 4th, 2011 at 12:04 pm

Hi Sonia,

The trouble you have when plotting two dimensional EXIT functions is caused because IA_encoded and IE_encoded are both matrices. In order to plot a two-dimensional EXIT function you need to project the three-dimensional functions into two dimensions. You can see how to do this in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1499068

Take care, Rob.

July 4th, 2011 at 7:16 pm

Hello Rob,

Thank you a lot for your help. I will try to obtain the results based on that document.

Thanks a lot.

July 9th, 2011 at 6:46 pm

Hello Rob,

In the file main_inner.m, I can't understand why you put

apriori_encoded1_llrs = (abs(rx1+1).^2-abs(rx1-1).^2)/N0;

Really, it's urgent…

July 9th, 2011 at 7:05 pm

Hi Sonia,

This is the BPSK demodulator. It converts the received signal into LLRs.

Take care, Rob.
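As a quick numerical sanity check (a Python sketch, assuming real-valued received samples), the expression is just the familiar 4*rx1/N0 BPSK LLR in expanded form:

```python
import numpy as np

rng = np.random.default_rng(0)
N0 = 0.5
rx1 = rng.standard_normal(1000)  # hypothetical real-valued received samples

# The expression from main_inner.m ...
llrs = (np.abs(rx1 + 1) ** 2 - np.abs(rx1 - 1) ** 2) / N0
# ... expands to (rx1^2 + 2*rx1 + 1 - rx1^2 + 2*rx1 - 1)/N0 = 4*rx1/N0,
# the standard BPSK LLR for symbols at +/-1 in AWGN with PSD N0.
assert np.allclose(llrs, 4 * rx1 / N0)
```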

July 10th, 2011 at 6:31 am

Hi Rob,

Thank you for your help. But, if I'm not mistaken, you have the inner and outer main M-files, which correspond to the inner and outer encoders respectively in the concatenated coding scheme. So why do you use the main_middle.m file?

Awaiting your response

Sonia

July 11th, 2011 at 2:19 am

Dear Prof

I have read several of your posts, and these are extremely helpful in understanding iterative coding and exit charts.

I would really appreciate your advice on how to confirm whether my information is being properly decoded after each iteration of turbo decoding.

Thanks

July 11th, 2011 at 10:25 am

Hello again Sonia,

In some schemes, there is a serial concatenation of three component codes, namely the outer, middle and inner codes. This simulation is for the case where a convolutional code is used as the middle code. You can find out more about three-stage concatenations in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1188420&tag=1

Take care, Rob.

July 11th, 2011 at 10:28 am

Hello Swap,

The way to confirm that your decoder is operating correctly is to use both the histogram and averaging methods to measure the mutual information of its output LLRs. Both of these methods should give similar measurements. If they don’t, then you know there is something wrong with your decoder. Another test that you can do is described in this comment above…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-246

Take care, Rob.

July 11th, 2011 at 11:04 am

Hello Rob,

Thank you a lot, but I am trying to simulate a transmission chain using two concatenated codes, inner and outer, so I am trying to plot BER = f(SNR) for the whole chain after decoding. I have only one main file, where I put an inner code, an outer code and a modulator. But how can I introduce the SNR, in order to obtain BER = f(SNR) rather than BER = f(IA)?

Awaiting your help

Sonia

July 11th, 2011 at 12:47 pm

Hi Sonia,

You will need to program an iterative decoder, which operates the inner and outer decoders alternately. You can see an example for a parallel concatenation in main_ber.m of…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

Take care, Rob.

July 11th, 2011 at 6:14 pm

Thank you Rob

Just for the sake of clarity: in your program main_ber.m, you measured I(a_a) in order to assess the performance of your decoder, and compared it with best_IA.

However, I am wondering why you haven't considered a_p, since a_p is the final output LLR used to judge the decoder performance. Why is a_a (the a priori LLR) considered? It is certainly the output of one decoder, but it is not the final output.

Thanks

July 11th, 2011 at 6:23 pm

Um, I can’t remember why I programmed it this way. As you say, it might be better to use a_p to make the decision. Perhaps I chose to do it this way to avoid having to add a_a to a_e until the very end of the decoding process. However, avoiding this addition doesn’t really save much complexity…

Take care, Rob.

July 13th, 2011 at 11:03 am

Again thanks for your previous reply.

According to your previous post on May 21st, 2009 at 9:49 am,

you mentioned the function display_llr_histograms and the relationship between the LLR values that are present and the values that they should have.

If, by using this function, I get a very short diagonal line, but by directly using generate_llrs with the same mutual information value I get a long diagonal, does that indicate that the other LLRs have been lost in the channel, or am I doing something wrong?

July 13th, 2011 at 12:28 pm

Hi Swap,

The length of the line will depend on the mutual information of your LLRs. The higher the mutual information, the greater the magnitude of the LLRs and the longer the diagonal line will be. Since the line is diagonal, you can have some confidence that your decoder is working well.

Take care, Rob.

July 14th, 2011 at 3:05 pm

Hi Rob

Thanks once again

I wanted to ask about the significance of the tunnel in EXIT charts, as I am trying to plot them.

Firstly, does the number of iterations have any significant effect on the tunnel width? Secondly, if my tunnel closes at some intermediate value before the mutual information reaches 1, what should I infer in a physical sense?

July 14th, 2011 at 4:20 pm

Hi Swap,

The tunnel width depends on the channel SNR, not on the number of iterations you perform. The number of iterations affects how far through the tunnel that the trajectory gets. If the trajectory gets to the right hand side of the EXIT chart, then a low BER results. If the tunnel is closed, then the trajectory is prevented from reaching the right hand side of the EXIT chart, no matter how many iterations are performed. This results in a high BER.

Take care, Rob.

July 28th, 2011 at 1:54 pm

Hello Rob

Thanks for your kind support. Since I am a beginner, I may sound odd at times - sorry for that.

I wanted to know the significance of the peaks of the histograms in display_llr_histograms, as I am getting multiple histogram peaks while plotting the curves.

Do these peaks have some relation to the frame length or SNR?

Apart from this, can you also point me to some good PDFs that would be helpful in understanding EXIT charts and trajectories?

swap

July 28th, 2011 at 2:08 pm

Sorry Rob

I forgot to include one more query in my previous post: why does the diagonal in display_llr_histograms appear to be kinky at the edges?

Thanks once again

swap

July 29th, 2011 at 8:30 am

Hi Swap,

I think that the shape of the histograms depends on the characteristics of your code and channel - different codes and channels give different shapes. I don’t think it depends so much on the frame length and SNR, although I haven’t looked into this in detail before. Multiple peaks in the histograms are not a problem, so long as the LLRs satisfy the consistency condition - i.e. you get a diagonal line in the second plot generated by display_llr_histograms.m. To answer your second question, this diagonal line will have spurious results at its two ends because LLRs having these high magnitudes are rare and so there is less data to average over.

Take care, Rob.

August 4th, 2011 at 12:57 pm

Dear Rob

Can you please provide the reference for the measure_mutual_information_histogram and measure_mutual_information_averaging methods?

The reference link you provided is not working:

http://scholar.google.co.uk/scholar?cluster=15764897033271139190&hl=en&as_sdt=2000

August 4th, 2011 at 1:29 pm

Hi Swap,

Here is the paper…

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.291&rep=rep1&type=pdf

The averaging method is the equation immediately below figure 4. The histogram method is equation 1.

Take care, Rob.

August 30th, 2011 at 6:54 pm

Hello Rob

I am back again with your function display_llr_histograms. I want to test the output of my modulator and demodulator after I pass bits through the channel and generate the LLRs.

Can I test the LLRs for the performance of the modulator/demodulator using display_llr_histograms, or is it only valid in the case of decoders?

August 30th, 2011 at 7:04 pm

Hi Swap,

It can be used to verify the correct operation of soft demodulators.

Take care, Rob.

October 4th, 2011 at 1:09 am

Hi,

I need simple wireless network code. Where I can download it? thanks

October 4th, 2011 at 8:19 am

Hi Rob,

I would like to apply a mapper with a doped accumulator as the inner code. As you know, it uses a rate-1/2 convolutional code, but every nP-th bit is replaced with a parity bit, so it becomes rate 1. My questions are:

1. If I apply your bcjr_decoder.m file, I have to change the trellis so that there are only uncoded_bits and encoded_bits parameters. Is it correct to use the calculation of gamma only up to "transitions(transition_index, 4)==0", with the rest following as before?

2. Can you advise me how to control the input to bcjr_decoder so that it acts as a doped de-accumulator?

Regards,

AnneNa

October 4th, 2011 at 8:44 am

Hello Aini,

I’m afraid that I don’t have any stuff on network coding.

Take care, Rob.

October 4th, 2011 at 8:48 am

Hello Anne,

As you say, my convolutional_encoder.m is a half-rate recursive systematic convolutional code. You can convert it into a rate-one accumulator by simply not transmitting the systematic bits. In the receiver, you just need to use a vector of zero-valued LLRs for the corresponding apriori input.

Take care, Rob.

October 8th, 2011 at 3:19 pm

Hi Rob,

I want to write the Monte Carlo equation for the LLR calculation of each of the four bits of 16-QAM. If I follow the basic formula for the LLR calculation, I can write the LLR for bit b0 of the received noisy signal, which is given at the end.

where sigma = sqrt(1/Rm*Rc*Es/N0);

Es/N0 = log2(M)*(Eb/N0), Rm = log2(M);

I can generate 16-QAM symbols as

alpha16qam = [-3 -1 1 3]; % 16-QAM alphabets

ip = randsrc(1,100,alpha16qam) + 1i*randsrc(1,100,alpha16qam);

map = (1/sqrt(10))*ip;

noise = sigma*(randn(size(map)) + 1i*randn(size(map)));

y = map + noise;

LLR for bit b0 is

A = (real(y) + 3).^2/(2*sigma^2);

B = (real(y) + 1).^2/(2*sigma^2);

C = (real(y) - 1).^2/(2*sigma^2);

D = (real(y) - 3).^2/(2*sigma^2);

Num = exp(-A) + exp(-B);

Denum = exp(-C) + exp(-D);

L(b0) = log(Denum./Num);

Is this way of calculating the LLR for b0 right or wrong? Thanks

October 10th, 2011 at 8:51 am

Hi Ideal,

Something that looks not quite right is that your symbol energy is greater than one - you need to use…

alpha16qam = [-3 -1 1 3]/sqrt(10);

… and make corresponding updates to the equations for A, B, C and D.

Besides this, I can’t see anything that looks wrong. You can check the result by comparing the mutual information of the LLRs using both the averaging and histogram methods - they should give the same answer…

Take care, Rob.
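Putting the correction together, a Python sketch of the normalised computation might look like this (the sigma value and block length are arbitrary, and logaddexp is used instead of log(exp+exp) for numerical stability):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.3  # per-dimension noise standard deviation (arbitrary choice)
# 16-QAM real/imaginary levels scaled to give unit average symbol energy
levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)

ip = rng.choice(levels, 100) + 1j * rng.choice(levels, 100)
y = ip + sigma * (rng.standard_normal(100) + 1j * rng.standard_normal(100))

# Exact LLR for bit b0, now with the +/-1 and +/-3 levels scaled by 1/sqrt(10)
A = (y.real - levels[0]) ** 2 / (2 * sigma ** 2)
B = (y.real - levels[1]) ** 2 / (2 * sigma ** 2)
C = (y.real - levels[2]) ** 2 / (2 * sigma ** 2)
D = (y.real - levels[3]) ** 2 / (2 * sigma ** 2)
# log((exp(-C) + exp(-D)) / (exp(-A) + exp(-B))), computed stably
llr_b0 = np.logaddexp(-C, -D) - np.logaddexp(-A, -B)
```

As Rob suggests, the result can then be cross-checked by comparing the averaging and histogram measurements of the LLRs' mutual information.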

October 10th, 2011 at 9:33 am

Dear Rob,

Thanks for the quick response. But if you look at the mapping,

I am multiplying the map with the factor 1/sqrt(10), i.e.,

ip = randsrc(1,100,alpha16qam) + 1i*randsrc(1,100,alpha16qam);

map = (1/sqrt(10))*ip;

Is that not right? And where in the equations for A, B, C and D do I need to multiply by 1/sqrt(10)?

Thanks for your useful discussion.

October 10th, 2011 at 9:57 am

Ah yes - I didn't notice that. You need to multiply the +3, +1, -1 and -3 by 1/sqrt(10) in the equations for A, B, C and D.

Take care, Rob.

October 10th, 2011 at 11:26 pm

Hi Rob,

In calculating the extrinsic information, e.g., line 76 in main_inner.m, it is as

extrinsic_uncoded_llrs = aposteriori_uncoded_llrs-apriori_uncoded_llrs;

However, since apriori_uncoded_llrs is a priori without including the channel output LLRs, shouldn’t it instead be calculated by:

extrinsic_uncoded_llrs = aposteriori_uncoded_llrs-apriori_uncoded_llrs - apriori_encoded1_llrs; ?

Thanks.

October 11th, 2011 at 8:38 am

Hi Carlin,

Nope. This is because apriori_encoded1_llrs pertains to encoded1_bits. By contrast, extrinsic_uncoded_llrs, aposteriori_uncoded_llrs and apriori_uncoded_llrs all pertain to uncoded_bits. Since they pertain to different bits, these sets of LLRs cannot be included in the same calculation. Even if you could perform this subtraction, it would be removing information from the extrinsic LLRs, reducing the performance of the code.

Take care, Rob.

October 11th, 2011 at 4:52 pm

Hi Rob,

Thanks for your response. However, I thought encoded1_bits were systematic bits and so would be the same as uncoded_bits. Is this wrong?

On the other hand, I think that for EXIT analysis the desired mutual information is between uncoded_bits and either the a priori LLRs or the extrinsic LLRs, without including the channel output LLRs, as per ten Brink's paper. Here I am assuming the following relationship for the systematic bits:

L_aposteriori = La + Lc + Le (1)

Matching the notation used in the code, this is:

aposteriori_uncoded_llrs = apriori_uncoded_llrs + apriori_encoded1_llrs + extrinsic_uncoded_llrs;

From your explanation, did you mean EXIT curves are based on mutual information between uncoded_bits and (Lc + Le)?

October 11th, 2011 at 4:55 pm

The above I mean (Lc + Le) as in equation (1) …

October 12th, 2011 at 4:26 pm

Hello again Carlin,

You are correct that uncoded_bits and encoded1_bits are identical to each other. However, this is a special case - in the general case, these bits are different from each other.

In my notation, you are assuming the following relationship for the systematic bits:

aposteriori_uncoded_llrs = apriori_uncoded_llrs + extrinsic_uncoded_llrs + apriori_encoded1_llrs

The trouble with this is that extrinsic_uncoded_llrs already includes the information from apriori_encoded1_llrs. Adding it again causes it to double up. The correct relationship is:

aposteriori_uncoded_llrs = apriori_uncoded_llrs + extrinsic_uncoded_llrs

Take care, Rob.

October 13th, 2011 at 12:55 am

Hi Rob,

I guess you misunderstood the point. I didn't assume anything - I knew you were defining

extrinsic_uncoded_llrs = aposteriori_uncoded_llrs - apriori_uncoded_llrs;

Notation itself is fine if used consistently. But the problem is how it is used later.

You used this extrinsic_uncoded_llrs to calculate the EXIT output mutual information. This is not consistent with EXIT design. Please compare with (19) of ten Brink's original paper, "Convergence behavior of iteratively decoded parallel concatenated codes", and check the definition of extrinsic information in that paper.

In other words, with your definition, you need to use

extrinsic_uncoded_llrs - apriori_encoded1_llrs

to calculate the mutual information.

For a systematic code, I think most people define the extrinsic information so that it excludes your apriori_encoded1_llrs. For example, see Hanzo's 2002 book on Turbo Codes and Turbo Equalisation.

October 13th, 2011 at 1:15 am

Wait, I just realized that in main_inner.m you were plotting the EXIT chart only for an inner convolutional decoder, right? If so, the desired mutual information needs to use the extrinsic information as you have defined it, which I agree with. In that case, please disregard my previous comment.

What I meant applies to plotting EXIT charts for turbo (PCCC) decoders. There, for each constituent decoder, the extrinsic information should exclude the channel output LLR.

Thanks for the discussions.

October 13th, 2011 at 1:27 am

Dear Rob,

Thanks a lot for this forum. I have a question regarding the EXIT chart.

Before, I was doing quite well with EXIT charts for a single channel. Now I want to calculate the EXIT chart for a hybrid channel. Let me explain.

I have a hybrid channel (RF and optical). I am doing the same thing as you explain: I calculate the LLRs of each channel separately and combine them into a total LLR vector by horizontal concatenation. I then pass that LLR vector to the demapper along with two independent Gaussian channel LLRs.

When I use this method with simple modulation, like BPSK and OOK, it works well, but when I use OOK and 16-QAM, it gives me wrong results.

Can you advise me on this? Why does it give wrong results with 16-QAM? Thanks

October 13th, 2011 at 8:36 am

Hi Carlin,

That’s no problem.

Take care, Rob.

October 13th, 2011 at 8:41 am

Hi Ideal,

I think that you need to verify that each set of LLRs satisfies the consistency condition. You can do this using the display_llr_histograms function that I described above. In my experience, the most frequent reason why iterative decoding schemes fail is that a bug in the source code causes the LLRs not to satisfy the consistency condition.

Take care, Rob.

October 13th, 2011 at 9:25 am

Dear Rob,

That, I think, may be the right reason. What is the way to make the LLRs satisfy the consistency condition? Is there any method to fulfil this condition? Can you suggest any derivations or help? Thanks

October 13th, 2011 at 10:42 am

Hi Ideal,

The easiest way to satisfy the consistency condition is for all your decoding components to be optimal - i.e. no programming errors, no use of approximations, use of all available information, etc. Sometimes an optimal decoder is not practical, because its complexity is too high, for example. In this case, the LLRs provided by a sub-optimal decoder can be adjusted so that they satisfy the consistency condition. Typically, this involves multiplying the LLRs by some value, which depends on the value of the LLR. My display_llr_histograms function plots a line that should be diagonal - the values used in these multiplications can be chosen such that they convert a non-diagonal line into a diagonal one.

Take care, Rob.

October 24th, 2011 at 9:18 am

Dear Rob,

Can we use your generate_llrs.m script for generating the a priori LLRs if the transmission channel uses a QPSK, 8-PSK or 16-QAM constellation? I mean, can generate_llrs.m be used to generate the a priori LLRs regardless of the constellation?

Please clarify me. Thanks

October 24th, 2011 at 9:46 am

Hi Ideal,

You can use generate_llrs regardless of the constellation and channel type. Although the a priori LLRs may not have a Gaussian distribution in practice, the specific distribution that you use makes very little difference to the shape of the EXIT curve.

Take care, Rob.

October 31st, 2011 at 10:54 pm

Hi Rob!

Can you give an example of how this bcjr_decoder.m changes with equalization instead of using it as a decoder?

November 1st, 2011 at 9:18 am

Hello Aitezaz,

I’m afraid I’ve never used the BCJR for turbo equalisation. However, you can read about this in the highly cited papers at…

http://scholar.google.co.uk/scholar?q=turbo+equalisation&hl=en&btnG=Search&as_sdt=1%2C5

Take care, Rob.

November 2nd, 2011 at 10:42 pm

Thanks for the link. I have another question regarding turbo coding, if you can give me your thoughts on it. Right now I am implementing the BCJR algorithm, and almost everywhere I have seen people applying thresholding to the extrinsic LLRs between the detector and the decoder. For example, one computes the LLRs from the BCJR decoder, thresholds them to fall between -50 and 50, and then passes them to the ECC decoder. With a change of this threshold value, I am getting very different performances. How can one come up with a safe value for this threshold?

Thanks

November 3rd, 2011 at 9:34 am

Hi Aitezaz,

People do this to avoid problems with numerical overflow, which is where a fixed point number becomes so positive that it overflows and becomes negative. You may notice that I don’t do this in my code - this is because I have taken special care to avoid these problems. In particular, it is not really a problem in Matlab because it natively supports variables having infinite values.

Take care, Rob.

November 12th, 2011 at 5:01 am

Hello Rob,

I'm having trouble using these codes for a 16-QAM turbo equalizer-decoder. The problem is that I get a "dip" at the beginning of the equalizer's EXIT curve. I wonder whether generate_llrs.m can be applied to the 16-QAM case as well, or whether some modifications are needed?

Thanks

November 14th, 2011 at 9:05 am

Hello Ali,

There should be no problem using generate_llrs.m with 16QAM - I have done this successfully many times in the past. I suspect that there is a bug in your code - you can confirm this by comparing the EXIT charts that you obtain using the averaging and histogram methods of measuring mutual information. If these two EXIT charts are significantly different, then this will confirm that there is a bug in your code.

Take care, Rob.

November 25th, 2011 at 12:57 am

Dear Rob,

I have some silly questions

1. I have a parallel channel (one channel uses BPSK and the second uses an OOK constellation). I divide the data bits 50%/50% between the two channels. Then I calculate the LLRs of each individual channel on the receive side and combine the LLRs of both channels into one vector, i.e.,

LLR_t = [LLR_OOK, LLR_BPSK]

Is this way of combining LLRs correct?

2. For the a priori channel, can I use your program (generate_llrs), even though I am using different constellations (BPSK and OOK) on the communication channel?

3. Suppose 3000 data bits are divided into 1500 bits per channel, with 3000 a priori LLRs. Can I then calculate the EXIT chart of the combined LLRs (LLR_t plus the a priori LLRs), or should I use a

2nd method:

I calculate the extrinsic mutual information of each individual channel (LLR_OOK plus a priori LLRs, and LLR_BPSK plus a priori LLRs) and then add the extrinsic mutual informations together.

According to my understanding, the first method should be OK, since the LDPC decoder works with only one LLR vector (combining the OOK and BPSK LLRs, i.e., the variable nodes are connected equally with bits coming from the individual channels).

Please advise me in the correct direction. Thanks

November 25th, 2011 at 4:15 pm

Hi Ideal,

1. That’s correct, assuming that this concatenation complements the way that you separated the bits into the two channels.

2. Yes. Gaussian distributed a priori LLRs work well for most modulation schemes.

3. Both methods are fine. In the first method, you plot an EXIT curve for the combined channels. In the second method, you plot separate EXIT curves for the two channels and then take the average of the two curves. These two methods are similar to how the EXIT curve of an irregular code can be drawn.

Take care, Rob.

November 27th, 2011 at 11:08 pm

Thanks very much dear Rob,

I really appreciate your quick responses.

One last silly question: your generate_llrs.m program just generates Gaussian distributed a priori LLRs; it doesn't care which modulation scheme we are considering for the communication channel. Am I right?

Or do we need to incorporate the modulation scheme into generate_llrs.m?

For example, if we use 16-QAM for the communication channel, what input should be given to generate_llrs.m? bits = ?, mutual_information = ? (between 0 and 1)

Thanks a lot for your help

November 28th, 2011 at 9:35 am

Hello again Ideal,

Strictly speaking, different modulation schemes and different channels will give different a priori LLR distributions. However, the distribution of the a priori LLRs makes very little difference to the EXIT curves. For this reason, it is fine to use Gaussian distributed a priori LLRs in most cases. You can see the difference that the a priori LLR distribution makes in Figure 4 of…

A. Ashikhmin, G. Kramer and S. ten Brink, "Extrinsic information transfer functions: model and erasure channel properties", IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2657-2673, 2004.

Take care, Rob.

November 30th, 2011 at 1:41 am

Hi Rob,

Thanks for the display_llr_histograms.m function to check consistency condition on input LLRs to a decoder. I am wondering if we can do this without knowing the information bits? For example, in case the detector is not optimal and generates LLRs not satisfying the consistency condition, can we do anything to ‘regulate’ these LLRs so that a MAP decoder following such a detector can still be optimal?

Many thanks for your kind explanations.

November 30th, 2011 at 3:15 pm

Hello Carlin,

You could conduct an off-line investigation which uses display_llr_histograms to determine exactly in what way the LLRs fail to satisfy the consistency condition. Then, in the real system, you could apply a transformation to the LLRs so that they become more consistent. This is the approach that was used in Equation 9 of this paper, for example…

http://eprints.ecs.soton.ac.uk/18569/

Take care, Rob.

December 4th, 2011 at 3:43 am

The link below can't be accessed - could you tell me the title of the paper?

Thanks

The equation for the averaging method uses the average of the entropy of the probabilities indicated by the LLRs. This equation is given just below Figure 4 in…

http://scholar.google.co.uk/scholar?hl=en&lr=&cluster=9622901384089680641

December 4th, 2011 at 6:32 am

This code is really useful for me.

December 5th, 2011 at 9:02 am

Hello Lingjun,

Here is a link to that paper…

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.291&rep=rep1&type=pdf

Take care, Rob.

December 6th, 2011 at 2:27 am

Thanks for the paper.

December 7th, 2011 at 6:40 pm

Hi Rob,

In the plot produced by display_llr_histograms, what is the definition of the y-axis? Is it probability density?

Thanks a lot

December 8th, 2011 at 9:58 am

Hi Carlin,

The x axis is the LLR value. The y axis is ln(P1(LLR)/P0(LLR)), where P1(LLR) is the PDF of the LLRs that correspond to 1-valued bits and P0(LLR) is the PDF of the LLRs that correspond to 0-valued bits. Essentially, the x axis shows what values the LLRs currently have, while the y axis shows what values they should have. If you get a diagonal line having a gradient of 1, then the LLRs have the values that they should have - in other words, the LLRs satisfy the consistency condition. If you don't get a diagonal line having a gradient of 1, then the LLRs do not satisfy the consistency condition, indicating that your receiver is sub-optimal and possibly contains some bugs (if you expected it to be optimal).

Take care, Rob.
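The quantity plotted on that y axis can be sketched in Python as follows (an illustrative reimplementation of the idea, not the actual display_llr_histograms.m):

```python
import numpy as np

def llr_consistency_curve(llrs, bits, nbins=50):
    """Empirical y = ln(P1(L)/P0(L)) against bin-centre LLR values. A diagonal
    line of gradient 1 indicates that the LLRs satisfy the consistency
    condition, i.e. their values match the values they claim to have."""
    edges = np.linspace(llrs.min(), llrs.max(), nbins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    h1, _ = np.histogram(llrs[bits == 1], bins=edges)
    h0, _ = np.histogram(llrs[bits == 0], bins=edges)
    n1 = max(bits.sum(), 1)               # number of 1-valued bits
    n0 = max(len(bits) - bits.sum(), 1)   # number of 0-valued bits
    ok = (h0 > 0) & (h1 > 0)              # only bins with data for both bit values
    return centres[ok], np.log((h1[ok] / n1) / (h0[ok] / n0))
```

As Rob notes above, the ends of the line are noisy because high-magnitude LLRs are rare, so those bins contain little data to average over.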

December 25th, 2011 at 5:29 am

Hi Rob,

sorry for bothering you. I just want to rectify myself.

I am calculating the LLRs for the joint decoder in the following way.

I encode the information bits using a single encoder. Then I divide the bits between two channels with a 50%/50% ratio. Then I map each channel individually, i.e. BPSK for both channels, add noise (an AWGN channel for simplicity) and receive the noisy signals of both channels. I can easily calculate the individual channel LLRs (2y/sigma^2), but I want to derive the equation for the joint LLR, given that both channels are independent.

For example: s = information bits, x1 = encoded bits for channel 1, x2 = encoded bits for channel 2, y1 = received signal from the first channel, y2 = received signal from the second channel.

Then please can you advise me how to calculate the joint channel LLR, assuming BPSK for both channels,

OR

I can calculate the individual channel LLR and how can I combine into one channel LLR to give input to the LDPC decoder.

Assumption: Both channels are independent.

I will be thankful to you for your help. Thanks

December 27th, 2011 at 4:30 am

Hi Rob,

I am waiting for your reply please advise me. Thanks

December 29th, 2011 at 3:05 pm

Hi Ideal,

It’s not quite clear to me how you obtain x1 and x2. I think you are doing it using one of the following two methods.

1) You have 100 (for example) information bits in s. These are encoded using a 1/2-rate (for example) encoder to give 200 encoded bits in x. The first 100 bits of x are put into x1, while the second 100 bits are put into x2. In this case, you just concatenate the 100 LLRs corresponding to x1 with the 100 LLRs corresponding to x2, in order to obtain 200 LLRs corresponding to x.

2) You have 100 (for example) information bits in s. These are encoded using a 1/2-rate (for example) encoder to give 200 encoded bits in x. These bits are copied twice to give 200 bits in x1 and 200 bits in x2. In this case, you just add the 200 LLRs corresponding to x1 to the 200 LLRs corresponding to x2, in order to obtain 200 LLRs corresponding to x.
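As a small sketch of the two cases (written in Python for illustration, with made-up LLR values rather than real demodulator outputs):

```python
import numpy as np

# Hypothetical per-channel LLRs for the encoded bits x1 and x2,
# e.g. 2*y1/sigma1^2 and 2*y2/sigma2^2 for BPSK over AWGN
llrs_x1 = np.array([1.2, -0.7, 3.1])
llrs_x2 = np.array([0.4, -2.2, 0.9])

# Method 1: x was split into halves x1 and x2 -> concatenate the LLRs
llrs_x_method1 = np.concatenate([llrs_x1, llrs_x2])

# Method 2: x was copied onto both channels -> the two channels give
# independent observations of the same bits, so the LLRs simply add
llrs_x_method2 = llrs_x1 + llrs_x2

print(llrs_x_method1)  # twice as many LLRs as either channel
print(llrs_x_method2)  # one combined LLR per encoded bit
```

In method 2 the addition is exactly the optimal combining rule for independent observations, since the log of a product of likelihood ratios is the sum of the logs.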

Take care, Rob.

December 31st, 2011 at 12:32 am

Dear Rob,

Yes, the 1st method I am doing. But I was not sure of concatenation as you mentioned. I was thinking to derive a mathematical equation for the total LLR. The setup is as below,

           +--x1--> [BPSK] --> [noise] --+
s --> [ENC]                              +--> [Combined LLR] --> [VN] --> Le
           +--x2--> [BPSK] --> [noise] --+

This was the setup. I give the combined LLRs as input to the variable node of the LDPC code, along with two a priori LLRs (not shown here). Now I need to derive the expression for the extrinsic LLR (Le).

I think Le will just be the sum of the combined channel LLR plus the a priori LLRs, i.e.,

Le = Lch (channel LLR) + two a priori LLRs (for a (3,6) LDPC code).

Then we can calculate the extrinsic mutual information as in your simulation results and also as ten Brink does.

Is all of this setup OK?

Any comments?

Again thanks for your helpful discussion

December 31st, 2011 at 11:18 am

Hi Ideal,

I would suggest trying it and then comparing the EXIT function that you get using the averaging method with the one you get using the histogram method. If they match, then everything is okay.

Take care, Rob.

January 13th, 2012 at 8:46 pm

Hi Rob,

Following your previous feedback on "correct" LLRs as ln(P1(LLR)/P0(LLR)), my understanding is that this expression is universal and does not assume that the LLRs are generated by a binary symmetric channel - is that correct?

I have read other definitions of the consistency condition in the form p(x)=p(-x)e^x for bit "1", or p(x)e^x = p(-x) for bit "0", but those require binary symmetry of the 1/0 -> LLR channel. For example, soft demodulation of Gray-mapped 4-PAM does not satisfy binary symmetry, so the latter form of the consistency condition is not accurate, while yours is. Is this correct?

Thank you very much for your discussions.

January 15th, 2012 at 9:37 pm

Hi Rob,

I have another question about using display_llr_histograms() to find a transform for sub-optimal LLRs. I wonder: to find an LLR transform off-line for LLRs generated by a sub-optimal detector (for example, a given MIMO detector), will the transform change for different channel conditions? In other words, do we have to find a transform for each channel condition?

Thanks a lot.

January 16th, 2012 at 10:02 am

Hi Carlin,

p(x)=p(-x)e^x and p(x)e^x = p(-x) are just rearrangements of LLR = ln(P1(LLR)/P0(LLR)). These descriptions of the consistency condition are universal and are not specific to binary symmetric channels.

You would have to do some experiments to see if a different transform is required for different channel conditions. I would expect that this is necessary, but you might find that the optimal transform does not vary much as the channel conditions change, allowing you to use one transform for all conditions.

Take care, Rob.

January 16th, 2012 at 9:56 pm

Hi Rob,

In my understanding, obtaining P1(x)=P1(-x)e^x and P0(x)e^x=P0(-x) requires P1(x) = P0(-x). But I think P1(x) = P0(-x) is not always true.

For example, in MAP demodulation of the 2nd bit of 4-PAM {11 10 00 01}, P1(x) is non-zero for all x>0, but P0(x) becomes zero for certain x<0.

Have I made a mistake in the above?

Thanks again.

January 17th, 2012 at 9:29 am

Hi Carlin,

I’m afraid that I’ve become confused by your notation. I would express this as…

x = ln(P0(x)/P1(x))

P0(x) = e^x P1(x)

I’m not sure where -x is coming from. Can you link me to a paper?

Take care, Rob.

January 18th, 2012 at 12:16 am

Hi Rob,

Sure - one I have read is the ISIT 2000 paper "Design of provably good low-density parity check codes", where the top of the right column states the consistency condition using x and -x.

I just think that binary symmetry is a necessary condition for using the consistency condition formula in that paper, but for demodulating Gray-mapped symbols this condition is not always satisfied. So I can only use what you have given: x=ln(P0(x)/P1(x)).

Thanks for any comments or opinions.

http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=866497

January 18th, 2012 at 11:55 pm

Hi Rob,

This forum is really very helpful.

I am studying puncturing techniques for LDPC codes. I understand the concept, but I am having difficulty implementing the puncturing in MATLAB. Do you have any simple example of puncturing an LDPC code in MATLAB? Any other tutorial would also be appreciated. Thanks

January 19th, 2012 at 9:49 am

Hi Carlin,

You are right that the use of x and -x requires the LLR distribution to be symmetric. This is true in the case of a binary symmetric channel, but it is also true in binary AWGN and binary Rayleigh channels, for example. For the general case, we need to use x=ln(P0(x)/P1(x)).

Take care, Rob.

January 19th, 2012 at 9:57 am

Hi Ideal,

Puncturing is quite easy in Matlab. For example, in the transmitter, you might puncture some bits using…

a2 = reshape(a,2,length(a)/2);

b = a2(1,:);

In the receiver the inverse operation for LLRs is…

a2_tilde = [b_tilde; zeros(1,length(b_tilde))];

a_tilde = reshape(a2_tilde,1,numel(a2_tilde));

Take care, Rob.

January 20th, 2012 at 4:15 am

Dear Rob,

Can you please send any simple example of an LDPC code which uses the puncturing technique? I have an LDPC algorithm, but I don't know how to implement the puncturing in a simple communication model. Any simulation model would be even more helpful. Thanks again

January 20th, 2012 at 2:32 pm

Hi Ideal,

I’m afraid that I don’t have any LDPC Matlab code. However, puncturing does not go inside the LDPC encoder or decoder, so it should be easy for you to implement. In my example above, the bit sequence ‘a’ is the output of the LDPC encoder and ‘b’ is the input to the modulator. In the receiver, ‘b_tilde’ is the output of the demodulator and ‘a_tilde’ is the input to the LDPC decoder.

Take care, Rob.

January 22nd, 2012 at 11:14 pm

Dear Rob,

Thanks for the help. I now understand how to implement the above example, but I am wondering how I can vary the rate using the puncturing.

i.e., using these commands, a2 = reshape(a,2,length(a)/2); b = a2(1,:); I always puncture half of the bits, which doesn't let me vary the rate of the LDPC encoder. What if I want to vary the rate to adapt to different weather conditions?

You are right that puncturing doesn't go inside the LDPC code. I was wondering how to vary the rate of the LDPC code using the puncturing.

You gave me very good command lines, but I am still not sure how to vary the rate using puncturing if I am using a mother code rate of 0.5 for the LDPC code, i.e., r=k/n=0.5.

I am really very thankful for your efforts; you are really good.

January 23rd, 2012 at 9:59 am

Hi Ideal,

Here is an alternative way, which lets you choose the overall coding rate. Here, the puncturing rate is the fraction of bits that survive puncturing, and the overall coding rate is given by the LDPC coding rate divided by the puncturing rate...

interleaver = randperm(length(a));

a2 = a(interleaver);

b = a2(1:round(puncturing_rate*length(a)));

a2_tilde = [b_tilde, zeros(1,length(a)-length(b))];

a_tilde(interleaver) = a2_tilde;
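The same round trip can be sketched as follows (in Python for illustration; the frame length, puncturing rate and the faked +-5 LLR magnitudes are arbitrary assumed values):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 200)            # output of a rate-1/2 mother encoder
puncturing_rate = 0.7                  # fraction of coded bits that survive
n_kept = round(puncturing_rate * len(a))

# Transmitter: pseudo-randomly choose which coded bits survive puncturing
interleaver = rng.permutation(len(a))
b = a[interleaver][:n_kept]            # the bits that are actually sent

# Receiver: b_tilde holds the demodulator's LLRs for the surviving bits.
# Here we fake perfect-channel LLRs of +-5, using the LLR = ln(P0/P1)
# convention, just to exercise the round trip.
b_tilde = np.where(b == 1, -5.0, 5.0)

# Punctured positions carry no information, so they get zero LLRs;
# then de-interleave back into the mother code's bit ordering
a2_tilde = np.concatenate([b_tilde, np.zeros(len(a) - n_kept)])
a_tilde = np.empty(len(a))
a_tilde[interleaver] = a2_tilde

kept = interleaver[:n_kept]
punctured = interleaver[n_kept:]
print(n_kept, np.count_nonzero(a_tilde))
```

With 100 information bits behind the 200 coded bits, only 140 bits are transmitted, so the overall rate becomes 100/140, about 0.71, in this setup.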

Take care, Rob.

January 27th, 2012 at 1:35 am

Dear Rob,

I learned a lot from your code. I am using a list sphere decoder to plot the EXIT chart. May I use main_outer.m and replace [aposteriori_uncoded_llrs, aposteriori_encoded1_llrs, aposteriori_encoded2_llrs] = bcjr_decoder(apriori_uncoded_llrs, apriori_encoded1_llrs, apriori_encoded2_llrs);

with

[aposteriori_uncoded_llrs, aposteriori_encoded1_llrs, aposteriori_encoded2_llrs] = LSD(apriori_uncoded_llrs, apriori_encoded1_llrs, apriori_encoded2_llrs);

Is that OK?

January 30th, 2012 at 4:12 pm

Hello Dai,

You are most welcome to use my code. I’m afraid I don’t have any Matlab scripts for sphere decoders though.

Take care, Rob.

February 3rd, 2012 at 12:59 pm

Dear Rob:

I am studying EXIT chart plotting for the soft interference cancellation and minimum mean-squared error filtering (SIC-MMSE) based iterative receiver of [1] for coded MIMO systems. I have a problem plotting the IA for the SIC-based front-end detector. The LLRs output by the SIC-based front-end detector are very high, even for a BER of 0.1. As a result, I only obtain a very high IE output for the turbo decoder, which is not desired. The BER/BLER results of the SIC-MMSE iterative receiver closely match those of [1]. I am wondering which part is to blame.

[1] Nokia Siemens Networks, Nokia, "Considerations on SC-FDMA and OFDMA for LTE-Advanced Uplink," 3GPP TSG-RAN WG1 Meeting #55, R1-084319, Nov. 2008.

February 3rd, 2012 at 1:45 pm

Hello Dai,

It sounds like you are using the averaging method of measuring mutual information. My suspicion is that you would get different results if you used the histogram method. If you do get different results, then this would suggest that there is a bug somewhere in your transmitter or receiver implementation. However, I’m afraid that the bug could be anywhere in your code and I would not be able to suggest where it might be…

Take care, Rob.

March 4th, 2012 at 12:21 am

Dear Rob,

I want to implement BICM-ID with FSK modulation. Basically, I am implementing this paper:

http://csee.wvu.edu/~mvalenti/documents/ValentiChengJSAC2005.pdf

I am having real difficulty understanding the demodulator and implementing the LLR used in this paper (Equation 6). If you could point me to a paper with some worked examples, or guide me on how to implement this equation in Matlab, I would be grateful.

March 5th, 2012 at 9:38 am

Hello Sher,

I'm afraid that I don't have any Matlab code for a SISO FSK demodulator, but it would be possible to modify the code in QPSKEXIT.zip, which can be downloaded from earlier in this comment thread. Essentially, you would need to convert QPSK into M parallel On-Off Keying channels.
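For what it's worth, the soft demodulation of one such On-Off Keying channel can be sketched as follows (Python for illustration; the amplitude and noise variance are arbitrary assumed values, and this assumes coherent detection with Gaussian noise, which is a simplification of the noncoherent case treated in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
amplitude = 1.0       # assumed 'on' amplitude
noise_var = 0.5       # assumed noise variance per channel

# On-off keying over AWGN: x in {0,1}, y = amplitude*x + noise
x = rng.integers(0, 2, n)
y = amplitude * x + rng.normal(0.0, np.sqrt(noise_var), n)

# LLR = ln(p(y|x=1)/p(y|x=0)) for Gaussian noise, which simplifies to
# the quadratic difference of the two likelihood exponents
llrs = (2 * amplitude * y - amplitude**2) / (2 * noise_var)

print(np.round(llrs, 2))
```

An M-ary FSK symbol would then give M such channels, one per tone, with the per-tone LLRs feeding the bit-metric computation.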

Take care, Rob.

March 7th, 2012 at 4:23 am

Dear Rob,

I just have very simple question please.

I have SNR1 = -10:10 [dB] and SNR2 = -10:10 [dB]. Now I want to calculate the average SNR. What is the way to calculate the average?

I was thinking of the simple arithmetic way, but I don't think that would be OK.

Then I thought it better to convert the dB values to linear, take the average of the linear values, (SNR_Linear1 + SNR_Linear2)./2, and convert them back to dB, i.e. 10log10(result). But the results come out the same as with the simple way.

Please clarify for me how to take the average of SNRs in dB?

Kind regards

March 7th, 2012 at 10:12 am

Hi Ideal,

I suppose the answer to this depends on what you really want to calculate - perhaps it is the ratio of (a) the average signal power to (b) the noise power. If this is the case, then you should do the averaging in the linear domain, rather than in the logarithmic domain, as you suggest.
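For instance (a Python sketch with made-up per-branch SNRs): averaging in the dB domain amounts to a geometric mean of the linear powers, so it generally differs from the arithmetic mean in the linear domain; the two only coincide when the branches are equal, which may be why identical SNR1 and SNR2 vectors gave identical results above.

```python
import numpy as np

snr1_db = np.array([-10.0, 0.0, 10.0])  # example per-branch SNRs in dB
snr2_db = np.array([-10.0, 10.0, 0.0])

# Convert to linear, average the powers, then convert back to dB
snr1_lin = 10 ** (snr1_db / 10)
snr2_lin = 10 ** (snr2_db / 10)
avg_lin_db = 10 * np.log10((snr1_lin + snr2_lin) / 2)

# Naive averaging in the dB domain (a geometric mean of the powers)
# gives a different, generally smaller answer when the branches differ
avg_db_domain = (snr1_db + snr2_db) / 2

print(np.round(avg_lin_db, 2))
print(np.round(avg_db_domain, 2))
```

Here the 0 dB and 10 dB pair averages to about 7.4 dB in the linear domain but only 5 dB in the dB domain, while the equal -10 dB pair gives -10 dB either way.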

Take care, Rob.

March 8th, 2012 at 11:06 pm

Dear Rob,

I have one more question please

I want to plot the conditional PDF of the log-likelihood ratio for each transmitted bit, i.e., the input is A = +-1, then I calculate LLR = 2/sigma^2*ya,

where ya = A + n(0,sigma^2).

Now I can plot the histogram of the LLRs, but I want to plot the conditional PDFs of the LLRs, i.e., pdf(llr|A = -1) and pdf(llr|A = +1).

Can you please tell me how I can plot these two? Thanks

March 9th, 2012 at 9:12 am

Hi Ideal,

This is what my display_llr_histograms.m function does - you can find a link to this in the comments above.

Take care, Rob.

March 12th, 2012 at 3:37 am

Thanks Rob,

How can I plot the combined PDF of the LLRs on top of the conditional PDFs? Thanks

If I use hist(llr), it doesn't give me a good result. I want to use a function like normpdf.

March 13th, 2012 at 5:34 pm

Hi Ideal,

Gaussian distributed LLRs having a particular mutual information will obey the PDF that is plotted as follows…

x = -10:0.1:10;

sigma = (-1.0/0.3037*log(1.0-mutual_information^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935));

y = 0.5*normpdf(x,-sigma^2/2,sigma)+0.5*normpdf(x,sigma^2/2,sigma);

plot(x,y)

Take care, Rob.

March 16th, 2012 at 10:34 am

Hi Rob

I went through all the questions and your replies. I appreciate your great help with this. I am just wondering if you have EXIT chart code for LDPC codes by now?

If not, can you please give me a clue about where I can get it?

And also, please clarify the usage of EXIT charts. Are they for analysing a code, for designing a code, or for obtaining design parameters?

Thanx

Banu

March 16th, 2012 at 10:51 am

Hi Banu,

I have put some code for drawing the EXIT charts of LDPC codes up at…

http://users.ecs.soton.ac.uk/rm/wp-content/LDPC.zip

You can also draw the EXIT chart analytically, using the technique described in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1291808&tag=1

The EXIT chart can be used to analyse a code and it can also help to design irregular codes.

Take care, Rob.

March 19th, 2012 at 6:10 am

Dear Rob,

Can you tell me the reference for your Matlab program

"measure_mutual_information_histogram(llrs, bits)"?

Thanks for all your help.

March 21st, 2012 at 11:54 am

Hi Ideal,

This is a discrete sum implementation of the integral given in Equation 1 of…

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.142.291&rep=rep1&type=pdf

Take care, Rob.

March 29th, 2012 at 6:03 pm

Hi Rob,

In the paper you gave earlier, http://eprints.ecs.soton.ac.uk/18569/, in order to correct sub-optimal LLRs in iterative detection, does display_llr_histograms need to be used for every iteration, or just once?

In other words, I wonder: for a sub-optimal detector, is the LLR correction found for the 1st iteration, without a priori LLRs, still good for later iterations, when a priori LLRs are available?

Thanks a lot.

March 29th, 2012 at 6:15 pm

Hi Rob,

I just found that in the paper, LLR post-processing is done only once, followed by iterative decoding. In Fig. 4, QPSK demapping is performed only once.

I am wondering: if detection/demodulation is performed iteratively, how should the LLR post-processing be designed?

Thanks a lot

March 29th, 2012 at 6:48 pm

Hi Carlin,

Unfortunately, display_llr_histograms cannot be used "on-line", during the decoding process. This is because it requires knowledge of the true bit values, which is not available in the receiver. For this reason, your only option is to use display_llr_histograms in a single "off-line" design process. You can then give the results of this design process to the receiver, for it to use in every decoding process.

LLR post processing can be applied to the extrinsic LLRs produced in each iteration by the detector/demodulator.

Take care, Rob.

April 12th, 2012 at 7:56 pm

Hi Rob,

I really appreciate all your effort in writing this wonderful package. Here is my question for you: I need to use the BCJR decoder for equalising a single-input single-output channel. Basically, the inputs to this function need to be the channel coefficients, the output of the channel (either in the form of the received signal or LLRs) and the SNR.

For my example, I consider Proakis channel B, i.e. h=[0.227,0.46,0.688,0.46,0.227], and BPSK modulation, which means the channel's trellis will have 2^4 states. Now, I understand that for other channels with different lengths the trellis would be different and would need to be computed from scratch, which is the major challenging part of this code.

Now I have taken a look at your bcjr_decoder code, and it is based on a systematic recursive convolutional code. I was wondering if you have similar code for a MAP equaliser. Any other kind of feedback or resources would be greatly appreciated.

Thanks, Amir.

April 13th, 2012 at 9:33 am

Hi Amir,

I’m afraid that I don’t have any code for an equaliser. However, it should be fairly straightforward to modify this code. You just need to rearrange the transitions matrix and change the gammas so that they are a function of the received symbols, rather than the apriori_encoded1_llrs and apriori_encoded2_llrs. The alphas, betas, deltas and aposteriori_uncoded_llrs do not need to be modified. The code relating to aposteriori_encoded1_llrs and aposteriori_encoded2_llrs can be deleted.
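To make the first step concrete, here is a hypothetical sketch (Python for illustration) of how the transitions of an ISI channel's trellis might be enumerated for BPSK over Proakis channel B; the state encoding and tuple layout are my own assumptions, not the layout used in bcjr_decoder.m:

```python
import numpy as np

h = np.array([0.227, 0.46, 0.688, 0.46, 0.227])   # Proakis channel B
memory = len(h) - 1                               # 4, so 2^4 = 16 states

# Each transition: (from_state, to_state, input_bit, noiseless_output).
# A state encodes the previous `memory` input bits, bit 0 = most recent.
transitions = []
for state in range(2 ** memory):
    prev_bits = [(state >> i) & 1 for i in range(memory)]
    for bit in (0, 1):
        bits = [bit] + prev_bits                  # current bit, then history
        symbols = 2 * np.array(bits) - 1          # BPSK: 0 -> -1, 1 -> +1
        output = float(h @ symbols)               # noiseless channel output
        next_state = ((state << 1) | bit) & (2 ** memory - 1)
        transitions.append((state, next_state, bit, output))

# The gammas would then come from the received symbols: for received y_k,
# noise PSD N0 and a transition t, gamma is proportional to
# exp(-(y_k - output_t)**2 / N0) times the a priori bit probability.
print(len(transitions))
```

Each of the 16 states has two outgoing and two incoming transitions, so the alpha and beta recursions carry over from the convolutional-code case unchanged, as described above.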

Take care, Rob.

April 19th, 2012 at 5:55 am

Hi Rob,

I am writing code for cooperative diversity for free-space optical communications. The proposed scheme and numerical results were successfully completed, but in the comparison a 2x1 MISO Q-ary PPM scheme is given, and I didn't find Q-ary PPM in Matlab. Can you please help me?

April 19th, 2012 at 8:58 am

Hello Kiran,

I’m afraid that I don’t have any Matlab code for PPM, so I can’t help you.

Take care, Rob.

April 23rd, 2012 at 4:32 am

Hi Rob,

I evaluated the EXIT chart for a concatenated coding system over the AWGN channel. The inner decoder is a DBPSK demodulator, while the outer decoder is an LDPC decoder. The LDPC code used in the simulation is a regular rate-1/2 (3, 6) LDPC code with length 1008. The interleaver is omitted owing to the inherent interleaving nature of LDPC codes.

The simulated EXIT chart was used in my paper and submitted to a journal. But the reviewers asked the same question, about "EXIT chart analysis is valid only for an LDPC code ensemble of infinite length". I got the EXIT chart by using 10000 LDPC frames (LDPC code length 1008). From the BER simulation results, I think that the simulated EXIT chart is valid.

This is my first paper and I do not know how to answer the reviewers' comments. Can you give me some advice?

Thank you very much.

Best regards.

April 23rd, 2012 at 11:32 am

Hello Yang Yu,

I would advise you to update your EXIT charts to include the EXIT bands, as described in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1258600&tag=1

The EXIT bands show the variation that the decoding trajectory will have, when the frame length is not infinite. You can see examples of EXIT bands in my figures at…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

Take care, Rob.

April 29th, 2012 at 9:45 am

Dear Rob,

I was introducing puncturing into a regular-structure LDPC code. I did it for a half-rate code and increased the code rate up to 0.8 and 0.9, but the problem is that when I increase the rate, the variable node curve (extrinsic mutual information curve) doesn't vary (its starting point), even when increasing the SNR, and the tunnel doesn't open.

I don't understand this situation. Normally the variable node curve varies as we vary the SNR, but with puncturing, it doesn't. Is there a problem, or can you give me any suggestions please? Thanks

April 30th, 2012 at 9:55 am

Hi Ideal,

I would expect the variable node EXIT function to change as you alter the puncturing rate or as you alter the SNR. I’m not sure why this is not the case - I can only think that there must be a bug in your code…

Take care, Rob.

April 30th, 2012 at 12:56 pm

Dear Rob,

The variable node EXIT function varies when altering the puncturing, but the curve cannot shift upwards with increasing SNR. It can go down and the tunnel closes; then I try to increase the SNR, but the curve stops going up after a certain level, no matter how much I increase the SNR.

I tried to check the code carefully but couldn't find any bug. Let me check again. Thanks

April 30th, 2012 at 4:02 pm

Hi Ideal,

I see what you are describing now. This is the correct behaviour - when you use puncturing, the area beneath the variable node EXIT function will never be greater than (number of bits after puncturing) / (number of bits before puncturing).

Take care, Rob.

May 1st, 2012 at 1:37 pm

Dear Rob,

Thanks for discussion.

Can you suggest any ideas about puncturing in irregular LDPC codes?

Suppose we optimise the LDPC degree distribution for one particular rate and then we want to introduce puncturing. How can we introduce that puncturing for the same optimised degree distribution? Thanks

May 1st, 2012 at 2:16 pm

Hello Ideal,

I’m not sure about this - I suspect that you may need to re-optimise the degree distributions if you change the puncturing rate. You can find out by seeing how the variable node EXIT function changes with the puncturing rate.

Take care, Rob.

May 3rd, 2012 at 2:06 am

Dear Rob,

I have a question.

You mention that "using puncturing, the area beneath the variable node EXIT function will never be greater than (number of bits after puncturing) / (number of bits before puncturing)."

This means: let the number of bits before puncturing be 4000 (half-rate code) and after puncturing be 2240, to get a rate of approximately 2000/2240 = 0.9. Then the bound on the area is 2240/4000 = 0.56, and the starting point of the variable node EXIT curve never goes beyond 0.56, no matter how large the SNR is. I am getting exactly this in my simulation results. Thanks

But how can I explain this in a technical way?

May 4th, 2012 at 4:19 pm

Hi Ideal,

The explanation for this is that even if the channel was perfect, your 4000 LLRs would only contain 2240 bits of information. No matter how high you increase the SNR, the information will never increase above 2240/4000=0.56 bits per LLR. This is what limits the area under the variable node EXIT function. You can find out more about this in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1023387&tag=1

Take care, Rob.

May 9th, 2012 at 7:00 am

Hi Dr. Maunder

In your EXIT chart source code, the function 'generate_llrs' generates a priori LLRs having the considered a priori mutual information.

I'm a bit confused by the following line:

sigma = (-1.0/0.3037*log(1.0-mutual_information^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935));

What is the meaning of the coefficients in this expression, e.g. 0.3037, 1.1064 and 0.8935?

I hope to get your help.

Best regards

May 9th, 2012 at 3:31 pm

Hello Aaron,

Please see the following comments for a discussion of this…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-58

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-205

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-206

Take care, Rob.

May 19th, 2012 at 2:14 pm

Hi Rob,

I want to judge whether the output LLRs of the LDPC decoder satisfy the consistency condition, for a regular rate-1/2 (3, 6) LDPC code with length 1008. As the code length is short, how should I do this?

Should the consistency condition of the LLRs be evaluated from the point of view of one block of the short LDPC code, or from the point of view of many blocks?

In addition, if the input LLRs of the LDPC decoder do not satisfy a Gaussian distribution, can I still assume that the output LLRs of the decoder (using the sum-product algorithm) satisfy the Gaussian distribution and the consistency condition?

Can you give me some advice?

Thank you very much.

May 20th, 2012 at 12:55 pm

Hi Rob,

In the above answer, you said "if the LLRs satisfy the consistency condition then you can use the averaging method to measure their mutual information."

But in the reference J. Hagenauer, "The EXIT Chart - Introduction to Extrinsic Information Transfer in Iterative Processing," the formula for the averaging method is derived based on the LLRs satisfying both the symmetry and consistency conditions.

Why do you only emphasise the consistency condition for the averaging method?

In addition, the above-mentioned reference states that "we can measure the mutual information from a large number N of samples even for non-Gaussian or unknown distributions using the averaging method."

Is the averaging method accurate for short codes?

May 21st, 2012 at 8:16 am

Hi Yang,

I would take many blocks of LLRs, then concatenate them together, and then plot their histograms. This will give you a smoother histogram plot. The distribution of the LLRs doesn’t really matter, since it has only a tiny effect on the EXIT characteristics. The only thing that matters is whether or not the LLRs satisfy the consistency condition.

I think that in the context, the ’symmetric condition’ is the requirement for the bits to take a value of 0 with a probability of 0.5, and for the bits to take a value of 1 with a probability of 0.5. Since this is nearly always the case for channel codes, I normally don’t mention it.

Short codes tend to not satisfy the consistency condition and so the histogram method is preferable in these cases. However, the averaging method typically still gives a good approximation.
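For reference, the averaging method can be sketched as follows (Python for illustration; the sign convention LLR = ln(P0/P1) and the Gaussian test distribution are assumptions of this sketch, not a transcription of measure_mutual_information_averaging.m):

```python
import numpy as np

def measure_mutual_information_averaging(llrs, bits):
    # Averaging method: valid when the LLRs satisfy the consistency
    # condition. Convention here: llr = ln(P(bit=0)/P(bit=1)).
    signs = 1 - 2 * bits            # bit 0 -> +1, bit 1 -> -1
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-signs * llrs)))

# Sanity check on consistent Gaussian LLRs, as from a binary AWGN channel:
# given the bit, the LLRs have mean +-sigma^2/2 and variance sigma^2
rng = np.random.default_rng(0)
n = 100_000
bits = rng.integers(0, 2, n)
sigma = 2.0
llrs = rng.normal((1 - 2 * bits) * sigma**2 / 2, sigma, n)

mi = measure_mutual_information_averaging(llrs, bits)
print(round(mi, 2))
```

All-zero LLRs give a mutual information of exactly 0 and very large consistent LLRs give a value approaching 1, which is a quick way to check the sign convention matches your own code.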

Take care, Rob.

May 21st, 2012 at 1:20 pm

Hi Rob,

Thanks so much for your reply.

Based on your reply, "I would take many blocks of LLRs, then concatenate them together, and then plot their histograms": if I use this method to obtain many blocks of LLRs, and the display_llr_histograms method then shows that the output LLRs of the LDPC decoder (for a short code) satisfy the consistency condition, can I conclude that the output LLRs of this LDPC decoder satisfy the consistency condition?

In addition, I still have a doubt about the 'symmetry condition'. When the input a priori LLRs of a decoder of a linear code do not satisfy the symmetry condition, can the output LLRs of the decoder still be considered to satisfy it?

Regarding your comment, "the averaging method typically still gives a good approximation" for short codes, can you give me some references for this advice? Or is there a theoretical basis for it, if I want to use this advice in my paper?

May 21st, 2012 at 1:49 pm

Hi Yang,

Yes, if you use this method to obtain many blocks of LLRs, and the display_llr_histograms method then shows that the output LLRs of the LDPC decoder (for a short code) satisfy the consistency condition, you can say that the output LLRs of this LDPC decoder satisfy the consistency condition. However, I think you'll find that, when using short codes, the LLRs won't quite satisfy the consistency condition.

As I say, I think that in the context of that paper, the symmetric condition is referring to the bit probabilities, not to the LLR distributions.

The theory for short codes is described in…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1258600&tag=1

Take care, Rob.

June 2nd, 2012 at 9:53 am

Hi Rob,

I'm glad to get your help. The links you gave me last time were very helpful, and I have tried to plot the mutual information characteristics of the demapper and decoder.

In the system I adopt, the information bit sequences are encoded with a simple repetition code and then interleaved before being modulated using PPM. The I_a vs I_e characteristic of the repetition code decoder, as I plotted it, is a diagonal, as in S. ten Brink's paper. The I_a vs I_e characteristics of the demapper are plotted for different Eb/N0.

But the Matlab EXIT chart scripts you put on your website are, I think, suited to a single-user system. If the communication system is a multiple-access one, are the scripts still applicable?

Second, in my opinion, the EXIT chart tutorial written by Hagenauer indicates that the derivation of the mutual information transfer rests on the hypothesis of an AWGN channel. If the channel is replaced by a Poisson channel, e.g. in free-space optical communication, are the relevant equations and derivations in S. ten Brink's paper still appropriate?

I hope to get your help and advice.

Thanks

June 4th, 2012 at 2:53 am

hello Rob,

I noticed that generate_llrs.m in the Matlab turbo folder and the Matlab EXIT folder are different. Why is the LLR generation written in a different way between the EXIT and BER simulations?

Thanks

June 6th, 2012 at 6:08 pm

Hi Aaron Zhang,

In a multiple access system, you will need to consider the interference imposed by the other users. You will need a multi-user detector to separate the interfering signals. Otherwise, your receiver will incorrectly assume that there is no interference and your LLRs will not satisfy the consistency condition. This will cause the iterative decoding process to make some mistakes.

Likewise, if you use a different channel, then the demodulator will need to be updated to reflect this, otherwise the LLRs will not satisfy the consistency condition.

Note however that the EXIT function of the repetition code is not affected by what happens in the channel.

Take care, Rob.

June 6th, 2012 at 6:10 pm

Hi Zee,

The difference is that the generate_llrs.m function in the turbo folder guarantees that the generated LLRs have the requested mutual information. This is important when drawing the EXIT bands that are shown in the corresponding EXIT charts. The generate_llrs.m function in this folder does not make this guarantee. The code in this folder is older and I tend to use the code from the other folder these days.

Take care, Rob.

June 9th, 2012 at 8:13 am

Hi, Rob, thank you for your response to me.

First, you mentioned the interference imposed by other users in a multiple-access communication system. "Iterative decoding of convolutional codes", written by Prof. Hagenauer, introduces the concept of "channel reliability", which increases monotonically with SNR in the single-user case. Does this mean that the channel reliability coefficient would somehow decrease with increasing transmit energy in a multiple-access system, since the equivalent interference (or noise) imposed by other users will rise? The more users in the multiple-access system, the more equivalent interference there is.

Second, as I have gathered statistics of the extrinsic LLRs of the demapper module under the hypothesis of a Poisson atmospheric channel, the distribution is Gaussian-like, as in the AWGN assumption. From probability theory, the Poisson distribution can be approximated by a Gaussian distribution under some circumstances, e.g. when the lambda parameter of the Poisson distribution is large. Does this mean that I can still use the basic framework of your EXIT Matlab scripts?

Thanks a lot.

June 11th, 2012 at 8:16 am

Hi Aaron,

The channel reliability is only valid for AWGN channels. If you are willing to model the interference as Gaussian distributed noise, then you can treat the interference channel as an AWGN channel.

The EXIT chart can be used irrespective of what the extrinsic LLR distribution is. Different distributions will give slightly different EXIT charts, but there is no underlying assumption that the LLRs obey some particular distribution.

Take care, Rob.

June 11th, 2012 at 12:10 pm

Hi Rob,

In my view, the channel reliability somewhat represents the capability of restoring the signal at the receiver. In a multiple-user (multiple-access) system, the interference imposed by other users, together with the additive white Gaussian noise, can be modelled as a Gaussian variable by the central limit theorem, so the channel reliability for a certain user decreases.

Is my understanding of channel reliability, as described above, correct?

I hope to get your advice.

Thanks a lot

June 12th, 2012 at 8:11 am

Hi Aaron,

What you are saying sounds fine to me. One thing to beware of is that a Gaussian model of interference can sometimes be considered to be an oversimplification. The Gaussian model assumes that there are lots of interferers and that none of them is particularly strong relative to the others.

Take care, Rob.

June 12th, 2012 at 10:39 am

Thanks, Rob.

As Prof. Hagenauer mentioned in his paper about "Turbo principles", iterative decoding and demapping is possible, and EXIT charts can be used to analyse its convergence behaviour.

Referring to the tool proposed by S. ten Brink, I still do not know how to draw the exact iterative trajectories along the average transfer characteristics of the demapper and decoder, just as ten Brink did.

So far, I have just drawn the corresponding extrinsic transfer characteristics of the demapper (inner decoder) and the outer decoder. The trajectories run outside the range of these two average asymptotic lines.

I want to obtain exact graphs like those in ten Brink's paper.

thanks a lot

June 12th, 2012 at 2:00 pm

Hi Aaron,

You may like to try using a longer frame length - the longer the frame length, the better the match between the trajectories and the EXIT functions. Also, you should not average trajectories obtained from different frames together, instead you should plot trajectories for individual frames separately.

Take care, Rob.

June 13th, 2012 at 1:50 am

Hi Rob,

First I want to say thanks, I benefit a lot from learning your EXIT code, as well as some of your papers.

I have some questions about the difference between "likelihood" and "probability", which are different concepts in statistics. For example, in your "main_inner.m", L(c;I) comes from soft demodulation, i.e. "apriori_encoded1_llrs = (abs(rx1+1).^2-abs(rx1-1).^2)/N0;", which means ln[p(rx|tx=+1)/p(rx|tx=-1)], i.e. ln[p(rx|encodedbit=0)/p(rx|encodedbit=1)] according to your BPSK mapping, and represents a log-likelihood ratio. But the BCJR decoder outputs L(u;O) as "posteriori_encoded1_llrs(bit_index) = prob0-prob1;", which means ln[P(encodedbit=0|…)/P(encodedbit=1|…)] and represents a log a posteriori probability ratio. I am confused about the difference between L(c;I) and L(c;O), which in my opinion should have the same physical meaning.

Thanks a lot.

June 13th, 2012 at 3:39 pm

Hi Jing Dai,

You are talking about Bayes' theorem here. This says that P(x|y) = P(y|x)*P(x)/P(y), where x is what we transmit and y is what we receive. So, we have LLR = ln(P(x=0|y)) - ln(P(x=1|y)) = ln(P(y|x=0)*P(x=0)/P(y)) - ln(P(y|x=1)*P(x=1)/P(y)).

If we assume that P(x=0) = P(x=1), then we can simplify to LLR = ln(P(y|x=0)) - ln(P(y|x=1)).
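As an aside, this cancellation is easy to check numerically. Here is a small Python sketch (the channel likelihood values are made up for illustration, not taken from any particular channel):

```python
import math

# Hypothetical channel likelihoods for one received value y (illustrative numbers)
p_y_given_0 = 0.8   # P(y | x=0)
p_y_given_1 = 0.2   # P(y | x=1)
p0, p1 = 0.5, 0.5   # equiprobable priors P(x=0) = P(x=1)

# Posterior LLR via Bayes' theorem: P(x|y) = P(y|x)*P(x)/P(y)
p_y = p_y_given_0 * p0 + p_y_given_1 * p1
posterior_llr = math.log(p_y_given_0 * p0 / p_y) - math.log(p_y_given_1 * p1 / p_y)

# Likelihood LLR, with the equal priors cancelled out
likelihood_llr = math.log(p_y_given_0) - math.log(p_y_given_1)

print(abs(posterior_llr - likelihood_llr) < 1e-12)  # → True
```

The priors appear in both terms of the posterior LLR, so they cancel whenever P(x=0) = P(x=1).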

Take care, Rob.

June 14th, 2012 at 12:04 am

Thanks, Rob,

Now I know that for a channel-encoded sequence we can generally assume equal probabilities for the bits (and even for the code symbols?).

And your reply also explains why, in serially concatenated systems, L(u;O) of the inner code can be treated as L(c;I) of the outer code, which had been a question of mine ever since I started to study iterative decoding.

June 14th, 2012 at 10:35 am

Hi Jing,

We typically can assume that the binary values at the input and output of a channel encoder are equiprobable. This is because a well-designed source encoder should output equiprobable binary values, otherwise near-capacity operation is prevented.

Take care, Rob.

June 19th, 2012 at 6:27 am

Hello Rob,

Thanks for the response.

I am using your QPSK_exit code and want to modify it to 16QAM with a non-Gray mapping. I have modified the bit labels and constellation points. Can I use your code for 16QAM? Is there any difference between PSK and QAM bit labels? I noticed there is no equation involved, just the constellation points. For 16QAM, the constellation points are divided by sqrt(10), right?

June 19th, 2012 at 8:21 am

Hi Zee,

This code will work for 16QAM - as you say, the equations consider the position of the constellation points and there is nothing that is specific to PSK. To normalise the symbol energy of 16QAM you can divide by sqrt(10), as you say.

Take care, Rob.

June 21st, 2012 at 7:28 am

Hi Rob,

Another question for you. There are differences between the performance of QPSK and 4QAM, and between 16PSK and 16QAM, even though the positions of the constellation points are the same, right? How can we determine the labelling pattern for each point? In your code, the labelling pattern was chosen by you. Correct me if I was wrong.

Thank you

June 21st, 2012 at 7:58 am

Hi Zee,

QPSK and 4QAM have the same performance, since their constellation diagrams are the same. 16PSK has a different performance and a different constellation diagram to 16QAM. The labelling pattern is a choice that the code designer must make - different labellings will give different EXIT function shapes. However, the area beneath the demodulator’s EXIT function is independent of the labelling. Some different labellings that you may like to look up are Gray coding, set partitioning, natural mapping, etc.

Take care, Rob.

June 23rd, 2012 at 7:41 pm

Dear Rob,

First I would like to say thanks for your significant help. I need to use Monte Carlo simulations to calculate the mutual information of a 2×2 correlated MIMO system with the following channel model: y = HPx + n, where x is the vector of discrete inputs (BPSK, QPSK), H is the channel matrix, P is the precoding matrix and n is the Gaussian distributed noise. Using fmincon in Matlab I have found the optimal precoding matrix for different input combinations, and now I want to do Monte Carlo simulations to estimate the mutual information for the optimal precoding matrices I have found, within a range of SNRs from -10 dB to 30 dB. Can you provide some help or some Matlab code for this?

June 25th, 2012 at 10:39 am

Hi John,

If your information is in the form of LLRs in the receiver, then the measure_mutual_information functions that I have provided here will let you do what you are asking for.

Take care, Rob.

June 25th, 2012 at 6:12 pm

Hi Rob,

I have a question regarding the code you provided for plotting EXIT charts of LDPC codes (posted on March 16th, 2012). My question: what is the reference you used for the equation below that is used in the function ‘generate_llrs’

% Approximate the standard deviation that corresponds to the MI in the middle of the range

sigma = (-1.0/0.3073*log(1.0-((lower+upper)/2)^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935));

Another thing is that I compared the results of your code to the analytical EXIT chart obtained in Fig. 2 of this paper (Design of LDPC Codes: A Survey and New Results, http://elib.dlr.de/47266/), which uses the same equations obtained by ten Brink, but they didn't match. Can you tell me if I'm missing something?

I really appreciate your efforts Rob, thanks.

June 27th, 2012 at 9:30 am

Hi John,

I’m afraid that I can’t remember where that expression came from! It is an approximation for sigma_A = J^-1(I_A), where I_A = J(sigma_A) = 1-g_A(sigma_A) and g_A(sigma_A) is plotted in Figure 3 of…

http://scholar.google.co.uk/scholar?hl=en&lr=&cluster=13457646656598220553&um=1&ie=UTF-8&ei=n_gDStXbPNm4jAeZnIzQBA&sa=X&oi=science_links&resnum=1&ct=sl-allversions

To see how accurate the approximation is, you may like to compare the plot of g_A(sigma_A) in Figure 3 of this paper with the plot you get using the Matlab code…

sigma_A=[0:0.01:7];

I_A = (1.0-2.0.^(-0.3073*sigma_A.^(2*0.8935))).^1.1064;

g_A=1-I_A;

semilogy(sigma_A,g_A);
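As a further check, the sigma expression in generate_llrs is the algebraic inverse of this I_A formula. A quick Python sketch (using the constants 0.3073, 0.8935 and 1.1064 that appear in the generate_llrs expression) confirms the round trip:

```python
import math

# Constants from the I_A = J(sigma_A) approximation used in generate_llrs
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    # I_A as a function of the a priori LLR standard deviation sigma_A
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(I):
    # The expression used in generate_llrs, rearranged algebraically from J(sigma)
    return (-1.0 / H1 * math.log(1.0 - I ** (1.0 / H3)) / math.log(2.0)) ** (1.0 / (2.0 * H2))

# Round-tripping a target mutual information through J_inv and J recovers it
for target in (0.1, 0.5, 0.9):
    print(abs(J(J_inv(target)) - target) < 1e-9)  # → True each time
```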

I got a very good match when I compared the simulated EXIT functions to those obtained using ten Brink’s equations. Something to remember is that a variable node having a degree d has d+1 ports - one is used for the channel LLRs…

Hope this helps, Rob.

June 27th, 2012 at 5:03 pm

Thanks for you reply Rob, but can you give me the title of the paper or another link to it because the link above is not working.

Thanks again.

June 27th, 2012 at 5:58 pm

Hi John,

Ah - that link has expired and I can’t remember which paper I was talking about when I first saved it. In any case, you can read about another method for approximating the I_A = J(sigma_A) function in the appendix of…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1291808

Take care, Rob.

July 27th, 2012 at 12:50 am

Hello Rob;

Thanks for the info previously.

I want to ask about your qpsk exit code. In the demapper code, you have set some variables to infinity. May I know, why the value is infinity? Sorry for the silly question :)…

thanks

July 27th, 2012 at 6:33 pm

Hello Zee,

I think you are referring to p0=-inf and p1=-inf. The values of these variables are accumulated in a for loop using the jac function, with code similar to…

p0=-inf;

for i=1:10

p0 = jac(p0,some_value(i));

end

The reason why p0 starts from -inf in this example is the same reason why total starts from zero in this example…

total = 0;

for i = 1:10

total = total + some_value(i);

end

…namely, because starting the variable with this value means that it doesn’t affect the final result. More specifically, just like how 0+some_value = some_value, we have jac(-inf,some_value) = some_value.
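If it helps, the same identity can be demonstrated in a few lines of Python, where the exact Jacobian logarithm jac(a,b) = ln(e^a + e^b) is accumulated starting from -inf (this is a sketch, not the Matlab implementation):

```python
import math

def jac(a, b):
    # Exact Jacobian logarithm: ln(exp(a) + exp(b)), computed stably
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

values = [0.5, -1.2, 2.0]
p0 = -math.inf          # identity element, just like starting a sum from 0
for v in values:
    p0 = jac(p0, v)

# p0 now equals ln(sum of exp(values)), unaffected by the -inf starting value
print(abs(p0 - math.log(sum(math.exp(v) for v in values))) < 1e-12)  # → True
```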

Take care, Rob.

July 31st, 2012 at 4:25 am

Dear Rob,

Thank you so much for the explanation

It is a very helpful tip, because normally I would just use '0', not the '-inf' value.

thanks

August 7th, 2012 at 4:25 pm

Hi, I downloaded your Matlab code and found it very interesting. To simulate BICM, should I add an interleaver between the turbo encoder and the modulator, and then a de-interleaver between the demodulator and the turbo decoder?

I want to use the UMTS interleaver - must I change something, or can I just insert it as it is?

August 7th, 2012 at 6:50 pm

Hi Kadi,

I would recommend putting an interleaver between the turbo encoder and the modulator, in the way you suggest. However, it is unwise to use a particular interleaver design in more than one place in the same system. I would suggest using the UMTS interleaver within the turbo code and using some other design for the interleaver between the turbo code and modulator.

Take care, Rob.

September 6th, 2012 at 12:32 pm

Hi Rob,

Can I use this Matlab code for the (7,4) Hamming code? Also, I do not have a good idea of what an EXIT chart is - in your opinion, what should I read first?

Thank you

Sara

September 7th, 2012 at 12:35 pm

Hi Sara,

It is possible to describe the Hamming code using a trellis, allowing this Matlab code to act as its decoder. However, there are simpler ways to implement the Hamming decoder, although I don’t have any Matlab code for them. You should search for Joachim Hagenauer’s tutorial paper on EXIT charts.

Take care, Rob.

October 24th, 2012 at 8:13 am

Dear Rob,

How can I use the EXIT chart to obtain an estimate of the BER after an arbitrary number of iterations?

I know that

Pb = 0.5*erfc(sqrt(8*R*Eb/No + [J^-1(Ia)]^2 + [J^-1(Ie)]^2)/(2*sqrt(2))). With this formula and the EXIT chart, one can calculate the BER of the 1st decoder (outer code) and 2nd decoder (inner code) at each trajectory pass.

[this is equation (31) of 'Convergence Behavior of Iteratively Decoded Parallel Concatenated Codes']

However, how to calculate the BER of the concatenated code?

Thanks a lot!

Yue

October 24th, 2012 at 5:51 pm

Hello Yue,

Unfortunately, the EXIT chart doesn’t really tell us much about the BER of a serially concatenated code. If you could plot the BER of the outer code as a function of the MI of its apriori LLRs, then you could use this to convert the EXIT chart into a BER…

Take care, Rob.

January 21st, 2013 at 6:22 am

Dear Rob,

I think the demapper EXIT line for a Gray mapping scheme is always a straight line. I tried it for QPSK and a (3,6)-regular LDPC code, and the line was straight. Then I tried to plot the demapper EXIT curve by just changing the constellation points and bit labelling to 8-PSK and 16-QAM for the same (3,6)-regular LDPC code. In this case the line does not remain straight, but has a slight slope.

Can you give me any advice please? What can be the reason? Thanks

January 21st, 2013 at 5:01 pm

Hi Ideal,

This sounds correct to me. The EXIT curves of BPSK and Gray-mapped QPSK are horizontal lines. The EXIT curves of Gray-mapped 8PSK and 16QAM are almost horizontal lines. This is because Gray mapping is the design that gives the best performance without iterative decoding. Therefore iterative decoding cannot offer much improvement. To get a demodulator EXIT function having a greater slope, you can consider set partitioning or natural mapping (among lots of other options).

Take care, Rob.

January 22nd, 2013 at 4:37 am

Dear Rob,

Thanks, but I think I did not explain my question well. The question was about the straight line. As you said, the EXIT curves for 8PSK and 16QAM should be almost horizontal. In my case the EXIT curves for 8PSK and 16QAM are not almost horizontal - they have a slight slope. For example, at SNR = 2 dB, the demapper EXIT value (I_dm) varies from 0.420 to 0.455 as the a priori MI, I_a, goes from 0 to 1.

Similarly, for 4-PAM Gray mapping at the same SNR and I_a, I_dm varies from 0.454 to 0.484. Not straight lines in either case.

Meanwhile, the same script gives me an exactly straight line for QPSK Gray mapping. My question was: what can be the reason behind this (the line not being straight)? Thanks

January 22nd, 2013 at 6:50 pm

Hi Ideal,

The EXIT curves that you are describing sound correct to me. The reason why they are not perfectly horizontal is that the constellation points of 16QAM have different relationships to the other constellation points - some constellation points are surrounded by other constellation points on all four sides, while by contrast, some constellation points are at the edge of the constellation diagram. It is because the constellation points are not 'equal' that the EXIT curve is not horizontal.

Take care, Rob.

January 25th, 2013 at 12:21 am

Thanks dear Rob,

It looks fine to me, but how can I make the constellation points equal? At the moment I am using the following constellation points and bit labelling.

For 8-PSK,

constellation_points = [+1; sqrt(1/2)*(+1+1i); +1i; sqrt(1/2)*(-1+1i); -1; sqrt(1/2)*(-1-1i); -1i; sqrt(1/2)*(+1-1i)];

bit_labels = [0,0,0; 0,0,1; 0,1,1; 0,1,0; 1,1,0; 1,1,1; 1,0,1; 1,0,0];

For 16-QAM

modulation = sqrt(1/10)*[-3+3*i, -1+3*i, +1+3*i, +3+3*i, -3+1*i, -1+1*i, +1+1*i, +3+1*i, -3-1*i, -1-1*i, +1-1*i, +3-1*i, -3-3*i, -1-3*i, +1-3*i, +3-3*i];

bit_labels = [1,1,1,1; 1,0,1,1; 0,0,1,1; 0,1,1,1; 1,1,1,0; 1,0,1,0; 0,0,1,0; 0,1,1,0; 1,1,0,0; 1,0,0,0; 0,0,0,0; 0,1,0,0; 1,1,0,1; 1,0,0,1; 0,0,0,1; 0,1,0,1];

Or is there any way to ensure that the constellation points are equal? I really appreciate your help. Thanks

January 28th, 2013 at 1:27 pm

Hi Ideal,

I’m afraid that I don’t think it is possible to make the 8PSK and 16QAM EXIT curves become perfectly horizontal - I’m not 100% sure of this though…

Take care, Rob.

February 8th, 2013 at 11:02 am

Hi Rob,

Do you have any idea how to do the BCJR, or its log version, for the symbol-based case? In other words, the uncoded bits and encoded bits are not 0/1 - now each is a 2×2 matrix?

Thanks.

February 8th, 2013 at 5:54 pm

Hi Sara,

I have looked at the Log-BCJR for symbol-based schemes in the past, although I'm afraid I don't have any Matlab code for it. The calculation of the alphas, betas and deltas is very similar to the binary case. The main difference is in converting the a priori information into gammas and converting the deltas into extrinsic information. Rather than using one a priori LLR and one extrinsic LLR for each bit, you will need to use M a priori log-probabilities and M extrinsic log-probabilities for each symbol - here, M is the number of possible values for each symbol.

Take care, Rob.

February 8th, 2013 at 7:23 pm

HI Rob,

If the trellis is given (symbol-based schemes), do you think the LOG-BCJR decoding program can be easily modified based on the bit-based LOG-BCJR code you published here?

Thanks,

Sara

February 11th, 2013 at 6:03 pm

Hi Sara,

Yes, I think so. Like I said above, you would only need to change the conversion of the a priori information into gammas and the conversion of the deltas into extrinsic information.

Take care, Rob.

February 20th, 2013 at 12:14 am

Dear Rob,

One more question please,

Stephan ten Brink derives sigma_ch^2 = 4/sigma_n^2 in the appendix of his IEEE Transactions paper (vol. 52, no. 4, April 2004), "Design of Low-Density Parity-Check Codes for Modulation and Detection".

He assumes the signal model y = x + n, with x = [+1,-1] and n ~ N(0, sigma_n^2), and then it is straightforward to derive sigma_ch^2 = 4/sigma_n^2 using the channel LLR.

I use the same procedure for y = sqrt(P)*x + n, with x = [+1,-1] and n ~ N(0,1). In this case the channel LLR = 2*sqrt(P)*y and sigma_ch^2 can be derived as

sigma_ch^2 = 4P, using the variance formula sigma_ch^2 = E(LLR^2) - E^2[LLR].

Please can you advise me whether this is right or wrong? Thanks for the suggestion.

February 20th, 2013 at 12:57 pm

Dear Rob,

You use the generate_llrs function to model a random LLR sequence with a given mutual information, assuming that the LLRs follow a Gaussian distribution.

When I calculate the thresholds of an LDPC-coded ISI channel, I need to model the a priori information of the BCJR detector, that is, the output LLRs of the LDPC decoder. We assume they follow a Gaussian distribution (variance = 2*mean). Can I still use this function? How?

Thanks .

BR

LJ

February 20th, 2013 at 6:59 pm

Hi Ideal,

I’m afraid that I’m not sure what you mean by E(LLR^2) - E^2[LLR]. I can’t think of an occasion when we would want to square an LLR value…

Take care, Rob.

February 20th, 2013 at 7:01 pm

Hi Lingjun,

The generate_llrs function generates LLRs that have a Gaussian distribution, with variance = 2*mean, as you say. So you should be able to use it directly.
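As a quick sanity check, here is a Python sketch of that LLR model (assuming all transmitted bits are 0, so the Gaussian mean is +sigma^2/2), confirming the variance = 2*mean property empirically:

```python
import random

random.seed(0)
sigma = 2.0       # assumed a priori LLR standard deviation
n = 200000

# Gaussian LLR model for a transmitted bit of 0: mean = sigma^2/2, variance = sigma^2
llrs = [random.gauss(sigma ** 2 / 2.0, sigma) for _ in range(n)]

mean = sum(llrs) / n
var = sum((l - mean) ** 2 for l in llrs) / n

print(abs(var - 2.0 * mean) < 0.1)  # → True: variance ≈ 2 * mean
```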

Take care, Rob.

February 25th, 2013 at 5:06 am

Dear Rob,

Ok that’s fine but for the channel model

y = sqrt(P)*x + n, x=[+1,-1], n~(0,1)

Can you please let me know what will be the sigma_ch^2 ?

I calculate sigma_ch^2 = 4P.

February 25th, 2013 at 6:52 pm

Hi Ideal,

Let me refer you to equations 1, 2, 3 and 4 in my paper…

http://eprints.soton.ac.uk/272096/1/System_model.pdf

In this paper, I use E_s to mean the same thing that you call P. You can use d_{s,r}=1 and h_{s,r}=1 to get the same AWGN channel as yours. Here, sigma_{s,r} is the standard deviation of the noise.

Take care, Rob.

February 26th, 2013 at 12:34 am

Dear Rob,

I think I cannot explain my question very well. Let me explain in other way,

You use reference 16 in your paper, which is "Design of Low-Density Parity-Check Codes for Modulation and Detection". They calculate the EXIT function of a degree-d_v variable node using equation (4), which is

I_EV(I_A, d_v, Eb/No, R) = J(sqrt((d_v - 1)*[J^-1(I_A)]^2 + sigma_ch^2))

In the above equation, sigma_ch^2 is what they call the variance of the channel LLRs, derived for the BPSK AWGN channel as

sigma_ch^2 = 4/sigma_n^2 = 8*R*(Eb/No), where sigma_n^2 is the noise variance and R is the rate. They consider the signal model y = x + n, with n ~ N(0, sigma_n^2).

Now my question is how to derive the same sigma_ch^2 for the channel model y = sqrt(P)*x + n, where sigma_n^2 = 1, x = +1,-1 and SNR = P.

I told you that the LLR of this channel = 2*sqrt(P)*y and that sigma_ch^2 = 4*P.

I am unsure about the sigma_ch^2 derivation and wanted to ask you about it. Thanks

February 27th, 2013 at 6:27 pm

Hi Ideal,

I see what you mean now. I think that you need…

sigma_ch^2 = 4P/sigma_n^2,

…where sigma_n is the standard deviation of the real part of your noise.

My suggestion is to try this and to plot your EXIT functions using both the averaging and histogram methods. If the resultant EXIT functions agree with each other, then you can be confident that the equation I've provided above is correct.
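For example, the following Python sketch (with an assumed power P = 4 and sigma_n = 1, transmitting all +1s) estimates the LLR variance by Monte Carlo and agrees with sigma_ch^2 = 4P/sigma_n^2:

```python
import random

random.seed(0)
P = 4.0          # assumed transmit power, so sqrt(P) = 2 (illustrative value)
sigma_n = 1.0    # standard deviation of the real-valued noise
n = 200000

# Transmit x = +1 throughout; y = sqrt(P)*x + n, channel LLR = 2*sqrt(P)*y/sigma_n^2
llrs = []
for _ in range(n):
    y = P ** 0.5 + random.gauss(0.0, sigma_n)
    llrs.append(2.0 * P ** 0.5 * y / sigma_n ** 2)

mean = sum(llrs) / n
var = sum((l - mean) ** 2 for l in llrs) / n

print(abs(var - 4.0 * P / sigma_n ** 2) < 0.5)  # → True: sigma_ch^2 = 4P/sigma_n^2
```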

Take care, Rob.

March 3rd, 2013 at 12:42 pm

Dear Rob,

OK, but how can I calculate the standard deviation of the real part of the noise?

Or do you mean that sigma_n^2 = 1/2?

Please correct me. Thanks for the suggestion.

March 4th, 2013 at 6:23 pm

Hi Ideal,

For complex noise, we set sigma_n = sqrt(N0/2).

For purely real noise, you can use sigma_n = sqrt(N0).

Take care, Rob.

April 9th, 2013 at 2:19 pm

Hi Rob,

First thanks for your significant job, I appreciate it a lot!

I have been working with symbol-based BCJR algorithms recently, and I want to analyse their EXIT characteristics. Would you please give me some advice on non-binary EXIT charts from symbol-based APPs? Or do you have any related code for symbol-based EXIT charts?

Thanks!

April 11th, 2013 at 3:40 pm

Hello Cecilia,

I'm afraid that I don't have any Matlab code for symbol-based EXIT charts. However, the averaging method for measuring MI can be reformulated for symbols, rather than bits, quite easily. The equation becomes…

MI = log2(M) - mean(entropy(symbol_probabilities))

Here, M is the number of values that each symbol can take (e.g. M = 2 for binary). symbol_probabilities includes a vector of length M for each symbol in the symbol sequence, providing the a priori or extrinsic probabilities for each possible symbol value.
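To illustrate, here is a minimal Python sketch of this symbol-based averaging method (the function name symbol_mi is mine, not from the Matlab package):

```python
import math

def symbol_mi(symbol_probabilities, M):
    # MI = log2(M) - mean entropy of the per-symbol probability vectors
    def entropy(p):
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)
    probs = symbol_probabilities
    return math.log2(M) - sum(entropy(p) for p in probs) / len(probs)

# A perfectly known symbol contributes log2(M) bits...
print(symbol_mi([[1.0, 0.0, 0.0, 0.0]], 4))      # → 2.0
# ...while uniform probabilities contribute nothing
print(symbol_mi([[0.25, 0.25, 0.25, 0.25]], 4))  # → 0.0
```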

Take care, Rob.

April 15th, 2013 at 6:30 am

Hi Rob,

Thanks for your answer. But there is still a question.

Regarding sigma_A = J^-1(I_A): the approximate function J^-1 for symbols is different from the one used for bits, as far as I understand.

Is that right? If so, how to approximate J^-1 for symbols?

Thanks a lot!

April 15th, 2013 at 7:46 pm

Hi Cecilia,

I’m afraid that I don’t know about this. I’m sorry that I can’t be of more help to you…

Take care, Rob.

June 17th, 2013 at 7:36 pm

Hi Rob,

Regarding the LLR post-processing developed in this paper http://eprints.ecs.soton.ac.uk/18569/,

do you know how to design the LLR correction formulas for a different system? Equation (9) is specifically designed for the kth user of the proposed LSSTS-aided generalized MC DS-CDMA system, and K, V and Ne are parameters of that system. For a different system - for example, a single-antenna, single-user concatenated coding system - how should the LLR correction factor be designed?

June 17th, 2013 at 10:09 pm

Hi Rob,

We know that when the detector/decoder is sub-optimal, the histogram method is a better approach to measuring mutual information than the averaging method.

Here is the EXIT curve for a CC used as an outer code, where the averaging algorithm is used.

https://www.dropbox.com/s/ksyg9cm841m5s9u/exit.png

Because of the Max-Log-MAP approximation, the EXIT curve is not as accurate as we expected.

However, I notice that this less accurate EXIT curve indicates that Max-Log-MAP could work better than exact Log-MAP, since for a given I_A, Max-Log-MAP would give a larger I_E, which suggests Max-Log-MAP could work better and converge faster. (Note that here the x-axis is I_E and the y-axis is I_A.)

June 18th, 2013 at 6:04 pm

Hi Peter,

You can measure the required function using my display_llr_histograms function that I have provided in the comments above. This will produce a plot of the value that the LLRs *currently* have versus the values that the LLRs *should* have.

Take care, Rob.

June 18th, 2013 at 6:09 pm

Hi Ideal,

I would expect the inverted EXIT function of the Log-MAP decoder to have an area beneath it of 0.5 - it looks like this is the case in your plot, so I think that your Log-MAP EXIT function simulation is working well.

However, I would expect the inverted EXIT function of the Max-Log-MAP decoder to have an area beneath it of more than 0.5 - in your plot, it looks like this area is less than 0.5. This suggests to me that there is a problem with your Max-Log-MAP EXIT function simulation. Have you compared the function obtained using the histogram method with the function obtained using the averaging method?

Take care, Rob.

June 18th, 2013 at 7:06 pm

Thanks for your reply, Rob.

1) I already understand, theoretically, that when the detector/decoder is sub-optimal, the histogram method is a better approach to measuring mutual information than the averaging method.

2) Simulation results prove that the above argument is true, because the histogram approach does give a more accurate EXIT curve than averaging for a Max-Log-MAP detector/decoder.

3) What confuses me now is why the EXIT curve in my plot for the Max-Log-MAP decoder using the averaging approach is different from our expectation.

I would also expect the inverted EXIT function of the Max-Log-MAP decoder to have an area beneath it of more than 0.5. Furthermore, I don't think it's a programming problem. I used the EXIT code you provided in this post, only changing the Jacobian logarithm type in jac.m.

June 19th, 2013 at 11:00 am

“You can measure the required function using my display_llr_histograms function that I have provided in the comments above. This will produce a plot of the value that the LLRs *currently* have versus the values that the LLRs *should* have. ”

Thanks, Rob. Could you tell me a bit more about how to use display_llr_histograms to test my system? My understanding is: first, generate some a priori LLRs associated with some random bits. Then feed these a priori LLRs to the MAP decoder for decoding. Then call display_llr_histograms(llrs, bits) with the actual extrinsic LLRs generated by the decoder after decoding. The second input, "bits", of display_llr_histograms is still the one we used for generating the a priori LLRs. Is that right?

June 19th, 2013 at 6:23 pm

Hi Ideal,

As you say, when the detector/decoder is sub-optimal, the histogram method is a better approach to measure mutual information than averaging. Regardless of this, I think that you should compare the result that you get using the histogram method with the result you get using the averaging method. My suspicion is that this will show that there’s something going wrong somewhere…

Take care, Rob.

June 19th, 2013 at 6:24 pm

Hi Peter,

That sounds correct to me. The llrs that you provide to display_llr_histograms should pertain to the bits you provide it with.

Take care, Rob.

June 19th, 2013 at 7:15 pm

Hi Rob,

As expected, the histogram method gives a more accurate result, which lies on top of the exact Log-MAP curve. But I still can't understand why the averaging method gives that bizarre result for a Max-Log-MAP decoder.

June 20th, 2013 at 6:02 pm

Hi Ideal,

It sounds like you are saying that the histogram and averaging methods are giving you significantly different results. This implies to me that there is a bug in your simulation…

Take care, Rob.

June 20th, 2013 at 6:38 pm

I don’t think it’s a programming problem, Rob.

I'm computing the EXIT chart for a CC used as an outer code. I simply use the EXIT code you provided in this post (main_outer.m); the only modification is the Jacobian logarithm type in jac.m. If I choose Max-Log-MAP, then the histogram and averaging methods give significantly different results. The histogram method gives the more accurate result, as I said earlier.

June 21st, 2013 at 12:13 pm

Hi Rob.

Could you please explain more about display_llr_histograms? This function plots two figures, which plot the first row of a matrix called "results" against the second, third and fourth rows of "results". Could you tell me what the values in each row stand for?

June 21st, 2013 at 3:13 pm

Hi Rob,

I tried display_llr_histograms and I do get a diagonal line. But this line is running from the top-right corner to the bottom-left corner (minor diagonal). It’s different from your example. Does it mean the decoder is not working properly?

June 21st, 2013 at 5:11 pm

Hi Ideal,

That seems strange to me. When you switch to Log-MAP, do the histogram and averaging methods give the same result?

Take care, Rob.

June 21st, 2013 at 5:23 pm

I guess the "Log-MAP" you referred to is the exact Log-MAP.

Yes, in this case, the histogram and averaging methods give the same result. We also have similar results for the lookup-table Log-MAP.

For Max-Log-MAP, only the histogram method works; the averaging method fails (as we expected). What we cannot understand now is why the area under the inaccurate EXIT curve obtained by the averaging method is less than 0.5.

June 21st, 2013 at 5:24 pm

Hi Peter,

The first column is the LLR value. The second column is the probability that a zero-valued bit will adopt this LLR value. The third column is the probability that a one-valued bit will adopt this LLR value. The fourth column is log of column 3 divided by column 2.

The first column is the value that the LLR currently has. The fourth column is the value that the LLR should have.

The direction of the diagonal line will depend on whether you use LLR=ln(P1/P0) or LLR=ln(P0/P1). I guess you are using the latter because my display_llr_histograms assumes the former. It doesn’t matter too much because you can just multiply your LLRs by -1 if you want to change the direction of the diagonal line.
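To see what a consistent set of LLRs looks like under this convention, here is a Python sketch (not the display_llr_histograms implementation itself) that histograms Gaussian LLRs following the LLR = ln(P0/P1) convention, and checks that the log of the histogram ratio matches the bin's LLR value:

```python
import math
import random

random.seed(0)
sigma = 2.0
n = 500000

# Consistent LLRs for the convention LLR = ln(P0/P1):
# a bit b gets an LLR drawn from N((1-2b)*sigma^2/2, sigma^2)
bits, llrs = [], []
for _ in range(n):
    b = random.getrandbits(1)
    bits.append(b)
    llrs.append(random.gauss((1 - 2 * b) * sigma ** 2 / 2.0, sigma))

# Histogram the LLRs separately for zero-valued and one-valued bits
width = 0.5
h0, h1 = {}, {}
for l, b in zip(llrs, bits):
    h = h0 if b == 0 else h1
    k = math.floor(l / width)
    h[k] = h.get(k, 0) + 1

# For consistent LLRs, ln(count0/count1) in a bin matches the bin's LLR value
k = 2                                # bin covering LLRs from 1.0 to 1.5
centre = (k + 0.5) * width
measured = math.log(h0[k] / h1[k])
print(abs(measured - centre) < 0.2)  # → True for a large sample
```

A plot of `measured` against `centre` over all well-populated bins gives the diagonal line described above.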

Take care, Rob.

June 21st, 2013 at 5:26 pm

Hi Ideal,

This seems very strange. If you like, you could send me your Matlab code so that I can try to reproduce the problem.

Take care, Rob.

June 21st, 2013 at 5:33 pm

Hi Rob,

I'm using the EXIT code you provided on this page (main_outer.m); the only modification is the Jacobian logarithm type in jac.m.

You can reproduce the problem by choosing Max-Log-MAP in jac.m and making sure you use the averaging method in main_outer.m.

June 24th, 2013 at 3:26 pm

Hi Ideal,

I have just run this code myself. I changed to mode=2 in jac.m and histogram=1 in main_outer.m. This gives me an area beneath the inverted EXIT function of 0.5, as we may expect.

It is when I change to histogram=0 that I get an area of 0.46. The explanation for this is that the averaging method is not accurate, when using the sub-optimal Max-Log-MAP decoding algorithm.

Take care, Rob.

June 24th, 2013 at 9:17 pm

Hi Rob,

Thanks for your time.

Now we know that the histogram method is more suitable for the Max-Log-MAP decoding algorithm. But what causes the area of 0.46? It should be larger than 0.5, rather than smaller, right?

June 25th, 2013 at 9:32 am

Hi Ideal,

My guess is that when using the Max-Log-MAP algorithm, some of the LLRs have magnitudes that are higher than they should be. This causes the averaging method to overestimate the MI of those LLRs.

Take care, Rob.

June 27th, 2013 at 11:55 pm

Hi Rob,

Thanks for your reply. I'm interested in knowing how to compute the EXIT function for a concatenated coding system, for example an FEC-1 encoder followed by another FEC-2 encoder and then a mapper. If I want to compute the inner EXIT curve for FEC-2 and the mapper as one block, how should I do it? I'm totally confused right now. Could you please write a few lines of code for demonstration using the Matlab code provided on this page, for something like convolutional code -> convolutional code -> BPSK? Thank you very much.

June 28th, 2013 at 4:47 pm

Hi Peter,

My approach to this is to generate some a priori LLRs having the required MI for the uncoded input of FEC-2. Then I operate FEC-2 and the demapper repeatedly, until a high number of iterations have been completed. Finally, I take the extrinsic LLRs provided at the uncoded output of FEC-2 and measure their MI. I can then plot this MI versus the a priori MI, in order to obtain an EXIT function for the combination of FEC-2 and the demapper. I'm afraid that I don't have any Matlab code for this to hand, but I'm sure you could manage it without much trouble…

Take care, Rob.

June 28th, 2013 at 6:33 pm

Hi Rob,

Thanks for your reply. Your answer raises two new questions.

1) According to your reply, you start with FEC-2 and then the demapper. I'm confused about why you do it this way. At the receiver, the first component should be the demapper, then the FEC-2 decoder. Why are they activated in the reverse order?

2) Why measure the extrinsic LLRs after a few iterations between FEC-2 and the demapper? If the receiver operates in the following way, demapper -> FEC-2 -> FEC-1 -> FEC-2 -> demapper, do I have to make a few iterations between FEC-2 and the demapper?

June 29th, 2013 at 9:53 am

Hi Rob,

I tried to write some Matlab code based on what you said: generate some a priori LLRs having the required MI for the uncoded input of FEC-2, then do FEC-2 decoding, then the demapper.

I have just realized that we also need a priori LLRs for the FEC-2 codeword for FEC-2 decoding, and the a priori LLRs of the FEC-2 codeword are actually the output of the demapper. But now we start with the FEC-2 decoder and activate the demapper after FEC-2 decoding. So how can I obtain these a priori LLRs for the FEC-2 codeword? Or do I just set them to zeros?

June 29th, 2013 at 11:41 am

Hi Rob,

One more question: how do I compute the trajectory for such a system? Like you said, when computing the inner curve for the combination of FEC-2 and the demapper, we operate in this order: FEC-2 -> demapper -> FEC-2. When computing the trajectory, do I have to stick to the same order, such as

demapper -> FEC-2 -> FEC-1 (measure MI of the outer code) -> FEC-2 -> demapper -> FEC-2 (measure MI of the combination of FEC-2 and the demapper), and so on?

July 1st, 2013 at 5:31 pm

Hi Peter,

To answer your queries in turn:

1) It doesn’t really matter whether you operate FEC-2 or the demodulator first, since you are going to iterate them many times anyway.

2) If you don’t iterate FEC-2 and the demodulator many times, then the EXIT chart will not demonstrate the full potential of your scheme - it will suggest that your scheme is worse than it really is.

3) You should just use a vector of zeros for the missing input at the start of the first decoding iteration.

4) For the trajectory to match the EXIT chart, you need to use the following decoder activation order:

demod, FEC-2, demod, FEC-2, demod, …, FEC-2, demod, FEC-2, FEC-1, FEC-2, demod, FEC-2, demod, FEC-2, demod, …, FEC-2, demod, FEC-2, FEC-1, …

In practice, you wouldn’t use this decoder activation order, since it overuses FEC-2 and demod. However, any other decoder activation order will give you a trajectory that doesn’t match your EXIT chart.
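As a rough sketch of recording that trajectory (all decoder function names and signatures here are hypothetical placeholders), one trajectory step is many demod/FEC-2 activations followed by a single FEC-1 activation:

```matlab
% Hypothetical sketch of the activation order above; demod, fec2_decoder
% and fec1_decoder are placeholder names, not functions from this page.
extrinsic_fec1 = zeros(1, frame_length);           % missing input starts at zero
for step = 1:10
    for inner = 1:20                               % demod <-> FEC-2 activations
        apriori_fec2   = demod(rx, extrinsic_fec2);
        extrinsic_fec2 = fec2_decoder(apriori_fec2, extrinsic_fec1);
    end
    IE_inner(step)  = measure_mutual_information_averaging(extrinsic_fec2);
    extrinsic_fec1  = fec1_decoder(extrinsic_fec2); % single FEC-1 activation
    IE_outer(step)  = measure_mutual_information_averaging(extrinsic_fec1);
end
```

Plotting (IE_inner, IE_outer) step by step then gives a staircase trajectory that can be compared with the EXIT chart.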

Take care, Rob.

July 2nd, 2013 at 5:52 pm

Hi Rob,

Thanks for your reply. Why does my inner EXIT curve start to decrease quickly when Ia increases to a high value (around 0.9)? I use your histogram method, and my EXIT calculation is based on imperfect (estimated) channel information. Why does the BER simulation indicate that the system is working OK, while the EXIT chart fails to predict the system behaviour?

July 3rd, 2013 at 6:31 pm

Hi Peter,

EXIT functions should normally be increasing functions. If you have a non-increasing function, then this may be due to a bug in your code, or it may be due to the imperfect channel estimation. My suggestion would be to try using perfect channel estimation to see if the EXIT function becomes an increasing function. If it does, you will know what the cause is…

Take care, Rob.

July 3rd, 2013 at 6:46 pm

Hi Rob,

With perfect channel estimation, everything is working well. Also, the averaging and histogram methods give the same results.

Why could imperfect channel estimation cause the EXIT curve to start decreasing quickly when Ia increases to a high value?

July 4th, 2013 at 4:40 pm

Hi Peter,

I’m not sure why imperfect channel estimation is causing the EXIT curve to decrease quickly when Ia is increasing to a high value. I suppose that this depends on how imperfect the channel estimation is. Are you sure that there are no bugs in your channel estimator?

Take care, Rob.

August 19th, 2013 at 8:32 pm

Hi Rob,

I wanted to plot EXIT charts for the complete decoder for the different code rates (CQIs) in LTE. I made some Matlab functions and got results, but I am not sure that they are correct. Do you have some sample curves? In my case I assume the equalizer output is noise free, so the IEs are a transfer function of the IAs only (not of the IAs and Eb/N0). How should the output look?

August 20th, 2013 at 4:20 pm

Hi Ahmed,

You can be confident that your EXIT charts are correct by plotting them using both the averaging and the histogram methods. The resultant EXIT charts should be very similar to each other. If not, then this suggests that something is wrong.

Take care, Rob.

September 14th, 2013 at 9:42 pm

Hi Rob,

I want to calculate, using Matlab, the BICM rate for 4PAM in the Gaussian channel, integrating over the noise variable using Monte Carlo sampling.

I do not have Matlab code for this. Could you please provide me with the code?

Best regards,

Sai kumar

September 17th, 2013 at 8:57 am

Hello Sai Kumar,

It is not clear to me what you are trying to calculate. Do you mean the DCMC capacity of the Gaussian channel when using 4PAM? If so, you can find Matlab code for this at…

http://users.ecs.soton.ac.uk/rm/resources/matlabcapacity/

Or, are you trying to build a BICM scheme? If so, you can find some Matlab code for the convolutional code at the top of this page. Also, you can find some Matlab code for the demodulator at…

http://users.ecs.soton.ac.uk/rm/resources/matlabexit/#comment-1545

Take care, Rob.

September 17th, 2013 at 4:22 pm

Hi Rob,

Thanks for your reply.

Yes, I am building a BICM scheme for the Gaussian channel using 4PAM.

Take care, Rob.

October 10th, 2013 at 7:55 pm

Hi rob,

Just want to know: in ‘generate_llrs.m’, which literature is the calculation of sigma and llr based on? Thanks

October 11th, 2013 at 6:00 pm

Hi Ideal,

You can find this in equations 9 and 10 of…

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=957394

Take care, Rob.

October 11th, 2013 at 7:21 pm

Thanks, but I think equations 9 and 10 only answer part of my question. Where are all those constants in ‘generate_llrs.m’ coming from? I mean, where can I find the explanation of sigma = (-1.0/0.3037*log(1.0-mutual_information^(1.0/1.1064))/log(2.0))^(1.0/(2.0*0.8935));

October 12th, 2013 at 7:54 pm

Hi Rob,

Could you re-post the reference you mentioned in this reply

“Hi Terence,

I’m afraid that I can’t remember where that expression came from! It is an approximation for sigma_A = J^-1(I_A), where I_A = J(sigma_A) = 1-g_A(sigma_A) and g_A(sigma_A) is plotted in Figure 3 of…

http://scholar.google.co.uk/scholar?hl=en&lr=&cluster=13457646656598220553&um=1&ie=UTF-8&ei=n_gDStXbPNm4jAeZnIzQBA&sa=X&oi=science_links&resnum=1&ct=sl-allversions

To see how accurate the approximation is, you may like to compare the plot of g_A(sigma_A) in Figure 3 of this paper with the plot you get using the Matlab code…

sigma_A=[0:0.01:7];

I_A = (1.0-2.0.^(-0.3037*sigma_A.^(2*0.8935))).^1.1064;

g_A=1-I_A;

semilogy(sigma_A,g_A);

Hope this helps, Rob.

“

October 30th, 2013 at 9:21 am

Dear Rob,

Could you explain the difference when we use the convolutional code as an inner code and as an outer code? For each pair of encoder and decoder we have only one EXIT chart, don’t we?

Thank you for your reply.

October 30th, 2013 at 3:22 pm

Hi Ideal,

I’m afraid that I can’t remember which paper that is. However, you can find another approximation of the J and the J^-1 functions in the appendix of this paper…

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1291808

Take care, Rob.

October 30th, 2013 at 3:24 pm

Hello Juan,

An EXIT chart typically characterises one pair of decoders. A convolutional code can be used as either the inner or the outer code. Which you use depends on what you want to iterate with. A source decoder is normally an outer code, so the convolutional code becomes the inner code in this case. A demodulator is normally an inner code, so the convolutional code becomes the outer code in this case.

Take care, Rob.

October 31st, 2013 at 10:37 am

Dear

Could you explain why in main_outer we do not have the apriori_uncoded_llrs, but we do have them in main_inner?

Thank you very much.

October 31st, 2013 at 3:32 pm

Hi Juan,

When the convolutional code is an inner code, the apriori uncoded LLRs are provided by the concatenated outer code. However, when the convolutional code is an outer code, we typically don’t have any source of apriori uncoded LLRs - in this case, the concatenated inner code provides apriori encoded (rather than uncoded) LLRs.

Take care, Rob.

October 31st, 2013 at 3:40 pm

Thank you very much Rob, I got the idea now.

Your page is very helpful, you did a great contribution to the research community.

@Ideal: you can find the approximation by Monte Carlo simulation. The values in Rob’s code were taken from this paper, I think: http://aphrodite.s3.kth.se/~lkra/publications/05/BraRas05IT.pdf

Hope this helps,

regards

November 1st, 2013 at 6:09 pm

Hi Juan,

Thanks for this.

Take care, Rob.

November 4th, 2013 at 6:26 pm

Dear sir,

Could you explain why the bcjr_decoder here obtains the aposteriori LLRs after decoding, but your decoder in the UMTS turbo code obtains the extrinsic LLRs? Could you please point out the difference in the algorithm that you used in the component_decoder.m file? Is there any reference for equation 2.19 in Liang Li’s nine-month report?

Thank you in advance sir.

Juan

November 5th, 2013 at 6:07 pm

Hi Juan,

There are two choices when programming the BCJR decoder:

1) have it output the aposteriori LLRs, then have the turbo decoder subtract the apriori LLRs, in order to obtain the extrinsic LLRs;

2) have the BCJR decoder output the extrinsic LLRs directly.

I guess that you are trying to compare two versions of the BCJR decoder that use a different choice here.

I’m afraid that I can’t find a reference for how to calculate the extrinsic LLRs directly - I can’t remember if we read it somewhere, or if we came up with it ourselves…

Take care, Rob.

November 6th, 2013 at 8:22 am

Dear sir,

Thank you very much for your explanation.

I have one more question. As I see in your code, your example here for the EXIT chart is for a serial concatenated code (SCC), while the EXIT chart for the turbo code is for a parallel concatenated code (PCC). If so, in the SCC, the extrinsic LLR = aposteriori LLR - apriori LLR, but in the PCC, the extrinsic LLR = aposteriori LLR - apriori LLR - channel LLR?

Please correct me if I misunderstand the problem.

Thank you very much.

November 6th, 2013 at 6:44 pm

Hi Juan,

This is because a turbo code has systematic bits, which correspond to the channel LLRs. In a serial concatenated code, there are no systematic bits, and so we don’t have any channel LLRs to subtract.
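As a sketch (the variable names here are illustrative, not taken from the code on this page), the two extrinsic calculations read:

```matlab
% Parallel concatenation (turbo code): the systematic channel LLRs are
% included in the aposteriori LLRs, so they must be subtracted out too.
extrinsic_pcc = aposteriori - apriori - channel_systematic;

% Serial concatenation: no systematic bits, so no channel LLRs to subtract.
extrinsic_scc = aposteriori - apriori;
```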

Take care, Rob.

November 7th, 2013 at 1:25 pm

Dear Sir,

I am still wondering: if we consider the inner decoder of the SCC scheme, we still have the channel LLRs received after demodulation, as well as the a priori LLRs fed back from the outer decoder. Thus, we still have channel LLRs for the inner decoder of the SCC. Am I right?

Please help, sir.

November 8th, 2013 at 5:15 pm

Hi Juan,

If the inner code is non-systematic, then the channel LLRs will pertain to the inner encoder’s output bits, but they will not be relevant to the inner encoder’s input bits. In this case, the channel LLRs should be kept separate from the apriori and extrinsic LLRs.

Take care, Rob.

January 26th, 2014 at 7:12 am

Dear Rob,

Thank you for providing this useful information on turbo codes. The simulation results are for the AWGN channel. Could you please tell me where I can find simulation results for an i.i.d. fading channel where the LTE turbo code is applied?

Thank you for your kind help.

January 27th, 2014 at 3:32 pm

Hi Kaiser,

You can download a simulation for the LTE turbo code from…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

You can modify this to use a Rayleigh fading channel by using code like the following…

a_tx = -2*(a-0.5);

a_n = sqrt(N0/2)*(randn(size(a_tx))+i*randn(size(a_tx)));

a_h = sqrt(1/2)*(randn(size(a_tx))+i*randn(size(a_tx)));

a_rx = a_h.*a_tx + a_n;

a_c = (abs(a_rx+a_h).^2-abs(a_rx-a_h).^2)/N0;

Here, ‘a’ is a bit vector to be BPSK modulated and ‘a_c’ is a vector of corresponding demodulated LLRs.

Take care, Rob.

January 28th, 2014 at 3:37 am

Dear Rob,

Thank you for your kind help. Your comments are very valuable to me.

Some results can also be found in the following paper: M. C. Valenti and J. Sun, “The UMTS Turbo Code and an Efficient Decoder Implementation Suitable for Software-Defined Radios”, International Journal of Wireless Information Networks, vol. 8, no. 4, pp. 203-215, Oct. 2001.

Hope this will help other people who need performance of Turbo code over AWGN channel and Rayleigh fading channel.

This website is very useful for researching turbo codes, since you can obtain the valuable information you need.

Kaiser

January 28th, 2014 at 5:32 pm

Thanks for this Kaiser. Take care, Rob.

March 19th, 2014 at 12:23 am

hello

How can I find the determinant for a trellis code (a space-time trellis code)?

March 19th, 2014 at 7:53 pm

Hello Thaar,

I’m afraid that I can’t help you with this, since I haven’t looked closely at space time trellis codes.

Take care, Rob.

March 21st, 2014 at 8:30 pm

Hi Rob,

I am trying to implement turbo equalization: a linear MMSE filter for the equalization part, with a MAP decoder. The MMSE equalizer is from the paper titled “Turbo Equalization: Principles and New Results” by Michael Tuchler. My problem statement requires me to perform self-iterations in the equalizer block and then send the extrinsic information to the decoder. This means that I have inner iterations within both my equalizer and my decoder.

The problem I am facing is that the extrinsic LLRs coming out of the MMSE equalizer are very high, on the order of hundreds. Because of this, I don’t get much gain from the inner iterations through the equalizer block. I have tried scaling down the extrinsic LLRs before sending them back to the equalizer, but it is not of much help. Can you suggest a solution to this?

Thankyou.

March 22nd, 2014 at 5:08 pm

Hi Goshal,

I would be suspicious of LLRs that have values in the hundreds. I think that you should check the legitimacy of these LLRs by measuring their mutual information using both the averaging and histogram methods. If these methods give the same mutual information, then your LLRs are legitimate. If not, then I suspect there is a bug in your MMSE equaliser.

Take care, Rob.

March 23rd, 2014 at 9:25 pm

Hi Rob,

Thank you for your reply. I measured the mutual information as you suggested and the two methods give different measure. So that confirms your suspicion.

For locating the bug, I initialised the LLRs to 0 for all bits. This reduces the MMSE filter to the time-invariant filter, which is a pretty straightforward result. To keep things simple, I am keeping noise_var = 1.

(Ref: Turbo Equalization: Principles and new Results (Section-4A))

I am looking at this setup again and again and am not really able to locate the problem. Can you suggest something here?

March 23rd, 2014 at 10:21 pm

Hi Rob,

I was exploring your main_exit.m to see how the mutual information should come out.

I replaced

% Measure the mutual information

if histogram_method

IEs(frame_index) = measure_mutual_information_histogram(a_e,a);

else

IEs(frame_index) = measure_mutual_information_averaging(a_e);

end

with

ie_hist = measure_mutual_information_histogram(a_e,a)

ie_avg = measure_mutual_information_averaging(a_e)

but in the simulations, ie_hist ~= ie_avg.

What I understood from your last post was that the two should be equal all the time. If so, then why don’t they come out to be equal in main_exit.m?

Am I missing something here?

I would appreciate your help.

March 24th, 2014 at 6:36 pm

Hi Goshal,

I’m afraid that I’ve never worked on a soft-decision MMSE equaliser before, so I can’t really help with debugging it. Please note that the two methods of measuring mutual information will never give identical answers because they make different assumptions about the LLRs and bits. However, when comparing the EXIT functions or MI measurement by eye, they should look very close to each other.

Take care, Rob.

April 24th, 2014 at 6:26 am

Dear Bob,

There is a problem with my EXIT chart using the histogram method.

I am now implementing a suboptimal MIMO iterative decoder, which uses max-log approximations of the LLRs. And as you’ve said, using approximations will cause differences between the averaging method and the histogram method.

But my problem is so strange…With the verification code that you have provided, that is…

bits = round(rand(1,1000000));

llrs = generate_llrs(bits, 0.5);

display_llr_histograms(llrs,bits);

the diagonal line is from the top left to the bottom right.

But with my decoder implementation, the diagonal line is from the bottom left to the top right.

Besides, my EXIT chart with the histogram method goes downwards… while that with the averaging method goes upwards, but the values are not very satisfactory.

In addition, while the two EXIT charts are both not quite right, the BER performance of my decoder is pretty close to optimal. So the soft decisions are not completely wrong…

Would you please suggest some possible mistakes of my problem?

Regards,

Lewen Zhao

April 24th, 2014 at 10:54 am

Dear Bob,

The problem is solved!

I added a negative sign to the LLRs generated by the ‘generate_llrs’ function and it’s done.

Actually, I made a mistake in mentioning that I am implementing a decoder… it’s actually a detector…

You mentioned that the direction of the line generated by ‘display_llr_histograms’ is determined by whether the LLR is defined as ‘LLR = ln(P1/P0)’ or the inverse.

But this line of code in ‘generate_llrs’…

llrs = randn(1,length(bits))*sigma - (bits-0.5)*sigma^2;

you see, there is a negative sign before ‘bits’; does this mean that you are assuming ‘LLR = ln(P0/P1)’ in ‘generate_llrs’?

Regards,

Lewen Zhao

April 24th, 2014 at 1:29 pm

Hi Lewen,

That’s right. Some people assume LLR = ln(p0/p1), whereas others assume LLR = ln(p1/p0). When joining two pieces of code that make different assumptions here, you just need to multiply the LLRs by -1.

Take care, Rob.

May 6th, 2014 at 12:45 pm

Hello Mr.

I am working on a turbo-convolutional encoder, and I want to use a turbo decoder based on the Max-Log-MAP algorithm to decode the information.

I would like some Matlab code for this, to understand how it is done.

Thank you for help

best regards

May 6th, 2014 at 4:27 pm

Hello Yazbek,

You can download my Matlab code for the turbo decoder from…

http://users.ecs.soton.ac.uk/rm/resources/matlabturbo/

Take care, Rob.

May 20th, 2014 at 12:50 pm

Hello Mr.Rob

Thank you. Now I understand the turbo decoder well.

Take care, YAZBEK

November 16th, 2014 at 1:28 pm

Hello Bob, please do you have some code for MIMO channels with fading?

November 17th, 2014 at 9:35 am

Hello Joy,

I’m afraid that I don’t have any Matlab codes for MIMO.

Take care, Rob.

December 6th, 2014 at 10:44 am

Dear Sir,

Could you please tell me the system model that you used in the EXIT chart where the convolutional code is used as a middle code?

Thank you very much

Take care,

December 7th, 2014 at 12:35 pm

Hello Juan,

This is a recursive systematic convolutional code, having a coding rate of 1/2 and two states. Therefore, the systematic bits are equal to the message bits, while the parity bits are equal to the accumulation (modulo-2 cumulative sum) of the message bits.
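As a concrete sketch, this two-state accumulator can be written in a few lines of Matlab:

```matlab
message    = round(rand(1,8));         % random message bits
systematic = message;                  % systematic bits equal the message bits
parity     = mod(cumsum(message), 2);  % accumulator: modulo-2 cumulative sum
codeword   = reshape([systematic; parity], 1, []);  % rate-1/2 interlaced output
```

Each parity bit is the running XOR of all message bits so far, which is exactly the modulo-2 cumulative sum described above.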

Take care, Rob.

December 7th, 2014 at 3:04 pm

Thank you for your reply, Sir.

However, what I mean is: how is the convolutional code used as a middle code? How does it connect to the outer and inner codes?

Thank you very much,

Take care Sir.

December 7th, 2014 at 3:16 pm

Hi Juan,

The bits (a) output by the outer encoder are interleaved and provided as the input to the middle encoder. Likewise, the bits (b) output by the middle encoder are interleaved and provided as the input to the inner encoder. The middle decoder iteratively exchanges extrinsic information pertaining to the bits of (a) with the outer decoder. Likewise, the middle decoder iteratively exchanges extrinsic information pertaining to the bits of (b) with the inner decoder. I hope that makes sense.

Take care, Rob.

December 7th, 2014 at 8:08 pm

Dear Sir, It’s very clear to me. Thank you very much.

I really appreciate and admire your support for everyone here. It’s wonderful and amazing volunteer work, and it was a big help for a beginner like me. Once again, thank you very much, Sir Rob.

Take care.

December 17th, 2014 at 7:29 pm

Hello, I just need simulation code for radio resource management in LTE.

December 18th, 2014 at 10:25 am

Hello Mohammed,

I’m afraid that I don’t have any simulations of radio resource management in LTE.

Take care, Rob.

December 21st, 2014 at 7:09 pm

Dear Sir,

I am designing a soft demapper which is iteratively concatenated with a BCJR decoder. In my demapper, I need both the extrinsic LLRs and the a posteriori LLRs from the BCJR.

Thus, when I plot the EXIT function for the BCJR, I also plot the mutual information of the a posteriori LLRs. I store two vectors of mutual information: IE (for the extrinsic LLRs) and the corresponding IP (for the a posteriori LLRs).

Then, when I plot the EXIT function for my demapper, thanks to your function generate_llrs, I generate extrinsic LLRs from IE and a posteriori LLRs from IP.

However, my problem is that when I measure the extrinsic mutual information of the demapper, it does not match the value from the real simulation (in which I put the two blocks together and track the trajectory).

Could you please give me some advice? Can we obtain the EXIT function of a block by feeding in both the extrinsic and a posteriori LLRs of the other block?

Thank you very much Dr. Rob.

December 23rd, 2014 at 10:06 am

Hi Juan,

I would suggest that by generating separate extrinsic and aposteriori LLRs, you are not representing the dependencies between these - they are not independent of each other. My suggestion would be to generate separate apriori and extrinsic LLRs, then obtain the aposteriori LLRs by adding the apriori and extrinsic. You can use the J function to determine the apriori MI that you need, in order to give the desired aposteriori MI…

IP = J[sqrt([J^-1(IA)]^2 + [J^-1(IE)]^2)]
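As a sketch, this combination can be computed using the same curve-fit approximations of J and J^-1 that appear in generate_llrs.m and earlier in this thread (their accuracy is discussed above):

```matlab
% Approximations of the J function and its inverse, using the constants
% from generate_llrs.m; these are curve fits, not exact expressions.
J     = @(sigma) (1.0 - 2.0.^(-0.3037*sigma.^(2*0.8935))).^1.1064;
J_inv = @(I) (-1.0/0.3037*log(1.0-I.^(1.0/1.1064))/log(2.0)).^(1.0/(2.0*0.8935));

IA = 0.5; IE = 0.7;                        % example apriori and extrinsic MIs
IP = J(sqrt(J_inv(IA)^2 + J_inv(IE)^2));   % resulting aposteriori MI
```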

Please see the following paper for more information on the J function…

S. ten Brink, G. Kramer and A. Ashikhmin, “Design of low-density parity-check codes for modulation and detection”, IEEE Trans. Commun., vol. 52, no. 4, pp. 670-678, Apr. 2004.

Take care, Rob.