I need to fill a matrix with 1's and 0's. The data provided are the matrix dimensions and the distributions of 1's across rows and columns (given through two different vectors, since they can differ in each case). So we have two vectors, v and h, whose elements are defined as:
v_i: the i-th element of vector v. It indicates the fraction of columns of weight i (the fraction of columns, over the total, that contain i 1's).
h_i: the i-th element of vector h. It indicates the fraction of rows of weight i (the fraction of rows, over the total, that contain i 1's).
The procedure in question is the MacKay-Neal algorithm. It consists of filling the matrix column by column, from left to right. The weight of each column is chosen so as to obtain a valid distribution of 1's, and the row positions of these 1's in each column are chosen randomly among the rows that are not yet full.
Please find my code right below. You may also find in this webpage the PDF document I am currently consulting for this part of my project. Everything I mentioned above can be found on pages 10 and 11, and the MacKay algorithm is given in pseudocode on page 14.
function H = MacKayNeal(N,r,v,h)
    H = zeros(N*(1-r),N);            % parity-check matrix with M = N*(1-r) rows
    % a lists the column weights: value i is appended v(i)*N times
    % (one entry per column of weight i)
    a = [];
    for i = 1:length(v)
        for j = 1:(v(i)*N)
            a = [a,i];
        end
    end
    % b lists the row weights: value i is appended h(i)*M times
    % (one entry per row of weight i)
    b = [];
    for i = 1:length(h)
        for j = 1:(h(i)*(N*(1-r)))
            b = [b,i];
        end
    end
    % fill the matrix column by column, from left to right
    for i = 1:N
        c = datasample(b,a(i),'Replace',false);   % draw a(i) entries of b
        for j = 1:a(i)
            H(c(j),i) = 1;
        end
        a = a - c;
    end
    % while 1 % cycles removed
    %     for i = 1:(N-1)
    %         for j = (i+1):N
    %             if 1 %
    %             end
    %         end
    %     end
    % end
end
Note that the vectors a and b correspond, respectively, to the alpha and beta variables used in the pseudocode of the attached text. Their values indicate the number of 1's per column and per row, respectively.
In my case everything went fine until I performed the subtraction a = a - c at the end of the column loop. I spent a whole afternoon racking my brains over it, and two questions came out of it:
1. Why is this subtraction a = a - c and not b = b - [one unit at each position corresponding to c]? The latter would make much more sense to me with respect to the theoretical explanation that precedes the pseudocode in the PDF linked above.
2. What would happen if the subset chosen from b contains at least two equal values when performing the subsequent assignment of 1's in the H matrix? To me this seems a perfectly feasible situation, in which fewer than a_i 1's would be added.
If anyone can also help me with the last part of the task, the removal of length-4 cycles referred to in the pseudocode (which simply means that no two columns may contain a pair of 1's in the same two row positions), I would be really thankful.
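Just to make the definition concrete, this is how I currently picture such cycles being detected (a rough sketch of my own, not taken from the PDF; the function name is made up): any off-diagonal entry of H'*H that reaches 2 means two columns share at least two row positions.

% Rough sketch (my own, not from the PDF): find column pairs forming length-4 cycles.
% (H'*H)(i,j) counts the rows in which columns i and j both contain a 1.
function pairs = findLength4Cycles(H)
    overlap = H' * H;                          % column-by-column overlaps
    overlap(logical(eye(size(overlap)))) = 0;  % ignore each column's overlap with itself
    [ci, cj] = find(triu(overlap) >= 2);       % pairs sharing at least two rows
    pairs = [ci, cj];                          % each row is one offending column pair
end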
In my case I am working with the following data:
N = 100
k = 0.5
v = [0 0.17 0.03 0.2 0 0 0.17 0.3 0.03 0.1]
h = [0 0.1 0 0 0 0.7 0.2]
For both vectors v and h, the elements can take any value from 0 to 1, as long as they sum to 1 and the first element equals 0 in both cases. Also, k can take any value between 0 and 1.
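(For completeness, a sanity check of these constraints could look like the following; the tolerance value is arbitrary.)

% Sanity check of the constraints above (tolerance chosen arbitrarily)
tol = 1e-9;
assert(abs(sum(v) - 1) < tol && v(1) == 0, 'v must sum to 1 with v(1) = 0');
assert(abs(sum(h) - 1) < tol && h(1) == 0, 'h must sum to 1 with h(1) = 0');
assert(k > 0 && k < 1, 'k must lie between 0 and 1');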
Many thanks in advance for your attention!
Edit on 06/01/2017
Continuing with the sample values used above, I encountered the following problem. You may also find the updated version of my code there (all previous statements remain the same).
In this case, given the h distribution, the elements of the β vector can only take three different values (2, 6, and 7). For those cases where α_i is larger than three (in our example it can be up to ten), vector c will contain at least two equal values, despite the 'Replace', false option in the datasample call.
I suppose that is what you were referring to previously when you mentioned that this algorithm may not always terminate, but in this case it would never do so!
Edit on 01/03/2017
I now have another issue, related to the construction of the vectors a/b. My problem is that when N is high, for instance N = 100 (and consequently, for r = 0.5, the matrix dimension is 50x100), the vector representing the distribution of column weights has length 1x100. Let me give a brief example. We may, for instance, have a column distribution consisting of v_2 = 0.8258 (recall that, according to the specifications, v_1 = 0), while the remaining values up to v_100 are much smaller, around v_i = 0.0018 or so in our case.
Our code will not work in this case, since these values are much smaller than 1 when multiplied by N = 100 (for more details please see my proposed solution below). That is, no columns of those weights are generated at all, even though the sum of all the fractions equals one according to theory. As a consequence, the vector a does not end up with N = 100 elements but only 82 in this case, so the column index i cannot run up to N = 100 (and the same issue happens with the h/b vectors). For a better understanding of my problem, I recommend running my code with the following parameters:
N = 100;
r = 0.5;
v = [0 0.8258 0.0048 0.0033 0.0027 0.0024 0.0023 0.0022 0.0021 0.0020 0.0019 0.0019 0.0019 0.0019 0.0019 0.0019 0.0018 0.0018 0.0018 0.0018 0.0018 0.0018 0.0018 0.0018 0.0018 0.0018 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0016 0.0016 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0017 0.0016 0.0016 0.0016 0.0016 0.0017 0.0017 0.0017 0.0016 0.0017 0.0017 0.0016 0.0016 0.0016 0.0016 0.0017 0.0017 0.0017 0.0016 0.0016 0.0016 0.0017 0.0017 0.0017 0.0016 0.0017 0.0016 0.0016 0.0016 0.0016 0.0016 0.0017 0.0016 0.0017 0.0016 0.0016 0.0016 0.0016 0.0016 0.0016 0.0016 0.0017];
h = [0 0.0179 0.0102 0.0277 0.0392 0.0272 0.0027 0.0055 0.0148 0.0151 0.0137 0.0259 0.0103 0.0336 0.0386 0.0049 0.0065 0.0051 0.0081 0.0441 0.0298 0.0147 0.0243 0.0207 0.0198 0.0006 0.0287 0.0425 0.0088 0.0094 0.0368 0.0090 0.0181 0.0306 0.0158 0.0325 0.0108 0.0268 0.0163 0.0448 0.0395 0.0262 0.0097 0.0114 0.0283 0.0437 0.0103 0.0058 0.0249 0.0083];
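(As a rough idea only, not necessarily the right fix: the count vector a could be forced to contain exactly N entries by rounding the counts and adjusting the largest bucket so that they sum to N; the same would apply to b with M = N*(1-r) rows.)

% Illustrative sketch only: force the column-weight vector a to have exactly N entries.
counts = round(v * N);                    % counts(i) = columns of weight i; may not sum to N
[~, imax] = max(counts);                  % adjust the largest bucket to absorb the rounding error
counts(imax) = counts(imax) + (N - sum(counts));
a = [];
for i = 1:length(counts)
    a = [a, repmat(i, 1, counts(i))];     % append counts(i) copies of weight i
end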
Many thanks in advance, and best regards.
I believe you are right and both of your points are mistakes in the lecture notes.
I think it makes much more sense if we replace the update on α in the pseudocode (the a = a - c in your code) with an update that deletes the chosen entries from β, in the multiset sense described at the end of this answer.
Section 3.2 of "Novel Construction of Short Length LDPC Codes for Simple Decoding" by Fatma A. Newagy, Yasmine A. Fahmy, and Magdi M. S. El-Soudani contains this version of the algorithm (with u taking the role of β).
One simple way of adjusting the Matlab code is to repeat the datasample step until all elements are distinct.
Note that this algorithm may not always terminate, so I recommend you keep track of the number of retries and start again (from the beginning, or from a few steps before) if this number gets too high.
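A rough sketch of that retry idea (variable names are mine; it assumes b holds one row number per remaining 1, as in Algorithm 2 below, and the maxRetries limit is arbitrary):

% Sketch of the retry idea (illustrative names). b holds one row number per
% remaining 1, and a(i) is the desired weight of column i.
maxRetries = 100;                  % arbitrary limit before restarting the construction
retries = 0;
while true
    [c, idx] = datasample(b, a(i), 'Replace', false);  % idx = positions within b
    if numel(unique(c)) == a(i)                        % all sampled row numbers distinct
        break;
    end
    retries = retries + 1;
    if retries > maxRetries
        error('Too many retries: restart the construction.');
    end
end
H(c, i) = 1;                       % place the 1's of column i
b(idx) = [];                       % delete the chosen entries from b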
Note that there appear to be two main variants of this algorithm:
Algorithm 1
(I believe this is the original MacKay Neal algorithm)
A vector with length equal to the number of rows stores the desired weight of each row.
When a random sample of rows is chosen, 1 is subtracted from the corresponding entries in the vector to mark that those rows now have a smaller desired weight.
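A minimal sketch of this variant (illustrative names only): rowWeight has one entry per row storing its remaining desired weight, and the sample is drawn from the rows that still need 1's.

% Sketch of Algorithm 1 (illustrative names). rowWeight(m) is the remaining
% desired weight of row m; a(i) is the desired weight of column i.
for i = 1:N
    available = find(rowWeight > 0);                      % rows not yet full
    rows = datasample(available, a(i), 'Replace', false); % a(i) distinct rows
    H(rows, i) = 1;                                       % place column i's 1's
    rowWeight(rows) = rowWeight(rows) - 1;                % subtract 1 from the chosen rows
end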
Algorithm 2
(I believe this is a common improvement.)
In order to improve the chance of getting the right row distribution, it is better to bias the sample towards the rows which need more weight.
This is done by preparing a vector of length equal to the total number of 1's in the whole matrix. The elements of the vector correspond to row numbers.
A random sample of this vector provides a set of row numbers. These entries are deleted from the vector.
So, for example, if we want to eventually have four 1's in row 2, then the number 2 (corresponding to row 2) will appear four times in the vector. This means that row 2 will be picked four times, and therefore will end up with four 1's.
I believe the pseudocode refers to this second algorithm.
This means that if β = [1,1,2,2,3,3] and we choose a vector c of [2,3], then the operation β - c should produce the output [1,1,2,3].
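A minimal Matlab sketch of that deletion step (illustrative only): sample positions rather than values, then delete those positions, so exactly one occurrence of each chosen value is removed.

% Sketch of the beta update in Algorithm 2 (illustrative). beta holds one row
% number for every 1 still to be placed, e.g. row 2 appears four times if
% row 2 should end up with four 1's.
beta = [1, 1, 2, 2, 3, 3];                        % example from above
[c, idx] = datasample(beta, 2, 'Replace', false); % sample positions, not values
beta(idx) = [];                                   % remove one occurrence of each sampled value
% If c happened to be [2, 3], beta is now [1, 1, 2, 3], as described above.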