What is going on in the following code?
function [errors, sepplane] = perceptron(pclass, nclass)
  % Random initial hyperplane; +1 column for the bias term.
  sepplane = rand(1, columns(pclass) + 1) - 0.5;
  % Sign-encoded training set: positives as [ 1, x], negatives as [-1, -x],
  % so "correctly classified" becomes tset * sepplane' > 0 for every row.
  tset = [ ones(rows(pclass), 1),  pclass;
          -ones(rows(nclass), 1), -nclass ];
  i = 1;
  do
    misind = tset * sepplane' < 0;                      % misclassified rows
    correction = sum(tset(misind, :), 1) / sqrt(i);     % damped update
    sepplane = sepplane + correction;
    ++i;
  until (norm(sepplane) * 0.0005 - norm(correction) > 0 || i > 1000)
  errors = mean(tset * sepplane' < 0);
end

% Caller (tvec holds the feature vectors, tlab the class labels):
dzeros = tvec(tlab == 1, :);
dones  = tvec(tlab == 2, :);
perceptron(dzeros, dones)
(0) Why is this code so drastically different from the other one?
(1) Why are the positive and negative classes passed in separately in the first place? Then what is the point of doing a classification?
(2) What is sepplane?
(3) What is misind?
(4) What is the rationale behind the calculation of correction?
(0) Why is this code so drastically different from the other one?
This version can be extended to handle multi-class problems; the other one is strictly for binary classification.
(1) Why are the positive and negative classes passed in separately in the first place? Then what is the point of doing a classification?
This is the training step of the perceptron: you need both the positive and the negative examples to learn the weights. Classification of new data comes afterwards, using the learned sepplane.
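The sign trick used to build tset can be sketched in NumPy (a hedged translation of the Octave code; pclass and nclass here are made-up toy points, not the tvec/tlab data from the question):

```python
import numpy as np

pclass = np.array([[2.0, 1.0], [3.0, 2.0]])      # toy positive examples
nclass = np.array([[-2.0, -1.0], [-3.0, -1.0]])  # toy negative examples

# Prepend a bias column of ones, and negate the negative class wholesale:
tset = np.vstack([
    np.hstack([np.ones((len(pclass), 1)), pclass]),
    np.hstack([-np.ones((len(nclass), 1)), -nclass]),
])

# With this encoding, "row i is classified correctly" is the single
# condition tset[i] @ w > 0 for BOTH classes -- no per-class cases needed.
w = np.array([0.0, 1.0, 0.0])   # a plane that happens to separate the toy data
print(np.all(tset @ w > 0))     # every example on the correct side
```

Folding the class label into the sign of each row is what lets the training loop treat all examples uniformly.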
(2) What is sepplane?
The separating hyperplane that the perceptron learns. Its first component is the bias (intercept) term, matching the column of ones prepended to tset.
(3) What is misind?
A logical index vector with one entry per training example: true (1) where the example is currently misclassified, i.e. its score tset * sepplane' is negative, and false (0) otherwise. It is then used to select only the misclassified rows via tset(misind, :).
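A minimal NumPy sketch of that boolean-mask selection (illustrative toy values, assuming the sign-encoded tset described above):

```python
import numpy as np

tset = np.array([[ 1.0,  2.0],
                 [ 1.0, -3.0],
                 [-1.0,  0.5]])
sepplane = np.array([0.0, 1.0])

scores = tset @ sepplane   # one score per training example
misind = scores < 0        # True exactly for the misclassified rows
print(misind)              # one True entry, for the second row

# The mask selects only the misclassified rows, like tset(misind, :) in Octave:
print(tset[misind])
```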
(4) What is the rationale behind the calculation of correction?
If an example is misclassified, the weights should be adjusted so that it is more likely to be classified correctly the next time it passes through the perceptron. Here, correction is the sum of all misclassified rows of tset, scaled by 1/sqrt(i), so the step size shrinks as the iterations progress and the updates settle down.
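One iteration of that update rule, sketched in NumPy (a hedged translation; the training set and initial plane below are toy values I made up):

```python
import numpy as np

def perceptron_step(tset, sepplane, i):
    """One update: add the sum of misclassified rows, damped by 1/sqrt(i)."""
    misind = tset @ sepplane < 0
    correction = tset[misind].sum(axis=0) / np.sqrt(i)
    return sepplane + correction, correction

# Toy sign-encoded training set (bias column + two features):
tset = np.array([[1.0,  2.0,  1.0],
                 [1.0,  1.0,  2.0],
                 [1.0, -1.0, -1.0]])   # this row starts out misclassified
sepplane = np.array([0.0, 1.0, 1.0])

sepplane, correction = perceptron_step(tset, sepplane, i=1)
# Adding the misclassified row moved the plane toward it; its score is
# now positive, and the other rows stayed on the correct side:
print(tset @ sepplane)
```

Dividing by sqrt(i) is a decaying learning rate: early iterations take large steps, later ones only fine-tune.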