Does a Perceptron always converge to the same weights for a given dataset?


I have a question regarding the perceptron algorithm. As a mathematical concept it is closely related to simple linear regression; the two differ mainly in their activation functions. Say I have a linearly separable dataset and I want to find a hyperplane that separates it into two sets; a perceptron can accomplish this. My question: since in many cases there is more than one hyperplane that separates the dataset, does the perceptron always converge to the same hyperplane? If I train it on the same dataset several times with different initial conditions on the weights, will it always converge to the same weights, or can it end up representing a different hyperplane each time?
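For concreteness, here is a minimal sketch of the setup I mean (the toy dataset, the label convention y in {-1, +1}, and the `train_perceptron` helper are my own illustrative assumptions, not from any particular library):

```python
import numpy as np

def train_perceptron(X, y, w, b, max_epochs=100):
    """Classic perceptron rule: update weights only on misclassified samples."""
    for _ in range(max_epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the boundary)
                w = w + yi * xi
                b = b + yi
                updated = True
        if not updated:  # a full pass with no mistakes -> converged
            return w, b
    return w, b

# Toy linearly separable data: class +1 above the line x1 = x0, class -1 below.
X = np.array([[2.0, 3.0], [1.0, 2.5], [3.0, 1.0], [2.5, 0.5]])
y = np.array([1, 1, -1, -1])

w, b = train_perceptron(X, y, w=np.zeros(2), b=0.0)
print("learned hyperplane:", w, b)
```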


1 Answer

Answered by lejlot:

The short answer is no. The convergence point depends on the initial weights and on the order in which you present the samples. The proof is very simple: the perceptron rule updates the weights only when a sample is misclassified, so if you initialise the weights at a solution (any hyperplane that already separates the data), the algorithm never moves them. Hence for every separating hyperplane there exist initial conditions under which the perceptron algorithm ends up exactly at it.
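A quick illustration of that argument, reusing the `train_perceptron` helper and toy data sketched in the question above (the starting weights w0 = (-1, 1), b0 = 0 are an assumed hyperplane that already separates that data):

```python
# Start training from weights that already separate the toy data.
w0, b0 = np.array([-1.0, 1.0]), 0.0
assert all(yi * (np.dot(w0, xi) + b0) > 0 for xi, yi in zip(X, y))

# The update rule never fires, so training ends exactly where it began.
w, b = train_perceptron(X, y, w=w0.copy(), b=b0)
print(np.array_equal(w, w0) and b == b0)  # True: no update was ever made
```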

You can make a similar argument for the order in which the points are presented.
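A sketch of this ordering argument, again with the helper and data from above: starting from the same zero initialisation, two different presentation orders end at two different (equally valid) separating hyperplanes.

```python
# Same data, same zero initialisation, two presentation orders.
for order in ([0, 1, 2, 3], [3, 2, 1, 0]):
    w, b = train_perceptron(X[order], y[order], w=np.zeros(2), b=0.0)
    print(order, "->", w, b)  # the two runs end at different weight vectors
```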