I am working on a perceptron problem. I made some fake, linearly separable data, but the perceptron algorithm does not converge to a correct decision boundary.
Here is the fake data, which is linearly separable:
import numpy as np
import pandas as pd

np.random.seed(42)

# Class 1 points lie in [0, 5] x [0, 5]; class 0 points lie in [8, 12] x [8, 12]
linear_df = pd.DataFrame({
    'X1': np.round(np.concatenate([np.random.uniform(low=0, high=5, size=4),
                                   np.random.uniform(low=8, high=12, size=4)]), 1),
    'X2': np.round(np.concatenate([np.random.uniform(low=0, high=5, size=4),
                                   np.random.uniform(low=8, high=12, size=4)]), 1),
    'Y': [1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]
})
Then I run the perceptron on it:
from sklearn.linear_model import Perceptron

clf = Perceptron(verbose=1, max_iter=1000)
X = linear_df[['X1', 'X2']]
y = linear_df['Y']
clf.fit(X, y)

linear_coef = clf.coef_
linear_bias = clf.intercept_[0]
print(clf.coef_)
print(clf.intercept_)
print(clf.score(X, y))
Convergence after 8 epochs took 0.00 seconds
[[ 2.3 -2.6]]
[17.]
0.5
But it says it converges after 8 epochs, yet it does not produce the correct output: the training accuracy is only 0.5.
Here is the plot (scatter of X1 vs. X2 colored by class): [image omitted]
Any ideas?
You would need to do a few things. The main one for now: to get better results, increase the sample size, which gives the model a better dataset to train on. See the sketch below.
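Something like the following should do it. This is a minimal sketch: n = 40 points per class is an arbitrary choice of mine, and the class ranges are the same ones from your question.

import numpy as np
import pandas as pd
from sklearn.linear_model import Perceptron

np.random.seed(42)
n = 40  # 40 samples per class instead of 4

# Same ranges as before: class 1 in [0, 5] x [0, 5], class 0 in [8, 12] x [8, 12]
bigger_df = pd.DataFrame({
    'X1': np.concatenate([np.random.uniform(0, 5, size=n),
                          np.random.uniform(8, 12, size=n)]),
    'X2': np.concatenate([np.random.uniform(0, 5, size=n),
                          np.random.uniform(8, 12, size=n)]),
    'Y': [1.0] * n + [0.0] * n
})

clf = Perceptron(max_iter=1000)
clf.fit(bigger_df[['X1', 'X2']], bigger_df['Y'])
print(clf.score(bigger_df[['X1', 'X2']], bigger_df['Y']))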
This already gives much better results, and the classes are easily separated. [original output and plot omitted]
I would also advise you to always scale your data; look into StandardScaler from sklearn.preprocessing. Furthermore, play around with the number of samples, and see how the model gradually gets worse as you shrink it. A sketch of the scaling step is shown below.
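As a minimal sketch of the scaling step, reusing linear_df from your question (make_pipeline is just one way to do it, so the scaler and the perceptron are fit together):

from sklearn.linear_model import Perceptron
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale the features to zero mean and unit variance before the perceptron sees them
pipe = make_pipeline(StandardScaler(), Perceptron(max_iter=1000))
pipe.fit(linear_df[['X1', 'X2']], linear_df['Y'])
print(pipe.score(linear_df[['X1', 'X2']], linear_df['Y']))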