I can't find the problem. I tried normalizing the dataset, but that didn't work either. When I work with other datasets, such as
[[0,0,1],[1,1,1],[1,0,1],[0,1,1]] as the input and [[0,1,1,0]] as the output,
it works.
Also, I don't want to use any libraries other than NumPy and pandas.
import numpy as np
import pandas as pd

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    return x * (1 - x)

training_inputs = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [2, 0, 0]
])
training_outputs = np.array([[1, 2, 0, 1, 0, 2, 3]]).T

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1

print('Random starting synaptic weights: ')
print(synaptic_weights)

for iteration in range(2000):
    input_layer = training_inputs
    outputs = sigmoid(np.dot(input_layer, synaptic_weights))
    error = training_outputs - outputs
    adjustments = error * sigmoid_derivative(outputs)
    synaptic_weights += np.dot(input_layer.T, adjustments)

print('Synaptic Weights After Training: ')
print(synaptic_weights)
print('Outputs after training: ')
print(outputs)
The predictions are
[0.99869975]
[0.98680038]
[0.00212543]
[0.99998259]
[0.13736242]
[0.99189007]
[0.9999983 ]
but the real outputs are
[[1,2,0,1,0,2,3]]
Since 1 is smaller than 2, getting 0.99 for a target of 1 but only 0.98 for a target of 2 should not occur.
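One thing worth noting: a sigmoid output is bounded in (0, 1), so the network as written can never produce targets like 2 or 3, regardless of training. A minimal NumPy-only sketch of one possible workaround (my assumption, not necessarily the intended fix) is to scale the targets into [0, 1] before training and scale the predictions back afterwards:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # derivative expressed in terms of the sigmoid's output value
    return x * (1 - x)

training_inputs = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [2, 0, 0],
])
raw_outputs = np.array([[1, 2, 0, 1, 0, 2, 3]]).T

# Scale targets into [0, 1] so the sigmoid can actually approach them.
scale = raw_outputs.max()
training_outputs = raw_outputs / scale

np.random.seed(1)
synaptic_weights = 2 * np.random.random((3, 1)) - 1

for _ in range(2000):
    outputs = sigmoid(training_inputs @ synaptic_weights)
    error = training_outputs - outputs
    synaptic_weights += training_inputs.T @ (error * sigmoid_derivative(outputs))

# Map predictions back to the original 0-3 label range.
predictions = outputs * scale
print(predictions)
```

Even with this scaling, a single layer with one output unit is a linear model squashed by a sigmoid, so it may not be able to fit these particular labels exactly; treating the problem as classification (one output per class) is another direction to consider.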