I have used the pima-indians-diabetes.csv dataset. I built a 12-8-1 neural network with Keras and was able to visualize its training history perfectly. Next, I tried to implement the same model with MLPClassifier from scikit-learn. Is it possible to plot training history curves in that case as well? I just need to visualize the training history, i.e. training accuracy, validation accuracy, training loss and validation loss, the way I did with Keras.

My code and curves using Keras:
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import StratifiedKFold
import numpy
numpy.random.seed(42)
# load pima indians dataset
dataset = numpy.loadtxt("/content/gdrive/My Drive/pima-indians-diabetes.csv", delimiter=",")
# split into input (X) and output (Y) variables
X = dataset[:,0:8]
Y = dataset[:,8]
model = Sequential()
model.add(Dense(12, input_dim=8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X, Y, validation_split=0.33, epochs=150, batch_size=10)
import matplotlib.pyplot as plt
print(history.history.keys())
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
My code using scikit-learn's MLPClassifier:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
# Using MLPClassifier from sklearn
from sklearn.neural_network import MLPClassifier
clf1 = MLPClassifier(batch_size=10, max_iter=150, hidden_layer_sizes=(12, 8), verbose=True,
early_stopping=True, random_state=42)
clf1.fit(X_train, y_train)
from sklearn.metrics import classification_report
# Predicting y for X_test
y_pred = clf1.predict(X_test)
print(classification_report(y_test, y_pred))
I was able to generate a classification report, but no graphical visualization. How can I generate curves like the attached ones with scikit-learn's MLPClassifier, as I did with Keras?
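For reference, this is as far as I have gotten on my own: a minimal sketch based on my understanding that the fitted estimator exposes a loss_curve_ attribute (training loss per iteration) and, because early_stopping=True, a validation_scores_ attribute (validation accuracy per iteration). As far as I can tell this only gives two of the four curves, so I am not sure it is the right approach:

import matplotlib.pyplot as plt

# loss_curve_ holds the training loss at each iteration
plt.plot(clf1.loss_curve_)
plt.title('MLPClassifier training loss')
plt.ylabel('loss')
plt.xlabel('iteration')
plt.show()

# validation_scores_ holds the validation accuracy at each iteration
# (only stored because early_stopping=True)
plt.plot(clf1.validation_scores_)
plt.title('MLPClassifier validation accuracy')
plt.ylabel('accuracy')
plt.xlabel('iteration')
plt.show()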