How to get the highest accuracy of a model after training


I have trained a model for 4 epochs using early stopping.

from tensorflow.keras.callbacks import EarlyStopping

early_stopping = EarlyStopping(monitor='val_loss', mode='min', patience=2, restore_best_weights=True)
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=4, callbacks=[early_stopping])

Epoch 1/4
812/812 [==============================] - 68s 13ms/sample - loss: 0.6072 - acc: 0.717 - val_loss: 0.554 - val_acc: 0.7826
Epoch 2/4
812/812 [==============================] - 88s 11ms/sample - loss: 0.5650 - acc: 0.807 - val_loss: 0.527 - val_acc: 0.8157
Epoch 3/4
812/812 [==============================] - 88s 11ms/sample - loss: 0.5456 - acc: 0.830 - val_loss: 0.507 - val_acc: 0.8244
Epoch 4/4
812/812 [==============================] - 51s 9ms/sample - loss: 0.658 - acc: 0.833 - val_loss: 0.449 - val_acc: 0.8110

The highest val_acc corresponds to the third epoch and is 0.8244. However, the accuracy_score function returns the last val_acc value, which is 0.8110.

from sklearn.metrics import accuracy_score

yhat = model.predict_classes(testX)
accuracy = accuracy_score(testY, yhat)

Is it possible to specify the epoch when calling predict_classes in order to get the highest accuracy (in this case, the one corresponding to the third epoch)?


There is 1 answer

Answered by ML_Engine

It looks like early stopping isn't being triggered because you're only training for 4 epochs and you've set early stopping to trigger when val_loss doesn't decrease over two epochs. If you look at your val_loss for each epoch, you can see it's still decreasing even on the fourth epoch.

So simply put, your model is just running the full four epochs without using early stopping, which is why it's using the weights learned in epoch 4 rather than the best in terms of val_acc.
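You can confirm which epoch was best from the History object returned by fit. A minimal sketch, assuming the metric key is 'val_acc' as it appears in your training logs (newer Keras versions log it as 'val_accuracy'):

import numpy as np

# history.history['val_acc'] holds one value per epoch, so the index of the
# maximum tells you which epoch produced the best validation accuracy.
best_epoch = int(np.argmax(history.history['val_acc'])) + 1
print('best epoch:', best_epoch, 'val_acc:', max(history.history['val_acc']))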

To fix this, set monitor='val_acc' (and mode='max', since higher accuracy is better) and run for a few more epochs, as in the sketch below. val_acc only starts to decrease after epoch 3, so early stopping won't trigger until epoch 5 at the earliest.
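A minimal sketch of that change, keeping the rest of your setup (model, trainX, trainY, testX, testY) as given; the epoch count of 20 is just an illustrative value, and the monitor string must match whatever key appears in your training output ('val_acc' here, 'val_accuracy' on newer versions):

from tensorflow.keras.callbacks import EarlyStopping

# Monitor validation accuracy, stop once it hasn't improved for 2 epochs,
# and restore the weights from the best epoch seen so far.
early_stopping = EarlyStopping(monitor='val_acc', mode='max', patience=2, restore_best_weights=True)
history = model.fit(trainX, trainY, validation_data=(testX, testY), epochs=20, callbacks=[early_stopping])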

Alternatively, you could set patience=1 so that early stopping waits only a single epoch without improvement before stopping.