Do we always check validation accuracy and loss to determine overfitting?


There are tons of articles available describing overfitting and how to resolve it. A general definition is:

Overfitting can be identified by checking validation metrics such as accuracy and loss. The validation metrics usually increase until a point where they stagnate or start declining when the model is affected by overfitting. During an upward trend, the model seeks a good fit, which, when achieved, causes the trend to start declining or stagnate.

Question: Should we consider only validation accuracy and loss to determine overfitting?
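
(For reference, the validation loss and accuracy mentioned in that definition are usually recorded per epoch during training and inspected as curves. Below is a minimal sketch of that, assuming a compiled Keras model with metrics=["accuracy"] and a held-out validation split; all variable names are placeholders, not my exact code.)

    # Minimal sketch, not my exact training code: assumes a compiled Keras
    # `model` (compiled with metrics=["accuracy"]) and placeholder arrays
    # X_train, y_train, X_val, y_val.
    import matplotlib.pyplot as plt

    history = model.fit(X_train, y_train,
                        validation_data=(X_val, y_val),
                        epochs=50, batch_size=32)

    # The "validation metrics" from the definition above are the val_* curves.
    for metric in ("loss", "accuracy"):
        plt.figure()
        plt.plot(history.history[metric], label="train " + metric)
        plt.plot(history.history["val_" + metric], label="val " + metric)
        plt.xlabel("epoch")
        plt.ylabel(metric)
        plt.legend()
    plt.show()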

In my case, I am working with the IEMOCAP dataset, and my final metrics are as below.

                precision    recall  f1-score   support

         ang       0.48      0.45      0.46       170
         hap       0.55      0.24      0.34       442
         neu       0.42      0.56      0.48       384
         sad       0.46      0.69      0.55       245

    accuracy                           0.46      1241
   macro avg       0.48      0.48      0.46      1241
weighted avg       0.48      0.46      0.44      1241

and the confusion matrix is like this:

This confusion matrix seems good when I compare it with other experimental results, but my accuracy and loss graphs clearly show overfitting.
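
(For reference, a report and matrix like the above can be produced with scikit-learn; a minimal sketch, where y_val and y_pred are placeholders for the true and predicted emotion labels, not my exact variable names.)

    # Sketch only: y_val and y_pred are placeholders for integer-encoded
    # true and predicted labels in the class order ang, hap, neu, sad.
    from sklearn.metrics import classification_report, confusion_matrix

    labels = ["ang", "hap", "neu", "sad"]
    print(classification_report(y_val, y_pred, target_names=labels))
    print(confusion_matrix(y_val, y_pred))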

[Figure: model loss]

[Figure: model accuracy]

So how do we determine overfitting? What other parameters should we consider?


1 Answer

Answered by Gerry P:

Classically, to identify overfitting you look at the trend of the training loss and the trend of the validation loss. If, over the training epochs, the training loss is decreasing and the validation loss is also decreasing, you are not overfitting. If the training loss is decreasing and the validation loss oscillates around a plateau, that is not overfitting either; the model has just done the best it can do on the validation set. However, if the training loss is decreasing while the trend of the validation loss is increasing, then you are overfitting.

On your plots I see a very slight degree of overfitting, but not much. From what I see, your model is training well but is not successful at characterizing the validation data. One cause of this can be a difference between the probability distribution of the samples in the training set and that of the samples in the validation set. How were the validation samples selected? How many samples do you have in your dataset? More samples can help with the problem if you have a way to get them.
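
(As a rough illustration of this trend check, not code from the answer itself, one can fit a straight line to the last few epochs of each loss curve and compare the slopes; a minimal sketch, assuming history is the object returned by Keras model.fit with a validation split.)

    # Rough sketch of the trend check: slope of a least-squares line
    # through the last few epochs of each curve. `history` is assumed to
    # be the object returned by model.fit(..., validation_data=...).
    import numpy as np

    def trend(values, last_n=10):
        y = np.asarray(values[-last_n:], dtype=float)
        x = np.arange(len(y))
        return np.polyfit(x, y, 1)[0]   # slope of the fitted line

    train_loss = history.history["loss"]
    val_loss = history.history["val_loss"]

    if trend(train_loss) < 0 and trend(val_loss) > 0:
        print("Training loss falling while validation loss rises: overfitting.")
    elif trend(train_loss) < 0 and abs(trend(val_loss)) < 1e-3:
        print("Validation loss has plateaued: not overfitting, just converged.")
    else:
        print("No clear overfitting signal from the loss trends.")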