About model comparison in terms of training/validation loss


I'm comparing two models and would like to understand some puzzling results.

One model achieves a lower training loss than the other, yet ends up with a higher validation loss.

Since over-fitting and under-fitting are diagnosed by comparing each model's own training and validation loss, and that gap is small for both models here, I don't think this is an over-fitting issue.
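
For what it's worth, this is the per-model check I mean, as a minimal Python sketch with hypothetical loss values (not the numbers from my runs): over-fitting would show up as a large gap between a single model's own training and validation loss.

```python
def generalization_gap(train_loss, val_loss):
    # Per-model over-fitting signal: validation loss minus training loss.
    return val_loss - train_loss

# Hypothetical values purely to illustrate the check:
print(generalization_gap(0.30, 1.20))  # 0.90 -> large gap, clear over-fitting
print(generalization_gap(1.50, 1.55))  # 0.05 -> small gap, little over-fitting
```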

To be specific, I'm training on a point cloud classification task and got:

model 1: training loss 1.51, test loss 1.56
model 2: training loss 1.37, test loss 1.58

All other conditions are the same.
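
For context, something along the following lines is what I mean by comparing the two losses under the same conditions. This is only a PyTorch-style sketch; `model_1`, `model_2`, `train_loader` and `test_loader` are placeholder names, and both losses are averaged in eval mode over the same splits so the numbers are directly comparable.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_loss(model, loader, device="cpu"):
    # Average cross-entropy over a loader, with dropout/batch-norm in eval mode.
    model.eval()
    total, count = 0.0, 0
    for points, labels in loader:
        points, labels = points.to(device), labels.to(device)
        logits = model(points)
        total += F.cross_entropy(logits, labels, reduction="sum").item()
        count += labels.numel()
    return total / count

for name, model in [("model 1", model_1), ("model 2", model_2)]:
    tr = mean_loss(model, train_loader)
    te = mean_loss(model, test_loader)
    print(f"{name}: train={tr:.2f}, test={te:.2f}, gap={te - tr:.2f}")
```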

So my question is: how can this happen, i.e. how can the model with the lower training loss end up with the higher test loss?

I would be grateful if anyone could help with this.
