ConvNet: Validation loss not strongly decreasing but accuracy is improving


Using TensorFlow, I've built a simple CNN for classification. It has the following definition:

Input tensor: 32x32x1 grayscale image
Conv layer: 3x3 kernel, 32 filters
ReLU activation
2x2 max pooling
FC1: 128 units
FC2: 43 units  # 43 classes

Full code can be found in this notebook on GitHub.
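
For reference, here is a minimal sketch of that architecture in TF 1.x style; the variable names and initializers are my assumptions, not the notebook's actual code:

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 32, 32, 1])  # grayscale input
    y = tf.placeholder(tf.float32, [None, 43])         # one-hot labels, 43 classes

    # Conv layer: 3x3 kernel, 32 filters, ReLU
    W1 = tf.Variable(tf.truncated_normal([3, 3, 1, 32], stddev=0.1))
    b1 = tf.Variable(tf.zeros([32]))
    conv = tf.nn.relu(tf.nn.conv2d(x, W1, strides=[1, 1, 1, 1], padding='SAME') + b1)

    # 2x2 max pooling: 32x32x32 -> 16x16x32
    pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
    flat = tf.reshape(pool, [-1, 16 * 16 * 32])

    # FC1 (128 units) and FC2 (43 output logits)
    W2 = tf.Variable(tf.truncated_normal([16 * 16 * 32, 128], stddev=0.1))
    b2 = tf.Variable(tf.zeros([128]))
    fc1 = tf.nn.relu(tf.matmul(flat, W2) + b2)

    W3 = tf.Variable(tf.truncated_normal([128, 43], stddev=0.1))
    b3 = tf.Variable(tf.zeros([43]))
    logits = tf.matmul(fc1, W3) + b3  # raw logits, no softmax here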

The validation loss and accuracy at epochs 100, 1000, and 2750 are:

epoch 100 validation loss 3.67, validation accuracy 12.05%
epoch 1000 validation loss 3.234, validation accuracy 57.63%
epoch 2750 validation loss 3.111, validation accuracy 69.25%

Unless I've misunderstood something or have a bug somewhere, the network is learning. However, the validation loss has only decreased very slightly.

What does that mean? How can I use this information to improve the network?


1 Answer

Olivier Moindrot (BEST ANSWER)

This is a classic mistake in TensorFlow: you shouldn't apply a softmax to your output and then pass the result to tf.nn.softmax_cross_entropy_with_logits.

The operation tf.nn.softmax_cross_entropy_with_logits expects unscaled logits (i.e. without a softmax applied). From the documentation:

WARNING: This op expects unscaled logits, since it performs a softmax on logits internally for efficiency. Do not call this op with the output of softmax, as it will produce incorrect results.
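
A minimal before/after sketch of the fix, assuming the logits and one-hot labels y from the architecture above:

    import tensorflow as tf

    # WRONG (the suspected bug): softmax is applied twice --
    # once explicitly, and once again inside the loss op.
    probs = tf.nn.softmax(logits)
    loss_wrong = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=probs))

    # RIGHT: pass the raw, unscaled logits straight to the loss op.
    loss_right = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))

    # Apply softmax separately only where you need probabilities,
    # e.g. for predictions at inference time.
    predictions = tf.nn.softmax(logits)

This would also explain the symptom in the question: softmax is monotonic, so applying it twice doesn't change which class scores highest, and accuracy can keep improving. But the double softmax squashes the inputs to the loss toward a uniform distribution, so the cross-entropy stays near ln(43) ≈ 3.76, which matches the reported losses of 3.1 to 3.7.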