I'm following the MNIST Softmax tutorial: https://www.tensorflow.org/tutorials/mnist/beginners/
According to the document, the model should be
y = tf.nn.softmax(tf.matmul(x, W) + b)
but in the sample source code, as you can see:
# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b
softmax is not used. I think it needs to be changed to
y = tf.nn.softmax(tf.matmul(x, W) + b)
I assume that, since the evaluation uses argmax, the output doesn't need to be normalized to the 0~1.0 range. But this could confuse developers.
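A minimal sketch of that assumption (TF 1.x style; the logit values are made up): softmax is monotonic, so argmax picks the same class with or without it.

import tensorflow as tf

# Hypothetical logits for one example; softmax preserves the ordering,
# so argmax returns the same class either way.
logits = tf.constant([[2.0, 1.0, 0.1]])
probs = tf.nn.softmax(logits)

with tf.Session() as sess:
    print(sess.run(tf.argmax(logits, 1)))  # [0]
    print(sess.run(tf.argmax(probs, 1)))   # [0]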
Any ideas on this?
Softmax is used, at row 57, inside the loss:
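A sketch of what that line computes (the keyword arguments assume the TF 1.x API, and y_ is the tutorial's one-hot labels placeholder; the exact line in the repo may differ):

# Cross-entropy computed directly on the raw logits y; softmax is
# applied internally, which is more numerically stable.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))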
See softmax_cross_entropy_with_logits for more details.