In the TensorFlow MNIST softmax tutorial, the softmax function is not used


I'm following the MNIST softmax tutorial: https://www.tensorflow.org/tutorials/mnist/beginners/

According to the document, the model should be

y = tf.nn.softmax(tf.matmul(x, W) + b)

but in the sample source code, as you can see:

# Create the model
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.matmul(x, W) + b

softmax is not used. I think it needs to be changed to

y = tf.nn.softmax(tf.matmul(x, W) + b)

I assume that since the testing function uses argmax, the output doesn't need to be normalized to the 0–1.0 range. But this can confuse developers.
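
Here is a quick check of that assumption (a minimal sketch, assuming TF 1.x and made-up logits): because softmax is monotonic within each row, argmax gives the same predicted class whether it runs on the raw logits or on the normalized probabilities.

import numpy as np
import tensorflow as tf

# Made-up logits for a batch of 2 examples over 10 classes.
logits = tf.constant(np.random.randn(2, 10), dtype=tf.float32)
probs = tf.nn.softmax(logits)  # softmax is monotonic per row

raw_pred = tf.argmax(logits, 1)   # argmax on raw logits
norm_pred = tf.argmax(probs, 1)   # argmax on normalized probabilities

with tf.Session() as sess:
    r, n = sess.run([raw_pred, norm_pred])
    print(r, n)  # identical predicted classes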

Any ideas on this?


1 Answer

Answer by Sergii Gryshkevych:

Softmax is used, on line 57 of the sample source:

# So here we use tf.nn.softmax_cross_entropy_with_logits on the raw
# outputs of 'y', and then average across the batch.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))

See the softmax_cross_entropy_with_logits documentation for more details.
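
As a rough sketch of why the fused op is preferred (TF 1.x, made-up data; the "naive" version below is mine, not from the tutorial): it computes the same value as applying softmax explicitly and then cross-entropy, but does so in a numerically stable way, which is why the tutorial keeps y as raw logits.

import numpy as np
import tensorflow as tf

# Made-up logits and one-hot labels: 2 examples, 10 classes.
logits = tf.constant(np.random.randn(2, 10), dtype=tf.float32)
labels = tf.constant(np.eye(10)[[3, 7]], dtype=tf.float32)

# Fused op: applies softmax internally, numerically stable.
fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)

# Naive equivalent: explicit softmax, then cross-entropy.
# Can overflow/underflow for large logits, hence the fused op.
naive = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), 1)

with tf.Session() as sess:
    f, n = sess.run([fused, naive])
    print(np.allclose(f, n, atol=1e-5))  # True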