tensorflow batch normalization doesn't work as expected when the is_training flag is False


I have a model in which I perform batch normalization after every convolutional layer except the last one, using tensorflow.contrib.layers.batch_norm. When I set the is_training flag to True, the reported loss value seems correct: for my particular example, it starts around 60 and decreases to almost 0. When I set the is_training flag to False, I get a loss value on the order of 1e10, which seems absurd.

I have attached the snippet I use in my code.

loss = loss_func_l2(logits, y)
# Make sure the batch-norm moving-average updates run with each training step
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    optimizer = tf.train.AdamOptimizer(learning_rate=lr)
    Trainables = optimizer.minimize(loss)

# Training
sess = tf.Session()
training(train_output, train_input, sess)   # is_training is True here
# Validation
validate(test_output, train_input, sess)    # is_training is False here
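
For context, the batch-norm layers themselves are set up roughly as in the sketch below; the names (conv_bn, is_training_ph, etc.) are only illustrative, not my exact code:

import tensorflow as tf

# Boolean placeholder so the same graph can switch between training and inference mode
is_training_ph = tf.placeholder(tf.bool, name='is_training')

def conv_bn(x, num_outputs, is_training):
    # Convolution (no built-in activation), followed by batch normalization
    conv = tf.contrib.layers.conv2d(x, num_outputs, kernel_size=3, activation_fn=None)
    bn = tf.contrib.layers.batch_norm(conv,
                                      is_training=is_training,
                                      updates_collections=tf.GraphKeys.UPDATE_OPS)
    return tf.nn.relu(bn)

# During training, batch statistics are used and the moving averages are updated via UPDATE_OPS:
#   sess.run(train_step, feed_dict={x: batch_x, y: batch_y, is_training_ph: True})
# During validation, the stored moving averages are used instead:
#   sess.run(loss, feed_dict={x: val_x, y: val_y, is_training_ph: False})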

What could be the reason?

