In tf.slim, I have used batch_norm.
My question is: do I need to explicitly add the update-op dependency to the loss?
Since slim knows I have used batch_norm, does it add the dependency automatically? I am confused.
Yes, you do.
Could you follow the instructions here:
Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in `tf.GraphKeys.UPDATE_OPS`, so they need to be added as a dependency to the `train_op`. For example: