OOM when using a placeholder for is_training with slim batch_norm


I pass a placeholder to the is_training parameter of slim.batch_norm, like this:

    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    is_training_ph = tf.placeholder(tf.bool)
    output = slim.batch_norm(
        input,
        activation_fn=activation_fn,
        is_training=is_training_ph,
        updates_collections=None,
        scale=scale,
        scope=scope)

I feed it like this:

    sess.run(train_op, feed_dict={is_training_ph: False})

When I feed is_training_ph with True, the program runs fine, but when I feed it with False, it throws an OOM error.

However, when I do not use a placeholder, like this:

    output = slim.batch_norm(
        input,
        activation_fn=activation_fn,
        is_training=True,
        updates_collections=None,
        scale=scale,
        scope=scope)

there is no problem at all.
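
For reference, here is a minimal self-contained sketch of my setup (the conv layer, loss, and input shape are simplified stand-ins for the real model in the gist, not the actual code):

    import numpy as np
    import tensorflow as tf
    import tensorflow.contrib.slim as slim

    # Placeholder that selects batch-norm mode at run time.
    is_training_ph = tf.placeholder(tf.bool)
    inputs = tf.placeholder(tf.float32, [None, 32, 32, 3])

    # A small conv layer followed by the batch_norm call from above.
    net = slim.conv2d(inputs, 16, [3, 3], activation_fn=None, scope='conv1')
    net = slim.batch_norm(
        net,
        activation_fn=tf.nn.relu,
        is_training=is_training_ph,   # a tensor, not a Python bool
        updates_collections=None,     # moving-average updates run in place
        scale=True,
        scope='bn1')

    loss = tf.reduce_mean(tf.square(net))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        batch = np.random.rand(8, 32, 32, 3).astype(np.float32)
        # Feeding True is fine; feeding False is where I see the OOM.
        sess.run(train_op, feed_dict={inputs: batch, is_training_ph: False})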

Here is my full test code and log trace: https://gist.github.com/xxxzhi/8fc8f840a8ec07fdbae7c2fc2c77b3da

Does anyone know the reason? Is it a bug in slim.batch_norm?

The GPU has 12 GB of memory. Environment: CUDA 8, tested with TensorFlow 1.2 and TensorFlow 1.3.

Thanks in advance.
