I am running TensorFlow 1.3.0 on Ubuntu 16.04. I am experimenting with some code whose main purpose is to visualise a graph on TensorBoard. Everything works perfectly fine the very first time I run the code. However, when I run it a second time, I get this error:

InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder' with dtype float and shape [?,784]
 [[Node: Placeholder = Placeholder[dtype=DT_FLOAT, shape=[?,784], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Here is the traceback:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-26-149c9b9d8878> in <module>()
     11             sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
     12             avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
---> 13             summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
     14             summary_writer.add_summary(summary_str, iteration*total_batch + i)
     15         if iteration % display_step == 0:

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
    893     try:
    894       result = self._run(None, fetches, feed_dict, options_ptr,
--> 895                          run_metadata_ptr)
    896       if run_metadata:
    897         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1122     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1123       results = self._do_run(handle, final_targets, final_fetches,
-> 1124                              feed_dict_tensor, options, run_metadata)
   1125     else:
   1126       results = []

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1319     if handle is None:
   1320       return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1321                            options, run_metadata)
   1322     else:
   1323       return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/home/niraj/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
   1338         except KeyError:
   1339           pass
-> 1340       raise type(e)(node_def, op, message)
   1341 
   1342   def _extend_graph(self):
Here is the code:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/home/niraj/Documents/artificial intelligence/projects/tensorboard", one_hot=True)

learning_rate = 0.01
training_iteration = 200
batch_size = 100
display_step = 2

# TF graph input
x = tf.placeholder('float32', [None, 784]) # mnist data image of shape 28*28=784
y = tf.placeholder('float32', [None, 10]) # 0-9 digits recognition => 10 classes

W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

with tf.name_scope("Wx_b") as scope:
    model = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax

w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)

with tf.name_scope("cost_function") as scope:
    cost_function = -tf.reduce_sum(y*tf.log(model))
tf.summary.scalar("cost_function", cost_function)

with tf.name_scope("train") as scope:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_function)

init = tf.global_variables_initializer()
merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter('/home/niraj/Documents/artificial intelligence/projects/tensorboard', graph=sess.graph)
    for iteration in range(training_iteration):
        avg_cost = 0
        total_batch = int(mnist.train.num_examples/batch_size)
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(optimizer, feed_dict={x: batch_xs, y: batch_ys})
            avg_cost += sess.run(cost_function, feed_dict={x: batch_xs, y: batch_ys})/total_batch
            summary_str = sess.run(merged_summary_op, feed_dict={x: batch_xs, y: batch_ys})
            summary_writer.add_summary(summary_str, iteration*total_batch + i)
        if iteration % display_step == 0:
            print "Iteration:", '%04d' % (iteration + 1), "cost=", "{:.9f}".format(avg_cost)

    print "Tuning completed!"
    predictions = tf.equal(tf.argmax(model, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(predictions, "float"))
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})

To reiterate: this code works perfectly fine the first time I run it; on the second run it gives the error above. If I close the notebook and the Jupyter terminal, reopen them, and run it again, it once more runs without any errors the first time and fails on the second run.

1 answer

Answered by DNadler:

I'm having the same issue, and so far I've found that the error does not occur when I remove the summary operation. I'll update this answer if I find a way to get it to work with the summaries...

UPDATE:

I fixed this by following the suggestion here: Error with feed values for placeholders when running the merged summary op

I replaced tf.summary.merge_all with tf.summary.merge([summary_var1, summary_var2]), which merges only the summary ops you list explicitly instead of everything registered in the default graph.
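A minimal, self-contained sketch of that change: the variables here (W, b, cost_function) are stand-ins for the question's model, and the tf.compat.v1 shim is an addition that is only needed if you try this on TensorFlow 2.x rather than the 1.3 used in the question.

```python
import tensorflow as tf

# On TF 2.x, fall back to the graph-mode API that the question (TF 1.3) uses.
if hasattr(tf, "compat") and hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

tf.reset_default_graph()  # start from a clean graph

# Stand-ins for the question's model variables and cost.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
cost_function = tf.constant(0.5)

# Keep handles to each summary op instead of relying on
# tf.summary.merge_all(), which also sweeps up stale summary ops
# left in the default graph by a previous run of the notebook cell.
w_h = tf.summary.histogram("weights", W)
b_h = tf.summary.histogram("biases", b)
cost_h = tf.summary.scalar("cost_function", cost_function)

# Merge only the summaries created in this run of the cell.
merged_summary_op = tf.summary.merge([w_h, b_h, cost_h])
```

Because the merge lists its inputs explicitly, re-running the cell can no longer pull in summary ops that reference placeholders from a previous, abandoned graph.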

An easier way to fix this is to call tf.reset_default_graph() at the end of your loop, before you start training again.
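A toy version of the notebook cell can illustrate why the reset helps: everything here (build_summaries, the constant cost) is a hypothetical stand-in, and the tf.compat.v1 shim is only needed on TensorFlow 2.x.

```python
import tensorflow as tf

# On TF 2.x, fall back to the graph-mode API that the question (TF 1.3) uses.
if hasattr(tf, "compat") and hasattr(tf.compat, "v1"):
    tf = tf.compat.v1
    tf.disable_eager_execution()

def build_summaries():
    # Stand-in for the notebook cell that builds the model and its summaries.
    cost = tf.constant(0.5)
    tf.summary.scalar("cost_function", cost)
    return tf.summary.merge_all()

def num_summary_ops():
    return len(tf.get_default_graph().get_collection(tf.GraphKeys.SUMMARIES))

# First "cell run": one summary op in the default graph.
tf.reset_default_graph()
build_summaries()
first = num_summary_ops()

# Second run WITHOUT resetting: merge_all now sees two summary ops.
# In the question, the stale one still references the old placeholders,
# which is what triggers the InvalidArgumentError on the second run.
build_summaries()
stale = num_summary_ops()

# Second run WITH tf.reset_default_graph(): clean graph, no stale ops.
tf.reset_default_graph()
build_summaries()
clean = num_summary_ops()
```

Here `first` and `clean` are 1 while `stale` is 2, showing that without the reset each re-run of the cell accumulates another copy of every summary op.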