System information
- OS: Linux Ubuntu 16.04
- TensorFlow Serving installed from: pip (1.10.1)
- TensorFlow Serving version: 1.10.1
Describe the problem
I found a weird error message when serving my own model. I have tested the exported .pb file with tf.saved_model.loader.load and it loads fine, but when I send a request through the client, the following error is reported:
<_Rendezvous of RPC that terminated with:
    status = StatusCode.INVALID_ARGUMENT
    details = "Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph"
    debug_error_string = "{"created":"@1537040456.210975912","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1099,"grpc_message":"Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph","grpc_status":3}"
>
The weird part is that the tensor reported as not found has no name at all (just ":0"), which I guess means the client is asking the server to feed or fetch an unnamed tensor. But I just don't get where this operation could possibly come from.
Exact Steps to Reproduce
I built the client based on the mnist_client and inception_client example code. The exported .pb model has been tested successfully by reloading it through tf.saved_model.loader.load, so I think the problem is caused by the request.
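For reference, this is roughly how I checked the exported model locally (a rough sketch rather than my exact test script; the export path is a placeholder). Printing the serving signature also shows the tensor names that the Predict API will try to feed and fetch:

import tensorflow as tf

export_dir = '/path/to/exported_model/1'  # placeholder for the actual export path
with tf.Session(graph=tf.Graph()) as sess:
    # Reload the SavedModel the same way TensorFlow Serving would.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    # Inspect the default serving signature and its input/output tensor names.
    print(meta_graph.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY])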
This is the relevant part of the client code:
import grpc
import tensorflow as tf

from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

# _Result_Collection and data_iterator are defined elsewhere in the client.
channel = grpc.insecure_channel(FLAGS.server)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'chiron'
request.model_spec.signature_name = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
collector = _Result_Collection()
for batch_x, seq_len, i, f, N, reads_n in data_iterator(FLAGS.raw_dir):
    request.inputs['signals'].CopyFrom(
        tf.contrib.util.make_tensor_proto(batch_x, shape=[FLAGS.batch_size, CONF.SEGMENT_LEN]))
    request.inputs['seq_length'].CopyFrom(
        tf.contrib.util.make_tensor_proto(seq_len, shape=[FLAGS.batch_size]))
    result_future = stub.Predict.future(request, 5.0)  # 5 second timeout
    result_future.add_done_callback(_post_process(collector, i, f))
I found the reason: when creating a TensorProto for a SparseTensor, no name is assigned to it. See here as well: https://github.com/tensorflow/serving/issues/1100. So a solution would be to build the TensorProto for the SparseTensor's components separately.
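Below is a rough sketch of what that could look like on the export side (not my exact export code; signals, seq_length and decoded are placeholders for the real tensors in the graph, and decoded stands for the SparseTensor output). Instead of passing the SparseTensor itself to tf.saved_model.utils.build_tensor_info, its indices, values and dense_shape are exported as three separate outputs, so every entry in the signature has a proper tensor name:

import tensorflow as tf

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={
        'signals': tf.saved_model.utils.build_tensor_info(signals),
        'seq_length': tf.saved_model.utils.build_tensor_info(seq_length),
    },
    outputs={
        # build_tensor_info(decoded) on the SparseTensor would produce a
        # TensorInfo with an empty name (coo_sparse encoding), which is what
        # triggers the "Tensor :0 ... was not found in the Graph" error.
        # Exporting the dense components separately gives each output a name.
        'decoded_indices': tf.saved_model.utils.build_tensor_info(decoded.indices),
        'decoded_values': tf.saved_model.utils.build_tensor_info(decoded.values),
        'decoded_shape': tf.saved_model.utils.build_tensor_info(decoded.dense_shape),
    },
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

builder = tf.saved_model.builder.SavedModelBuilder(export_dir)  # export_dir is a placeholder
builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature,
    })
builder.save()

The client can then rebuild the SparseTensor from the three dense tensors returned in the response.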