Tensor not found with empty name when serving the model


System information

  • OS: Linux Ubuntu 16.04
  • TensorFlow Serving installed from: pip
  • TensorFlow Serving version: 1.10.1

Describe the problem

I found a weird error message when serving my own model. I have tested the .pb file with saved_model.load and it all loads fine, but when I send a request through the client, the following error is reported:

<_Rendezvous of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph" debug_error_string = "{"created":"@1537040456.210975912","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1099,"grpc_message":"Tensor :0, specified in either feed_devices or fetch_devices was not found in the Graph","grpc_status":3}" >

The weird part is that the tensor reported as not found does not have a name. I guess this happens because the client is asking to feed this empty-named tensor, but I just don't get where this operation could possibly come from.

Exact Steps to Reproduce

I built the server based on the mnist client and inception client example code. The exported .pb model has been tested successfully by reloading it through tf.saved_model.loader.load, so I think the problem is caused by the request.
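
One way to see where the empty name comes from is to reload the exported model and print the tensor names recorded in its serving signature (a minimal TF1-style sketch; the export directory is an assumed path):

import tensorflow as tf

# Hypothetical export directory; adjust to the actual model path.
export_dir = '/tmp/chiron/1'
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    sig = meta_graph.signature_def[
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]
    # Any input or output whose name is empty will make the server fail
    # with "Tensor :0 ... was not found in the Graph".
    for key, tensor_info in list(sig.inputs.items()) + list(sig.outputs.items()):
        print(key, '->', repr(tensor_info.name))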

This is the relevant part of the client code:

import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# FLAGS, CONF, data_iterator, _Result_Collection and _post_process are
# defined elsewhere in the client.
channel = grpc.insecure_channel(FLAGS.server)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = 'chiron'
request.model_spec.signature_name = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
collector = _Result_Collection()
for batch_x, seq_len, i, f, N, reads_n in data_iterator(FLAGS.raw_dir):
    # The input keys must match the input names in the SignatureDef.
    request.inputs['signals'].CopyFrom(
        tf.contrib.util.make_tensor_proto(batch_x, shape=[FLAGS.batch_size, CONF.SEGMENT_LEN]))
    request.inputs['seq_length'].CopyFrom(
        tf.contrib.util.make_tensor_proto(seq_len, shape=[FLAGS.batch_size]))
    result_future = stub.Predict.future(request, 5.0)  # 5 second timeout
    result_future.add_done_callback(_post_process(collector, i, f))
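
For reference, _post_process is a callback factory for the async result. A hypothetical sketch of its shape (the 'logits' output key and collector.add are assumptions, not the asker's actual code):

def _post_process(collector, i, f):
    """Hypothetical callback factory: unpacks the PredictResponse."""
    def _callback(result_future):
        exception = result_future.exception()
        if exception is not None:
            print(exception)  # e.g. the INVALID_ARGUMENT error above
            return
        response = result_future.result()
        # The output key must match an output name in the SignatureDef;
        # 'logits' here is an assumption.
        logits = tf.contrib.util.make_ndarray(response.outputs['logits'])
        collector.add(i, f, logits)
    return _callback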

There are 2 answers

Answered by 昊天滕

I found the reason: when the TensorInfo is built for a SparseTensor, no name is assigned to it. See here as well: https://github.com/tensorflow/serving/issues/1100. A solution is to build the TensorInfo for each component of the SparseTensor separately:

import tensorflow as tf

# Dummy logits with shape [max_time, batch_size, num_classes];
# ctc_beam_search_decoder requires float inputs.
signal = tf.constant([[[1.0, 0.0]]])
sequence_length = tf.constant([1])
# The decoder returns a list of SparseTensors.
output, log_prob = tf.nn.ctc_beam_search_decoder(signal, sequence_length)
indices = output[0].indices
values = output[0].values
dense_shape = output[0].dense_shape
# Build a TensorInfo for each dense component, so every output in the
# signature gets a concrete tensor name.
indices_tensor_info = tf.saved_model.utils.build_tensor_info(indices)
values_tensor_info = tf.saved_model.utils.build_tensor_info(values)
dense_shape_tensor_info = tf.saved_model.utils.build_tensor_info(dense_shape)
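
To complete the picture, here is a minimal sketch of how those TensorInfo entries could be wired into the serving signature (a TF1-style export; the export path and the input/output key names are assumptions):

# Hypothetical export: map each SparseTensor component to its own named
# output so the serving signature contains no empty tensor names.
signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={
        'signals': tf.saved_model.utils.build_tensor_info(signal),
        'seq_length': tf.saved_model.utils.build_tensor_info(sequence_length),
    },
    outputs={
        'indices': indices_tensor_info,
        'values': values_tensor_info,
        'dense_shape': dense_shape_tensor_info,
    },
    method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

builder = tf.saved_model.builder.SavedModelBuilder('/tmp/chiron/1')  # assumed path
with tf.Session() as sess:
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature})
builder.save()

The client can then reassemble the SparseTensor from the three named outputs.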
Answered by Preshen

I had this issue with an LSTM model + RaggedTensors; my output tensor had an empty name. I think it's caused by the model outputting ragged tensors.

The given SavedModel SignatureDef contains the following output(s):
    outputs['output_1'] tensor_info:
        dtype: DT_INVALID
        shape: ()
        name: 
  Method name is: tensorflow/serving/predict

I solved it by writing a custom output signature and converting the model's output to a regular tensor. This worked for both a Functional and a Subclassed model.

    @tf.function()
    def overwrite_predict_signature(prediction_input):
        inputs = prediction_input
        prediction = model.call(inputs)
        # Convert the RaggedTensor output to a dense tensor so the
        # signature output gets a concrete tensor name.
        return {"named_prediction_outputs": prediction.to_tensor()}

    my_signatures = overwrite_predict_signature.get_concrete_function(
        prediction_input={
            'feature_1': tf.RaggedTensorSpec(tf.TensorShape([None, None, 1]), tf.float32, 1, tf.int64),
            'feature_2': tf.RaggedTensorSpec(tf.TensorShape([None, None, 1]), tf.string, 1, tf.int64),
            'feature_3': tf.RaggedTensorSpec(tf.TensorShape([None, None, 1]), tf.float32, 1, tf.int64),
            'feature_4': tf.RaggedTensorSpec(tf.TensorShape([None, None, 1]), tf.string, 1, tf.int64),
            'feature_5': tf.RaggedTensorSpec(tf.TensorShape([None, None, 1]), tf.int32, 1, tf.int64)})

    tf.saved_model.save(model, export_dir=(args.sm_model_dir + "/1/"), signatures=my_signatures)

This registered a concrete op with my output:

The given SavedModel SignatureDef contains the following output(s):
    outputs['named_prediction_outputs'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, -1, 1)
        name: StatefulPartitionedCall:0
  Method name is: tensorflow/serving/predict
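
As a quick sanity check (a minimal sketch assuming the TF2 API and the same export directory), reloading the model should now show a dense, named output:

import tensorflow as tf

# Reload the exported model and confirm the serving signature now has a
# dense output with a concrete name.
loaded = tf.saved_model.load(args.sm_model_dir + "/1/")
infer = loaded.signatures["serving_default"]
print(infer.structured_outputs)
# Expected: {'named_prediction_outputs': TensorSpec(shape=(None, None, 1), dtype=tf.float32, ...)}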