I have exported a SavedModel and now I wish to load it back in and make a prediction. It was trained with the following features and labels:
F1 : FLOAT32
F2 : FLOAT32
F3 : FLOAT32
L1 : FLOAT32
So, say I want to feed in the values 20.9, 1.8, 0.9 and get a single FLOAT32 prediction. How do I accomplish this? I have managed to load the model successfully, but I am not sure how to access it to make the prediction call.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )
    # How can I predict from here?
    # I want to do something like prediction = model.predict([20.9, 1.8, 0.9])
This question is not a duplicate of the question posted here. This question focuses on a minimal example of performing inference on a SavedModel of any model class (not just limited to tf.estimator) and the syntax of specifying input and output node names.
Once the graph is loaded, it is available in the current context and you can feed input data through it to obtain predictions. Each use-case is rather different, but the addition to your code will look something like this:
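A minimal sketch, assuming the inputs kept their default placeholder names and that the output tensor is named 'dnn/head/predictions/probabilities:0' (both tensor names are assumptions; yours will likely differ, as explained below):

```python
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )
    # Feed one row of the three features by tensor name and fetch the
    # prediction tensor. Both names here depend on how the model was exported.
    prediction = sess.run(
        'dnn/head/predictions/probabilities:0',
        feed_dict={
            'Placeholder:0': [[20.9]],    # F1
            'Placeholder_1:0': [[1.8]],   # F2
            'Placeholder_2:0': [[0.9]]    # F3
        }
    )
    print(prediction)
```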
Here, you need to know the names of your prediction inputs. If you did not give them a name in your serving_fn, then they default to Placeholder_n, where n is the nth feature. The first string argument of sess.run is the name of the prediction target. This will vary based on your use case.
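If you are unsure of these names, one way to find them (a sketch, not the only approach) is to read them off the serving signature of the loaded MetaGraphDef, which tf.saved_model.loader.load returns:

```python
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    meta_graph = tf.saved_model.loader.load(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        "/job/export/Servo/1503723455"
    )
    # The serving SignatureDef maps logical input/output keys to the
    # actual tensor names you pass to sess.run and feed_dict.
    sig_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    signature = meta_graph.signature_def[sig_key]
    print({key: tensor.name for key, tensor in signature.inputs.items()})
    print({key: tensor.name for key, tensor in signature.outputs.items()})
```

Alternatively, running `saved_model_cli show --dir /job/export/Servo/1503723455 --all` from the command line prints the same signature information without writing any code.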