With the help of the Stack Overflow community, I have started to understand the complexity of TensorFlow's "high level" API, Estimator. I did so, perhaps foolishly, thinking that if TensorFlow and TensorFlow.js were ever to play nicely together, it would most likely be through the Estimator API...
So in this Colab, I have a simple custom Estimator, wired up just enough that methods like train_and_evaluate work; after "training", the Estimator is exported via export_savedmodel.
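For concreteness, the export step looks roughly like this (a minimal sketch in TF 1.x; the feature name "x" and its shape are illustrative placeholders, not necessarily what the Colab uses):

import tensorflow as tf

# Serving input receiver: declares the placeholders the exported model
# will accept at serving time. The name "x" and shape [None, 4] are
# assumptions for illustration.
def serving_input_receiver_fn():
    inputs = {'x': tf.placeholder(dtype=tf.float32, shape=[None, 4], name='x')}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

# "estimator" is the custom Estimator built earlier in the Colab.
estimator.export_savedmodel('exported', serving_input_receiver_fn)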
Now, suppose I want to go ahead and use this trained model in the browser via TensorFlow.js. Luckily, there is a guide on how to convert a saved_model for TensorFlow.js:
tensorflowjs_converter \
--input_format=tf_saved_model \
--output_node_names='MobilenetV1/Predictions/Reshape_1' \
--saved_model_tags=serve \
/mobilenet/saved_model \
/mobilenet/web_model
While the arguments do have descriptions...
--output_node_names: The names of the output nodes, separated by commas.
--saved_model_tags: Only applicable to SavedModel conversion. Tags of the MetaGraphDef to load, in comma-separated format. Defaults to serve.
--signature_name: Only applicable to TensorFlow Hub module conversion; the signature to load. Defaults to default. See https://www.tensorflow.org/hub/common_signatures/.
I am not sure what I am supposed to replace these with for the demo estimator found in the Colab.
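As far as I can tell, these values should be discoverable from the export itself: saved_model_cli, which ships with TensorFlow, prints a SavedModel's tag-sets and SignatureDefs, including the output tensor names. Something like this, pointed at the export directory:

saved_model_cli show --dir <exported_location> --all

If I am reading it right, the tag-set it lists is what --saved_model_tags expects (serve by default), and the output tensor names in the signature would be the candidates for --output_node_names. But I would like confirmation.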
Why? Well, for starters, the best_model exporter, which uses the serving_input_receiver_fn, produces different output when the exported model is loaded via
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model('<exported_location>')
than when predictions come from
estimator.predict(lambda: predict_input_fn(pred_features), yield_single_examples=False)
namely, the key of the predicted features is "outputs" in the former but "labels" in the latter.
(Forgive my mini-rant, but why doesn't Estimator have a built-in way to load a model it exported?)
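To make the discrepancy concrete, here is a sketch (predict_input_fn and pred_features are the names from the Colab; I am assuming pred_features is the feature dict the serving receiver expects):

from tensorflow.contrib import predictor

# Loading the export directly: the prediction dict is keyed "outputs".
predict_fn = predictor.from_saved_model('<exported_location>')
print(list(predict_fn(pred_features).keys()))  # ['outputs']

# Predicting through the Estimator itself: the dict is keyed "labels".
preds = estimator.predict(lambda: predict_input_fn(pred_features),
                          yield_single_examples=False)
print(list(next(preds).keys()))  # ['labels']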
So:

what are my "output_node_names"?
since this is a SavedModel, what tags do I need?
I would greatly appreciate any guidance.