How to convert a saved_model.pb to EvalSavedModel?


I was going through the tensorflow-model-analysis documentation for evaluating TensorFlow models. The getting started guide talks about a special SavedModel called the EvalSavedModel.

Quoting the getting started guide:

This EvalSavedModel contains additional information which allows TFMA to compute the same evaluation metrics defined in your model in a distributed manner over a large amount of data, and user-defined slices.

My question is how can I convert an already existing saved_model.pb to an EvalSavedModel?

There are 2 answers

Bulat On

An EvalSavedModel is exported as a regular SavedModel message, so no format conversion is involved.

The EvalSavedModel export uses SavedModelBuilder under the hood. It rebuilds the estimator graph, populates it with a few input placeholders, creates some additional metric collections, and then runs the ordinary SavedModelBuilder save procedure.
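To make the "under the hood" description concrete, here is a hedged sketch (not the actual TFMA source) of that sequence: a placeholder for serialized examples, metric ops recorded in extra graph collections, then a plain SavedModelBuilder save. The feature names, collection names, and toy model are illustrative assumptions.

```python
# Sketch of the EvalSavedModel export sequence, assuming TF1-style graph
# mode (as in the estimator-era TFMA). Feature names, the "my_eval_metrics"
# collection names, and the toy model below are illustrative, not TFMA's.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def export_eval_graph_sketch(export_dir):
    g = tf.Graph()
    with g.as_default():
        # Placeholder that will receive serialized tf.Example protos at
        # analysis time.
        serialized = tf.placeholder(tf.string, shape=[None], name="input_example")
        feature_spec = {
            "x": tf.FixedLenFeature([1], tf.float32),
            "label": tf.FixedLenFeature([1], tf.float32),
        }
        features = tf.parse_example(serialized, feature_spec)
        predictions = features["x"] * 2.0  # stand-in for the real model graph

        # Metric ops are recorded in extra collections so the analysis tool
        # can find them later; a plain SavedModel export skips this step.
        value_op, update_op = tf.metrics.mean_squared_error(
            features["label"], predictions)
        g.add_to_collection("my_eval_metrics/value", value_op)
        g.add_to_collection("my_eval_metrics/update", update_op)

        with tf.Session(graph=g) as sess:
            sess.run(tf.local_variables_initializer())
            # Ordinary SavedModelBuilder procedure, as the answer describes.
            builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
            builder.add_meta_graph_and_variables(sess, ["eval"])
            builder.save()
```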

Source - https://github.com/tensorflow/model-analysis/blob/master/tensorflow_model_analysis/eval_saved_model/export.py#L228

P.S. I suppose you want to run model-analysis on a model you exported with SavedModelBuilder. Since a plain SavedModel has neither the metric nodes nor the related collections that are created in an EvalSavedModel, doing so is pointless - model-analysis simply cannot find any metrics for your estimator. You need to re-export the model as an EvalSavedModel instead.

AudioBubble On

If I understand your question correctly, you have a saved_model.pb generated either by tf.saved_model.simple_save, by tf.saved_model.builder.SavedModelBuilder, or by estimator.export_savedmodel.

If that is the case, then you are exporting the training and inference graphs to saved_model.pb.

The point you quote from the getting started guide means that, in addition to exporting the training graph, we need to export an evaluation graph as well. That export is what's called the EvalSavedModel.

The evaluation graph contains the metrics for the model, so that you can evaluate the model's performance with TFMA's visualizations.

Before exporting the EvalSavedModel, we should prepare an eval_input_receiver_fn, similar to serving_input_receiver_fn. It tells TFMA how to parse raw evaluation examples into features and labels.

Other behaviors, such as computing the metrics in a distributed manner, or evaluating the model over slices of the data rather than the entire dataset, are specified later when you run the analysis (via slice specs), not inside eval_input_receiver_fn; the receiver function only needs to expose the features your slices will be keyed on.

Then we can export the EvalSavedModel with the code below:

tfma.export.export_eval_savedmodel(
    estimator=estimator,
    export_dir_base=export_dir,
    eval_input_receiver_fn=eval_input_receiver_fn)