What's the proper way of building a frozen graph in TensorFlow in order to use it in ML.NET?


I want to use frozen graph models (.pb) that I built in Python with TensorFlow in ML.NET (e.g. via ModelOperationsCatalog.LoadTensorflowModel). My attempts to load the .pb models in ML.NET fail with the errors below:

error#1:

Tensorflow.TensorflowException: Converting GraphDef to Graph has failed. The binary trying to import the GraphDef was built when GraphDef 
version was 440. The GraphDef was produced by a binary built when GraphDef version was 1645. 
The difference between these versions is larger than TensorFlow's forward compatibility guarantee. 
The following error might be due to the binary trying to import the GraphDef being too old: 
Op type not registered 'DisableCopyOnRead' in binary running on UYGUR-LAPTOP. 
Make sure the Op and Kernel are registered in the binary running in this process. 
Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` 
should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

error#2:
System.FormatException: Tensorflow exception triggered while loading model.
 ---> Tensorflow.InvalidArgumentError: Converting GraphDef to Graph has failed. The binary trying to import the GraphDef was built 
when GraphDef version was 440. The GraphDef was produced by a binary built when GraphDef version was 1645. 
The difference between these versions is larger than TensorFlow's forward compatibility guarantee. 
The following error might be due to the binary trying to import the GraphDef being too old: 
NodeDef mentions attr 'explicit_paddings' not in Op<name=MaxPool; 
signature=input:T -> output:T; attr=T:type,default=DT_FLOAT,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_UINT8, DT_INT16, DT_INT8, DT_UINT16, DT_QINT8]; 
attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW", "NCHW_VECT_C"]>; 
NodeDef: {{node inception_v3/max_pooling2d/MaxPool}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

error#3:
Converting GraphDef to Graph has failed. The binary trying to import the GraphDef was built when GraphDef version was 440. 
The GraphDef was produced by a binary built when GraphDef version was 1645. The difference between these versions is larger than TensorFlow's 
forward compatibility guarantee. The following error might be due to the binary trying to import the GraphDef being too old:
NodeDef mentions attr 'explicit_paddings' not in Op<name=MaxPool; signature=input:T -> output:T; 
attr=T:type,default=DT_FLOAT,allowed=[DT_HALF, DT_BFLOAT16, DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_UINT8, DT_INT16, DT_INT8, DT_UINT16, DT_QINT8]; 
attr=ksize:list(int),min=4; attr=strides:list(int),min=4; attr=padding:string,allowed=["SAME", "VALID"]; 
attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW", "NCHW_VECT_C"]>; NodeDef: {{node model/max_pooling2d/MaxPool}}.
 (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.)

Errors #1 and #2 occur when I try to load a model frozen from a stock TensorFlow/Keras model, for example keras.applications.InceptionV3. The last one occurs while loading an Xception model built from scratch.

In short: what's the proper way to freeze a graph in Python with TensorFlow so that it can be loaded and used in ML.NET? Thanks in advance.

I built the frozen TensorFlow models in Python without any errors, but loading them via the ModelOperationsCatalog.LoadTensorflowModel method always fails.

I expect to be able to load and use these models in my ML.NET applications, for example for image classification.

There is 1 answer

Miles

Check which TensorFlow version you're using in Python:

import tensorflow as tf
print(tf.__version__)

and check your version of ML.NET. Then look up which TensorFlow versions your ML.NET version supports, to see whether upgrading or downgrading TensorFlow would help. When you get lots of errors like these, it's often a version compatibility issue.
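You can make the mismatch concrete: the numbers in the error (440 vs. 1645) are GraphDef "producer" versions, i.e. the model was written by a much newer TensorFlow than the one bundled with your ML.NET. A small sketch to see what your own TF build writes (assumes TF 2.x is installed; the `frozen_graph.pb` path in the comment is just a placeholder):

```python
import tensorflow as tf


# Build a trivial function purely to get a GraphDef out of this TF build
# and read its "producer" version -- the number quoted in the error message.
@tf.function
def f(x):
    return x + 1


graph_def = f.get_concrete_function(tf.TensorSpec([], tf.float32)).graph.as_graph_def()
print("TF release:", tf.__version__)
print("GraphDef producer version:", graph_def.versions.producer)

# For an existing frozen model, the same field can be read from the file:
#   gd = tf.compat.v1.GraphDef()
#   with open("frozen_graph.pb", "rb") as fh:
#       gd.ParseFromString(fh.read())
#   print(gd.versions.producer)
```

If the producer version of your .pb is far ahead of what ML.NET's bundled TensorFlow was built against, no amount of re-freezing will help; you have to close the version gap.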

Or, if it's due to your method of freezing, first see whether this works:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Load your model
model = ...  # your TensorFlow model

# Convert Keras model to ConcreteFunction
full_model = tf.function(lambda x: model(x))
full_model = full_model.get_concrete_function(tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))

# Freeze the graph: convert all variables to constants
frozen_func = convert_variables_to_constants_v2(full_model)

# Serialize the frozen GraphDef to a .pb file
tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                  logdir="./frozen_models",
                  name="frozen_graph.pb",
                  as_text=False)
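Before handing the .pb to ML.NET, it's worth re-importing the frozen GraphDef in Python the way a v1-style consumer would, which catches unsupported ops early. A self-contained sketch using a tiny stand-in model (in practice, substitute the model you actually froze):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

# Tiny stand-in model; replace with your real model.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])

# Same freezing recipe as above.
full_model = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
frozen_func = convert_variables_to_constants_v2(full_model)
graph_def = frozen_func.graph.as_graph_def()

# Re-import the GraphDef into a fresh graph; this raises if any op or attr
# is unknown to the importing runtime, much like ML.NET's load step does.
with tf.Graph().as_default():
    tf.graph_util.import_graph_def(graph_def, name="")

# Note the input/output tensor names: ML.NET needs them to bind columns.
print("input:", frozen_func.inputs[0].name)
print("output:", frozen_func.outputs[0].name)
```

If this round-trip succeeds in your training environment but ML.NET still fails, the problem is almost certainly the version gap described above rather than the freezing procedure itself.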