Created a simple dummy sequential model in `tf.keras` as shown below:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential()
model.add(layers.Dense(10, input_shape=(100, 100)))
model.add(layers.Conv1D(3, 2))
model.add(layers.Flatten())
model.add(layers.Dense(10, activation='softmax', name='predict_10'))
```
Trained the model and saved it with `tf.keras.models.save_model`. To get the input and output node names I used `saved_model_cli`:

```
saved_model_cli show --dir "path/to/SavedModel" --all
```
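The same input/output information can also be read from the SavedModel signatures in Python. This is only a minimal sketch; the path is a placeholder and `serving_default` is the usual default signature key:

```python
import tensorflow as tf

# Load the SavedModel (placeholder path) and grab its default serving signature.
loaded = tf.saved_model.load("path/to/SavedModel")
infer = loaded.signatures["serving_default"]

# The structured specs show the expected input/output tensor names and shapes.
print(infer.structured_input_signature)
print(infer.structured_outputs)
```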
Froze the saved model with the `freeze_graph.py` utility:

```
python freeze_graph.py --input_saved_model_dir=<path/to/SavedModel> --output_graph=<path/freeze.pb> --input_binary=True --output_node_names=StatefulPartitionedCall
```
The model is frozen.
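A minimal sketch for listing the node names inside the frozen graph (the path is a placeholder), which helps confirm the actual input/output tensor names and what `--output_node_names` should point to:

```python
import tensorflow as tf

# Read the frozen GraphDef from disk (placeholder path).
with tf.io.gfile.GFile("path/to/freeze.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Print every node's name and op type to verify the graph structure.
for node in graph_def.node:
    print(node.name, node.op)
```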
Now here's the main issue:
- To load the frozen graph I followed the "Migrate tf1.x to tf2.x" guide (`wrap_frozen_graph`).
- Code used:

```python
with tf.io.gfile.GFile("path/to/freeze.pb", 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

load_frozen = wrap_frozen_graph(graph_def,
                                inputs='dense_3_input:0',
                                outputs='predict_10:0')
```
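For reference, the `wrap_frozen_graph` helper from that guide looks roughly like this:

```python
def wrap_frozen_graph(graph_def, inputs, outputs):
    # Import the GraphDef into a graph wrapped as a v1-style concrete function.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")

    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph

    # Prune the function down to the requested input/output tensors.
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))
```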
- Output error:

```
ValueError: Input 1 of node StatefulPartitionedCall was passed float from dense_3/kernel:0 incompatible with expected resource.
```
I'm getting the same error when converting the .pb to a .dlc (Qualcomm). What I actually want is to run the original model on Qualcomm's Hexagon DSP or GPU.