I'm trying to prune the Dataset input pipeline and all the training ops from a model restored from a checkpoint/.pb. But the nodes haven't been pruned after calling tf.graph_util.extract_sub_graph(). My code looks like:

# load the graph from the checkpoint's meta file
restorer = tf.train.import_meta_graph('./model.meta')
graph = tf.get_default_graph()

# print node names
print_nodes_name(graph)

# cut it after the first layer
nodes_to_conserve = ['model/conv1/Relu']

# extract subgraph: this returns a new GraphDef pruned down to the
# nodes needed to compute nodes_to_conserve
subgraph = tf.graph_util.extract_sub_graph(graph.as_graph_def(), nodes_to_conserve)

# for the second time (still printing the original graph)
print_nodes_name(graph)

# tf.import_graph_def returns the imported ops, not a Graph, so it
# cannot be passed to tf.Session(graph=...) directly; import the
# pruned GraphDef into a fresh graph instead
pruned_graph = tf.Graph()
with pruned_graph.as_default():
    tf.import_graph_def(subgraph, name='')

with tf.Session(graph=pruned_graph) as sess:
    ...

I tested on a simple two-layer convolutional model, expecting it to cut the graph between the layers. But when I print the node names for the second time, the nodes of the second layer are still there.
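From reading around, my understanding is that extract_sub_graph just does a backwards walk from the destination nodes over each node's inputs and returns a new, pruned GraphDef, leaving the original graph object untouched. Here is a pure-Python sketch of that understanding (the node names and edges below are made up to mirror my toy model; this is not the TensorFlow implementation):

```python
def extract_sub_graph_sketch(graph, dest_nodes):
    """Conceptual stand-in for extract_sub_graph.

    graph: dict mapping node name -> list of input node names.
    Returns a NEW dict containing only the nodes reachable backwards
    from dest_nodes; the input dict is not modified.
    """
    keep = set()
    stack = list(dest_nodes)
    while stack:
        name = stack.pop()
        if name in keep:
            continue
        keep.add(name)
        stack.extend(graph[name])
    return {name: inputs for name, inputs in graph.items() if name in keep}

# toy two-layer conv model, flattened to name -> inputs
toy = {
    'input': [],
    'model/conv1/Conv2D': ['input'],
    'model/conv1/Relu': ['model/conv1/Conv2D'],
    'model/conv2/Conv2D': ['model/conv1/Relu'],
    'model/conv2/Relu': ['model/conv2/Conv2D'],
}

pruned = extract_sub_graph_sketch(toy, ['model/conv1/Relu'])
print(sorted(pruned))  # conv2 nodes are gone in the returned graph...
print(sorted(toy))     # ...but the original still contains all five nodes
```

If this is right, it would explain why printing the nodes of the original graph a second time shows no change, but I'd like confirmation.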

I'm quite confused by extract_sub_graph(). I expected to get something like in this post, but it doesn't work. My questions are:

  1. For nodes_to_conserve, should I pass operation names or tensor endpoints like Relu:0?
  2. How do I load the saved weights and biases: before or after calling extract_sub_graph?
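For question 1, my current guess is that extract_sub_graph wants operation names, and that a tensor name like 'model/conv1/Relu:0' can be reduced to its op name by dropping the output index. A small sketch of that assumption:

```python
def tensor_to_node_name(tensor_name):
    # A tensor name has the form 'op_name:output_index'
    # (e.g. 'model/conv1/Relu:0'); the graph node itself is
    # identified by just 'op_name'.
    return tensor_name.split(':')[0]

print(tensor_to_node_name('model/conv1/Relu:0'))  # model/conv1/Relu
```

Is that the right way to build the nodes_to_conserve list?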
