Run Tensorflow with NVIDIA TensorRT Inference Engine


I would like to use NVIDIA TensorRT to run my TensorFlow models. Currently, TensorRT supports Caffe prototxt network descriptor files.

I was not able to find source code for converting TensorFlow models to Caffe models. Are there any workarounds?


There are 2 answers

Andrei Pokrovsky

TensorRT 3.0 supports import/conversion of TensorFlow graphs via its UFF (Universal Framework Format). Some layer implementations are missing and will require custom implementations via the IPlugin interface.
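As a rough sketch, the conversion can be done with the `uff` Python module that ships with TensorRT 3. The file name and the output node name below are hypothetical placeholders for your own frozen graph:

```python
import uff

# Convert a frozen TensorFlow graph (.pb) to a UFF file that the
# TensorRT UFF parser can then load. "frozen_model.pb" and the
# output node name "logits" are placeholders -- substitute the
# actual path and output node(s) of your model.
uff_model = uff.from_tensorflow_frozen_model(
    "frozen_model.pb",
    output_nodes=["logits"],
    output_filename="model.uff",
)
```

Layers the UFF parser doesn't recognize are where the IPlugin interface mentioned above comes in.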

Previous versions didn't support native import of TensorFlow models/checkpoints.

What you can also do is export the layer/network description into your own intermediate format (such as a text file) and then use the TensorRT C++ API to construct the graph for inference. You'd have to export the convolution weights/biases separately. Make sure to pay attention to the data layouts: TensorFlow uses NHWC for activation tensors while TensorRT uses NCHW, and for convolution weights TensorFlow uses RSCK ([filter_height, filter_width, input_depth, output_depth]) while TensorRT uses KCRS.
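The layout conversion above is just an axis permutation. A minimal numpy sketch (the shapes and the output file name are made up for illustration):

```python
import numpy as np

# Hypothetical conv filter in TensorFlow's RSCK layout:
# [filter_height, filter_width, input_depth, output_depth]
R, S, C, K = 3, 3, 16, 32
tf_weights = np.arange(R * S * C * K, dtype=np.float32).reshape(R, S, C, K)

# Reorder to TensorRT's KCRS layout:
# [output_depth, input_depth, filter_height, filter_width]
trt_weights = tf_weights.transpose(3, 2, 0, 1)

# Likewise, an NHWC activation tensor becomes NCHW.
nhwc = np.zeros((1, 224, 224, 16), dtype=np.float32)
nchw = nhwc.transpose(0, 3, 1, 2)

# transpose() only changes strides; make the buffer contiguous so the
# raw bytes on disk actually follow the KCRS order when the TensorRT
# C++ side reads them back.
trt_weights = np.ascontiguousarray(trt_weights)
trt_weights.tofile("conv1_weights.bin")  # hypothetical file name
```

The same `transpose` + `ascontiguousarray` pattern applies to any tensor you export this way; only the permutation differs.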

See this paper for an extended discussion of tensor formats: https://arxiv.org/abs/1410.0759

This link also has useful related info: https://www.tensorflow.org/versions/master/extend/tool_developers/

bounikos

No workarounds are currently needed, as the new TensorRT 3 added support for importing TensorFlow models.