How to do inference using TensorFlow-GPU models on Tegra X2?


I am new to the Jetson TX2 board.

I plan to run my tensorflow-gpu models on the TX2 board and see how they perform there. These models were trained and tested on a GTX GPU machine.

On the TX2 board, the full JetPack does not include TensorFlow, so TensorFlow needs to be built/installed, which I have seen several tutorials on and have tried. My Python files train.py and test.py expect tensorflow-gpu.

Now I wonder whether building tensorflow-gpu on the TX2 board is the right way to go.

Oh, there is NVIDIA TensorRT on the TX2, which will do part of the job, but how? And is that the right approach?

Will TensorFlow and TensorRT work together to replace tensorflow-gpu? If so, how? And what modifications will I have to make in my train and test Python files?

Do I really need to build TensorFlow for the TX2 at all? I only need inference; I don't want to do training there.

I have studied different blogs and tried several options, but now things are a bit messed up.

My simple question is:

What are the steps to run inference on a Jetson TX2 board using TensorFlow-GPU deep learning models trained on a GTX machine?


There are 3 answers

Answer by Pooya Davoodi (best answer)

The easiest way is to install the NVIDIA-provided wheel: https://docs.nvidia.com/deeplearning/dgx/install-tf-jetsontx2/index.html

All the dependencies are already installed by JetPack.

After you install TensorFlow using the wheel, you can use it just as you would on any other platform. For inference, copy your TensorFlow model onto the TX2 and run your TensorFlow inference scripts against it.

You can also optimize your TensorFlow models by passing them through TF-TRT: https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html A single API call does the optimization: create_inference_graph(...). It optimizes the TensorFlow graph (mostly by fusing nodes) and also lets you build the model for lower precision to get a better speedup.

Answer by rakidedigama

I built TensorFlow on the Jetson TX2 following this guide. It provides instructions and wheels for both Python 2 and Python 3.

If you are new to the Jetson TX2, also take a look at this "Guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson". (*This does not require a TensorFlow installation, since JetPack already includes TensorRT.)

If you have TensorFlow-trained graphs that you want to run inference on with the Jetson, then you need to install TensorFlow first. Afterwards, it is recommended (though not compulsory for inference) that you optimize your trained models with TensorRT. Check out these repos for object detection/classification examples that use TensorRT optimization.

Answer by Karthik

You can find the tensorflow-gpu wheel files for the TX2, for both Python 2.7 and Python 3.5, at this link on NVIDIA's Developer Forum.

https://devtalk.nvidia.com/default/topic/1031300/jetson-tx2/tensorflow-1-8-wheel-with-jetpack-3-2-/