I recently came across an amazing tool called TensorRT, but since I don't have an NVIDIA GPU on my laptop, I decided to use Google Colab instead to play around with this technology.
I used simple pip commands to install the necessary libraries, including ones for CUDA management:
pip install nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com
pip install pycuda
After installation everything seemed ready to use. However, it turns out that some of the common methods simply do not exist.
When I tried to create a TensorRT engine via
builder = trt.Builder(trt.Logger(trt.Logger.INFO))
network = builder.create_network(batch_size)
engine = builder.build_cuda_engine(network)
it throws an exception: 'tensorrt.tensorrt.Builder' has no attribute 'build_cuda_engine', despite the fact that it is supposed to exist.
Am I missing some important installation step, or am I just using a deprecated version?
TensorRT is indeed quite a nice tool for inference. It is tricky to use at the beginning but quickly becomes logical. Follow the Python examples available on their GitHub here.
To solve your particular problem, namely programmatically building a TensorRT engine, follow this structure:
It is quite a basic snippet of code, but following the official samples and this structure should fix your issue.
This YOLOv7 GitHub repository also supports TensorRT: it has a complete implementation of how to export your model, and this Google Colab shows how to run inference with it.
Cheers!