TensorRT seems to lack common functionality


I recently came across an amazing tool called TensorRT, but since my laptop has no NVIDIA GPU, I decided to use Google Colab to play around with this technology.

I used simple pip commands to install the necessary libraries, including the ones for CUDA management:

pip install nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com
pip install pycuda

After installation everything seemed ready for use. However, it turns out that some of the common methods simply do not exist.

When I tried to create a TensorRT engine via

builder = trt.Builder(trt.Logger(trt.Logger.INFO))
network = builder.create_network(batch_size)
engine = builder.build_cuda_engine(network)

it throws an exception: 'tensorrt.tensorrt.Builder' object has no attribute 'build_cuda_engine', despite the fact that it is supposed to exist.


Am I missing some important installation step, or am I just using a deprecated version?

1 Answer

Answered by Maxime Debarbat (accepted):

TensorRT is indeed quite a nice tool for inference. It is tricky to use at the beginning but quickly becomes logical. Follow the Python examples available on their GitHub.
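Before anything else, it is worth checking which TensorRT version your Colab runtime actually got: the NGC pip wheel is typically TensorRT 8.x, and `build_cuda_engine` was deprecated in TensorRT 7 and removed in 8, which would explain the missing attribute. A quick check (assuming only that the wheel installed successfully):

```python
import tensorrt as trt

# Print the installed version; the NGC pip wheel is typically TensorRT 8.x
print(trt.__version__)

# build_cuda_engine existed on the TensorRT 7 Builder but was removed in 8;
# this shows whether the installed wheel still exposes it
print(hasattr(trt.Builder, "build_cuda_engine"))
```

If this prints `False`, you need the builder-config workflow shown below rather than the old one-step API.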

To solve your particular problem, namely programmatically building a TensorRT engine, follow this structure:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# The ONNX parser requires an explicit-batch network definition
explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(explicit_batch) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:

    with open("model.onnx", "rb") as model:
        if not parser.parse(model.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of builder workspace
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
    config.set_flag(trt.BuilderFlag.FP16)

    engine = builder.build_engine(network, config)
    with open("result.engine", "wb") as f:
        f.write(engine.serialize())

This is quite a basic snippet, but it should fix your current issue; the official samples, combined with this structure, will take you the rest of the way.
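Once the engine is serialized, you can load it back and run inference with pycuda (which you already installed). The sketch below assumes a TensorRT 7/8-era API and an available NVIDIA GPU; the file name and the idea of filling the input buffers with real data are placeholders:

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Deserialize the engine written out by the build step above
with open("result.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

with engine.create_execution_context() as context:
    # Allocate a host/device buffer pair for every binding of the engine
    bindings, buffers = [], []
    for binding in engine:  # iterating an engine yields binding names
        shape = tuple(engine.get_binding_shape(binding))
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host = np.empty(shape, dtype=dtype)
        device = cuda.mem_alloc(host.nbytes)
        bindings.append(int(device))
        buffers.append((host, device, engine.binding_is_input(binding)))

    # Copy inputs to the GPU, execute, copy outputs back
    for host, device, is_input in buffers:
        if is_input:
            cuda.memcpy_htod(device, host)  # fill `host` with real data first
    context.execute_v2(bindings)
    for host, device, is_input in buffers:
        if not is_input:
            cuda.memcpy_dtoh(host, device)
```

Pinned (page-locked) host memory and CUDA streams would make this faster, but the synchronous version is easier to follow when starting out.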

This YOLOv7 GitHub repository also supports TensorRT: it includes a complete implementation of how to export your model, plus a Google Colab notebook showing how to run inference with it.

Cheers!