Segmentation fault when exporting a quantized PyTorch model to ONNX


I am trying to export a model to the ONNX format. The architecture is complicated so I won't share it here, but basically, I have the network weights in a .pth file. I'm able to load them, create the network and perform inference with it. It's important to note that I have adapted the code to be able to quantize the network: I have added quantize and dequantize operators as well as some torch.nn.quantized.FloatFunctional() operators.
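For context, the adaptation looks roughly like the sketch below (module and attribute names are made up for illustration; the real architecture is much more involved, but the quantization plumbing is the same): QuantStub/DeQuantStub at the input/output boundaries and FloatFunctional for elementwise operations.

import torch
import torch.nn as nn

class QuantizedWrapper(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.quant = torch.quantization.QuantStub()            # float -> quantized at the input
        self.backbone = backbone                               # the original network
        self.skip_add = torch.nn.quantized.FloatFunctional()   # quantized-safe elementwise add
        self.dequant = torch.quantization.DeQuantStub()        # quantized -> float at the output

    def forward(self, x):
        x = self.quant(x)
        y = self.backbone(x)
        y = self.skip_add.add(y, x)    # e.g. a residual connection instead of a plain "+"
        return self.dequant(y)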

However, whenever I try to export it with

torch.onnx.export(torch_model,               # model being run
                  input_example,             # model input
                  model_name,                # where to save the model
                  export_params=True,        # store the trained parameters in the model file
                  opset_version=11,          # the ONNX opset version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                 )

I get Segmentation fault (core dumped). I am working on Ubuntu 20 with the following packages installed:

torch==1.6.0
torchvision==0.7.0
onnx==1.7.0
onnxruntime==1.4.0

Note that, according to some print statements I have left in the code, the inference part of the export completes. The segmentation fault happens afterwards.

Does anyone see any reason why this may happen?

[Edit]: I can export my network when it is not adapted for quantized operations. The problem is therefore not a broken installation, but rather an issue with some quantized operators during ONNX export.
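For what it's worth, the problem should be reproducible without the full architecture. Something like the following self-contained sketch (static post-training quantization with made-up layer sizes, not my actual code) exercises the same quantize/convert/export path:

import torch

class TinyQuantNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(3, 3, kernel_size=1)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.conv(self.quant(x)))

model = TinyQuantNet().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 8, 8))                      # calibration pass
torch.quantization.convert(model, inplace=True)     # now a genuinely quantized model

model(torch.randn(1, 3, 8, 8))                      # inference still works
torch.onnx.export(model, torch.randn(1, 3, 8, 8),   # exporting is where it fails
                  "tiny_quant.onnx", opset_version=11)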


There is 1 answer

Joseph Budin (BEST ANSWER)

Well, it turns out that ONNX export does not support quantized models (and it does not warn you in any way when running; it just segfaults). Support does not seem to be on the agenda yet, so one solution is to use TensorRT instead.
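If you go the TensorRT route, a rough sketch (assuming TensorRT and its trtexec tool are installed; float_model here stands for the non-quantized version of your network) is to export the float model to ONNX and let TensorRT handle the INT8 quantization when building the engine:

# export the float (non-quantized) model, which works fine
torch.onnx.export(float_model, input_example, "float_model.onnx",
                  export_params=True, opset_version=11, do_constant_folding=True)

# then build an INT8 engine with TensorRT, e.g. from the command line:
#   trtexec --onnx=float_model.onnx --int8 --saveEngine=model_int8.plan
# (proper calibration data is needed for the INT8 engine to keep its accuracy)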