I am getting the following error while trying to apply static quantization to a model. The error occurs in the fusion step of the code, torch.quantization.fuse_modules(model, modules_to_fuse):

model = torch.quantization.fuse_modules(model, modules_to_fuse)
  File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 146, in fuse_modules
    _fuse_modules(model, module_list, fuser_func, fuse_custom_config_dict)
  File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 77, in _fuse_modules
    new_mod_list = fuser_func(mod_list, additional_fuser_method_mapping)
  File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 45, in fuse_known_modules
    fuser_method = get_fuser_method(types, additional_fuser_method_mapping)
  File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuser_method_mappings.py", line 132, in get_fuser_method
    assert fuser_method is not None, "did not find fuser method for: {} ".format(op_list)
AssertionError: did not find fuser method for: (<class 'torch.nn.modules.conv.Conv2d'>,) 
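Here is a minimal toy example (hypothetical model and module names, not my real network) that reproduces the same assertion when a fusion group contains only a single Conv2d:

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

model = ToyModel().eval()

# A group naming only "conv" has no registered fuser method,
# so this raises the AssertionError shown above.
torch.quantization.fuse_modules(model, [["conv"]])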

There are 2 answers

Rushiraj Parmar

I faced the same error, but for me the issue was that I was using LeakyReLU, which is not supported for fusion. Changing LeakyReLU() to plain nn.ReLU() worked for me.
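As a rough sketch (hypothetical module names), this is the kind of change I mean:

import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(16, 32, kernel_size=3)
        # An nn.LeakyReLU() here makes the (conv, act) pair unfusable;
        # plain nn.ReLU() is in the supported fusion list.
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

block = Block().eval()
block = torch.quantization.fuse_modules(block, [["conv", "act"]])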

Celik

The modules_to_fuse list should obey the following rules:

Fuses only the following sequences of modules:

    conv, bn
    conv, bn, relu
    conv, relu
    linear, relu
    bn, relu

All other sequences are left unchanged. For these sequences, the first module in the list is replaced with the fused module and the remaining modules are replaced with identity.

I could not fuse a standalone 'torch.nn.modules.conv.Conv2d'. A Conv2d has to be fused as part of a supported sequence such as "conv, bn", "conv, bn, relu", or "conv, relu"; other combinations do not work. Use the list above to prepare your fusing list (there is an example sketch at the end of this answer). It worked for me.

Also, here is the mapping of supported module sequences to their fuser methods:

DEFAULT_OP_LIST_TO_FUSER_METHOD: Dict[Tuple, Union[nn.Sequential, Callable]] = {
    (nn.Conv1d, nn.BatchNorm1d): fuse_conv_bn,
    (nn.Conv1d, nn.BatchNorm1d, nn.ReLU): fuse_conv_bn_relu,
    (nn.Conv2d, nn.BatchNorm2d): fuse_conv_bn,
    (nn.Conv2d, nn.BatchNorm2d, nn.ReLU): fuse_conv_bn_relu,
    (nn.Conv3d, nn.BatchNorm3d): fuse_conv_bn,
    (nn.Conv3d, nn.BatchNorm3d, nn.ReLU): fuse_conv_bn_relu,
    (nn.Conv1d, nn.ReLU): nni.ConvReLU1d,
    (nn.Conv2d, nn.ReLU): nni.ConvReLU2d,
    (nn.Conv3d, nn.ReLU): nni.ConvReLU3d,
    (nn.Linear, nn.BatchNorm1d): fuse_linear_bn,
    (nn.Linear, nn.ReLU): nni.LinearReLU,
    (nn.BatchNorm2d, nn.ReLU): nni.BNReLU2d,
    (nn.BatchNorm3d, nn.ReLU): nni.BNReLU3d,
}
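For example, here is a minimal sketch (toy model with hypothetical submodule names) of a modules_to_fuse list that follows these rules:

import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3)
        self.bn1 = nn.BatchNorm2d(16)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3)
        self.relu2 = nn.ReLU()

    def forward(self, x):
        x = self.relu1(self.bn1(self.conv1(x)))
        return self.relu2(self.conv2(x))

# Static (post-training) quantization fuses the model in eval mode.
model = ToyNet().eval()

# Every group matches a supported sequence:
# (Conv2d, BatchNorm2d, ReLU) and (Conv2d, ReLU).
modules_to_fuse = [
    ["conv1", "bn1", "relu1"],
    ["conv2", "relu2"],
]
model = torch.quantization.fuse_modules(model, modules_to_fuse)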