I am getting the following error while trying to apply static quantization to a model. The error occurs in the fuse step, torch.quantization.fuse_modules(model, modules_to_fuse):

Traceback (most recent call last):
model = torch.quantization.fuse_modules(model, modules_to_fuse)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 146, in fuse_modules
_fuse_modules(model, module_list, fuser_func, fuse_custom_config_dict)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 77, in _fuse_modules
new_mod_list = fuser_func(mod_list, additional_fuser_method_mapping)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuse_modules.py", line 45, in fuse_known_modules
fuser_method = get_fuser_method(types, additional_fuser_method_mapping)
File "/Users/celik/PycharmProjects/GFPGAN/colorization/lib/python3.8/site-packages/torch/ao/quantization/fuser_method_mappings.py", line 132, in get_fuser_method
assert fuser_method is not None, "did not find fuser method for: {} ".format(op_list)
AssertionError: did not find fuser method for: (<class 'torch.nn.modules.conv.Conv2d'>,)
I faced the same error. In my case the problem was that I was using LeakyReLU, which fuse_modules does not support as part of a fusion pattern; changing nn.LeakyReLU() to nn.ReLU() fixed it for me.
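To illustrate, here is a minimal sketch (the model and module names are hypothetical, not from the original code): fuse_modules only knows fuser methods for specific patterns such as Conv2d+BatchNorm2d, Conv2d+ReLU, Conv2d+BatchNorm2d+ReLU, and Linear+ReLU, so using ReLU in the pattern lets the fusion succeed.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import fuse_modules

class SmallNet(nn.Module):
    """Hypothetical minimal model illustrating the fix."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)
        self.bn = nn.BatchNorm2d(8)
        # was nn.LeakyReLU() -> no fuser method, AssertionError;
        # nn.ReLU() is part of a supported fusion pattern
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

# Fusion for static quantization is done on an eval-mode model.
model = SmallNet().eval()

# Fuse Conv2d + BatchNorm2d + ReLU into a single fused module.
fused = fuse_modules(model, [["conv", "bn", "relu"]])

# The conv slot now holds a fused Conv+ReLU module, and the bn/relu
# slots are replaced with nn.Identity placeholders.
print(type(fused.conv).__name__)
print(type(fused.bn).__name__)
```

With nn.LeakyReLU() in place of nn.ReLU(), the same fuse_modules call raises the "did not find fuser method" AssertionError from the question, because no Conv2d+LeakyReLU fuser method is registered.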