Is there any method to convert a quantization-aware PyTorch model to .tflite?

I have trained YOLOv4-tiny in PyTorch with quantization-aware training. The model's state_dict keys look like this:

module_list.0.Conv2d.weight
module_list.0.Conv2d.activation_quantizer.scale
module_list.0.Conv2d.activation_quantizer.zero_point
module_list.0.Conv2d.activation_quantizer.range_tracker.min_val
module_list.0.Conv2d.activation_quantizer.range_tracker.max_val
module_list.0.Conv2d.activation_quantizer.range_tracker.first_a
module_list.0.Conv2d.weight_quantizer.scale
module_list.0.Conv2d.weight_quantizer.zero_point
module_list.0.Conv2d.weight_quantizer.range_tracker.min_val
module_list.0.Conv2d.weight_quantizer.range_tracker.max_val
module_list.0.Conv2d.weight_quantizer.range_tracker.first_w
module_list.0.BatchNorm2d.weight
module_list.0.BatchNorm2d.bias
module_list.0.BatchNorm2d.running_mean
module_list.0.BatchNorm2d.running_var
module_list.0.BatchNorm2d.num_batches_tracked
module_list.1.Conv2d.weight
module_list.1.Conv2d.activation_quantizer.scale
module_list.1.Conv2d.activation_quantizer.zero_point
module_list.1.Conv2d.activation_quantizer.range_tracker.min_val
module_list.1.Conv2d.activation_quantizer.range_tracker.max_val 
...
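(For reference, the listing above is just the keys of the saved checkpoint, printed with something like the snippet below; the checkpoint filename is a placeholder.)

import torch

# Placeholder path -- whatever checkpoint the QAT run produced.
# If the file wraps the state_dict in a larger dict, index into it first.
state_dict = torch.load("yolov4-tiny-qat.pt", map_location="cpu")
for name in state_dict.keys():
    print(name)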

I tried several methods to convert it to TFLite, but I keep getting an error: RuntimeError: Error(s) in loading state_dict for Darknet: Missing key(s) in state_dict:

I think the reason is that quantization-aware training added new quantizer layers/parameters to the checkpoint, so the model definition used during TFLite conversion no longer matches it and the conversion fails with this error.
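In case it helps with diagnosing this, loading the checkpoint with strict=False should show exactly which keys the plain Darknet model is missing and which extra quantizer keys the checkpoint carries. This is only a minimal sketch; the import path, cfg path, and checkpoint name are placeholders, not my exact script:

import torch
from models import Darknet  # plain (non-QAT) YOLOv4-tiny definition; adjust to your repo

model = Darknet("cfg/yolov4-tiny.cfg")
ckpt = torch.load("yolov4-tiny-qat.pt", map_location="cpu")

# strict=False reports the mismatch instead of raising, so the missing
# and unexpected (quantizer) keys can be inspected directly.
result = model.load_state_dict(ckpt, strict=False)
print("Missing keys:", result.missing_keys)
print("Unexpected keys:", result.unexpected_keys)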

Any idea how to solve this?
