Accuracy drop for Tensorflow object detection Post Quantization

I am fine-tuning SSD MobileNet v2 on a custom dataset for 50k steps, with quantization-aware training kicking in at step 48k:

graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}

After training, I observe a training, validation, and test mAP of over 95%.

I then converted the model to a quantized TFLite model using these commands:

python object_detection/ \
 --output_directory=${OUTPUT_DIR} --add_postprocessing_op=true

 --input_file=${OUTPUT_DIR}/tflite_graph.pb \
 --output_file=${OUTPUT_DIR}/detect.tflite \
 --input_format=TENSORFLOW_GRAPHDEF \
 --output_format=TFLITE \
 --inference_type=QUANTIZED_UINT8 \
 --input_shapes="1,300,300,3" \
 --input_arrays=normalized_input_image_tensor \
 --output_arrays="TFLite_Detection_PostProcess","TFLite_Detection_PostProcess:1","TFLite_Detection_PostProcess:2","TFLite_Detection_PostProcess:3" \
 --std_values=128.0 --mean_values=128.0 --allow_custom_ops --default_ranges_min=0 --default_ranges_max=6
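One thing worth noting about the flags above: `--default_ranges_min=0 --default_ranges_max=6` assigns a fixed [0, 6] range to any activation that has no recorded min/max, which both clips values above 6 and coarsens the uint8 grid. A rough NumPy sketch of the affine uint8 mapping this implies (a simplification; the converter's exact internals may differ):

```python
import numpy as np

def quantize_uint8(real, range_min, range_max):
    """Affine uint8 quantization over [range_min, range_max] (sketch)."""
    scale = (range_max - range_min) / 255.0
    zero_point = round(-range_min / scale)
    q = np.clip(np.round(real / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map uint8 codes back to real values."""
    return (q.astype(np.float32) - zero_point) * scale

# Activations quantized with the default [0, 6] range from the flags above:
acts = np.array([0.0, 0.5, 3.0, 5.9, 7.5], dtype=np.float32)
q, scale, zp = quantize_uint8(acts, 0.0, 6.0)
recovered = dequantize(q, scale, zp)
# 7.5 is clipped to 6.0, and each uint8 step is 6/255 ~= 0.0235, so the
# recovered values differ from the originals by up to half a step (or more
# when clipped) -- one possible source of the mAP drop.
```

Activations whose true range is much narrower than [0, 6] waste most of the uint8 grid, which is why recording real min/max stats during quantization-aware training usually beats relying on default ranges.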

I tested the generated detect.tflite model on the same test set and see the mAP drop to about 85%.
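For reference, a minimal sketch of how the converted model can be run with the TFLite Python interpreter (the model path and input image are placeholders; I'm assuming the four outputs follow the documented order of the `TFLite_Detection_PostProcess` op listed in `--output_arrays`):

```python
import numpy as np

def preprocess(image_u8):
    # QUANTIZED_UINT8 input: feed raw uint8 pixels directly; the
    # --mean_values=128 --std_values=128 passed to the converter are baked
    # into the model's input quantization parameters.
    assert image_u8.shape == (300, 300, 3) and image_u8.dtype == np.uint8
    return image_u8[np.newaxis, ...]  # -> [1, 300, 300, 3]

def detect(model_path, image_u8):
    import tensorflow as tf  # imported lazily; works with tf.lite in TF 1.x/2.x
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    outs = interpreter.get_output_details()
    interpreter.set_tensor(inp["index"], preprocess(image_u8))
    interpreter.invoke()
    # TFLite_Detection_PostProcess emits: boxes, classes, scores, num_detections
    boxes, classes, scores, count = (interpreter.get_tensor(o["index"]) for o in outs)
    return boxes, classes, scores, count
```

Evaluating mAP then amounts to looping `detect` over the test set and feeding the dequantized scores and boxes into the same mAP computation used for the float model.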

Is a drop of this size expected? How can I improve the post-quantization mAP?
