I'm trying to create a fully quantized TFLite model so I can run it on a Coral Edge TPU. I downloaded SSD MobileNet V2 FPNLite 640x640 from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
In a virtual environment I installed tf-nightly-2.5.0.dev20201123, tf-nightly-models, and tensorflow/object_detection_0.1.
I run this code to do post-training quantization:
import tensorflow as tf
import cv2
import numpy as np

# Path to the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model(
    './0-ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8/saved_model/',
    signature_keys=['serving_default'])

VIDEO_PATH = '/home/andrej/Videos/outvideo3.h264'

def rep_data_gen():
    REP_DATA_SIZE = 10  # reduced from 1000 for testing
    a = []
    video = cv2.VideoCapture(VIDEO_PATH)
    i = 0
    while video.isOpened():
        ret, img = video.read()
        i = i + 1
        if not ret or i > REP_DATA_SIZE:
            print('Reached the end of the video!')
            break
        img = cv2.resize(img, (640, 640))  # TODO: parametrize based on network input size
        img = img.astype(np.uint8)
        # img = (img / 127.5) - 1
        # img = img.astype(np.float32)  # causes a type mismatch error
        a.append(img)
    a = np.array(a)
    print(a.shape)  # a is a NumPy array of REP_DATA_SIZE 3D images
    for batch in tf.data.Dataset.from_tensor_slices(a).batch(1).take(REP_DATA_SIZE):
        yield [batch]

# TF2 models
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8,
                                       tf.lite.OpsSet.SELECT_TF_OPS]
# converter.quantized_input_stats = {'inputs': (0, 255)}  # does not help
converter.inference_input_type = tf.uint8  # or tf.int8
converter.inference_output_type = tf.uint8  # or tf.int8

quantized_model = converter.convert()

# Save the model.
with open('quantized_model.tflite', 'wb') as f:
    f.write(quantized_model)
I got this error:
RuntimeError: Max and min for dynamic tensors should be recorded during calibration: Failed for tensor Cast
Empty min/max for tensor Cast
I trained the same model, SSD MobileNet V2 FPNLite 640x640, using the script model_main_tf2.py and then exported the checkpoint to a saved_model using the script exporter_main_v2.py. When trying to convert it to ".tflite" for use on the Edge TPU I ran into the same problem. The solution for me was to export the trained model using the script export_tflite_graph_tf2.py instead of exporter_main_v2.py to generate the saved_model.pb. After that the conversion went fine. Maybe try generating a saved_model with export_tflite_graph_tf2.py; a rough sketch of the follow-up conversion is below.
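For reference, here is a minimal sketch of what the conversion could look like after re-exporting with export_tflite_graph_tf2.py (the script takes --pipeline_config_path, --trained_checkpoint_dir, and --output_directory). The path exported/saved_model is a placeholder for wherever your export landed, and the random calibration data is a stand-in for real frames like those in the question:
import numpy as np
import tensorflow as tf

# Placeholder calibration data: a handful of random frames with the model's
# expected input shape. Swap in real video frames for meaningful calibration;
# the dtype must match the exported model's input signature (often float32
# for export_tflite_graph_tf2.py exports).
def rep_data_gen():
    for _ in range(10):
        yield [np.random.rand(1, 640, 640, 3).astype(np.float32)]

# Point the converter at the SavedModel produced by export_tflite_graph_tf2.py
# ('exported/saved_model' is a placeholder path).
converter = tf.lite.TFLiteConverter.from_saved_model('exported/saved_model')
converter.allow_custom_ops = True
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
quantized_model = converter.convert()

# Sanity check: a fully quantized model should report a uint8 input tensor
# (the detection post-processing outputs may remain float32).
interpreter = tf.lite.Interpreter(model_content=quantized_model)
interpreter.allocate_tensors()
print('input :', interpreter.get_input_details()[0]['dtype'])
print('output:', interpreter.get_output_details()[0]['dtype'])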