Is XNNPACK foiling my attempt to compile my tflite model to an Edge TPU model for the Coral USB Accelerator?

I have trained BirdNET on an additional species per the BirdNET docs on GitHub. The training seems to work perfectly well, and I get a tflite model as a result. Now I want to convert this model for use on my USB Accelerator. I followed the example in the TensorFlow docs and fully quantized the model as follows:

import tensorflow as tf

# Save model as tflite, fully quantized for the Coral board
converter = tf.lite.TFLiteConverter.from_keras_model(combined_model)
print('Adding necessary optimizations for use on the Coral.AI board (batch_size=1)')
print('See: https://www.tensorflow.org/lite/performance/post_training_quantization')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_model = converter.convert()
with open(model_path, "wb") as f:
    f.write(tflite_model)
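
For reference, here is a quick sketch of how the converted model's inputs can be inspected with the standard tf.lite.Interpreter API; a -1 in shape_signature would indicate a dynamic dimension, and the dtype should come out as int8 if the quantization took:

import tensorflow as tf

# Inspect the converted model: dtype should be int8 and shape_signature
# should contain no -1 entries if all tensors are static-sized.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
for detail in interpreter.get_input_details():
    print(detail['name'], detail['dtype'], detail['shape_signature'])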

I used a 'dummy' representative dataset, as shown below:

print("Creating dummy data set for converter below")
def representative_dataset():
    for _ in range(100):
        data = np.random.rand(1, 144000)
        yield [data.astype(np.float32)]
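
In case the hard-coded (1, 144000) shape is part of the problem, a variant that derives the sample shape from the model itself might look like this (a sketch; combined_model.input_shape is the standard Keras attribute, with None for the batch axis):

import numpy as np

# Sketch: build the representative sample shape from the model's own
# input shape, substituting 1 for any None (batch) dimension.
sample_shape = [d if d is not None else 1 for d in combined_model.input_shape]

def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(*sample_shape).astype(np.float32)]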

But when I compile the model for the Edge TPU, I keep getting the error:

ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
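
From what I have read, this error usually means the graph still carries a dynamic (None) dimension, typically the batch axis. One approach I have seen suggested (untested on my model) is to convert from a concrete function with a fixed TensorSpec instead of from the Keras model directly:

import tensorflow as tf

# Sketch: pin the input to a static (1, 144000) shape before conversion
# so no dynamic dimensions survive into the tflite graph. The shape here
# is assumed to match my representative dataset above.
run_model = tf.function(lambda x: combined_model(x))
concrete_func = run_model.get_concrete_function(
    tf.TensorSpec([1, 144000], tf.float32))
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [concrete_func], combined_model)

with the same quantization settings as above applied to this converter afterwards.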

Could the message I see when invoking training be the cause?

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.

Do I need to disable XNNPACK, and if so, how?
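
I did find tf.lite.experimental.OpResolverType.BUILTIN_WITHOUT_DEFAULT_DELEGATES, which I believe tells the Python interpreter not to apply the default XNNPACK delegate, though I am not sure it has any bearing on the Edge TPU compile step:

import tensorflow as tf

# Sketch: construct the interpreter without the default (XNNPACK) delegate.
resolver = tf.lite.experimental.OpResolverType.BUILTIN_WITHOUT_DEFAULT_DELEGATES
interpreter = tf.lite.Interpreter(
    model_path=model_path, experimental_op_resolver_type=resolver)

Thank you!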
