I am seeking assistance with converting the MediaPipe FaceMeshV2 model for use with the Coral EdgeTPU Accelerator. According to the Coral documentation, a model must be fully integer-quantized before it can be compiled with the EdgeTPU Compiler.
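For context, this is roughly how I understand that final compilation step, assuming the edgetpu_compiler CLI is installed and a fully integer-quantized model file already exists (the filename below is a placeholder):

```python
import subprocess

# Rough sketch of the final step as I understand it: once a fully
# integer-quantized .tflite file exists (hypothetical filename below),
# it is handed to the edgetpu_compiler CLI from the Coral tooling.
subprocess.run(["edgetpu_compiler", "face_mesh_v2_int8.tflite"], check=True)
# As far as I understand, the compiler then produces a *_edgetpu.tflite file
# plus a .log summarizing which ops were mapped to the EdgeTPU and which
# fell back to the CPU.
```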
I have found quantized FaceMeshV2 models in this model zoo, but unfortunately they do not work correctly. Quantized FaceMeshV1 models from the same source work without issue, which suggests my testing setup is sound. However, I am particularly interested in FaceMeshV2 because of its superior accuracy in detecting facial landmarks.
The TensorFlow documentation outlines post-training quantization for existing TFLite models, but full integer quantization requires a representative dataset, which I am struggling to obtain. Because of this, I have not yet attempted the conversion myself.
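For reference, this is roughly what I expect the full integer quantization step to look like, assuming a SavedModel export of FaceMeshV2 is available rather than only the .tflite file (the paths, the 192x192 input size, and the random calibration data are placeholder assumptions on my part):

```python
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "face_mesh_v2_saved_model"  # placeholder path to a SavedModel export
INPUT_SIZE = 192                              # assumed FaceMesh-style 192x192 RGB crop

def representative_dataset():
    # Placeholder: random tensors only make the script run end to end.
    # For usable quantization ranges this should iterate over a few hundred
    # real face crops, preprocessed exactly as MediaPipe does at inference time.
    for _ in range(100):
        yield [np.random.rand(1, INPUT_SIZE, INPUT_SIZE, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op to int8 so the EdgeTPU compiler does not hit float fallbacks.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("face_mesh_v2_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

My understanding is that the representative dataset only needs to reflect the typical input distribution so the converter can calibrate activation ranges, and that is exactly the part I am unsure how to produce for FaceMeshV2.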
My main question is whether this conversion is feasible at all, since I have found very few resources on the topic and the converted model from Pinto's model zoo does not work correctly. Could the model's architecture itself be what is preventing a successful conversion? Any insights or guidance would be greatly appreciated.