I have retrained a ResNet50 model for re-identification to run on the Edge TPU. However, there seems to be no way to feed a batch of images to the Edge TPU at once.
My current workaround is to run multiple copies of the same model, one per image.
However, is there any way to speed up inference when running multiple models? Threading is currently even slower than single-model inference.
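For reference, this is roughly the threaded pattern I am using (a sketch: `run_model` is a stand-in for a real per-thread `interpreter.invoke()` call, since the actual Edge TPU calls need hardware):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def run_model(image):
    # Stand-in for one interpreter.invoke() on the Edge TPU.
    # On a single TPU device the real calls are serialized anyway,
    # which may explain why threading does not help.
    time.sleep(0.01)
    return image * 2

images = list(range(8))

# Threaded version: one worker per image, mirroring my multi-model setup.
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded_results = list(pool.map(run_model, images))

# Sequential baseline for comparison.
sequential_results = [run_model(img) for img in images]

assert threaded_results == sequential_results
```

With only one Edge TPU attached, the threads mostly end up waiting on the same device, so the wall-clock time is close to (or worse than) the sequential loop.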
Since batch inference is not available, pipelining is a secondary option. However, from experimenting with my model, another option is to build a pseudo-batch by feeding multiple single inputs to the Edge TPU one after another.
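The pseudo-batch idea can be sketched like this (assumptions: `fake_invoke` stands in for a real single-image Edge TPU call via `set_tensor`/`invoke`/`get_tensor`, and the model takes a batch-size-1 HWC input):

```python
import numpy as np

def pseudo_batch_infer(invoke_single, batch):
    """Emulate batch inference on a batch-size-1 model by looping.

    invoke_single: callable taking one (H, W, C) image, returning one embedding.
    batch: array of shape (N, H, W, C).
    Returns an array of shape (N, embedding_dim).
    """
    return np.stack([invoke_single(img) for img in batch])

# Stand-in for the real Edge TPU call; a real version would copy the image
# into the interpreter's input tensor, invoke, and read the output tensor.
def fake_invoke(img):
    return img.mean(axis=(0, 1))  # per-channel mean as a dummy "embedding"

batch = np.ones((4, 224, 224, 3), dtype=np.float32)
embeddings = pseudo_batch_infer(fake_invoke, batch)
```

This keeps one interpreter alive and just reuses it per image, which avoids the overhead of loading multiple model copies, though the per-image invocations are still sequential on the device.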