Convert a model obtained from Keras YOLOV8Detector to Apple MLPackage/Core ML


I trained a YOLOV8Detector model with a yolo_v8_xs_backbone by following the KerasCV tutorial Efficient Object Detection with YOLOV8 and KerasCV, training it on a different dataset.

After some time, I was able to get predictions and visualise them as explained in the tutorial:


I would like to use this model inside an Apple iOS application, so I used the coremltools package to convert it. However, it seems that the outputs produced by KerasCV are not exactly what Apple's tooling expects.

Once the model is trained, I can ask for a prediction:

 images, y_true = next(iter(dataset.take(1)))
 y_pred = model.predict(images)  # y_pred is a dictionary

y_pred is a dictionary with the keys ['boxes', 'confidence', 'classes', 'num_detections']
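For reference, each entry of that dictionary has one fixed-size slot per candidate detection, padded with -1 beyond num_detections. Below is a minimal sketch of the typical shapes, mocked with NumPy rather than a real model; the max_detections value of 100 is an assumption (it is the KerasCV non-max-suppression default), not something stated above:

```python
import numpy as np

batch, max_dets = 1, 100  # assumption: max_detections defaults to 100

# Mocked y_pred with the same structure/shapes as a YOLOV8Detector prediction
y_pred = {
    "boxes": np.full((batch, max_dets, 4), -1.0),    # one box per slot, -1 padded
    "confidence": np.full((batch, max_dets), -1.0),  # score per detection
    "classes": np.full((batch, max_dets), -1.0),     # class index per detection
    "num_detections": np.array([3]),                 # number of valid rows per image
}

for name, arr in y_pred.items():
    print(name, arr.shape)
```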

Using Netron, I can inspect the output shapes that a model is expected to have on the Apple side.

The goal here would be to obtain an mlpackage file whose preview mode is usable from Xcode:

Xcode MLModel preview mode

How can I modify/reshape the model generated from KerasCV so that, instead of outputting a dictionary, it outputs the confidence and coordinates as two separate outputs?

I have found some relevant material at this link covering MobileNetV2 and SSD, but I am not sure how to apply it in this case.


There are 0 answers