Object Detection using Vision performs differently than in Create ML Preview


Context

I've trained my object detection model with 4,000+ images. In the Create ML preview I can check the prediction for image "A": it detects two labels at 100% confidence, and the bounding boxes look accurate.

The problem itself

However, inside a Swift Playground, when I perform object detection using the same model and the same image, I don't get the same results.

What I expected

That after performing the request and processing the resulting array of VNRecognizedObjectObservation, I would see the very same results that appear in the Create ML preview.

Notes:

  • I'm importing the model into the playground simply by dragging and dropping it.
  • The training images were in JPEG format.
  • The test image was rotated to display vertically using the macOS Finder rotation tool.
  • I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result (see the sketch after this list).
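
For reference, this is roughly what I mean by passing an orientation: a minimal sketch, assuming the UIImage's EXIF orientation should be forwarded to Vision (the CGImagePropertyOrientation(_:) initializer that maps from UIImage.Orientation exists since iOS 11).

import UIKit
import Vision
import ImageIO

// A bare CGImage carries no EXIF orientation, so Vision has to be told
// explicitly how the pixels are rotated. "TEST_IMAGE.HEIC" is my test image.
let image = UIImage(named: "TEST_IMAGE.HEIC")!
let orientation = CGImagePropertyOrientation(image.imageOrientation)
let handler = VNImageRequestHandler(cgImage: image.cgImage!, orientation: orientation)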

Swift Playgrounds code

This is the code I'm using.

import UIKit
import Vision

do {
    // Load the Core ML model generated by Create ML.
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let coreMLModel = try VNCoreMLModel(for: model.model)

    // Build the Vision request and print every detected object.
    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else {
            return
        }
        results.forEach { result in
            print(result.labels)       // candidate labels with confidence values
            print(result.boundingBox)  // normalized rect, lower-left origin
        }
    }

    // Load the test image and run the request on its CGImage.
    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)

    try requestHandler.perform([request])
} catch {
    print(error)
}
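
For what it's worth, Vision's boundingBox is normalized (0...1) with the origin at the lower left, so the raw printed values are not directly comparable to the pixel-space boxes drawn in the Create ML preview. A minimal sketch of the usual conversion, in case it helps reproduce the issue (the pixelRect helper name is mine; VNImageRectForNormalizedRect is Vision's utility function):

import Vision
import CoreGraphics

// Scales a normalized, lower-left-origin Vision rect to pixel coordinates.
// imageWidth/imageHeight are the pixel dimensions of the analyzed image.
func pixelRect(for observation: VNRecognizedObjectObservation,
               imageWidth: Int, imageHeight: Int) -> CGRect {
    // The result is still lower-left origin; flip the y-axis before drawing
    // in UIKit's top-left coordinate space.
    VNImageRectForNormalizedRect(observation.boundingBox, imageWidth, imageHeight)
}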