CoreML Memory Leak in iOS 14.5


In my application, I used VNImageRequestHandler with a custom MLModel for object detection.

The app works fine with iOS versions before 14.5.

When iOS 14.5 came, it broke everything.

  1. Whenever try handler.perform([visionRequest]) throws an error (Error Domain=com.apple.vis Code=11 "encountered unknown exception" UserInfo={NSLocalizedDescription=encountered unknown exception}), the pixelBuffer memory is held and never released. This fills up the AVCaptureOutput buffer pool, so no new frames are delivered.
  2. I had to change the code as shown below: by copying the pixelBuffer into another variable I fixed the problem of new frames not arriving, but the memory leak still happens.

Because of the memory leak, the app crashes after some time.

Note that before iOS 14.5, detection worked perfectly and try handler.perform([visionRequest]) never threw any error.

Here is my code:

private func predictWithPixelBuffer(sampleBuffer: CMSampleBuffer) {
  guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    return
  }
  
  // Get additional info from the camera.
  var options: [VNImageOption : Any] = [:]
  if let cameraIntrinsicMatrix = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) {
    options[.cameraIntrinsics] = cameraIntrinsicMatrix
  }
  
  autoreleasepool {
    // Workaround for an iOS 14.5 bug: when the Vision request fails, the pixel
    // buffer is retained and never released, the AVCaptureOutput buffer pool
    // fills up, and no new frames arrive. Copying the pixel buffer keeps frames
    // coming, but memory usage still grows a lot. TODO: find a better fix.
    var clonePixelBuffer: CVPixelBuffer? = pixelBuffer.copy()
    let handler = VNImageRequestHandler(cvPixelBuffer: clonePixelBuffer!, orientation: orientation, options: options)
    print("[DEBUG] detecting...")
    
    do {
      try handler.perform([visionRequest])
    } catch {
      delegate?.detector(didOutputBoundingBox: [])
      failedCount += 1
      print("[DEBUG] detect failed \(failedCount)")
      print("Failed to perform Vision request: \(error)")
    }
    clonePixelBuffer = nil
  }
}
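
Note that pixelBuffer.copy() above is not a system API; it is an extension on CVPixelBuffer (the one in CoreMLHelpers works roughly this way). For reference, a minimal sketch of such a deep-copy helper, assuming a standard camera pixel format, might look like this:

import CoreVideo
import Foundation

extension CVPixelBuffer {
  // Deep-copies the pixel data into a freshly allocated buffer so the
  // camera's own buffer can be released immediately.
  func copy() -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(self),
                        CVPixelBufferGetHeight(self),
                        CVPixelBufferGetPixelFormatType(self),
                        CVBufferGetAttachments(self, .shouldPropagate),
                        &copyOut)
    guard let copy = copyOut else { return nil }

    CVPixelBufferLockBaseAddress(self, .readOnly)
    CVPixelBufferLockBaseAddress(copy, [])
    defer {
      CVPixelBufferUnlockBaseAddress(copy, [])
      CVPixelBufferUnlockBaseAddress(self, .readOnly)
    }

    if CVPixelBufferIsPlanar(self) {
      // Planar camera formats (e.g. 420f/420v): copy each plane separately.
      // Row strides can differ between the two buffers, so copy row by row.
      for plane in 0..<CVPixelBufferGetPlaneCount(self) {
        let src = CVPixelBufferGetBaseAddressOfPlane(self, plane)
        let dst = CVPixelBufferGetBaseAddressOfPlane(copy, plane)
        let srcBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(self, plane)
        let dstBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(copy, plane)
        for row in 0..<CVPixelBufferGetHeightOfPlane(self, plane) {
          memcpy(dst?.advanced(by: row * dstBytesPerRow),
                 src?.advanced(by: row * srcBytesPerRow),
                 min(srcBytesPerRow, dstBytesPerRow))
        }
      }
    } else {
      // Packed formats (e.g. 32BGRA): a single plane.
      let src = CVPixelBufferGetBaseAddress(self)
      let dst = CVPixelBufferGetBaseAddress(copy)
      let srcBytesPerRow = CVPixelBufferGetBytesPerRow(self)
      let dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy)
      for row in 0..<CVPixelBufferGetHeight(self) {
        memcpy(dst?.advanced(by: row * dstBytesPerRow),
               src?.advanced(by: row * srcBytesPerRow),
               min(srcBytesPerRow, dstBytesPerRow))
      }
    }
    return copy
  }
}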

Has anyone experienced the same problem? If so, how did you fix it?


There are 3 answers

Answer by Michael Kupchick

In my case it helped to disable the Neural Engine by forcing Core ML to run on CPU and GPU only. This is often slower, but it doesn't throw the exception (at least in our case). In the end we implemented a policy that forces some of our models off the Neural Engine on certain iOS devices.

See MLModelConfiguration.computeUnits to constrain which hardware the Core ML model can use.
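
For illustration, a minimal sketch of that configuration; YourDetectionModel is a placeholder for the Xcode-generated class of your own .mlmodel:

import CoreML
import Vision

func makeVisionRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    // Keep Core ML off the Neural Engine: run on CPU and GPU only.
    config.computeUnits = .cpuAndGPU

    // YourDetectionModel stands in for your model's auto-generated class.
    let coreMLModel = try YourDetectionModel(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    return VNCoreMLRequest(model: visionModel)
}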

Answer by James

I have a partial fix for this using @Matthijs Hollemans' CoreMLHelpers library.

The model I use has 300 classes and 2363 anchors. I used a lot of the code Matthijs provided here to convert the model to an MLModel.

In the last step a pipeline is built using the 3 sub models: raw_ssd_output, decoder, and nms. For this workaround you need to remove the nms model from the pipeline, and output raw_confidence and raw_coordinates.

In your app you need to add the code from CoreMLHelpers.

Then add this function to decode the output from your MLModel:

    func decodeResults(results: [VNCoreMLFeatureValueObservation]) -> [BoundingBox] {
        // results[0] holds the class confidences, results[1] the box coordinates.
        let raw_confidence: MLMultiArray = results[0].featureValue.multiArrayValue!
        let raw_coordinates: MLMultiArray = results[1].featureValue.multiArrayValue!
        print(raw_confidence.shape, raw_coordinates.shape)
        var boxes = [BoundingBox]()
        let startDecoding = Date()
        for anchor in 0..<raw_confidence.shape[0].int32Value {
            // Find the highest-scoring class for this anchor.
            var maxInd: Int = 0
            var maxConf: Float = 0
            for score in 0..<raw_confidence.shape[1].int32Value {
                let key = [anchor, score] as [NSNumber]
                let prob = raw_confidence[key].floatValue
                if prob > maxConf {
                    maxInd = Int(score)
                    maxConf = prob
                }
            }
            // Read the box corners for this anchor and build a CGRect.
            let y0 = raw_coordinates[[anchor, 0] as [NSNumber]].doubleValue
            let x0 = raw_coordinates[[anchor, 1] as [NSNumber]].doubleValue
            let y1 = raw_coordinates[[anchor, 2] as [NSNumber]].doubleValue
            let x1 = raw_coordinates[[anchor, 3] as [NSNumber]].doubleValue
            let width = x1 - x0
            let height = y1 - y0
            let x = x0 + width / 2
            let y = y0 + height / 2
            let rect = CGRect(x: x, y: y, width: width, height: height)
            let box = BoundingBox(classIndex: maxInd, score: maxConf, rect: rect)
            boxes.append(box)
        }
        let finishDecoding = Date()
        // Class-wise non-maximum suppression from CoreMLHelpers.
        let keepIndices = nonMaxSuppressionMultiClass(numClasses: raw_confidence.shape[1].intValue, boundingBoxes: boxes, scoreThreshold: 0.5, iouThreshold: 0.6, maxPerClass: 5, maxTotal: 10)
        let finishNMS = Date()
        var keepBoxes = [BoundingBox]()
        
        for index in keepIndices {
            keepBoxes.append(boxes[index])
        }
        print("Time Decoding", finishDecoding.timeIntervalSince(startDecoding))
        print("Time Performing NMS", finishNMS.timeIntervalSince(finishDecoding))
        return keepBoxes
    }

Then when you receive the results from Vision, you call the function like this:

if let rawResults = vnRequest.results as? [VNCoreMLFeatureValueObservation] {
   let boxes = self.decodeResults(results: rawResults)
   print(boxes)
}

This solution is slow because of the way I move the data around and formulate my list of BoundingBox types. It would be much more efficient to process the MLMultiArray data using underlying pointers, and maybe use Accelerate to find the maximum score and best class for each anchor box.
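
For what it's worth, here is a rough sketch of that idea. It assumes the confidence array is Float32 with shape [numAnchors, numClasses] and a contiguous last dimension, and bestClassPerAnchor is just a hypothetical helper name; check raw_confidence.dataType and .strides before relying on something like this:

import Accelerate
import CoreML

func bestClassPerAnchor(_ rawConfidence: MLMultiArray) -> [(classIndex: Int, score: Float)] {
    let numAnchors = rawConfidence.shape[0].intValue
    let numClasses = rawConfidence.shape[1].intValue
    let anchorStride = rawConfidence.strides[0].intValue

    // Raw Float32 storage of the multi-array (assumed layout: [anchors, classes]).
    let ptr = rawConfidence.dataPointer.assumingMemoryBound(to: Float.self)
    var results: [(classIndex: Int, score: Float)] = []
    results.reserveCapacity(numAnchors)

    for anchor in 0..<numAnchors {
        var maxScore: Float = 0
        var maxIndex: vDSP_Length = 0
        // One vectorised pass over this anchor's class scores instead of a Swift loop.
        vDSP_maxvi(ptr + anchor * anchorStride, 1, &maxScore, &maxIndex, vDSP_Length(numClasses))
        results.append((classIndex: Int(maxIndex), score: maxScore))
    }
    return results
}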

Answer by Jon Vogel

The iOS 14.7 beta available on the developer portal seems to have fixed this issue.