CoreMLTools-converted TensorFlow model gives completely wrong predictions in Swift


I have a Keras model (CNN and LSTM layers, among others, with a single numerical output) that I've trained offline and that has a low loss on my test data. However, after exporting it to run in my iOS app via Core ML, the model becomes useless and the results might as well be random.

One suggestion I've seen in the few other posts about similar issues was to make sure the model is exported with float32 precision, which I tried to no avail. I also saw a suggestion to set CPU-only compute units in Core ML, which again made no difference.

I also know for a fact that it is not due to the data being pre-processed differently in the two code paths: in addition to testing on real data, I constructed an MLMultiArray in Swift with the exact x_test input values I use in Python and fed it directly to the Core ML version of the model, with wildly different results from the Python prediction.

I am at my wit's end debugging this. Is there anything I could possibly be missing? Here is a simplified set of the relevant lines from my model testing/exporting code.

Python:

Exporting the model:

    import coremltools
    import numpy as np

    coreml_model = coremltools.convert(
        model,
        source="tensorflow",
        inputs=[
            coremltools.TensorType(
                shape=(
                    1,
                    x_train.shape[1],
                    x_train.shape[2],
                ),
                dtype=np.float32,
                name="my_data_input",
            )
        ],
        # Note: I've tested both with this configured and with these lines commented out
        compute_precision=coremltools.precision.FLOAT32,
        convert_to="mlprogram",
    )
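For reference, the conversion itself can be sanity-checked in Python on macOS, where coremltools supports running predictions on the converted model. This is a minimal sketch (not my full code) comparing the Keras and Core ML outputs on the same sample, assuming the `model`, `coreml_model`, and `x_test` from the snippets here:

    # Sanity check (macOS only): run the same sample through Keras and
    # through the converted Core ML model, then compare the outputs.
    sample = x_test[0:1].astype(np.float32)  # shape (1, time steps, sensors)
    keras_out = model.predict(sample)
    coreml_out = coreml_model.predict({"my_data_input": sample})
    print("keras:", keras_out)
    print("coreml:", coreml_out)

If the two outputs already diverge here, the problem is in the conversion rather than in the Swift input handling.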

For the hardcoded data to test in the iOS app, I printed out x_test in the order I would need to initialize an MLMultiArray with the data:

    oneDArr = []
    for timestep in range(data.x_test.shape[1]):
        for sensor_num in range(data.x_test.shape[2]):
            oneDArr.append(str(data.x_test[my_chosen_index][timestep][sensor_num]))

    print(",".join(oneDArr))

Swift:


    // This is the hardcoded data from real x_test values I tested in Python
    let myArrs: [[Double]] = [[-1.680063009262085, 0.7335805893, -4.93632972240448, -2.826584279537201, ..., -1.5832659602165222, 0.3495193272829056], ... [-4.035530686378479, 1.160736083984375, ..., 12.94029951095581]]
    for subArray in myArrs {
        let mlArray = try! MLMultiArray(shape: [1 as NSNumber, TIME_STEPS as NSNumber, SENSOR_DIMS as NSNumber], dataType: MLMultiArrayDataType.float32)

        for (idx, val) in subArray.enumerated() {
            mlArray[idx] = val as NSNumber
        }

        // NOTE: I've tried it with and without setting this CPU-only configuration
        let config = MLModelConfiguration()
        config.computeUnits = .cpuOnly
        let model = try! MyModel(configuration: config)
        let prediction: MyModelOutput = try! model.prediction(
            input: MyModelInput(my_data_input: mlArray)
        )
        let outputRaw = prediction.Identity[0].floatValue
        print("Found \(outputRaw)")
    }
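For completeness, I also dumped the compiled model's interface on the Swift side to confirm that the input name, shape, and data type match what I'm feeding in. A minimal sketch, using the `model` property that the Xcode-generated `MyModel` class exposes:

    // Sketch: print the compiled model's expected inputs and outputs to
    // verify the input name, shape, and dtype. The generated MyModel class
    // exposes its underlying MLModel via the `model` property.
    let desc = try! MyModel(configuration: MLModelConfiguration()).model.modelDescription
    for (name, feature) in desc.inputDescriptionsByName {
        print("input \(name): \(feature)")
    }
    for (name, feature) in desc.outputDescriptionsByName {
        print("output \(name): \(feature)")
    }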