Firebase Local Model throws "Didn't find op for builtin opcode 'CONV_2D' version '2'"


I built and trained a model that looks like this:

    model = keras.Sequential([
        keras.layers.Conv2D(128, (3, 3), activation='relu', input_shape=(150, 150, 3)),
        keras.layers.MaxPooling2D(2, 2),
        keras.layers.Dropout(0.5),

        keras.layers.Conv2D(256, (3, 3), activation='relu'),
        keras.layers.MaxPooling2D(2, 2),

        keras.layers.Conv2D(512, (3, 3), activation='relu'),
        keras.layers.MaxPooling2D(2, 2),

        keras.layers.Flatten(),
        keras.layers.Dropout(0.3),

        keras.layers.Dense(280, activation='relu'),
        keras.layers.Dense(4, activation='softmax')
    ])
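For reference, the tensor shapes through this stack can be worked out by hand: a 3x3 `Conv2D` with Keras' default `'valid'` padding shrinks each spatial dimension by 2, and `MaxPooling2D(2, 2)` halves it. A quick sketch in plain Python:

```python
def conv3x3(size):
    # 3x3 convolution, 'valid' padding (the Keras default): shrinks each spatial dim by 2
    return size - 2

def pool2x2(size):
    # 2x2 max pooling with stride 2: halves each spatial dim (floor division)
    return size // 2

size = 150
for _ in range(3):          # three Conv2D + MaxPooling2D pairs
    size = pool2x2(conv3x3(size))

print(size)                 # 17 -> the last feature maps are 17x17
print(size * size * 512)    # 147968 units feeding into Flatten
```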

I converted it to .tflite with the following code:

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.post_training_quantize = True
converter.allow_custom_ops = True

tflite_model = converter.convert()

open("model.tflite", "wb").write(tflite_model)

Then I want to use it with a local Firebase model:

val bitmap = Bitmap.createScaledBitmap(image, 150, 150, true)

val batchNum = 0
val input = Array(1) { Array(150) { Array(150) { FloatArray(3) } } }
for (x in 0..149) {
    for (y in 0..149) {
        val pixel = bitmap.getPixel(x, y)
        input[batchNum][x][y][0] = (Color.red(pixel) - 127) / 255.0f
        input[batchNum][x][y][1] = (Color.green(pixel) - 127) / 255.0f
        input[batchNum][x][y][2] = (Color.blue(pixel) - 127) / 255.0f
    }
}

val localModel = FirebaseCustomLocalModel.Builder()
    .setAssetFilePath("model.tflite")
    .build()
val interpreter = FirebaseModelInterpreter.getInstance(
    FirebaseModelInterpreterOptions.Builder(localModel).build()
)
val inputOutputOptions = FirebaseModelInputOutputOptions.Builder()
    .setInputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 150, 150, 3))
    .setOutputFormat(0, FirebaseModelDataType.FLOAT32, intArrayOf(1, 4))
    .build()

val inputs = FirebaseModelInputs.Builder()
    .add(input)
    .build()
interpreter?.run(inputs, inputOutputOptions)
    ?.addOnSuccessListener { result ->
        val output = result.getOutput<Array<FloatArray>>(0)
        val probabilities = output[0]
        // ...
    }

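As a side note, the `(channel - 127) / 255.0f` normalization in the loop maps each channel into roughly [-0.5, 0.5]. If the model was trained with plain `1/255` rescaling into [0, 1], that mismatch alone can skew predictions, so the Android preprocessing should mirror whatever was used in training. The two mappings, sketched in Python for illustration:

```python
def android_norm(c):
    # the normalization used in the Kotlin snippet above: maps 0..255 to ~[-0.5, 0.5]
    return (c - 127) / 255.0

def rescale_norm(c):
    # the common Keras-style rescaling (e.g. ImageDataGenerator(rescale=1/255)): maps 0..255 to [0, 1]
    return c / 255.0

print(android_norm(0), android_norm(255))    # ~ -0.498 .. ~0.502
print(rescale_norm(0), rescale_norm(255))    # 0.0 .. 1.0
```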
But it throws this error:


Internal error: Cannot create interpreter: Didn't find op for builtin opcode 'CONV_2D' version '2'

Does somebody know what I'm doing wrong? I'm using tensorflow-gpu and tensorflow-estimator 2.3.0.


There are 3 answers

Mare Seestern On BEST ANSWER

I fixed it with the following changes:

I saved my model like this (tf-gpu 2.2.0), or from my callback (as a .pb):

tf.saved_model.save(trainedModel,path)

In build.gradle I added:

implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'

I updated my tensorflow version (only for the converter) to tf-nightly (2.5.0) by running:

pip3 install tf-nightly

And used this code (Thanks to Alex K.):

new_model = tf.keras.models.load_model(filepath=path)
converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
converter.optimizations = []
converter.allow_custom_ops = False
converter.experimental_new_converter = True
tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)
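One cheap sanity check before copying the converted file into the Android assets folder is to verify that it at least carries the TFLite FlatBuffer file identifier, the four bytes "TFL3" at offset 4. A minimal sketch (the check only confirms the container format, not that every op is supported):

```python
def looks_like_tflite(data: bytes) -> bool:
    # FlatBuffer binaries carry a 4-byte file identifier at offset 4;
    # for TensorFlow Lite models that identifier is b"TFL3".
    return len(data) >= 8 and data[4:8] == b"TFL3"

# usage on a real file: looks_like_tflite(open("model.tflite", "rb").read())
print(looks_like_tflite(b"\x1c\x00\x00\x00TFL3" + b"\x00" * 8))   # True
print(looks_like_tflite(b"not a tflite file"))                    # False
```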

That's it.

Ahmad Raza On

Check the version of Tensorflow you used while training, and use the same version in your Android build.gradle (app). I used Tensorflow 2.4.0 (the latest at the time) while training, so I put implementation 'org.tensorflow:tensorflow-lite:2.4.0' in my Android build.gradle (app).

Alex K. On

TFLite operators have different versions. It looks like you converted the model with a newer version of Conv2D, and your current interpreter does not support it.

I have hit that issue on Android when I tried converter.optimizations = [tf.lite.Optimize.DEFAULT]. So I would suggest you drop the optimization and custom ops at the beginning:

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = []
converter.post_training_quantize = True
converter.allow_custom_ops = False

tflite_model = converter.convert()

open("model.tflite", "wb").write(tflite_model)

Edit: Also make sure you are using the same version of Tensorflow while converting the model and in your application. The version mismatch means an older interpreter that does not support the newer op version.

Edit 2 (maybe more useful):
Try to convert your model with an older version of Tensorflow, let's say 2.1, with:

import tensorflow as tf

converter = tf.compat.v1.lite.TFLiteConverter.from_keras_model_file("model.h5")
converter.optimizations = []
converter.allow_custom_ops = False
converter.experimental_new_converter = True

tflite_model = converter.convert()
open("model.tflite", "wb").write(tflite_model)