using TensorFlow Model in Spring for inference/predictions - Kotlin


I'm trying to find a way to use a TensorFlow model in Spring Boot. I loaded the model successfully and created the needed tensor, but I can't call the model to get a result because of this error:

Caused by: org.tensorflow.exceptions.TFInvalidArgumentException: Expects arg[0] to be float but uint8 is provided

I checked the model signature and it was like this:

Signature for "serving_default":
    Method: "tensorflow/serving/predict"
    Inputs:
        "input_1": dtype=DT_FLOAT, shape=(-1, 299, 299, 3)
    Outputs:
        "dense_3": dtype=DT_FLOAT, shape=(-1, 41)

Signature for "__saved_model_init_op":
    Outputs:
        "__saved_model_init_op": dtype=DT_INVALID, shape=()

My tensor's details: a DT_UINT8 tensor with shape [299, 299, 3].
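As the accepted answer below confirms, the dtype mismatch is fixed by casting the decoded uint8 image to DT_FLOAT before feeding it to the model. A minimal, self-contained sketch of the cast with the TF Java `Ops` API (the integer constant is a stand-in for the real decoded image):

```kotlin
import org.tensorflow.Graph
import org.tensorflow.Session
import org.tensorflow.op.Ops
import org.tensorflow.types.TFloat32

// Cast an integer tensor to DT_FLOAT, mirroring the fix needed for the
// uint8 tensor that decodeJpeg produces (stand-in values, not a real image).
fun castToFloatDemo(): Float {
    Graph().use { graph ->
        val tf = Ops.create(graph)
        val image = tf.constant(intArrayOf(255, 128, 0))         // stand-in for the uint8 image
        val floats = tf.dtypes.cast(image, TFloat32::class.java) // now DT_FLOAT, matching "input_1"
        Session(graph).use { session ->
            val out = session.runner().fetch(floats).run()[0] as TFloat32
            return out.getFloat(0)
        }
    }
}

fun main() {
    println(castToFloatDemo())   // 255.0
}
```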

When I changed my tensor's data type to float like this:

val imageShape = TFloat32.tensorOf(runner.fetch(decodeImage).run()[0].shape())
val reshape = tf.reshape(
    decodeImage,
    tf.array(
        -1.0f,
        imageShape[0].getFloat(),
        imageShape[1].getFloat(),
        imageShape[2].getFloat()
    )
)

I got this error:

org.tensorflow.exceptions.TFInvalidArgumentException: Value for attr 'Tshape' of float is not in the list of allowed values: int32, int64

How can I fix this problem?
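Regarding the second error: `tf.reshape` requires its shape operand to be an int32 or int64 tensor, so the shape values must stay integral; the image's data type is changed with a cast, not through reshape. A hedged sketch of a reshape with an int64 shape (stand-in data, not the real image):

```kotlin
import org.tensorflow.Graph
import org.tensorflow.Session
import org.tensorflow.op.Ops
import org.tensorflow.types.TFloat32

// tf.reshape's shape operand must be DT_INT32 or DT_INT64; passing floats
// via tf.array(-1.0f, ...) is what triggers the 'Tshape' error above.
fun reshapeWithIntShape(): Long {
    Graph().use { graph ->
        val tf = Ops.create(graph)
        val data = tf.constant(FloatArray(12) { it.toFloat() })
        // Long literals produce a DT_INT64 shape tensor, which reshape accepts:
        val reshaped = tf.reshape(data, tf.array(-1L, 3L))
        Session(graph).use { session ->
            val out = session.runner().fetch(reshaped).run()[0] as TFloat32
            return out.shape().size(0)   // number of rows after reshaping to (-1, 3)
        }
    }
}

fun main() {
    println(reshapeWithIntShape())   // 4
}
```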

If someone is curious how I loaded the model, created the tensor, and called it, here is the code:

Loading the model in TFServices:

fun model(): SavedModelBundle {
    return SavedModelBundle
        .loader("/home/***/src/main/resources/pd/")
        .withRunOptions(RunOptions.getDefaultInstance())
        .load()
}
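Loading a SavedModel is expensive, so in Spring Boot it is natural to register it as a singleton bean that is loaded once and closed on context shutdown. A sketch under that assumption (class and path are illustrative); `destroyMethod = "close"` works because SavedModelBundle is AutoCloseable:

```kotlin
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.tensorflow.SavedModelBundle

@Configuration
class TFConfig {
    // Loaded once at startup; the native graph/session is released
    // automatically when the Spring context shuts down.
    @Bean(destroyMethod = "close")
    fun model(): SavedModelBundle =
        SavedModelBundle.load("/home/***/src/main/resources/pd/", "serve")
}
```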

Building the tensor and calling the model:

val graph = Graph()
val session = Session(graph)
val tf = Ops.create(graph)
val fileName = tf.constant("/home/***/src/main/resources/keyframe_1294.jpg")
val readFile = tf.io.readFile(fileName)
val runner = session.runner()
val decodingOptions = DecodeJpeg.channels(3)
val decodeImage = tf.image.decodeJpeg(readFile.contents(), decodingOptions)
val imageShape = runner.fetch(decodeImage).run()[0].shape()
val reshape = tf.reshape(
    decodeImage,
    tf.array(
        -1,
        imageShape.asArray()[0],
        imageShape.asArray()[1],
        imageShape.asArray()[2]
    )
)
val tensor = runner.fetch(reshape).run()[0]
val inputMap = mutableMapOf("input_tensor" to tensor)
println(tensor.shape())
println(tensor.dataType())
println(tensor.asRawTensor())
val result = tfService.model().function("serving_default").call(inputMap)

Update:

I changed the whole code and switched to the Kotlin TensorFlow (KotlinDL) dependencies:

implementation("org.jetbrains.kotlinx:kotlin-deeplearning-api:0.5.2")
implementation("org.jetbrains.kotlinx:kotlin-deeplearning-tensorflow:0.5.2")

I loaded the model:

fun myModel(): SavedModel {
    return SavedModel.load("/home/***/src/main/resources/pd/")
}

and called for the prediction:

val file = File("/home/***/src/main/resources/keyframe_1294.jpg")
val byteArray = ImageIO.read(file)
val floatArray = ImageConverter.toRawFloatArray(byteArray)
val myResult = tfService.myModel().predictSoftly(floatArray, "dense_3")
println(myResult)

but I got this error:

Caused by: org.tensorflow.TensorFlowException: Op type not registered 'DisableCopyOnRead' in binary running on My Computer. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
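This error usually indicates version skew: 'DisableCopyOnRead' is an op emitted by newer TensorFlow releases when saving, and the older native runtime bundled with kotlin-deeplearning-tensorflow 0.5.2 does not have it registered. Printing the runtime version makes the skew visible; the usual fix is to re-export the SavedModel with a TF version matching that runtime:

```kotlin
import org.tensorflow.TensorFlow

fun main() {
    // Version of the native TensorFlow runtime the JVM bindings load.
    // If the SavedModel was exported with a newer TF, ops such as
    // 'DisableCopyOnRead' will not be registered in this runtime.
    println(TensorFlow.version())
}
```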

There is 1 answer:

Saher Al-Sous (best answer):

I kept looking around and found out that there is a way to cast the content to Float32 and transform the image to fit the required shape and data type. Here is the solution:

val graph = Graph()
val session = Session(graph)
val tf = Ops.create(graph)
val fileName = tf.constant("/home/saher/kotlin/Spring/machinelearning/src/main/resources/20220821_203556.jpg")
val readFile = tf.io.readFile(fileName)
val runner = session.runner()
val decodingOptions = DecodeJpeg.channels(3)
val decodeImage = tf.image.decodeJpeg(readFile.contents(), decodingOptions)
//Cast the data type in the image.
val castedImage = tf.dtypes.cast(decodeImage, TFloat32::class.java)
// Add an extra dimension to make it 4-dimensional
val expandedImage = tf.expandDims(castedImage, tf.constant(0))
// Resize the image
val reshapedImage = tf.image.resizeBilinear(expandedImage, tf.constant(intArrayOf(299, 299)))
val tensor = runner.fetch(reshapedImage).run()[0]
val inputMap = mutableMapOf("input_1" to tensor)
val result = tfService.model().function("serving_default").call(inputMap)
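To turn the returned map into an actual prediction, read the "dense_3" output and take the argmax over its 41 scores. A sketch, assuming `call()` returns a map of output names to tensors as in the code above:

```kotlin
import org.tensorflow.Tensor
import org.tensorflow.types.TFloat32

// Index of the highest score in the model's (1, 41) "dense_3" output.
fun argmax(result: Map<String, Tensor>): Long {
    val scores = result["dense_3"] as TFloat32   // shape (1, 41), batch of 1
    var best = 0L
    for (i in 0 until scores.shape().size(1)) {
        if (scores.getFloat(0, i) > scores.getFloat(0, best)) best = i
    }
    return best
}
```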

Happy coding.