How to properly export a keras_cv detection model for inference?


Using keras_cv (TF 2.12 on Linux), I have built and trained a model, following the official keras_cv documentation and examples.

model = keras_cv.models.RetinaNet.from_preset(
    "resnet50_imagenet",
    num_classes=len(class_mapping),
    bounding_box_format="xywh")

model.compile(...)
model.fit(...)

This part is fine and the model works as expected.
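
For completeness, the elided compile/fit follows the official keras_cv object detection example; the exact losses, optimizer, and dataset names below are illustrative placeholders rather than my actual settings:

import tensorflow as tf

# Illustrative settings based on the keras_cv object-detection guide;
# train_ds / val_ds stand in for my tf.data pipelines.
model.compile(
    classification_loss="focal",
    box_loss="smoothl1",
    optimizer=tf.keras.optimizers.SGD(
        learning_rate=0.005, momentum=0.9, global_clipnorm=10.0
    ),
)
model.fit(train_ds, validation_data=val_ds, epochs=10)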

Problem

I am struggling to properly export the model for inference.

I tried model.export(), which fails. I also tried tf.keras.export.ExportArchive:

export_archive = tf.keras.export.ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
    name="serve",
    fn=model.call,
    input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)],
)
export_archive.write_out("path/to/location") ## throws exception

It fails as well.
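
I also suspect the input signature above is wrong for an image model; presumably it should be a rank-4 batched-image spec, something like the following (a guess on my part, not something confirmed by the keras_cv docs):

# Guess: a batched NHWC image signature instead of (None, 3).
export_archive.add_endpoint(
    name="serve",
    fn=model.call,
    input_signature=[
        tf.TensorSpec(shape=(None, None, None, 3), dtype=tf.float32)
    ],
)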

Every export method produces a bunch of errors like INVALID_ARGUMENT: You must feed a value for placeholder tensor 'inputs'.

The exception raised by export_archive.write_out() is:

TypeError: bad argument type for built-in operation

I guess this is due to some specifics of the keras_cv model implementation. What is the proper way to export such a model?
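
For reference, the kind of export I would expect to end up with is a plain SavedModel with an explicit serving function. A minimal sketch (the rank-4 shape, signature name, and output handling are my assumptions, untested):

import tensorflow as tf

# Sketch: wrap inference in a tf.function with an explicit batched-image
# signature and write a SavedModel; the path is a placeholder.
@tf.function(
    input_signature=[tf.TensorSpec(shape=(None, None, None, 3), dtype=tf.float32)]
)
def serve(images):
    # Calling the model directly returns raw (undecoded) box/class predictions;
    # NMS decoding would still have to happen somewhere.
    return model(images, training=False)

tf.saved_model.save(model, "path/to/location", signatures={"serving_default": serve})

But I am not sure how this interacts with keras_cv's RetinaNet (for example, whether the prediction decoding / NMS can be baked into the exported graph), which is really what I am asking.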
