Any example workflow from TensorFlow to OpenMV?


I have trained a multi-class image classification model based on MobileNetV2 (only a Dense layer was added on top), carried out full integer quantization (INT8), exported a model.tflite file, and call it on OpenMV with tf.classify().
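
For reference, the model is essentially a frozen MobileNetV2 backbone with a single Dense classification layer on top. A rough sketch of how it was built (96x96 RGB inputs; the class count below is a placeholder):

import tensorflow as tf

NUM_CLASSES = 4  # placeholder; use the real number of classes

# Frozen MobileNetV2 feature extractor with one trainable Dense head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# ... model.fit(...) on the training data, then export a SavedModel so that
# TFLiteConverter.from_saved_model() below can pick it up:
tf.saved_model.save(model, "saved_model/my_model")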

Here is my code to quantize it:

import tensorflow as tf
import numpy as np
import pathlib


def representative_dataset():
    for _ in range(100):
        data = np.random.rand(1, 96, 96, 3)  # random tensor for test
        yield [data.astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/my_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_quant_model = converter.convert()

tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)

tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
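
Note: with only Optimize.DEFAULT plus a representative dataset, the converter may still keep float fallback ops and float input/output tensors. The full-integer (INT8) path normally also pins the supported op set and the I/O types. A minimal sketch of those extra converter settings, assuming the same saved model as above:

import numpy as np
import tensorflow as tf


def representative_dataset():
    # Placeholder calibration data; real training images give better scales.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/my_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict the converter to the INT8 kernel set and make the model's
# input/output tensors int8 as well, so nothing silently stays in float.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_quant_model = converter.convert()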

The model's accuracy was quite good during training and testing. However, when run on OpenMV, it outputs the same label for every object (although the probabilities differ slightly).
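
A quick way to narrow this down is to run the quantized .tflite on the desktop with the TFLite interpreter and check whether the predictions already collapse to one class there. A minimal sketch, using the file written above:

import numpy as np
import tensorflow as tf

# Load the quantized model with the desktop TFLite interpreter.
interpreter = tf.lite.Interpreter(
    model_path="/tmp/mnist_tflite_models/mnist_model_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Replace this with a real, preprocessed test image; the dtype must match the
# model's input (float32 here, or int8 if integer I/O was forced).
image = np.random.rand(1, 96, 96, 3).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))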

I looked up some material; one source mentioned that tf.classify() has offset and scale parameters, related to whether RGB values were compressed to [-1, 1] or [0, 1] during training, but these parameters do not appear in the official API documentation (see the preprocessing sketch after the snippet below).

for obj in tf.classify(self.net, img1, min_scale=1.0, scale_mul=0.5, x_overlap=0.0, y_overlap=0.0):
    print("**********\nTop 1 Detections at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
    sorted_list = sorted(zip(self.labels, obj.output()), key=lambda x: x[1], reverse=True)
    for i in range(1):
        print("%s = %f" % (sorted_list[i][0], sorted_list[i][1]))
        return sorted_list[i][0]
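
In case the problem is a preprocessing mismatch, one thing to check is that the representative dataset uses the same pixel scaling as training. A sketch assuming Keras' MobileNetV2 preprocessing (which maps RGB from [0, 255] to [-1, 1]):

import numpy as np
import tensorflow as tf


def representative_dataset():
    # Placeholder images; ideally iterate over real training images here so
    # the calibration range matches what the deployed model actually sees.
    for _ in range(100):
        raw = np.random.randint(0, 256, size=(1, 96, 96, 3)).astype(np.float32)
        # Same scaling as training: preprocess_input maps [0, 255] -> [-1, 1].
        yield [tf.keras.applications.mobilenet_v2.preprocess_input(raw)]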

So, are there any examples of a complete workflow from training a model in TensorFlow to deploying it on OpenMV?

1 Answer

Answered by BehicMV:

Well, you asked this question about a year ago, but I'm here to help anyway!

If you want to train an image classification model for the OpenMV H7 Plus, you can check out this repo.

Also, you can run inference with it using this code:

import sensor, image, time, os, tf, uos, gc

sensor.reset()                         # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)    # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)      # Set frame size to QVGA (320x240)
sensor.set_windowing((240, 240))       # Set 240x240 window.
sensor.skip_frames(time=2000)          # Let the camera adjust.

net = None
labels = None

try:
    # load the model, alloc the model file on the heap if we have at least 64K free after loading
    net = tf.load("trained.tflite", load_to_fb=uos.stat('trained.tflite')[6] > (gc.mem_free() - (64*1024)))
except Exception as e:
    print(e)
    raise Exception('Failed to load "trained.tflite", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

try:
    labels = [line.rstrip('\n') for line in open("labels.txt")]
except Exception as e:
    raise Exception('Failed to load "labels.txt", did you copy the .tflite and labels.txt file onto the mass-storage device? (' + str(e) + ')')

clock = time.clock()
while(True):
    clock.tick()

    img = sensor.snapshot()

    # default settings just do one detection... change them to search the image...
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8, x_overlap=0.5, y_overlap=0.5):
        print("**********\nPredictions at [x=%d,y=%d,w=%d,h=%d]" % obj.rect())
        img.draw_rectangle(obj.rect())
        # This combines the labels and confidence values into a list of tuples
        predictions_list = list(zip(labels, obj.output()))

        for i in range(len(predictions_list)):
            print("%s = %f" % (predictions_list[i][0], predictions_list[i][1]))

    print(clock.fps(), "fps")
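
If you only need the top prediction (as in your own snippet), you can take the maximum of that list, for example:

# Inside the classify loop above, after building predictions_list:
best_label, best_score = max(predictions_list, key=lambda p: p[1])
print("Top-1: %s = %f" % (best_label, best_score))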

I hope it helps!