I created a model for object detection in Python with TensorFlow and then converted it to TensorFlow.js so I could use it in the browser. The model works perfectly in Python. However, when I give an input image in the browser, there is a major difference between the prediction results in Python and in TensorFlow.js. I am sharing the prediction results for both.

Results for Python:

[screenshot: Python prediction results]

And results for JS:

[screenshot: TensorFlow.js prediction results]

I gave the same image as input to both Python and JS, but there is still a big difference, especially in the scores: Python predicts with 99% confidence while JS predicts with just 16%.

What could be the reason for this? Have I made some mistake while converting to TensorFlow.js, or is there some other cause?

I went through this and other resources on the internet but couldn't find any specific reason for the difference in results.

Any help would be appreciated. Thanks a lot.

Update 1:

Here is my Python code:

import random

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from PIL import Image
from object_detection.utils import visualization_utils as viz_utils

# TEST_IMAGE_PATHS, category_index and detect_fn (see Update 2)
# are defined elsewhere in the notebook.
def load_image_into_numpy_array(image_path):
    return np.array(Image.open(image_path))

image_path = random.choice(TEST_IMAGE_PATHS)
image_np = load_image_into_numpy_array(image_path)
input_tensor = tf.convert_to_tensor(
    np.expand_dims(image_np, 0), dtype=tf.float32)
detections, predictions_dict, shapes = detect_fn(input_tensor)

label_id_offset = 1
image_np_with_detections = image_np.copy()

viz_utils.visualize_boxes_and_labels_on_image_array(
    image_np_with_detections,
    detections['detection_boxes'][0].numpy(),
    (detections['detection_classes'][0].numpy() + label_id_offset).astype(int),
    detections['detection_scores'][0].numpy(),
    category_index,
    use_normalized_coordinates=True,
    max_boxes_to_draw=200,
    # Set min_score_thresh to control which boxes are displayed
    min_score_thresh=.5,
    agnostic_mode=False
)

plt.figure(figsize=(12,25))
plt.imshow(image_np_with_detections)
plt.show()

And here is the model call in JS:

async function run() {

    // Load the model
    const model = await tf.loadGraphModel(MODEL_URL);
    console.log("SUCCESS");

    let img = document.getElementById("myimg");

    console.log("Predicting....");

    // Image preprocessing (note: fromPixels yields an int32 tensor)
    let example = tf.browser.fromPixels(img);
    example = example.expandDims(0);

    // Model call
    const output = await model.executeAsync(example);
    console.log(output);

    const boxes = output[4].arraySync();
    const scores = output[5].arraySync();
    const classes = output[1].arraySync();

    console.log(boxes);
    console.log(scores);
    console.log(classes);

}
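
Aside: the position of each tensor in output is not guaranteed to be stable across conversions, so hard-coded indices like output[4] can silently point at the wrong tensor. executeAsync also accepts output node names, so a safer pattern, assuming the usual TF Object Detection API names survived conversion (verify them with console.log(model.outputs)), is:

// Still inside run(): request the outputs by name instead of by index.
// These node names are an assumption; converted TF2 graphs sometimes
// rename them (e.g. to Identity, Identity_1, ...), so check model.outputs.
const [boxes, scores, classes] = await model.executeAsync(example, [
    'detection_boxes',
    'detection_scores',
    'detection_classes',
]);
console.log(boxes.arraySync(), scores.arraySync(), classes.arraySync());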

Update 2:

Here is how I restore the checkpoint and build detect_fn:

import os
import pathlib

import tensorflow as tf
from object_detection.utils import config_util
from object_detection.builders import model_builder

filenames = list(pathlib.Path('/content/train/').glob('*.index'))

filenames.sort()
print(filenames)

# Recover our saved model (pipeline_file is defined earlier in the notebook)
pipeline_config = pipeline_file
# Generally you want to put the last checkpoint from training in here
model_dir = str(filenames[-1]).replace('.index', '')
configs = config_util.get_configs_from_pipeline_file(pipeline_config)
model_config = configs['model']
detection_model = model_builder.build(
      model_config=model_config, is_training=False)

# Restore the checkpoint
ckpt = tf.compat.v2.train.Checkpoint(model=detection_model)
ckpt.restore(model_dir)


def get_model_detection_function(model):
  """Get a tf.function for detection."""

  @tf.function
  def detect_fn(image):
    """Detect objects in image."""

    image, shapes = model.preprocess(image)
    prediction_dict = model.predict(image, shapes)
    detections = model.postprocess(prediction_dict, shapes)

    return detections, prediction_dict, tf.reshape(shapes, [-1])

  return detect_fn

detect_fn = get_model_detection_function(detection_model)
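
Since detect_fn above bundles preprocess, predict, and postprocess, a quick way to see what the converted graph actually expects and produces is to inspect the loaded GraphModel's signature in JS (inputs and outputs are documented properties of tf.GraphModel):

// Sketch: inspect the converted model's signature after loading it.
const model = await tf.loadGraphModel(MODEL_URL);
console.log(model.inputs);   // input node names, dtypes and shapes
console.log(model.outputs);  // output node names, dtypes and shapes

If the input is reported as a fixed-size float32 tensor, the graph most likely expects already-preprocessed images.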

1 Answer

Answer by Lescurel:

You're missing preprocessing. When exporting your model, you are exporting the default serve tag, so your call to model.executeAsync in JS is equivalent to model.predict in Python. However, in your Python code, you also preprocess the inputs with a call to model.preprocess.

You should replicate the Python preprocessing in JS.
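
A minimal sketch of what that could look like, assuming an SSD-style model whose preprocess step resizes to a fixed input size and scales pixels to [-1, 1] (the exact resolution and normalization depend on the image_resizer and feature extractor in your pipeline config, so treat the numbers below as placeholders):

function preprocess(img, height, width) {
    return tf.tidy(() => {
        // fromPixels yields an int32 tensor in [0, 255]; cast before scaling
        let t = tf.browser.fromPixels(img).toFloat();
        // Resize to the fixed resolution the network was trained on
        t = tf.image.resizeBilinear(t, [height, width]);
        // Scale to [-1, 1], mirroring the (2 / 255) * x - 1 normalization
        // used by many TF Object Detection feature extractors
        t = t.div(127.5).sub(1);
        return t.expandDims(0);
    });
}

// Inside run(), replace the raw fromPixels call with, e.g.:
const example = preprocess(img, 320, 320); // 320x320 is an assumption
const output = await model.executeAsync(example);

If the exported graph already bakes the preprocessing in (some export paths do), resizing and rescaling a second time would hurt accuracy instead, so verify against the model signature first.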