Why are the Keras and Orange VGG16 outputs different and how to fix it?


I'm optimizing some work I originally did in Orange by porting it to Python code, but I'm having problems with the image embedders. I'm trying to recreate the pipeline with TensorFlow/Keras, but the 4096 outputs of the activation of the penultimate fully connected layer of VGG16 differ between Orange and Keras.

In the Orange documentation it is written that, for Python, the equivalent model is:

from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.models import Model

model_vgg = VGG16(include_top=True, weights='imagenet', pooling=None, input_shape=(224, 224, 3))

model_vgg16 = Model(inputs=model_vgg.input, outputs=model_vgg.layers[-2].output)

(Reference: Keras VGG16 documentation.)
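
For completeness, this is a minimal sketch of how I then read out the embeddings (img_batch is a placeholder name for my own preprocessed array, not something from the Orange code):

# Sketch: img_batch is a hypothetical, already-preprocessed (N, 224, 224, 3) float array
embeddings = model_vgg16.predict(img_batch)
print(embeddings.shape)  # (N, 4096): activations of the penultimate FC layer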

To resize the images to 224x224 pixels I use the same package and the same code as the load_image_or_none function -> https://github.com/biolab/orange3-imageanalytics/blob/master/orangecontrib/imageanalytics/utils/embedder_utils.py

I also exported the resized 224x224 image that Orange feeds to VGG16 using the Save Image widget, and verified that my resized images and Orange's images are identical.
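
For context, that helper boils down to roughly the following (a sketch modelled on the linked file, not the verbatim Orange code; I'm assuming PIL with LANCZOS/ANTIALIAS resampling):

import numpy as np
from PIL import Image

def load_resized(path, target_size=(224, 224)):
  # Rough sketch of the resize step in Orange's load_image_or_none (see link above)
  image = Image.open(path)
  if image.mode != "RGB":
    image = image.convert("RGB")  # grayscale becomes 3 identical channels
  image = image.resize(target_size, Image.LANCZOS)
  return np.array(image)  # (224, 224, 3) RGB uint8 array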

Perhaps I'm making a mistake during preprocessing, since the Orange documentation says they use the model's original weights.

To preprocess the images I tried Keras's preprocess_input for VGG16, and also the following manual function:

import numpy as np

def process_vgg16(imgs):
  # imgs: (N, 224, 224, 3) RGB batch
  output = np.zeros(imgs.shape, dtype=np.float32)

  # Per-channel ImageNet means in BGR order, as in the original (Caffe) VGG16 preprocessing
  VGG_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)
  for i in range(0, imgs.shape[0]):
    b = np.array(imgs[i,:,:,2], dtype=np.float32)
    g = np.array(imgs[i,:,:,1], dtype=np.float32)
    r = np.array(imgs[i,:,:,0], dtype=np.float32)

    # RGB -> BGR, then subtract the channel means
    output[i,:,:,0] = b - VGG_MEAN[0]
    output[i,:,:,1] = g - VGG_MEAN[1]
    output[i,:,:,2] = r - VGG_MEAN[2]

  #output = output/255
  return output  # return after the whole batch, not inside the loop
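
As a sanity check, I compare the manual function against Keras's preprocess_input (for VGG16 it uses 'caffe' mode: RGB-to-BGR conversion plus mean subtraction, without scaling to [0, 1]); dummy_batch below is just illustrative random data:

import numpy as np
from tensorflow.keras.applications.vgg16 import preprocess_input

# Illustrative random RGB batch, just to compare the two preprocessing paths
dummy_batch = np.random.randint(0, 256, size=(2, 224, 224, 3)).astype(np.float32)

manual = process_vgg16(dummy_batch)
keras_pre = preprocess_input(dummy_batch.copy())  # copy: preprocess_input can modify its input in place

print(np.abs(manual - keras_pre).max())  # ~0 if both preprocessings agree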

Note: the images are grayscale, so all three channels are identical.
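
Since all channels are identical, the RGB-to-BGR swap is a no-op for these images, so any difference should come from the mean subtraction or scaling. For reference, this is how I expand a grayscale array into the 3-channel input VGG16 expects (gray_image stands in for one of my arrays):

import numpy as np

# gray_image is a hypothetical (224, 224) grayscale array
gray = np.asarray(gray_image, dtype=np.float32)
rgb = np.stack([gray, gray, gray], axis=-1)  # (224, 224, 3), all channels identical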

Results:

[Image: first three outputs of an image in Orange (VGG16)]

[Image: first three outputs of an image in Keras (VGG16)]

Would anyone know the reason?
