Output of Unet multi-class segmentation


I have several questions concerning a multi-class image segmentation pipeline.


1 - Influence of image masks

I have created masks for each image following this template:
  1. (0,0,0) is the background,
  2. (1,1,1) is the external wound,
  3. (2,2,2) is the wound itself.

[Segmentation mask example] This example shows the distribution of the wound and external wound regions on the image, but does not follow the (0,0,0)/(1,1,1)/(2,2,2) template above.

If I train a model that respects this template, or a different template such as:

  1. (50,50,50) is the background,
  2. (100,100,100) is the external wound,
  3. (232,232,232) is the wound itself,

does that influence the model? And if I rescale my images with data = dict(rescale=1./255.), do the mask values still have an influence?
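
For context, a common preprocessing step (not shown in the question) maps mask pixel values to class indices before one-hot encoding; a minimal sketch with a hypothetical helper and value map:

import numpy as np

# Illustrative sketch, not part of the question's pipeline.
# This map follows the first template; the second template
# would instead be {50: 0, 100: 1, 232: 2}.
VALUE_TO_CLASS = {0: 0, 1: 1, 2: 2}

def mask_to_class_indices(mask):
    """Convert an (H, W) or (H, W, 3) mask of raw pixel values to (H, W) class indices."""
    if mask.ndim == 3:
        mask = mask[..., 0]  # the templates use equal R, G, B values
    indices = np.zeros(mask.shape, dtype=np.uint8)
    for value, cls in VALUE_TO_CLASS.items():
        indices[mask == value] = cls
    return indices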

2 - Output of segmentation

[Output of image segmentation] The output of my model after training is really bad. What is the reason?

This is the general pipeline of my model:

# Augmentation configuration used for the training images (and reused for the masks)
data_full_aug = dict(rescale=1./255.,
                     rotation_range=90,
                     vertical_flip=True,
                     horizontal_flip=True,
                     width_shift_range=0.3,
                     height_shift_range=0.3,
                     zoom_range=0.3,
                     shear_range=0.3,
                     fill_mode='reflect')

The step below is repeated for the masks (a sketch of the mask side is shown after the validation generator).

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen_full_data_aug = ImageDataGenerator(**data_full_aug)

train_generator_images = datagen_full_data_aug.flow_from_dataframe(
    dataframe=train,
    directory=folder_images,
    x_col='filename',
    class_mode=None,
    shuffle=True,
    seed=seed,
    batch_size=batch_size,
    target_size=(image_size, image_size))

# datagen_no_data_aug is a separate ImageDataGenerator that only rescales
# (no augmentation), used for the validation set.
validation_generator_images = datagen_no_data_aug.flow_from_dataframe(
    dataframe=validation,
    directory=folder_images,
    x_col='filename',
    class_mode=None,
    shuffle=True,
    seed=seed,
    batch_size=batch_size,
    target_size=(image_size, image_size))
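
For completeness, a sketch of what the mask-side training generator would look like (folder_masks and the generator names are assumptions; the question only states that the step is repeated):

# Assumed sketch, mirroring the image generators above.
# Using the same seed keeps the random augmentations of
# images and masks aligned across the two generators.
datagen_full_mask_aug = ImageDataGenerator(**data_full_aug)

train_generator_masks = datagen_full_mask_aug.flow_from_dataframe(
    dataframe=train,
    directory=folder_masks,  # hypothetical mask directory
    x_col='filename',
    class_mode=None,
    shuffle=True,
    seed=seed,               # same seed as the image generator
    batch_size=batch_size,
    target_size=(image_size, image_size))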

Images and masks are then zipped.

train_generator = zip(train_generator_images, train_generator_masks)
validation_generator = zip(validation_generator_images, validation_generator_masks)
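
Since both flow_from_dataframe iterators loop forever, zip yields an endless stream of (image_batch, mask_batch) tuples; an equivalent explicit wrapper, shown purely as a sketch with a hypothetical helper name, would be:

def combine_generators(image_gen, mask_gen):
    # Yield (image_batch, mask_batch) pairs indefinitely,
    # which is the format model.fit expects from a generator.
    while True:
        yield next(image_gen), next(mask_gen)

train_generator = combine_generators(train_generator_images, train_generator_masks)
validation_generator = combine_generators(validation_generator_images, validation_generator_masks)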

And then the model is fit.

# batch_size is omitted here: when fitting from a generator, the batch size
# is set by the generator itself, and passing batch_size raises an error in TF 2.x.
history = model.fit(train_generator,
                    steps_per_epoch=steps_per_epoch,
                    epochs=EPOCHS,
                    verbose=2,
                    validation_data=validation_generator,
                    validation_steps=val_steps_per_epoch,
                    callbacks=[checkpoint])
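
steps_per_epoch and val_steps_per_epoch are not defined in the question; a typical choice, assuming one pass over each dataframe per epoch, would be:

# Assumed definitions (not shown in the question):
steps_per_epoch = len(train) // batch_size
val_steps_per_epoch = len(validation) // batch_size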

The model itself comes from the segmentation-models library: https://github.com/qubvel/segmentation_models

model = sm.Unet(input_shape=(image_size, image_size, nb_channels), 
                classes=nb_classes, 
                activation='softmax', 
                encoder_weights='imagenet')
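
The compile step is not shown either; a minimal sketch using the library's own loss and metric classes (the optimizer, loss, and metric choices here are assumptions, not the question's code):

import segmentation_models as sm

# Sketch only: loss/metric/optimizer choices are assumed.
model.compile(optimizer='adam',
              loss=sm.losses.CategoricalCELoss(),
              metrics=[sm.metrics.IOUScore()])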

I have already tried changing the learning_rate and the number of epochs.
