Effect of lighting on image segmentation in MATLAB


I'm trying to do a segmentation task on the picture below.

[original image]

I'm using fuzzy c-means with some minimal pre-processing. The segmentation has 3 classes: background (the blue region), meat (the red region), and fat (the white region). The background segmentation works perfectly. However, the meat-and-fat segmentation on the left side of the photo maps lots of meat tissue to fat. The final meat mask looks like this:

[meat mask]

I suspect that's because of the lighting conditions, which make the left side brighter, so the algorithm classifies that region as the fat class. I also think there could be some improvement if I could somehow make the surface smoother. I've used a 6x6 median filter, which works all right, but I'm open to new suggestions. Any suggestions on how to overcome this problem? Maybe some kind of smoothing? Thanks :)
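For reference, here is a minimal sketch of the pipeline described above (6x6 median filter followed by fuzzy c-means). It assumes the Fuzzy Logic Toolbox for `fcm`; the file name is a placeholder:

```matlab
% Minimal sketch of the current pipeline: median filter + fuzzy c-means.
% Requires the Fuzzy Logic Toolbox for fcm; 'meat.png' is a placeholder.
img = im2double(imread('meat.png'));

% Smooth each color channel with a 6x6 median filter.
for c = 1:3
    img(:,:,c) = medfilt2(img(:,:,c), [6 6]);
end

% Cluster the pixels into 3 classes: background, meat, fat.
pixels = reshape(img, [], 3);            % one RGB row per pixel
[~, U] = fcm(pixels, 3);                 % U: 3-by-N membership matrix
[~, labels] = max(U, [], 1);             % hard label per pixel
labelImg = reshape(labels, size(img,1), size(img,2));
imagesc(labelImg); axis image;
```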

Edit 1: The fat areas are roughly marked in the photo below. The top area is ambiguous, but as rayryeng has mentioned in the comments, if it is ambiguous for me as a human, it's all right for the algorithm to misclassify it too. But the left-hand section is clearly all meat, and the algorithm assigns a big chunk of it to fat.

[rough fat segments]


There are 2 answers

DanielHsH (best answer):

The first rule in segmentation is "try to describe how you (as a human being) were able to do the segmentation". Once you do that, the algorithm becomes clear. In other words, you must perform the following 2 tasks:

  1. Identify the unique features of each segmented part (in your case - what is the difference between the fat and the meat).
  2. Choose the classifier that suits you best (C-means, neural network, SVM, decision tree, boosted classifier, etc.). This classifier will operate on the features selected in step 1.

It seems that you skipped step 1, and that is the problem with your algorithm.

Here are my observations:

  1. The brightness of a pixel does not differentiate between meat and fat. It depends mostly on the illumination, the angle of the tissue, and specular reflection. So you must remove the brightness.
  2. It seems that the fat is more "yellow"; in other words, the ratio of red to green is much higher for meat than for fat. That is the feature from which I would start the implementation. To grab that feature, convert your RGB image to the YUV or HSV color space. If you work with YUV, completely discard the Y and run your classifier on the V plane; in HSV, run it on the H plane. This way you discard the brightness and deal only with the colors (mainly the red and green components). I recommend using those color spaces for the background separation as well (see the color-space sketch after this list).
  3. Next step: you should add more features to your classifier, since color will not be enough. Another observation is that meat is a much more flexible tissue, so it will have more wrinkles on it, while fat tends to be smoother. You can search for edges and feed the absolute amount of edges as another feature to your classifier (see the texture-feature sketch after this list).
  4. Continue observing your results, identify where the classifier makes mistakes, and try to come up with new features that separate the two textures better. Examples of features which might work very well in your case: HOG, LBP on a pyramid of images, MCT features, three-patch LBP, (x,y)-projections. My intuition whispers that three-patch LBP will help you the most, but it is very difficult to explain why.
  5. Personal suggestion: I don't know which features are implemented in Matlab, but you should start from the features that already exist, to save the time of writing a lot of new code. For example, I know that HAAR features are already implemented in Matlab, but they might not be descriptive enough by themselves for your case. Combine a few types of features to get the strongest result, and avoid using overlapping features (two different features that capture almost identical information in the image). For example, if you use MCT, don't use LBP.
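To make point 2 concrete, here is a minimal sketch of discarding brightness via a color-space conversion. MATLAB has no built-in RGB-to-YUV function, so `rgb2ycbcr` stands in as the closest equivalent; the file name is a placeholder:

```matlab
% Sketch for point 2: keep the color planes, discard the brightness.
img = im2double(imread('meat.png'));    % placeholder file name

% HSV route: the hue plane separates red (meat) from yellow-white (fat).
hsv = rgb2hsv(img);
hue = hsv(:,:,1);                       % run the classifier on this plane

% YCbCr route (MATLAB's closest built-in to YUV): discard the luma Y.
ycbcr = rgb2ycbcr(img);
cr = ycbcr(:,:,3);                      % red-difference chroma plane

imshowpair(hue, cr, 'montage');         % compare the brightness-free planes
```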
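And for points 3-5, a hedged sketch of two texture features: local edge density (core MATLAB) and an LBP histogram (`extractLBPFeatures`, Computer Vision Toolbox). The 15x15 averaging window is an arbitrary choice of mine:

```matlab
% Sketch for points 3-5: edge density and LBP as texture features.
gray = rgb2gray(im2double(imread('meat.png')));  % placeholder file name

% Local edge density: wrinkled meat should score higher than smooth fat.
edges = edge(gray, 'canny');
edgeDensity = conv2(double(edges), ones(15)/15^2, 'same');

% Rotation-invariant LBP histogram (Computer Vision Toolbox); in practice
% you would compute this per patch rather than over the whole image.
lbpFeatures = extractLBPFeatures(gray, 'Upright', false);

imagesc(edgeDensity); axis image; colorbar;
```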

For more information, you can read my answer here about texture similarity. You have the reverse problem (instead of measuring similarity, you want to train a classifier that distinguishes between dissimilar textures), but the framework of the solution is identical: identify the important features that distinguish between the textures, concatenate the features into a vector, and run a classifier. You can run the classifier on each pixel or on small image patches (say 5x5 pixels). The result to aim for is a classifier smart enough that, for every patch in the image, it can tell you whether the patch resembles a chunk of meat or a chunk of fat.
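As a rough illustration of that patch-based framework, here is a sketch that builds one small feature vector per 5x5 patch (mean hue plus edge density, per the observations above) and uses `kmeans` (Statistics and Machine Learning Toolbox) as an unsupervised stand-in for a trained classifier; with labeled patches you would swap in something like `fitcsvm`:

```matlab
% Sketch: one feature vector per 5x5 patch, then classify the patches.
img   = im2double(imread('meat.png'));   % placeholder file name
hsv   = rgb2hsv(img);
edges = double(edge(rgb2gray(img), 'canny'));

p = 5;                                   % patch size from the answer
[rows, cols] = size(edges);
feats = [];
for r = 1:p:rows-p+1
    for c = 1:p:cols-p+1
        hPatch = hsv(r:r+p-1, c:c+p-1, 1);
        ePatch = edges(r:r+p-1, c:c+p-1);
        % Feature vector: color (mean hue) + texture (edge density).
        feats(end+1,:) = [mean(hPatch(:)), mean(ePatch(:))]; %#ok<AGROW>
    end
end

% Unsupervised stand-in for the classifier: 2 clusters = meat vs. fat.
labels = kmeans(feats, 2);
```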

fati:

In case you do not have output labels, you need to apply an unsupervised learning algorithm for classification. For many images, the human eye is not a perfect tool for classification; that is why we use computers :D since they can show us the distribution of the intensities and split it into different classes. One alternative is using connected components to identify and separate the fat, meat, and background classes, since they have totally different intensities except at the edges between meat and fat.
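A minimal sketch of that connected-components idea, assuming a simple intensity threshold first; the threshold and minimum area below are placeholders, not values tuned to the actual image:

```matlab
% Sketch: threshold by intensity, then keep large connected components.
gray = rgb2gray(im2double(imread('meat.png')));  % placeholder file name

fatMask = gray > 0.7;                    % placeholder threshold for bright fat

cc    = bwconncomp(fatMask);             % label connected fat regions
stats = regionprops(cc, 'Area');
keep  = [stats.Area] > 50;               % drop tiny noise components
fatMask = ismember(labelmatrix(cc), find(keep));
imshow(fatMask);
```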

You can see the output of my threshold-based segmentation with different parameters below. Please let me know if that is what you want, so that I can support you with the code. Best,

[parameter set 1]

[parameter set 2]
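The answer does not include its code, but a plausible sketch of threshold-based segmentation with tunable parameters uses `multithresh`/`imquantize` (Image Processing Toolbox); picking 2 thresholds to get 3 classes is my assumption, matching the question:

```matlab
% Plausible sketch of multi-level Otsu thresholding into 3 classes.
gray = rgb2gray(im2double(imread('meat.png')));  % placeholder file name

thresh   = multithresh(gray, 2);   % 2 thresholds -> 3 classes (assumption)
labelImg = imquantize(gray, thresh);

% Adjusting the thresholds mimics the "different parameters" above.
imshow(label2rgb(labelImg));
```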