How to remove the outer line of a binary image?


I am trying to find the optic disc in fundus images by extracting the vessels and marking their center pixel.

This is what my code looks like:

    import cv2
    import numpy as np
    from skimage import graph, morphology
    from skimage.filters import sato

    # Read the image and resize it to 800x800 pixels to reduce the computation time and memory usage
    img = load_image(file)
    # Smooth while preserving edges, then remove salt-and-pepper noise
    blur = cv2.bilateralFilter(img, 9, 75, 75)
    median = cv2.medianBlur(blur, 5)
    # Extract the vessels from the image using the Sato filter and normalize the result to 0-255
    vessels = sato(median, sigmas=range(1, 10, 2), black_ridges=False)
    vessels = (vessels - np.min(vessels)) / (np.max(vessels) - np.min(vessels))
    vessels = vessels * 255
    vessels = vessels.astype(np.uint8)
    # Threshold with Otsu, then skeletonize (skeletonize expects a boolean image)
    _, binary = cv2.threshold(vessels, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    skeleton = morphology.skeletonize(binary > 0)
    # Build a pixel graph of the skeleton and find its most central pixel
    g, nodes = graph.pixel_graph(skeleton, connectivity=2)
    px, distances = graph.central_pixel(g, nodes=nodes, shape=skeleton.shape, partition_size=100)

Here is what the original image looks like: [Original Image] And the skeleton: [Skeletonized Image]

Obviously, the outer ring of the image is an artifact, and it completely messes up the calculation that finds the center of the vessels.

I was wondering how I can solve this issue.

The blue areas need to be gone.


There is 1 answer

Answer by Richard:

I've done similar types of analysis previously with images from a bunch of different modalities.

Sorry this isn't code, and I haven't gone and dug up the various Python modules that could do the following, but it's all pretty generic and standard.

In this case, the edge/skeleton detection is picking up the boundary between the "imaged" portion of the image and the "non-imaged" "background" (the pure black regions on the outside).

When scanners produce a rectangular image in which the actual "imaged" region is non-rectangular, they will typically fill the non-imaged regions with some sort of pure-black, "zero" pixel value. You can see that quite easily in your first example image, where this "non-imaged" region is clearly visible as black. You could confirm this by using a viewer to inspect the pixel values at different locations.
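As a rough sketch of how you could identify that near-black region in Python (assuming `img` is the grayscale fundus image from the question; the intensity cutoff of 10 is a guess you would tune after inspecting your own pixel values):

    import numpy as np
    from scipy import ndimage

    # Pixels in the "non-imaged" border are (near) pure black, so a low intensity
    # cutoff separates them from the circular fundus region. The cutoff of 10 is
    # an assumption - inspect your own pixel values first.
    imaged_mask = (img >= 10).astype(np.uint8)   # 1 = imaged fundus, 0 = black border

    # Very dark structures inside the fundus can fall below the cutoff too, so
    # fill any holes to get one solid "imaged" region.
    imaged_mask = ndimage.binary_fill_holes(imaged_mask).astype(np.uint8)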

Therefore, likely the most robust method to exclude these boundary regions would be to (see the code sketch after this list):

  1. Identify the "non imaged" black background regions.
  2. Once you have that region, expand it inwards by a certain amount. Use a real units value (mm), not pixels, so that the algorithm is resolution independent.
  3. Either use that region as a mask to clean up the skeletonized output, or use the "inner" region (masked by the "outer" one) as the input to the skeletonize algorithm.
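A minimal sketch of steps 2 and 3, reusing `imaged_mask` from the snippet above together with `binary` and `skeleton` from the question's code; `pixel_spacing_mm` and `margin_mm` are placeholder values you would replace with whatever is correct for your scanner (or fall back to a pixel margin if you have no physical spacing):

    import cv2
    import numpy as np
    from skimage import morphology

    # Step 2: shrink the imaged region inwards by a physical margin so the
    # boundary (and its skeleton artifacts) ends up outside the mask.
    pixel_spacing_mm = 0.02          # assumption: mm per pixel, from your image metadata
    margin_mm = 1.0                  # assumption: how far inside the boundary to stay
    margin_px = margin_mm / pixel_spacing_mm

    # Distance (in pixels) of every imaged pixel from the non-imaged background;
    # keeping only pixels farther away than the margin erodes the mask inwards.
    dist = cv2.distanceTransform(imaged_mask, cv2.DIST_L2, 5)
    inner_mask = (dist > margin_px).astype(np.uint8)

    # Step 3, option a: mask the skeletonized output directly ...
    clean_skeleton = skeleton & (inner_mask > 0)

    # ... or option b: mask the binary vessel image *before* skeletonizing, so the
    # boundary never produces a ridge in the first place.
    binary_inner = cv2.bitwise_and(binary, binary, mask=inner_mask)
    skeleton = morphology.skeletonize(binary_inner > 0)

The distance transform is just one way to do the inward expansion; eroding `imaged_mask` with a structuring element of the corresponding size would work just as well.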

That should make sure you don't get artifacts due to the edges of the "imaged" area.

This logic should be capable of handling any shape of "imaged" region, be resolution independent, and be quite robust at consistently getting rid of the "imaged area" edge artifacts.