numpy vectorize / more efficient for loop

I am performing erosion on an image, which has been padded accordingly. In a nutshell, I place a cross-shaped element (+) on every pixel of the image and set that pixel to the minimum of itself and the pixels above, below, left and right of it.

My code is inefficient and I can't figure out a vectorized version. It should be possible, since every output pixel is computed independently of the others.

for y in range(t,image.shape[0]-b):
    for x in range(l,image.shape[1]-r):
        a1 = numpy.copy(str_ele)
        for filter_y in range(a1.shape[0]):
            for filter_x in range(a1.shape[1]):
                if (not (numpy.isnan(a1[filter_y][filter_x]))):
                    a1[filter_y][filter_x] = str_ele[filter_y][filter_x]*image[y+(filter_y-str_ele_center_y)][x+(filter_x-str_ele_center_x)]
        eroded_image[y][x] = numpy.nanmin(a1)   

Basically:

Every pixel in final image = min(pixel, above, below, left, right) from the original image

for y in range(len(eroded_image)):
    for x in range(len(eroded_image[1])):
        eroded_image2[y][x] = numpy.nanmin(str_ele*image2[y:y+len(str_ele), x:x+len(str_ele[1])])

This is what I have now. Still two loops.
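
One way to vectorize exactly this windowed nanmin is numpy.lib.stride_tricks.sliding_window_view. This is only a sketch: it assumes NumPy >= 1.20 (for sliding_window_view), that image2 is the NaN-padded image, and that str_ele is the 3x3 cross with NaN in the corners, as above; the hypothetical erode_windows below returns the eroded interior (the unpadded shape).

import numpy
from numpy.lib.stride_tricks import sliding_window_view

def erode_windows(image2, str_ele):
    # All 3x3 neighbourhoods of the padded image: shape (H-2, W-2, 3, 3).
    windows = sliding_window_view(image2, str_ele.shape)
    # Multiplying by the cross turns the corner entries into NaN,
    # so nanmin over the window axes ignores them.
    return numpy.nanmin(windows * str_ele, axis=(-2, -1))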


1 Answer

Accepted answer (unutbu):

If image is a NaN-padded array, and you are eroding with a cross-shaped footprint, you could remove the for-loops by stacking slices of image (to effectively shift the image up, left, right, and down) and then apply np.nanmin to the stack of slices.

import numpy as np

def orig(image):
    t, l, b, r = 1, 1, 1, 1
    str_ele = np.array([[np.nan, 1, np.nan], [1, 1, 1], [np.nan, 1, np.nan]], dtype='float')
    str_ele_center_x, str_ele_center_y = 1, 1
    eroded_image = np.full_like(image, dtype='float', fill_value=np.nan)
    for y in range(t,image.shape[0]-b):
        for x in range(l,image.shape[1]-r):
            a1 = np.copy(str_ele)
            for filter_y in range(a1.shape[0]):
                for filter_x in range(a1.shape[1]):
                    if (not (np.isnan(a1[filter_y][filter_x]))):
                        a1[filter_y][filter_x] = str_ele[filter_y][filter_x]*image[y+(filter_y-str_ele_center_y)][x+(filter_x-str_ele_center_x)]
            eroded_image[y][x] = np.nanmin(a1)   
    return eroded_image

def erode(image):
    # Each slice shifts the padded image by one pixel; stacking them lines up
    # every pixel with its four cross-shaped neighbours.
    result = np.stack([image[1:-1, 1:-1],  # centre
                       image[2:, 1:-1],    # below
                       image[:-2, 1:-1],   # above
                       image[1:-1, 2:],    # right
                       image[1:-1, :-2]])  # left
    eroded_image = np.full_like(image, dtype='float', fill_value=np.nan)
    eroded_image[1:-1, 1:-1] = np.nanmin(result, axis=0)  # NaN-aware min over the stack
    return eroded_image

image = np.arange(24).reshape(4,6)
image = np.pad(image.astype(float), 1, mode='constant', constant_values=np.nan)

yields

In [228]: erode(image)
Out[228]: 
array([[nan, nan, nan, nan, nan, nan, nan, nan],
       [nan,  0.,  0.,  1.,  2.,  3.,  4., nan],
       [nan,  0.,  1.,  2.,  3.,  4.,  5., nan],
       [nan,  6.,  7.,  8.,  9., 10., 11., nan],
       [nan, 12., 13., 14., 15., 16., 17., nan],
       [nan, nan, nan, nan, nan, nan, nan, nan]])

For the small example image above, erode appears to be around 33x faster than orig:

In [23]: %timeit erode(image)
10000 loops, best of 3: 35.6 µs per loop

In [24]: %timeit orig(image)
1000 loops, best of 3: 1.19 ms per loop

In [25]: 1190/35.6
Out[25]: 33.42696629213483
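
For completeness, the same cross-shaped erosion is also available as grey erosion in SciPy. A minimal sketch; note that border handling here comes from the mode keyword rather than NaN padding, so edge pixels can differ from the NaN-padded version:

import numpy as np
from scipy import ndimage

image = np.arange(24, dtype=float).reshape(4, 6)
footprint = np.array([[0, 1, 0],
                      [1, 1, 1],
                      [0, 1, 0]], dtype=bool)   # cross-shaped neighbourhood
eroded = ndimage.grey_erosion(image, footprint=footprint, mode='nearest')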