OpenCV / Image Processing techniques to find the centers of bright spots in an image


I'm currently doing a project based on the methodology described in this paper: Camera calibration from a single night sky image

As a beginner in computer vision, I do not quite understand how to implement the method used in the paper to find the centres of all the bright spots (luminaries) within an image, particularly the paragraph in Section 4.1:

The surrounding patch of size 15 × 15 pixels (Figure 1(a)), is upsampled by a given factor (Figure 1(c)) and the corresponding gradient map is calculated (Figure 1(d)). Starting from the brightest region, the gray value threshold is decreased, until an energy function is maximized. The energy function is defined as the sum of the border gradients and normalized by the border length (Figure 1(e)). This leads to a segmented star image shown in Figure 1(f). The segmentation ensures that the weighted centre of gravity algorithm [11] gives a robust estimation.

[Figure 1: the upsampled patch, its gradient map, the result after the energy function is applied, and the segmented star image]

From my understanding, I think I can apply a Laplacian / Sobel gradient function to the upsampled image, but after that I'm not sure how to perform the energy-function step and produce the segmented image. I would also like to understand how to implement the weighted centre of gravity algorithm to find the centre of each bright spot, using OpenCV or another Python library.

It would be much appreciated if any of you could shed some light on this.

Thanks and regards.

1 Answer

Answer by Boyko Perfanov (accepted):

The main thing to take away is that an "energy function", in this context, is any function used in a maximization problem. Here, the energy function is the sum of gradients/derivatives/differences along the region border (i.e. a "detected border likelihood" in this case).

Since you seem to have a non-algorithmic background, I suggest you read up on breadth-first search (remember that an image is a very specific type of graph, where every node is a pixel, connected to its adjacent pixels), recursion, and flood fill.
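To make the flood-fill idea concrete, here is a minimal sketch of flood fill implemented as a breadth-first search over the pixel grid (the function name and threshold convention are my own, not from the paper):

```python
import numpy as np
from collections import deque

def flood_fill(img, seed, thresh):
    """Collect the 4-connected component of pixels >= thresh that
    contains the seed, using breadth-first traversal."""
    h, w = img.shape
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        # visit the four adjacent pixels (the "edges" of the pixel graph)
        for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= n[0] < h and 0 <= n[1] < w \
                    and n not in region and img[n] >= thresh:
                region.add(n)
                queue.append(n)
    return region
```

Seeding inside a bright blob returns exactly that blob's pixels; a second bright blob that is not connected to the seed is not collected.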

  1. Up/downscale the image
  2. Run horizontal and vertical Sobel filters. Combine the resultant images into grad_img = max_per_pixel(sobel_horiz,sobel_vert).
  3. For each 15x15 pixel patch, find the brightest spot. This is the seed of the star.
  4. Start from a 1x1 region consisting of the seed. Keep proposing adjacent pixels for the region (breadth-first traversal is recommended). For each candidate pixel, calculate the energy as the sum of the grad_img values at the border pixels of the region, normalized by the border length. If the energy is higher than in the previous iteration, the new pixel is added to the region; if not, the pixel is rejected.
  5. Finding the centre of gravity of a closed contour or a collection of pixels is not a hard task. Either do it by its mathematical definition (the sum of the x and y coordinates of all in-region pixels, each weighted by its intensity, divided by the total intensity; or simply divided by the area for the unweighted version), or use image moments (cv::moments).

My solution is a bit different from theirs. They actually run a flood-fill algorithm that fills all pixels with brightness in [threshold, 255], calculate the energy function, decrease the threshold, and repeat, stopping when the energy function is maximized. Note that their algorithm is quite inefficient: it effectively performs up to 255 flood fills for every pre-detected star, compared to one flood fill in my proposal, and that could be a performance issue in practice.
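For completeness, the paper's threshold-sweeping scheme can be sketched like this (again an illustrative reconstruction with my own helper names, assuming 8-bit-style brightness values):

```python
import numpy as np
from collections import deque

def fill_above(img, seed, thresh):
    """4-connected flood fill of the pixels >= thresh containing seed."""
    h, w = img.shape
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= n[0] < h and 0 <= n[1] < w \
                    and n not in region and img[n] >= thresh:
                region.add(n)
                queue.append(n)
    return region

def border_energy(grad, region):
    """Sum of gradients over the region border, normalized by border length."""
    h, w = grad.shape
    border = []
    for (y, x) in region:
        for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= n[0] < h and 0 <= n[1] < w and n not in region:
                border.append((y, x))
                break
    return sum(grad[p] for p in border) / max(len(border), 1)

def segment_by_threshold(img, grad, seed):
    """Sweep the gray-value threshold downward from the seed brightness
    and keep the flood-filled region that maximizes the border energy."""
    best_e, best_region = -1.0, {seed}
    for t in range(int(img[seed]), -1, -1):
        region = fill_above(img, seed, t)
        e = border_energy(grad, region)
        if e > best_e:
            best_e, best_region = e, region
    return best_region
```

The loop makes the up-to-255 flood fills explicit: one per threshold value, each over the same patch, which is exactly the redundancy my single-pass proposal avoids.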