Advice on preprocessing steps for edge detection on license plate images


I am currently working on a university project for which I want to explore different computer vision techniques. My goal is automatic number plate recognition (ANPR) on Chinese license plates. So far, I have trained a detection model (YOLOv5) to locate the license plates. The model's bounding boxes don't account for any skew in the images, so for now I simply crop to them. This is what the images look like right now: Example license plate crop
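
For context, the crop is just an axis-aligned slice of the frame at the YOLOv5 box coordinates, roughly like this (the path and coordinate values are placeholders):

import cv2

frame = cv2.imread('frame.jpg')        # original camera frame (placeholder path)
x1, y1, x2, y2 = 120, 340, 560, 470    # YOLOv5 box corners (placeholder values)
plate_crop = frame[int(y1):int(y2), int(x1):int(x2)]
cv2.imwrite('plate_crop.jpg', plate_crop)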

I want to use OpenCV to detect the edges of the license plates and then use them for further cropping and perspective transformation. The images should end up containing only the license plate, so that I can later do character segmentation on the best possible input images.
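
To make the goal concrete, this is roughly the kind of four-point transform I have in mind once the plate corners are known (the helper names are mine, and the 440 x 140 target size just mirrors the physical aspect ratio of a Chinese plate):

import cv2
import numpy as np

def order_points(pts):
    # sort the 4 corners into top-left, top-right, bottom-right, bottom-left
    pts = np.array(pts, dtype="float32")
    rect = np.zeros((4, 2), dtype="float32")
    s = pts.sum(axis=1)
    rect[0] = pts[np.argmin(s)]   # top-left: smallest x + y
    rect[2] = pts[np.argmax(s)]   # bottom-right: largest x + y
    d = np.diff(pts, axis=1)
    rect[1] = pts[np.argmin(d)]   # top-right: smallest y - x
    rect[3] = pts[np.argmax(d)]   # bottom-left: largest y - x
    return rect

def warp_plate(image, corners, width=440, height=140):
    # map the four detected corners onto an upright width x height rectangle
    src = order_points(corners)
    dst = np.array([[0, 0], [width - 1, 0],
                    [width - 1, height - 1], [0, height - 1]], dtype="float32")
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, M, (width, height))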

Unfortunately, I am very new to computer vision and can't find a preprocessing pipeline that works for most of the images and lets OpenCV reliably detect the rectangular license plates. It's really hard for me to find preprocessing that holds up under different lighting conditions and vehicle colors. I would appreciate advice on how to go about this (or, alternatively, whether I should use a different approach).

This is what I did so far with OpenCV to achieve edge detection:

  1. Preprocessing:
  • Conversion to grayscale
  • Gaussian blur to remove some noise (how should I choose the kernel size?)
  • Histogram equalization to increase contrast
  2. Canny edge detection
  3. Further preprocessing:
  • Dilation to close gaps (how should I choose the kernel size?)
  • Erosion to shrink the edges again
  4. Finding contours
  5. Looking for the contour which best approximates a rectangle and has the largest area
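
For reference, here is a minimal sketch of that pipeline. The kernel sizes and Canny thresholds are placeholder values I have been experimenting with, not settings that work reliably, and the OpenCV 4 return signature of findContours is assumed:

import cv2

def find_plate_quad(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # small odd kernel; larger kernels start to blur away the plate border
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    equalized = cv2.equalizeHist(blurred)

    edges = cv2.Canny(equalized, 50, 150)

    # close small gaps in the edge map, then thin it back down
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.dilate(edges, kernel, iterations=2)
    closed = cv2.erode(closed, kernel, iterations=1)

    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        if len(approx) == 4:
            return approx.reshape(4, 2)   # corners, e.g. for the perspective transform sketched above
    return None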

Without tweaking the thresholds for Canny edge detection every time, this approach doesn't even find a rectangular contour for the license plates (a heuristic for picking the thresholds automatically is sketched after the list below). This is what a result looks like after tweaking: Example result. Description from top to bottom:

  1. Original image
  2. After preprocessing
  3. After Canny edge detection
  4. After further preprocessing
  5. With contours
  6. With the best contour
  7. The contour stretched to fill the image
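
As an aside, a common heuristic for picking the Canny thresholds automatically instead of tweaking them per image is to derive them from the median intensity of the (blurred) grayscale image. The sigma value here is just a conventional default, not something I have validated on this dataset:

import cv2
import numpy as np

def auto_canny(gray, sigma=0.33):
    # place the thresholds symmetrically around the median intensity
    v = np.median(gray)
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(gray, lower, upper)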

I didn't dump my full code in here, only the rough sketches above, as I assume the biggest problem to be the preprocessing of the images.


There are 2 answers

Answer from toyota Supra:

It's faster and easier to crop the image using a mouse event. It is straightforward and saves each selection to a numbered file. The rest (Canny, contours, etc.) you can do afterwards.

  1. Drag a rectangle
  2. Press "s" to save
  3. Press "r" to reset
  4. Repeat steps 1 to 3
  5. Press "c" to exit

Snippet:

import cv2 as cv

refPt = []        # the two corner points of the current selection
cropping = False  # True while the left mouse button is held down

def click_crop(event, x, y, flags, param):
    # record the rectangle dragged with the left mouse button
    global refPt, cropping
    if event == cv.EVENT_LBUTTONDOWN:
        refPt = [(x, y)]
        cropping = True
    elif event == cv.EVENT_LBUTTONUP:
        refPt.append((x, y))
        cropping = False
        # draw the selection on the displayed image
        cv.rectangle(img, refPt[0], refPt[1], (0, 255, 0), 2)

if __name__ == "__main__":
    num = 0  # counter used to number the saved crops

    windowName = 'Click and Crop'
    img = cv.imread('l.jpg', cv.IMREAD_COLOR)
    clone = img.copy()
    cv.namedWindow(windowName)
    cv.setMouseCallback(windowName, click_crop)

    while True:
        cv.imshow(windowName, img)
        key = cv.waitKey(1) & 0xFF
        if key == ord("r"):    # reset the image and the selection
            img = clone.copy()
            refPt = []
        elif key == ord("s"):  # save the current selection as roi<num>.jpg
            if len(refPt) == 2:
                roi = clone[refPt[0][1]:refPt[1][1], refPt[0][0]:refPt[1][0]]
                cv.imwrite('roi{}.jpg'.format(num), roi)
                num += 1
        elif key == ord("c"):  # quit
            break
    cv.destroyAllWindows()

Screenshot before and after:


Answer from Nimbus Flight:

The answer depends on the dataset you are using. For example, on a toll road the traffic cam may be pointed directly at the license plate, which simplifies the problem. Compare that to a car dash cam, where both vehicles are moving and the problem becomes complicated. Likewise, distance is a factor: a toll booth camera is close, while a stationary traffic cam overlooking an interstate may not even resolve the license plates.

Since this is an academic project, you are presumably focused on higher-quality datasets that you can manually improve, for example through cropping. Nevertheless, PyImageSearch tends to have good tutorials in my experience. Here is one such tutorial for ANPR.

The process is as follows. First, a blackhat morphological operation to emphasize dark characters against the lighter plate background:

rectKern = cv2.getStructuringElement(cv2.MORPH_RECT, (13, 5))
blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, rectKern)

Gradient:

gradX = cv2.Sobel(blackhat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
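
If I recall the tutorial correctly, the gradient is rescaled to an 8-bit range before the blur and Otsu threshold below, since THRESH_OTSU expects a uint8 image (this assumes numpy is imported as np):

gradX = np.absolute(gradX)
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = 255 * ((gradX - minVal) / (maxVal - minVal))
gradX = gradX.astype("uint8")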

Gaussian blur, closing, and Otsu threshold:

gradX = cv2.GaussianBlur(gradX, (5, 5), 0)
gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKern)
thresh = cv2.threshold(gradX, 0, 255,
    cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

Etc. I recommend going through the tutorial and seeing whether you get improved results on the contours before performing OCR.