Standardize dirty images by pre-processing for OCR


I'm working on a project to study the problems faced in OCR'ing images from different sources; however, this question is limited to camera-captured images. The samples are from books, question papers, etc.

What I'm looking for is a way to make these images OCR-ready.

source: https://ibb.co/Q8qWWZm

Some of the methods that I have tried:

import cv2

gray = cv2.cvtColor(cv2.imread('input.jpg'), cv2.COLOR_BGR2GRAY)
th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                           cv2.THRESH_BINARY, 11, 2)

output: https://ibb.co/nR4kcsP (compressed due to large size)

After thresholding, I tried a median blur with kernel size 9, which removed almost all of the noise but also damaged some characters.

output: https://ibb.co/TBF5Fs4
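
For completeness, the denoising step was roughly this (a minimal sketch; th is the binarized image from the snippet above, and the output file name is just for illustration):

# median blur after adaptive thresholding; the 9x9 kernel removed
# most of the speckle noise but also eroded thin character strokes
denoised = cv2.medianBlur(th, 9)
cv2.imwrite('denoised.jpg', denoised)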

The other method that I have tried uses skimage's threshold_local:

import cv2
from skimage.filters import threshold_local

imgc = cv2.imread('input.jpg')
warped = cv2.cvtColor(imgc, cv2.COLOR_BGR2GRAY)
T = threshold_local(warped, 11, offset=1, method="gaussian")
warped = (warped > T).astype("uint8") * 255
cv2.imwrite('newth.jpg', warped)

This method generates too much noise; that too can be cleared with a median blur, but again at the cost of damaging some characters.
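
The cleanup here would look roughly the same (a sketch; the kernel size below is just an example value, since smaller kernels damage characters less but leave more residual noise):

# same median-blur cleanup on the local-threshold output; kernel
# size is a trade-off between residual noise and character damage
cleaned = cv2.medianBlur(warped, 5)
cv2.imwrite('cleaned.jpg', cleaned)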

Am I doing this the right way? If not, what should I do instead, and what should I try next? My goal is to find a method that works on all the images with maximum accuracy, preferably 100%.
