Hope you're all doing well! I am working on a project to fuse two images (RGB + IR), using the FLIR dataset. The RGB resolution is 1800x1600, whereas the IR images are 640x512. The RGB camera also has a slightly wider field of view and captures more of the scene, so image fusion produces a shadowy, ghosted output. I am looking to crop the RGB images so their content matches the infrared images. I came across a Medium article where the author used feature matching to identify and crop the overlapping region. It works great for daylight scenes; however, it produces very wrong results on night scenes, and I notice the feature matches themselves are incorrect. I have tried other detectors (SURF, SIFT) as well, but they don't seem to work well either.
Do you know what could be causing this?
Cropped Image Daylight (1) - https://i.stack.imgur.com/M7nxQ.jpg
Cropped Image Nightscene (2) - https://i.stack.imgur.com/EELBW.jpg
As you can see, the matching is extremely poor in night scenes. Is there any way to address this? Sorry, I am new to OpenCV. Appreciate any pointers! Thank you! :)
Medium article referenced: https://medium.com/@aswinvb/how-to-perform-thermal-to-visible-image-registration-c18a34894866
The issue is the difference in illumination between the two modalities: intensity- and gradient-based descriptors like SIFT and SURF find few reliable correspondences between thermal and visible images, especially at night. You need something phase-based. Have a look at phase correlation matching. You can do phase correlation through the FFT with OpenCV (`cv2.phaseCorrelate`), or you can use something like ECC (`cv2.findTransformECC`), which can also work across modalities; the only drawback is that ECC can be slow.
For the FLIR ADAS dataset the cameras are mounted a fixed distance apart and almost always in the same alignment throughout the dataset. So if you estimate the RGB-to-thermal transform for one pair, you can reuse the same warp matrix for subsequent pairs.