OpenCV image stitching blending (multi-band blending)


I am trying to blend out the seams in images I have just stitched together, using cv::detail::MultiBandBlender from OpenCV 3.2, found in #include "opencv2/stitching/detail/blenders.hpp". There is not a lot of documentation and even fewer code examples from what I could find, but I managed to find a good blog that helped explain the steps here.

When I run my code I get the following error:

/opencv/modules/core/src/copy.cpp:1176: error: (-215) top >= 0 && bottom >= 0 && left >= 0 && right >= 0 in function copyMakeBorder

Here is the code for the blending (assume the stitching, warpPerspective, and the homographies found are correct):

//Masks of the images to be combined so you can get the resulting mask
Mat mask1(image1.size(), CV_8UC1, Scalar::all(255));
Mat mask2(image2.size(), CV_8UC1, Scalar::all(255));
Mat image1Updated, image2Updated;
//Warp the masks and the images to their new positions so they are all of the same size to be overlaid and blended
warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, (0));
warpPerspective(mask1, mask1, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
warpPerspective(mask2, mask2, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, (0));

//create blender
detail::MultiBandBlender blender(false, 5);
//feed images and the mask areas to blend
blender.feed(image1Updated, mask1, Point2f (0,0));
blender.feed(image2Updated, mask2, Point2f (0,0));
//prepare resulting size of image
blender.prepare(Rect(0, 0, result.size().width, result.size().height));
Mat result_s, result_mask;
//blend
blender.blend(result_s, result_mask);

The error occurs when I call blender.feed.

On a little side note: when making the masks for the blender, should the masks cover the entire images, or just the area where the images overlap one another in the stitch?

Thanks in advance for any help.

EDIT

I have it working, but am now getting this blended image as the result. [blended result image] Here is the stitched image without blending, for reference. [unblended stitched image] Any ideas on how to improve it?


There are 2 answers

Answered by I-Chan Lo:
  1. Use blender.prepare before blender.feed
  2. Redesign your masks (one half 255 and the other half 0)
//Masks of the images to be combined so you can get the resulting mask
//(optimalSeamMask stands for the answerer's own seam-finding function)
Mat mask1, mask2;
mask1 = optimalSeamMask(energy, path);
mask2 = Mat::ones(mask1.size(), mask1.type()) * 255 - mask1;

Mat image1Updated, image2Updated;
//Warp the masks and the images to their new positions so they are all of the same size to be overlaid and blended
warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, (0));
warpPerspective(mask1, mask1, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
warpPerspective(mask2, mask2, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, (0));

//create blender
detail::MultiBandBlender blender(false, 5);
//prepare the resulting size of the image first
blender.prepare(Rect(0, 0, result.size().width, result.size().height));
//feed the images and the mask areas to blend
blender.feed(image1Updated, mask1, Point2f(0, 0));
blender.feed(image2Updated, mask2, Point2f(0, 0));
Mat result_s, result_mask;
//blend
blender.blend(result_s, result_mask);
Answered by AriaaaZare:

This is old, but I found the reason for the problem and will share it in case anyone is facing the same issue. The problem is in the warpPerspective calls: there will be a strip of black pixels bordering the warped image, so you have to change

warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, (0));
warpPerspective(mask1, mask1, (translation*homography), result.size(), INTER_LINEAR, BORDER_CONSTANT, (0));
warpPerspective(mask2, mask2, translation, result.size(), INTER_LINEAR, BORDER_TRANSPARENT, (0));

to

warpPerspective(image1, image1Updated, (translation*homography), result.size(), INTER_LINEAR, BORDER_REPLICATE);
warpPerspective(image2, image2Updated, translation, result.size(), INTER_LINEAR, BORDER_REPLICATE);
warpPerspective(mask1, mask1, (translation*homography), result.size());
warpPerspective(mask2, mask2, translation, result.size());

which will replace all the black areas around the warped image with the closest image pixel.