I want to warp subsections of an image to project it onto a nonuniform surface. Ultimately I want to warp an image as seen HERE, kind of like what is done HERE from THIS project.
My problem is that when I apply the transformations to each subsection of the image, things just do not line up.
This is the process by which I achieve the transformations and then stitch (crop and paste) them together into the final image:
- Get a list of all the points.
- Create a quadrilateral Region of Interest (ROI) from each set of 4 points.
- Use those 4 points, together with the corresponding original 4 points, to transform the image. This is done in my function perspective_transform():
  a. I take the 2 sets of 4 points and pass them to M = cv2.getPerspectiveTransform(corners, newCorners)
  b. Then I call: warped = cv2.warpPerspective(roi, M, (width, height))
- After getting the new warped image, I use masks to stitch everything together based on the ROI each piece was associated with:
  a. This is done by the function quadr_croped()
Initialization: grab the raw pixels from the screen and save them to a NumPy array.

```python
import cv2
import numpy as np
from mss import mss  # screen-capture library assumed for sct.grab()

sct = mss()
monitor = sct.monitors[1]

img0 = np.array(sct.grab(monitor))
clone = img0.copy()
total_height, total_width, channels = img0.shape

xSub = int(input("How many columns would you like to divide the screen into? (integers only)"))
ySub = int(input("How many rows would you like to divide the screen into? (integers only)"))

roi_width = float(total_width / xSub)
roi_height = float(total_height / ySub)
point_list = []
```
Third: Use 2 sets of 4 points to warp the perspective of the image
```python
def perspective_transform(image, roi, corners, newCorners, i=-1):
    corners = list(corners)
    newCorners = list(newCorners)
    height, width, pixType = image.shape
    corners = np.array([[corners[0][0], corners[0][1],
                         corners[0][2], corners[0][3]]], np.float32)
    newCorners = np.array([[newCorners[0][0], newCorners[0][1],
                            newCorners[0][2], newCorners[0][3]]], np.float32)
    M = cv2.getPerspectiveTransform(corners, newCorners)
    # warped = cv2.warpPerspective(roi, M, (width, height), flags=cv2.INTER_LINEAR)
    warped = cv2.warpPerspective(roi, M, (width, height))
    return warped
```
Second: cut and paste the quadrilateral into the main image
```python
def quadr_croped(mainImg, image, pts, i):  # example
    # mask defaulting to black for 3-channel and transparent for 4-channel
    # (of course replace corners with yours)
    mask = np.zeros(image.shape, dtype=np.uint8)
    roi_corners = pts  # np.array([[(10,10), (300,300), (10,300)]], dtype=np.int32)
    # fill the ROI so it doesn't get wiped out when the mask is applied
    channel_count = image.shape[2]  # i.e. 3 or 4 depending on your image
    ignore_mask_color = (255,) * channel_count
    cv2.fillConvexPoly(mask, roi_corners, ignore_mask_color)
    # apply the mask
    masked_image = cv2.bitwise_and(image, mask)
    mainImg = cv2.bitwise_or(mainImg, mask)
    mainImg = mainImg + masked_image
    # cv2.imshow("debug: image, mainImg: " + str(i), mainImg)
    return mainImg
```
First: Starting Function
```python
def draw_quadr(img1):
    # set up list of ROIs; quadrilateral == polygon with 4 sides
    numb_ROI = xSub * ySub
    skips = int((numb_ROI - 1) / xSub)
    numb_ROI = skips + numb_ROI
    quadrilateral_list.clear()
    for i in range(numb_ROI):
        if not point_list[i][0] <= point_list[(i + xSub + 2)][0]:
            continue
        vert_poly = np.array([[
            point_list[i],
            point_list[i + 1],
            point_list[i + xSub + 2],
            point_list[i + xSub + 1]
        ]], dtype=np.int32)
        verticesPoly_old = np.array([[
            H_points_list[i],
            H_points_list[i + 1],
            H_points_list[i + xSub + 2],
            H_points_list[i + xSub + 1]
        ]], dtype=np.int32)
        roi = img0.copy()
        # cv2.imshow("debug: roi" + str(i), roi)
        overlay = perspective_transform(img1, roi, verticesPoly_old, vert_poly, i)
        img1 = quadr_croped(img1, overlay, vert_poly, i)
        cv2.polylines(img1, vert_poly, True, (255, 255, 0))
        quadrilateral_list.append(vert_poly)

        pt1 = point_list[i]
        pt2 = point_list[i + xSub + 2]
        cntPt = (int((pt1[0] + pt2[0]) / 2), int((pt1[1] + pt2[1]) / 2))
        cv2.putText(img1, str(len(quadrilateral_list) - 1), cntPt,
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)
        # cv2.imshow(str(i), img1)
    return img1
```
Picture result links
Please look at these, as they show the problem really well.
Original Image with no Distortion
This image has a left offset from the center (with no y-directional movement)
Results of x-directional distortion image
This image has an up offset from the center (with no x-directional movement)
Results of y-directional distortion image
This image has an up and left offset from the center
Results of x- and y-directional distortion image
I am new to computer vision and Stack Overflow. I hope I have included everything needed to describe the problem; let me know if you need anything else.
There may certainly be some bugs in the code, because the output images don't look as they should (or maybe not). But you will never get exactly what you want using perspective transforms, because of their mathematical nature: they are non-linear. You can make the rectangle corners coincide, but between the corners the image is scaled non-uniformly, and you can't make these non-uniformities be the same on both sides of a dividing line.
But you can employ affine transforms, which scale the image uniformly. This guarantees that if two points on a line coincide, all the other points on that line coincide as well. The only problem is that an affine transform is determined by a triangle, so you will need to split your quadrilaterals into triangles. E.g., in the following code, every quadrilateral is split into 4 triangles, using the center of the quadrilateral as an additional vertex.