I'm learning OpenCV and I'm looking for Python code that takes coordinates in a small image and maps them to coordinates in a large image, so that the small image is inserted into the large image and can be transformed, e.g. rotated. I want to use a matrix of point correspondences as the input. For example, if the matrix is:

([75, 120] -> [210, 320],
 [30, 90]  -> [190, 305],
 [56, 102] -> [250, 474],
 [110, 98] -> [330, 520])

it means that the pixel at (75, 120) in the small image should map to the pixel at (210, 320) in the large image, the pixel at (30, 90) should map to (190, 305), and so on. I searched a lot but couldn't find a proper answer. How can I solve this problem?


There are 2 answers

Shamshirsaz.Navid (BEST ANSWER)

Insert the small image into the large one:

import sys
import cv2

dir = sys.path[0]
small = cv2.imread(dir+'/small.png')
big = cv2.imread(dir+'/big.png')

x, y = 20, 20
h, w = small.shape[:2]
big[y:y+h, x:x+w] = small

cv2.imwrite(dir+'/out.png', big)
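Note that the slice assignment above assumes the small image fits entirely inside the big one; if the paste rectangle runs past the border, the destination slice shrinks and the assignment raises a shape-mismatch error. A minimal sketch of a clipping helper (`paste` is a hypothetical name; it uses plain NumPy slicing, so it works on OpenCV images):

```python
import numpy as np

def paste(big, small, x, y):
    """Paste `small` into `big` with its top-left corner at (x, y),
    clipping the overlap so out-of-bounds placements don't raise."""
    bh, bw = big.shape[:2]
    sh, sw = small.shape[:2]
    # intersect the paste rectangle with the big image's bounds
    x0, y0 = max(x, 0), max(y, 0)
    x1, y1 = min(x + sw, bw), min(y + sh, bh)
    if x0 < x1 and y0 < y1:  # skip when there is no overlap at all
        big[y0:y1, x0:x1] = small[y0 - y:y1 - y, x0 - x:x1 - x]
    return big
```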


Resize and then insert:

h, w = small.shape[:2]
small = cv2.resize(small, (w//2, h//2))

x, y = 20, 20
h, w = small.shape[:2]
big[y:y+h, x:x+w] = small


Insert part of image:

x, y = 20, 20
h, w = small.shape[:2]
hh, ww = h//2, w//2
big[y:y+hh, x:x+ww] = small[0:hh, 0:ww]


Rotating sample:

import numpy as np

bH, bW = big.shape[:2]
sH, sW = small.shape[:2]
ch, cw = sH//2, sW//2
x, y = sW-cw//2, ch

empty = np.zeros((bH, bW, 3), dtype=np.uint8)
empty[y:y+sH, x:x+sW] = small

M = cv2.getRotationMatrix2D(center=(x+cw, y+ch), angle=45, scale=1)
rotated = cv2.warpAffine(empty, M, (bW, bH))
big[np.where(rotated != 0)] = rotated[np.where(rotated != 0)]


Perspective transform sample:

bH, bW = big.shape[:2]
sH, sW = small.shape[:2]
x, y = 0, 0

empty = np.zeros((bH, bW, 3), dtype=np.uint8)
empty[y:y+sH, x:x+sW] = small

_inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
_out = np.float32([[bW//2-sW//2, 0], [bW//2+sW//2, 0], [bW, bH], [0, bH]])
M = cv2.getPerspectiveTransform(_inp, _out)
transformed = cv2.warpPerspective(empty, M, (bW, bH))  # dsize is (width, height)

big[np.where(transformed != 0)] = transformed[np.where(transformed != 0)]


And finally, for mapping coordinates, I think you just need to fill in _out:

bH, bW = big.shape[:2]
sH, sW = small.shape[:2]

empty = np.zeros((bH, bW, 3), dtype=np.uint8)
empty[:sH, :sW] = small

# Coordinates: TopLeft, TopRight, BottomRight, BottomLeft
_inp = np.float32([[0, 0], [sW, 0], [sW, sH], [0, sH]])
_out = np.float32([[50, 40], [300, 40], [200, 200], [10, 240]])
M = cv2.getPerspectiveTransform(_inp, _out)
transformed = cv2.warpPerspective(empty, M, (bW, bH))  # dsize is (width, height)

big[np.where(transformed != 0)] = transformed[np.where(transformed != 0)]


BatWannaBe

I don't know of a matrix operation that maps pixels to arbitrary pixel locations, and because images are usually stored as 2D arrays, there isn't a general way to make pixels in two arrays point to the same data.

But given that these images are represented by NumPy arrays, you can use advanced indexing to copy any pixels from one array to another:

# smallimage is a NumPy array
# bigimage is a NumPy array

### Indices ###
# I formatted it so the matching indices
# between the 2 images line up in a column

bigD1 =   [210, 190, 250, 330] # dimension 0
bigD2 =   [320, 305, 474, 520] # dimension 1

smallD1 = [75,  30,  56,  110]
smallD2 = [120, 90,  102, 98]

### copy pixels from small image to big image ###

# on right side of =, advanced indexing copies
# the selected pixels to a new temporary array
#                                   v
bigimage[bigD1, bigD2] = smallimage[smallD1, smallD2]
#        ^
# on left side of =, advanced indexing specifies
# where we copy the temporary array's pixels to.

# smallimage is unchanged
# bigimage has edited pixels
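To see the advanced-indexing copy in isolation, here is a toy-sized version (shapes and values are invented for the demo):

```python
import numpy as np

big = np.zeros((4, 5), dtype=np.uint8)             # toy grayscale "images"
small = np.arange(12, dtype=np.uint8).reshape(3, 4)

bigD1, bigD2 = [0, 3], [1, 4]                      # destination (row, col) pairs
smallD1, smallD2 = [2, 1], [3, 0]                  # source (row, col) pairs

# copies element-by-element: big[0, 1] <- small[2, 3], big[3, 4] <- small[1, 0]
big[bigD1, bigD2] = small[smallD1, smallD2]
# big[0, 1] is now small[2, 3] == 11, and big[3, 4] is now small[1, 0] == 4
```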