Reputation: 2062
I have made a small program that reads an image, applies a perspective transform, and then redraws the image. Currently I copy each pixel to the output manually, but this way a lot of points are lost and the resulting image is very faint (the larger the transformation, the fainter the image). This is my code:
import cv2
import numpy as np
from PIL import Image

U, V = np.meshgrid(range(img_array.shape[1]), range(img_array.shape[0]))
UV = np.vstack((U.flatten(), V.flatten())).T
UV_warped = cv2.perspectiveTransform(np.array([UV]).astype(np.float32), H)
UV_warped = UV_warped[0]
UV_warped = UV_warped.astype(int)
x_translation = min(UV_warped[:, 0])
y_translation = min(UV_warped[:, 1])
new_width = np.amax(UV_warped[:, 0]) - np.amin(UV_warped[:, 0])
new_height = np.amax(UV_warped[:, 1]) - np.amin(UV_warped[:, 1])
UV_warped[:, 0] = UV_warped[:, 0] - int(x_translation)
UV_warped[:, 1] = UV_warped[:, 1] - int(y_translation)
# create box for image
new_img = np.ones((new_height + 1, new_width + 1)) * 255  # 0 = black, 255 = white background
for uv_pix, UV_warped_pix in zip(UV, UV_warped):
    x_orig = uv_pix[0]  # x in the original image
    y_orig = uv_pix[1]  # y in the original image
    color = img_array[y_orig, x_orig]
    x_new = UV_warped_pix[0]  # new x
    y_new = UV_warped_pix[1]  # new y
    new_img[y_new, x_new] = np.array(color)
img = Image.fromarray(np.uint8(new_img))
img.save("test.jpg")
Is there a way to do this differently (with interpolation maybe?) so I won't lose so many pixels and the image is not so faint?
Upvotes: 0
Views: 975
Reputation: 8980
You are looking for the function warpPerspective (as already mentioned in the answer to your previous question OpenCV perspective transform in python).
You can use this function like this (although I'm not familiar with Python):
cv2.warpPerspective(src_img, H_from_src_to_dst, dst_size, dst_img)
EDIT: You can refer to this OpenCV tutorial. It uses affine transformations, but similar OpenCV functions exist for perspective transformations.
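For example (my own illustrative sketch, not taken from the tutorial, with made-up point coordinates and src_img standing in for the input image), the perspective counterpart uses getPerspectiveTransform with four point pairs instead of getAffineTransform with three, and warpPerspective instead of warpAffine:

import cv2
import numpy as np

# Four corresponding points in the source and destination images (example values)
src_pts = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
dst_pts = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])

M = cv2.getPerspectiveTransform(src_pts, dst_pts)       # 3x3 homography
result = cv2.warpPerspective(src_img, M, (300, 300))    # output size is (width, height)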
Upvotes: 1