Reputation: 177
I am rotating 3 images 180 degrees with cv2.warpAffine() and then horizontally concatenating them with cv2.hconcat(). This adds a 1 pixel wide black column between the images, even though the width reported by img.shape is correct. If I do not rotate them, the concatenated image looks fine with no black columns. All 3 images are 1920 wide x 1200 high.
How can I eliminate the black column? It seems similar to this warpAffine issue.
It does not happen with SciPy. The commented-out code (ndimage.rotate()) is how I solved it with SciPy, from here. The SciPy code is slower, though, and I have thousands of images.
EDIT
I am now using numpy to rotate the matrix 90 degrees twice, via numpy.rot90(). This seems even faster; it is also in the commented code below, and a rough timing sketch follows the code. For angles that are not multiples of 90 degrees I'll stick with warpAffine from OpenCV.
import cv2
import numpy as np
from scipy import ndimage

def rotate_image(mat, angle):
    """Rotates an image (angle in degrees) and expands the canvas to avoid cropping."""
    height, width = mat.shape[:2]       # image shape has 2 (grayscale) or 3 (color) dimensions
    image_center = (width/2, height/2)  # getRotationMatrix2D needs the center as (x, y), i.e. (width, height) order, the reverse of shape

    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)

    # the rotation matrix contains cos and sin of the angle; take their absolute values
    abs_cos = abs(rotation_mat[0, 0])
    abs_sin = abs(rotation_mat[0, 1])

    # find the new width and height bounds
    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)
    print(f'Bounds w = {bound_w} Bound H = {bound_h}')

    # subtract the old image center (bringing the image back to the origin) and add the new center coordinates
    rotation_mat[0, 2] += bound_w/2 - image_center[0]
    rotation_mat[1, 2] += bound_h/2 - image_center[1]

    # rotate the image with the new bounds and the translated rotation matrix
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat
left_img = cv2.imread(r"F:\Basler\1595525164.242553_l.tiff",0)
cent_img = cv2.imread(r"F:\Basler\1595525164.242553_c.tiff",0)
rigt_img = cv2.imread(r"F:\Basler\1595525164.242553_r.tiff",0)
print(f'Shape = {rigt_img.shape} is {len(rigt_img.shape)}')
angle = 180
left_rot = rotate_image(left_img, angle)
cent_rot = rotate_image(cent_img, angle)
rigt_rot = rotate_image(rigt_img, angle)
'''
left_rot = ndimage.rotate(left_img, angle)
cent_rot = ndimage.rotate(cent_img, angle)
rigt_rot = ndimage.rotate(rigt_img, angle)
THIS SEEMS THE FASTEST
left_rot = np.rot90(left_img,2)
cent_rot = np.rot90(cent_img,2)
rigt_rot = np.rot90(rigt_img,2)
'''
#lane_img = np.concatenate((left_rot, cent_rot, rigt_rot), axis=1)
lane_img = cv2.hconcat([left_rot, cent_rot, rigt_rot])
print(f'Size = {lane_img.shape}')
cv2.imwrite(r'C:\Users\Cary\Desktop\Junk\lane1.tiff', lane_img)
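For reference, here is a rough sketch of how the three approaches could be timed (it reuses rotate_image() from above; the frame is random data standing in for one 1920 wide x 1200 high grayscale image, so the numbers will differ from my real files):

import timeit
import numpy as np
from scipy import ndimage

# hypothetical stand-in for one 1920 wide x 1200 high grayscale frame
img = np.random.randint(0, 256, (1200, 1920), dtype=np.uint8)

runs = 20
t_cv = timeit.timeit(lambda: rotate_image(img, 180), number=runs)    # cv2.warpAffine via rotate_image() above
t_sp = timeit.timeit(lambda: ndimage.rotate(img, 180), number=runs)  # scipy
t_np = timeit.timeit(lambda: np.rot90(img, 2).copy(), number=runs)   # rot90 returns a view, copy() forces the data move

print(f'warpAffine     : {1000 * t_cv / runs:.2f} ms per image')
print(f'ndimage.rotate : {1000 * t_sp / runs:.2f} ms per image')
print(f'np.rot90       : {1000 * t_np / runs:.2f} ms per image')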
Upvotes: 1
Views: 1319
Reputation: 3437
For rotations of multiples of 90 deg, it's always faster and safer to use numpy.rot90() or numpy.flip().
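For a 180 deg rotation in particular, any of the following give the same result (a small sketch on a toy array, not the original images):

import numpy as np

img = np.arange(12, dtype=np.uint8).reshape(3, 4)   # toy 3x4 "image"

rot_a = np.rot90(img, 2)            # rotate by 90 deg, twice
rot_b = np.flip(img, axis=(0, 1))   # flip rows and columns
rot_c = img[::-1, ::-1]             # plain slicing, same thing

assert np.array_equal(rot_a, rot_b) and np.array_equal(rot_a, rot_c)
# All three return views into img; call .copy() if an independent,
# contiguous copy is needed downstream.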
Nevertheless, the rotate_image() function suffers from a common error found in many image rotation recipes.
The problem is the calculation of the image center. Imagine a small image of 3 columns by 2 rows. Your code uses:
>>> rows = 2
>>> cols = 3
>>> cols/2, rows/2
(1.5, 1.0)
But columns are (0, 1, 2), so the central column must be 1, and rows are (0, 1) so the central "row" must be 0.5:
>>> (cols-1)/2, (rows-1)/2
(1.0, 0.5)
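A tiny sketch of the consequence: rotating a 2x4 all-white image by 180 deg around (width/2, height/2) shifts the content by one pixel, so the first row and first column come back black, which is the same kind of column that shows up between your concatenated frames. With ((width-1)/2, (height-1)/2) there is no shift:

import cv2
import numpy as np

img = np.full((2, 4), 255, dtype=np.uint8)   # tiny all-white test image
h, w = img.shape

# center taken as (width/2, height/2): half a pixel off the true pixel-grid center
M_off = cv2.getRotationMatrix2D((w / 2, h / 2), 180, 1.0)
print(cv2.warpAffine(img, M_off, (w, h)))
# [[  0   0   0   0]
#  [  0 255 255 255]]   <- content shifted, first row/column filled with border black

# center taken as ((width-1)/2, (height-1)/2): the true center of the pixel grid
M_ok = cv2.getRotationMatrix2D(((w - 1) / 2, (h - 1) / 2), 180, 1.0)
print(cv2.warpAffine(img, M_ok, (w, h)))
# [[255 255 255 255]
#  [255 255 255 255]]   <- no shift, no black border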
Using your original code on the following images:
>>> rows, cols = 10, 15
>>> left_img = np.full((rows, cols), 200, dtype=np.uint8)
>>> cent_img = np.full((rows, cols), 150, dtype=np.uint8)
>>> rigt_img = np.full((rows, cols), 100, dtype=np.uint8)
you get:
This is your code, updated with the proper center calculations:
def rotate_image(mat, angle):
    height, width = mat.shape[:2]
    image_center = (width - 1) / 2, (height - 1) / 2    # <<<=========
    rotation_mat = cv2.getRotationMatrix2D(image_center, angle, 1.0)
    abs_cos = abs(rotation_mat[0, 0])
    abs_sin = abs(rotation_mat[0, 1])
    bound_w = int(height * abs_sin + width * abs_cos)
    bound_h = int(height * abs_cos + width * abs_sin)
    new_center = (bound_w - 1) / 2, (bound_h - 1) / 2    # <<<=========
    rotation_mat[0, 2] += new_center[0] - image_center[0]
    rotation_mat[1, 2] += new_center[1] - image_center[1]
    rotated_mat = cv2.warpAffine(mat, rotation_mat, (bound_w, bound_h))
    return rotated_mat
Applied to the same images, now you get:
The problem is less noticeable with large images when the angle is not a multiple of 90 deg, but it is still there. The following is an example of your original code with a rotation of 45 deg:
Using the proper center calculation, opposite corners are really symmetrical:
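As a quick check (a sketch using synthetic frames in place of the real TIFFs and the corrected rotate_image() above), hconcat of the rotated frames no longer contains an all-black column:

import cv2
import numpy as np

# synthetic stand-ins for the three 1920 x 1200 camera frames
frames = [np.full((1200, 1920), v, dtype=np.uint8) for v in (200, 150, 100)]

rotated = [rotate_image(f, 180) for f in frames]   # corrected version above
lane = cv2.hconcat(rotated)

print(lane.shape)                       # (1200, 5760)
print((lane == 0).all(axis=0).any())    # False: no all-black column left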
Upvotes: 1
Reputation: 1239
The black line can be removed by adding one extra pixel on each side of the image prior to rotation, using copyMakeBorder:
after_mat = cv2.copyMakeBorder(
    mat,
    top=1,
    bottom=1,
    left=1,
    right=1,
    borderType=cv2.BORDER_REFLECT
)

# rotate image with the new bounds and translated rotation matrix
rotated_mat = cv2.warpAffine(after_mat, rotation_mat, (bound_w, bound_h))
I don't know the cause of the additional line (maybe a shift due to rotation?), but the code above suppresses it, hopefully without side effects.
Upvotes: 0