Ilya.K.

Reputation: 321

How to make a centralized affine transform in Python like in MATLAB

How can I apply a transformation with centralization in Python, like imtransform in MATLAB (see its exact semantics; they are actually relevant)?

For example in matlab: for this tform:

tform = maketform('affine',[1 0 0; -1 1 0; 0 0 1]);

I get:

[MATLAB output image]

and in Python, with a wide variety of affine transformation methods (Pillow, OpenCV, skimage, etc.), I get it non-centralized and cut off:

[Python output image]

How can I choose my 3x3 tform matrix for the Python libraries such that it will centralize the image after such skewing?

Upvotes: 1

Views: 1018

Answers (1)

Rotem

Reputation: 32124

MATLAB's default behavior is to expand and centralize the output image, but this behavior is unique to MATLAB.

There might be a Python equivalent that I am not aware of, but I would like to focus on an OpenCV solution.
In OpenCV, you need to compute the coefficients of the transformation matrix and the size of the output image in order to get the same result as in MATLAB.

Consider the following implementation details:

  • In OpenCV, the transformation matrix is transposed relative to MATLAB (a different convention; see the short sketch after this list).
  • In Python the first index is [0, 0] and in MATLAB [1, 1].
  • You need to compute the dimensions (width and height) of the output image in advance.
    The output dimensions must include the entire transformed image (all corners of the transformed image should fall inside the output image).
    My suggestion is to transform the four corners and compute max_x - min_x and max_y - min_y of the transformed corners.
  • For centralizing the output, you need to compute the translation coefficients (the last column of the OpenCV matrix).
    Assume the source center is transformed (shifted) to the destination center.
    To compute the translation, you may use the inverse transformation and compute the shift (in pixels) from the source center to the destination center.
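
As a small illustration of the first point above (this sketch is not part of the original answer), the MATLAB matrix from the question and its transposed OpenCV counterpart map a point to the same place once the row-vector vs. column-vector convention is taken into account:

import numpy as np

# MATLAB (maketform) convention: row vector times matrix -> [u v 1] = [x y 1] * T
T_matlab = np.float32([[ 1, 0, 0],
                       [-1, 1, 0],
                       [ 0, 0, 1]])

# OpenCV convention: matrix times column vector -> [u; v; 1] = M * [x; y; 1]
M_opencv = T_matlab.T

x, y = 100.0, 50.0
uv_matlab = np.float32([x, y, 1]) @ T_matlab   # row-vector convention
uv_opencv = M_opencv @ np.float32([x, y, 1])   # column-vector convention
print(uv_matlab[:2], uv_opencv[:2])            # both give (x-y, y) = (50, 50)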

Here is a Python code sample (using OpenCV):

import numpy as np
import cv2

# Read input image
src_im = cv2.imread('peppers.png')

# Build a transformation matrix (the transformation matrix is transposed relative to MATLAB)
t = np.float32([[1, -1, 0],
                [0,  1, 0],
                [0,  0, 1]])

# Use only first two rows (affine transformation assumes last row is [0, 0, 1])
#trans = np.float32([[1, -1, 0],
#                    [0,  1, 0]])
trans = t[0:2, :]

inv_t = np.linalg.inv(t)
inv_trans = inv_t[0:2, :]

# Get the source image dimensions (height and width)
h, w = src_im.shape[:2]

# Transform the 4 corners of the input image
src_pts = np.float32([[0, 0], [w-1, 0], [0, h-1], [w-1, h-1]]) # https://stackoverflow.com/questions/44378098/trouble-getting-cv-transform-to-work (see comment).
dst_pts = cv2.transform(np.array([src_pts]), trans)[0]

min_x, max_x = np.min(dst_pts[:, 0]), np.max(dst_pts[:, 0])
min_y, max_y = np.min(dst_pts[:, 1]), np.max(dst_pts[:, 1])

# Destination matrix width and height
dst_w = int(max_x - min_x + 1) # 895
dst_h = int(max_y - min_y + 1) # 384

# Inverse-transform the center of the destination image to get the corresponding coordinate in the source image.
dst_center = np.float32([[(dst_w-1.0)/2, (dst_h-1.0)/2]])
src_projected_center = cv2.transform(np.array([dst_center]), inv_trans)[0]

# Compute the translation of the center - assume source center goes to destination center
translation = src_projected_center - np.float32([[(w-1.0)/2, (h-1.0)/2]])

# Place the translation in the third column of trans
trans[:, 2] = translation

# Transform
dst_im = cv2.warpAffine(src_im, trans, (dst_w, dst_h))

# Show dst_im as output
cv2.imshow('dst_im', dst_im)
cv2.waitKey()
cv2.destroyAllWindows()

# Store output for testing
cv2.imwrite('dst_im.png', dst_im)
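
For reference, here is a roughly equivalent sketch using scikit-image (one of the libraries mentioned in the question). This is my own assumption about the equivalent API usage, not part of the OpenCV-based answer above, and interpolation details may cause small pixel differences:

import numpy as np
from skimage import io, transform

src_im = io.imread('peppers.png')
h, w = src_im.shape[:2]

# Same forward matrix as the OpenCV "t" above ((x, y) coordinates, column-vector convention)
t = np.float64([[1, -1, 0],
                [0,  1, 0],
                [0,  0, 1]])
tform = transform.AffineTransform(matrix=t)

# Transform the four corners to get the output size
corners = np.float64([[0, 0], [w-1, 0], [0, h-1], [w-1, h-1]])
dst_pts = tform(corners)
dst_w = int(dst_pts[:, 0].max() - dst_pts[:, 0].min() + 1)  # 895 for peppers.png
dst_h = int(dst_pts[:, 1].max() - dst_pts[:, 1].min() + 1)  # 384 for peppers.png

# Translation that sends the transformed source center to the destination center
src_center = np.float64([[(w-1.0)/2, (h-1.0)/2]])
dst_center = np.float64([(dst_w-1.0)/2, (dst_h-1.0)/2])
shift = dst_center - tform(src_center)[0]
tform_centered = tform + transform.AffineTransform(translation=shift)

# warp expects the inverse mapping (output coordinates -> input coordinates)
dst_im = transform.warp(src_im, tform_centered.inverse,
                        output_shape=(dst_h, dst_w), preserve_range=True).astype(np.uint8)

io.imsave('dst_im_skimage.png', dst_im)  # hypothetical output file name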

MATLAB code for comparing results:

I = imread('peppers.png');

tform = maketform('affine',[1 0 0; -1 1 0; 0 0 1]);
J = imtransform(I, tform);
figure;imshow(J)

% MATLAB recommends using affine2d and imwarp instead of maketform and imtransform.
% tform = affine2d([1 0 0; -1 1 0; 0 0 1]);
% J = imwarp(I, tform);
% figure;imshow(J)

pyJ = imread('dst_im.png');
figure;imagesc(double(rgb2gray(J)) - double(rgb2gray(pyJ)));
title('MATLAB - Python Diff');impixelinfo
max_abs_diff = max(imabsdiff(J(:), pyJ(:)));
disp(['max_abs_diff = ', num2str(max_abs_diff)])

We are lucky to get a zero difference here; the result of imwarp in MATLAB shows minor differences, but the imtransform result is identical to the OpenCV one.


Python output image (same as the MATLAB output image):

[Python output image]

Upvotes: 2
