Reputation: 11733
I have the cameraMatrix and the distCoeff needed to undistort an image or a vector of points. Now I'd like to distort them back.
Is it possible with OpenCV?
I remember reading something about it on Stack Overflow, but I cannot find it now.
EDIT: I found a way to do it in this answer. It is also in the OpenCV developer zone (in this issue).
But my results are not quite correct: there is an error of roughly 2-4 pixels. Probably there is something wrong in my code, because in the answer I linked everything looks correct in the unit tests. Maybe it is the type casting from float to double, or something else that I cannot see.
Here is my test case:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
void distortPoints(const std::vector<cv::Point2d> & src, std::vector<cv::Point2d> & dst,
                   const cv::Mat & cameraMatrix, const cv::Mat & distorsionMatrix)
{
  dst.clear();
  double fx = cameraMatrix.at<double>(0, 0);
  double fy = cameraMatrix.at<double>(1, 1);
  double ux = cameraMatrix.at<double>(0, 2);
  double uy = cameraMatrix.at<double>(1, 2);
  double k1 = distorsionMatrix.at<double>(0);
  double k2 = distorsionMatrix.at<double>(1);
  double p1 = distorsionMatrix.at<double>(2);
  double p2 = distorsionMatrix.at<double>(3);
  double k3 = distorsionMatrix.at<double>(4);
  for (unsigned int i = 0; i < src.size(); i++)
  {
    const cv::Point2d & p = src[i];
    double x = p.x;
    double y = p.y;
    double xCorrected, yCorrected;
    // Step 1 : apply distortion
    {
      double r2 = x * x + y * y;
      // radial distortion
      xCorrected = x * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2);
      yCorrected = y * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2);
      // tangential distortion
      // The "Learning OpenCV" book is wrong here!
      // Incorrect equations from the "Learning OpenCV" book below:
      // xCorrected = xCorrected + (2. * p1 * y + p2 * (r2 + 2. * x * x));
      // yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x);
      // Correct formulae found at: http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html
      xCorrected = xCorrected + (2. * p1 * x * y + p2 * (r2 + 2. * x * x));
      yCorrected = yCorrected + (p1 * (r2 + 2. * y * y) + 2. * p2 * x * y);
    }
    // Step 2 : ideal coordinates => actual coordinates
    {
      xCorrected = xCorrected * fx + ux;
      yCorrected = yCorrected * fy + uy;
    }
    dst.push_back(cv::Point2d(xCorrected, yCorrected));
  }
}
int main(int /*argc*/, char** /*argv*/) {
  cout << "OpenCV version: " << CV_MAJOR_VERSION << " " << CV_MINOR_VERSION << endl; // 2 4
  Mat cameraMatrix = (Mat_<double>(3, 3) << 1600, 0, 789, 0, 1600, 650, 0, 0, 1);
  Mat distorsion = (Mat_<double>(5, 1) << -0.48, 0, 0, 0, 0);
  cout << "camera matrix: " << cameraMatrix << endl;
  cout << "distorsion coefficient: " << distorsion << endl;
  // the starting points
  std::vector<Point2f> original_pts;
  original_pts.push_back(Point2f(23, 358));
  original_pts.push_back(Point2f(8, 357));
  original_pts.push_back(Point2f(12, 342));
  original_pts.push_back(Point2f(27, 343));
  original_pts.push_back(Point2f(7, 350));
  original_pts.push_back(Point2f(-8, 349));
  original_pts.push_back(Point2f(-4, 333));
  original_pts.push_back(Point2f(12, 334));
  Mat original_m = Mat(original_pts);
  // undistort
  Mat undistorted_m;
  undistortPoints(original_m, undistorted_m, cameraMatrix, distorsion);
  cout << "undistort points" << undistorted_m << endl;
  // back to array
  vector<cv::Point2d> undistorted_points;
  for (int i = 0; i < (int)original_pts.size(); ++i) {
    Point2d p;
    p.x = undistorted_m.at<float>(i, 0);
    p.y = undistorted_m.at<float>(i, 1);
    undistorted_points.push_back(p);
    // NOTE THAT HERE THERE IS AN APPROXIMATION
    // WHAT IS IT? STD::COUT? CASTING TO FLOAT?
    cout << undistorted_points[i] << endl;
  }
  vector<cv::Point2d> redistorted_points;
  distortPoints(undistorted_points, redistorted_points, cameraMatrix, distorsion);
  cout << redistorted_points << endl;
  for (int i = 0; i < (int)original_pts.size(); ++i) {
    cout << original_pts[i] << endl;
    cout << redistorted_points[i] << endl;
    Point2d o;
    o.x = original_pts[i].x;
    o.y = original_pts[i].y;
    Point2d dist = redistorted_points[i] - o;
    double norm = sqrt(dist.dot(dist));
    std::cout << "distance = " << norm << std::endl;
    cout << endl;
  }
  return 0;
}
And here is my output:
OpenCV version: 2 4
camera matrix: [1600, 0, 789;
0, 1600, 650;
0, 0, 1]
distorsion coefficient: [-0.48; 0; 0; 0; 0]
undistort points[-0.59175861, -0.22557901; -0.61276215, -0.22988389; -0.61078846, -0.24211435; -0.58972651, -0.23759322; -0.61597037, -0.23630577; -0.63910204, -0.24136727; -0.63765121, -0.25489968; -0.61291695, -0.24926868]
[-0.591759, -0.225579]
[-0.612762, -0.229884]
[-0.610788, -0.242114]
[-0.589727, -0.237593]
[-0.61597, -0.236306]
[-0.639102, -0.241367]
[-0.637651, -0.2549]
[-0.612917, -0.249269]
[24.45809095301274, 358.5558144841519; 10.15042938413364, 357.806737955385; 14.23419751024494, 342.8856229036298; 28.51642501095819, 343.610956960508; 9.353743900129871, 350.9029663678638; -4.488033489615646, 350.326357275197; -0.3050714463695385, 334.477016554487; 14.41516474594289, 334.9822130217053]
[23, 358]
[24.4581, 358.556]
distance = 1.56044
[8, 357]
[10.1504, 357.807]
distance = 2.29677
[12, 342]
[14.2342, 342.886]
distance = 2.40332
[27, 343]
[28.5164, 343.611]
distance = 1.63487
[7, 350]
[9.35374, 350.903]
distance = 2.521
[-8, 349]
[-4.48803, 350.326]
distance = 3.75408
[-4, 333]
[-0.305071, 334.477]
distance = 3.97921
[12, 334]
[14.4152, 334.982]
distance = 2.60725
Upvotes: 15
Views: 25772
Reputation: 41
There are a few points I found when I tried to redistort points using the tips from this topic:
xCorrected = x * (1. + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2); // multiply by r2 one more time after k3 (and likewise in yCorrected)
// To relative coordinates <- this is the step you are missing
The code in this question already uses relative coordinates! It is a trick of the OpenCV undistortPoints function: it takes a new intrinsic matrix as its 6th argument. If that argument is None, the function returns the points in relative (normalized) coordinates. And this is why the original code in the question has this step:
//Step 2 : ideal coordinates => actual coordinates
xCorrected = xCorrected * fx + ux;
yCorrected = yCorrected * fy + uy;
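For illustration, a minimal sketch of both behaviours in C++ (reusing the question's cameraMatrix and distorsion; the 6th parameter here is P, the new projection matrix):
std::vector<cv::Point2f> pts = { cv::Point2f(23.f, 358.f) };
std::vector<cv::Point2f> relative, pixels;
// Without P: the output is in relative (normalized) coordinates.
cv::undistortPoints(pts, relative, cameraMatrix, distorsion);
// With P = cameraMatrix: the output is in pixel coordinates again.
cv::undistortPoints(pts, pixels, cameraMatrix, distorsion, cv::noArray(), cameraMatrix);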
When I started to study this question, I had the same impression: that these equations undistort points rather than distort them.
The OpenCV tutorial and the documentation use different names for these operations, which adds to the confusion.
So let me clear it up: the distortion operation can be represented as closed-form equations in the various distortion models, but undistortion is only possible through a numerical, iterative algorithm. There is no analytical solution representing undistortion as equations (because of the 6th-order term in the radial part and the nonlinearity).
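To make "numerical iteration" concrete, here is a minimal sketch of the fixed-point scheme that, as far as I understand, undistortPoints runs internally (an assumption on my part; it covers the 5-coefficient model and operates on normalized coordinates):
cv::Point2d undistortIterative(const cv::Point2d & distorted,
                               double k1, double k2, double p1, double p2, double k3)
{
  double x = distorted.x, y = distorted.y; // initial guess: the distorted point itself
  for (int iter = 0; iter < 5; ++iter)     // a handful of iterations usually suffices
  {
    double r2 = x * x + y * y;
    double icdist = 1.0 / (1.0 + r2 * (k1 + r2 * (k2 + r2 * k3)));
    double dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x);
    double dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y;
    x = (distorted.x - dx) * icdist;
    y = (distorted.y - dy) * icdist;
  }
  return cv::Point2d(x, y);
}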
Upvotes: 4
Reputation: 84
OpenCV (version 4.7 and later) now provides a function for this: cv::initInverseRectificationMap.
Make sure your maps are of type CV_16SC2; if they aren't, convert them using cv::convertMaps:
cv::Mat map1_fixed, map2_fixed;
cv::convertMaps(mapX, mapY, map1_fixed, map2_fixed, CV_16SC2, false);
With this, you can use map1_fixed to get the position in the original image corresponding to a position in the remapped image:
// Now, to find the original position of a pixel from a remapped image, you can do the following:
cv::Vec2s pixelOriginal = map1_fixed.at<cv::Vec2s>(row_from_RemappedImage, col_from_RemappedImage);
// pixelOriginal[0] is the x-coordinate, and pixelOriginal[1] is the y-coordinate in the original (distorted) image.
Then you can use initInverseRectificationMap to get coordinates from the original image to the remapped image:
cv::Mat invMapX, invMapY;
cv::initInverseRectificationMap(k, DistCoeff, R, P, cv::Size(Ncols, Nrows), CV_16SC2, invMapX, invMapY);
// Now, to find the remapped position of a pixel from an original image, you can do the following:
cv::Vec2s pixelRemapped = invMapX.at<cv::Vec2s>(row_from_OriginalImage, col_from_OriginalImage);
That's it. It has an error of about 1 pixel, I think.
Upvotes: 0
Reputation: 2371
Here is an implementation based on cv2.projectPoints; the source for interpolate_missing_pixels can be found here.
import numpy as np
import cv2

def distort(
        image: np.ndarray,
        cam_matrix: np.ndarray,
        dist_coefs,
        **kwargs
) -> np.ndarray:
    """
    Applies lens distortion to an image.
    :param image: input image
    :param cam_matrix: camera intrinsic matrix
    :param dist_coefs: distortion coefficients in OpenCV format
    :param kwargs: passed on to `interpolate_missing_pixels`
    """
    rvec_tvec = np.zeros(3)  # identity rotation and zero translation
    h, w = image.shape[:2]
    # pixel grid of the source image, in homogeneous coordinates
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    coords_pix_orig = np.array([cols.ravel(), rows.ravel(), np.ones(h * w)])
    # back-project to normalized camera coordinates
    coords_cam = np.linalg.inv(cam_matrix) @ coords_pix_orig
    coords_cam = coords_cam.astype(np.float32)
    # re-project with the distortion applied
    coords_pix_dist, _ = cv2.projectPoints(
        coords_cam, rvec_tvec, rvec_tvec, cam_matrix, dist_coefs)
    coords_pix_dist = coords_pix_dist.astype(int).squeeze(1)
    # keep only the points that land inside the image
    in_image = \
        (coords_pix_dist[:, 0] >= 0) & (coords_pix_dist[:, 0] < w) & \
        (coords_pix_dist[:, 1] >= 0) & (coords_pix_dist[:, 1] < h)
    orig_r = coords_pix_orig[1][in_image].astype(int)
    orig_c = coords_pix_orig[0][in_image].astype(int)
    dist_r = coords_pix_dist[:, 1][in_image]
    dist_c = coords_pix_dist[:, 0][in_image]
    dist_image = np.zeros_like(image)
    dist_image[dist_r, dist_c] = image[orig_r, orig_c]
    # fill the holes left by the forward mapping
    missing_vals_mask = np.ones((h, w), dtype=bool)
    missing_vals_mask[dist_r, dist_c] = False
    interp_dist_image = interpolate_missing_pixels(
        dist_image, missing_vals_mask, **kwargs
    )
    return interp_dist_image
Upvotes: 0
Reputation: 315
This answer is for new searchers who are looking to distort undistorted 3D points back to 2D points.
For that case, the following worked well for me (guided by @Joan Charmant's answer).
// Assumes the calibration parameters (dist_k1, dist_k2, dist_k3, dist_p1,
// dist_p2, camera_fx, camera_fy, camera_cx, camera_cy) are available as
// globals or class members.
void distortBack(const cv::Point3f & undistorted, cv::Point2f & distorted_back)
{
    // To relative (normalized) coordinates.
    double temp_x = undistorted.x / undistorted.z;
    double temp_y = undistorted.y / undistorted.z;
    double r2 = temp_x * temp_x + temp_y * temp_y;
    // Radial distortion
    double xDistort = temp_x * (1 + dist_k1 * r2 + dist_k2 * r2 * r2 + dist_k3 * r2 * r2 * r2);
    double yDistort = temp_y * (1 + dist_k1 * r2 + dist_k2 * r2 * r2 + dist_k3 * r2 * r2 * r2);
    // Tangential distortion
    xDistort = xDistort + (2 * dist_p1 * temp_x * temp_y + dist_p2 * (r2 + 2 * temp_x * temp_x));
    yDistort = yDistort + (dist_p1 * (r2 + 2 * temp_y * temp_y) + 2 * dist_p2 * temp_x * temp_y);
    // Back to absolute (pixel) coordinates.
    distorted_back.x = xDistort * camera_fx + camera_cx;
    distorted_back.y = yDistort * camera_fy + camera_cy;
}
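A hypothetical call (the point values here are made up) would look like:
cv::Point3f undistorted(0.1f, -0.2f, 1.0f); // z = 1 for a point on the normalized image plane
cv::Point2f pixel;
distortBack(undistorted, pixel);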
For additional details, see cv::undistortPoints() and the OpenCV documentation (opencv/doc/v4.3.0).
Upvotes: 1
Reputation: 11
Another way is to use remap to project the rectified image back to the distorted image:
img_distored = cv2.remap(img_rect, mapx, mapy, cv2.INTER_LINEAR)
mapx and mapy encode, for each pixel location in the distorted output image, the position of the corresponding pixel in the rectified image (this is the convention cv2.remap expects). They can be obtained with the steps below:
X, Y = np.meshgrid(range(w), range(h))
pnts_distorted = np.stack([X, Y], axis=-1).reshape(h*w, 1, 2).astype(np.float32)
pnts_rectified = cv2.undistortPoints(pnts_distorted, cameraMatrix, distort, R=rotation, P=pose)
mapx = pnts_rectified[:, :, 0].reshape(h, w).astype(np.float32)
mapy = pnts_rectified[:, :, 1].reshape(h, w).astype(np.float32)
cameraMatrix, distort, rotation, and pose are the parameters returned by the OpenCV calibration and stereoRectify functions.
Upvotes: 0
Reputation: 15032
There is no analytical solution to this problem: once you distort the coordinates, there is no way back, at least not analytically with this specific model. It is in the nature of the radial distortion model: the way it is defined allows distorting in a simple analytical fashion, but not vice versa. In order to do so, one has to solve a 7th-degree polynomial, for which it is proven that there is no analytical solution.
However, the radial camera model is not special or sacred in any way; it is just a simple rule that stretches pixels outwards or inwards relative to the optical center, depending on the lens you took your picture with. The closer a pixel is to the optical center, the less distortion it receives. There is a multitude of other ways to define a radial distortion model that could yield not only a similar quality of distortion but also provide a simple way to define its inverse. But going this way means you would need to find the optimal parameters of such a model for your camera yourself.
For instance, in my specific case I've found that a simple sigmoid function (offset and scaled) is capable of approximating my existing radial model parameters with an integral MSE of 1e-06 or less, even though the comparison between the models may seem pointless. I don't think the native radial model yields better values, and it must not be treated as the reference. Physical lens geometry may vary in ways that neither model can represent, and to better approximate the lens geometry a mesh-like approach should be used. However, I'm impressed by the approximated model because it uses only one free parameter and provides a notably accurate result, which makes me wonder which model is actually better for the job.
Here's a plot of the original radial model (red) and its sigmoid approximation (green), along with their derivatives (blue lines):
So the distortion / undistortion functions in my case looked like this:
distort = (r, alpha) -> 2/(1 + exp(-alpha*r)) - 1
undistort = (d, alpha) -> ln((1 + d)/(1 - d))/alpha
(Note that the distortion is performed in polar coordinates around the optical center and affects only the distance from the optical center, not the angle itself; r is the distance from the optical center, and alpha is a free parameter that needs to be estimated.)
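For what it's worth, a short derivation of the inverse, assuming the sigmoid form above:
d = 2/(1 + exp(-alpha*r)) - 1 = tanh(alpha*r/2)
alpha*r/2 = artanh(d) = (1/2) * ln((1 + d)/(1 - d))
r = ln((1 + d)/(1 - d))/alpha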
Here's how the distortion looks compared to the native radial distortion (green is the approximation, red is the native radial distortion):
And here's what the inverse mapping of pixels looks like if we take a regular pixel grid and undistort it:
Upvotes: 0
Reputation: 21
The OpenCV camera model (see http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) describes how a 3D point first maps to an imaginary ideal pinhole camera coordinate and is then "distorted" so that it models the image of the actual real-world camera.
Using the OpenCV distortion coefficients (i.e. the Brown distortion coefficients), the following two operations are simple to calculate:
cv::undistort(....), or alternatively a combination of cv::initUndistortRectifyMap(....) and cv::remap(....).
However, the following operation is computationally much more complex:
cv::undistortPoints(....).
This may sound counterintuitive. Here is the more detailed explanation:
For a given pixel coordinate in the distortion-free image, it is easy to calculate the corresponding coordinate in the original image (i.e. to "distort" the coordinate):
x = (u - cx) / fx; // u and v are distortion-free
y = (v - cy) / fy;
rr = x*x + y*y;
distortion = 1 + rr * (k1 + rr * (k2 + rr * k3)); // tangential parameters omitted for clarity
u_ = fx * distortion * x + cx;
v_ = fy * distortion * y + cy;
// u_ and v_ are coordinates in the original camera image
Doing it the other way round is much more difficult; basically, one would need to combine all the code lines above into one big vector equation and solve it for u and v. I think for the general case, where all 5 distortion coefficients are used, it can only be done numerically, which is (without having looked at the code) probably what cv::undistortPoints(....) does.
However, using the distortion coefficients we can calculate an undistortion map (cv::initUndistortRectifyMap(....)) which maps from the distortion-free image coordinates to the original camera image coordinates. Each entry in the undistortion map contains a (floating-point) pixel position in the original camera image. In other words, the undistortion map points from the distortion-free image into the original camera image, so it is calculated by exactly the formula above.
The map can then be applied to get the new distortion-free image from the original (cv::remap(....)). cv::undistort() does this without the explicit calculation of the undistortion map.
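A minimal sketch of that equivalence (image, cameraMatrix, and distCoeffs are assumed to already exist):
cv::Mat map1, map2, undistorted;
cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
                            cameraMatrix, image.size(), CV_32FC1, map1, map2);
cv::remap(image, undistorted, map1, map2, cv::INTER_LINEAR);
// Equivalent, without the explicit map:
// cv::undistort(image, undistorted, cameraMatrix, distCoeffs);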
Upvotes: 2
Reputation: 198
If you multiply all the distortion coefficients by -1, you can then pass them to undistort or undistortPoints; basically, you will apply the inverse distortion, which will bring the distortion back.
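A rough sketch of this trick (variable names are assumptions, and note it is only an approximation, since the distortion is not linear in the coefficients):
cv::Mat invertedCoeffs = -1.0 * distCoeffs; // flips k1, k2, p1, p2, k3
cv::Mat redistorted;
cv::undistort(undistortedImage, redistorted, cameraMatrix, invertedCoeffs);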
Upvotes: 4
Reputation: 665
You can easily distort your points back using cv::projectPoints.
cv::Mat rVec(3, 1, cv::DataType<double>::type); // Rotation vector
rVec.at<double>(0) = 0;
rVec.at<double>(1) = 0;
rVec.at<double>(2) = 0;
cv::Mat tVec(3, 1, cv::DataType<double>::type); // Translation vector
tVec.at<double>(0) = 0;
tVec.at<double>(1) = 0;
tVec.at<double>(2) = 0;
cv::projectPoints(points, rVec, tVec, cameraMatrix, distCoeffs, result);
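One caveat worth noting: projectPoints expects 3D object points, so if your input comes from undistortPoints (which returns normalized 2D coordinates), lift it to z = 1 first. A minimal sketch, where undistorted_points is assumed to be the output of undistortPoints:
std::vector<cv::Point3f> points;
for (const auto & p : undistorted_points)
    points.emplace_back((float)p.x, (float)p.y, 1.f);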
PS: in OpenCV 3 they added a function for distorting points.
Upvotes: 7
Reputation: 2022
The initUndistortRectifyMap linked in one of the answers of the question you mention does indeed do what you want. Since it is used in remap to build the full undistorted image, it gives, for each location in the destination image (undistorted), where to find the corresponding pixel in the distorted image, so its color can be used. So it's really an f(undistorted) = distorted map.
However, using this map will only allow input positions that are integer and within the image rectangle. Thankfully, the documentation gives the full equations.
It is mostly what you have, except that there is a preliminary step you are missing. Here is my version (it is C#, but it should read the same):
public PointF Distort(PointF point)
{
    // To relative coordinates <- this is the step you are missing.
    double x = (point.X - cx) / fx;
    double y = (point.Y - cy) / fy;
    double r2 = x * x + y * y;
    // Radial distortion
    double xDistort = x * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);
    double yDistort = y * (1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2);
    // Tangential distortion
    xDistort = xDistort + (2 * p1 * x * y + p2 * (r2 + 2 * x * x));
    yDistort = yDistort + (p1 * (r2 + 2 * y * y) + 2 * p2 * x * y);
    // Back to absolute coordinates.
    xDistort = xDistort * fx + cx;
    yDistort = yDistort * fy + cy;
    return new PointF((float)xDistort, (float)yDistort);
}
Upvotes: 16