RABİA SENA UYSAL

Reputation: 11

Calculating rotation matrix and translation vector

I have some camera parameters, and from them I can get the intrinsic parameters matrix, but I don't know how to calculate the extrinsic parameters without knowing the real-world coordinates. I want to obtain the projection matrix by calculating R and t. I will use this matrix in a visual odometry project.


I tried the code for visual odometry on the KITTI dataset; with the t vector set to 0 the results on KITTI were quite accurate, but on my own dataset the output is badly tangled, so I think the problem is in the parameters.

Here are the parameters I have:

cameraParameters with properties:

Camera Intrinsics
IntrinsicMatrix: [3×3 double]
FocalLength: [1.4133e+03 1.4188e+03]
PrincipalPoint: [950.0639 543.3796]
Skew: 0
RadialDistortion: [-0.0091 0.0666]
TangentialDistortion: [0 0]
ImageSize: [1080 1920]

Camera Extrinsics
RotationMatrices: [3×3×33 double]
TranslationVectors: [33×3 double]

Accuracy of Estimation
MeanReprojectionError: 0.6450
ReprojectionErrors: [80×2×33 double]
ReprojectedPoints: [80×2×33 double]

Calibration Settings
NumPatterns: 33
WorldPoints: [80×2 double]
WorldUnits: 'millimeters'
EstimateSkew: 0
NumRadialDistortionCoefficients: 2
EstimateTangentialDistortion: 0
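
Note, as an aside, that the Camera Extrinsics listed above already hold one R and t per calibration pattern, expressed relative to the checkerboard rather than to the driven trajectory. As a minimal sketch (assuming cameraParams is the object printed above, and picking pattern index i purely as an example), a projection matrix for a single pattern can be assembled with cameraMatrix:

i = 1;                                    % index of one of the 33 calibration patterns (example only)
R = cameraParams.RotationMatrices(:,:,i); % 3x3 rotation of the camera relative to the checkerboard
t = cameraParams.TranslationVectors(i,:); % 1x3 translation, in millimeters
P = cameraMatrix(cameraParams,R,t);       % 4x3 projection matrix, equal to [R; t]*IntrinsicMatrix

These poses only relate the camera to the calibration board, so by themselves they do not give the frame-to-frame motion needed for visual odometry.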

Upvotes: 1

Views: 148

Answers (1)

John Bofarull Guix

Reputation: 820

  1. You are right: to proceed you need a coordinate-system reference and at least one calibration image, in addition to the input image.

As explained here

https://uk.mathworks.com/help/vision/ref/cameramatrix.html?s_tid=srchtitle_site_search_1_camera%20rotation%20matrix

Please run the following MathWorks lines on your image.

It is assumed you have the Computer Vision Toolbox installed.

Loading the set of calibration images

images = imageDatastore(fullfile(toolboxdir('vision'),'visiondata','calibration','slr'));

Using the checkerboard as the calibration pattern.

Detecting the corners of the checkerboard

[imagePoints,boardSize] = detectCheckerboardPoints(images.Files);

Generating the coordinate-system reference (the checkerboard world points)

squareSize = 29; % checkerboard square size in millimeters
worldPoints = generateCheckerboardPoints(boardSize,squareSize);


% Estimate the camera parameters from the detected image points and the world points
I = readimage(images,1); 
imageSize = [size(I,1),size(I,2)];
cameraParams = estimateCameraParameters(imagePoints,worldPoints,'ImageSize',imageSize);

Now loading the input image (the one that stands for the data you have, not a calibration image), in which the pattern has been moved to a different position

imOrig = imread(fullfile(matlabroot,'toolbox','vision','visiondata','calibration','slr','image9.jpg'));
figure; imshow(imOrig);
title('Input Image');

% Undistort the input image using the estimated lens distortion
im = undistortImage(imOrig,cameraParams);

% Detect the checkerboard corners in the undistorted image
[imagePoints,boardSize] = detectCheckerboardPoints(im);

% Compute the extrinsics: rotation matrix R and translation vector t
[rotationMatrix,translationVector] = extrinsics(imagePoints,worldPoints,cameraParams);

% Camera projection matrix (4-by-3, MATLAB row-vector convention)
P = cameraMatrix(cameraParams,rotationMatrix,translationVector)
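
As a quick sanity check (not part of the linked example), P can be used to reproject the checkerboard's world points back into the undistorted image; with MATLAB's row-vector convention, w*[x y 1] = [X Y Z 1]*P, so the mean reprojection error should come out small:

worldPointsHom = [worldPoints, zeros(size(worldPoints,1),1), ones(size(worldPoints,1),1)]; % append Z = 0 (planar board) and w = 1
projected = worldPointsHom * P;                          % homogeneous image coordinates
projected = projected(:,1:2) ./ projected(:,3);          % divide by w
meanError = mean(vecnorm(projected - imagePoints, 2, 2)) % mean reprojection error in pixels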

Upvotes: 0
