kenneth

Reputation: 167

Interpreting Camera Calibration Matrix

I'm having a problem interpreting the camera calibration data found at http://www.cvg.reading.ac.uk/PETS2001/pets2001-cameracalib.html#dataset2 and was wondering whether anyone can help. Basically, I understood the example, and it works correctly when I try to compute the 2D image coordinates of 3D objects: the 2D coordinates I get are within the image boundaries, which is good.

The problem arises when I try to apply the same working to the other matrices. To put this in perspective, these calibration matrices apply to the videos found at

For example consider the transformation matrix of Dataset 2 Camera 2:

FocalLength f = 792
ImageCentre (u, v) = (384, 288)
Homogeneous Transform T =

    -0.94194     0.33537    -0.01657     0.00000;
    -0.33152    -0.93668    -0.11278     0.00000;
    -0.05334    -0.10073     0.99348     0.00000;
    11791.10000 22920.20000  6642.89000  1.00000;

According to the instructions at the top of the dataset, the first step is to invert the matrix to get:

    -0.94194    -0.33152    -0.05334     0;
     0.33538    -0.93669    -0.10074     0;
    -0.01657    -0.11277     0.99348     0;
    3529.67074 26127.15587 -3661.65672   1;

Then take for example the point x = (0,0,0) in world coordinates.

xT = (3529.67074,26127.15587,-3661.65672) and the point in 2D coordinates is given by

(792 x 3529.67074 / -3661.65672 + 384, 792 x 26127.15587 / -3661.65672 + 288)
= (-763.45 + 384 , -5651.187 + 288)
= (-379.45, -5363.187)
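For reference, the working above can be reproduced with a short NumPy sketch (assuming the row-vector convention described on the PETS page, i.e. a point is a 1x4 row vector multiplied on the left of the matrix):

```python
import numpy as np

# Homogeneous transform T for Dataset 2 Camera 2 (translation in the last row,
# consistent with the row-vector convention on the PETS calibration page).
T = np.array([
    [   -0.94194,     0.33537,    -0.01657, 0.0],
    [   -0.33152,    -0.93668,    -0.11278, 0.0],
    [   -0.05334,    -0.10073,     0.99348, 0.0],
    [11791.10000, 22920.20000,  6642.89000, 1.0],
])

f, u0, v0 = 792.0, 384.0, 288.0  # focal length and image centre

# Step 1: invert T, then map the world origin (0,0,0) into camera coordinates.
p = np.array([0.0, 0.0, 0.0, 1.0]) @ np.linalg.inv(T)

# Step 2: perspective projection onto the image plane.
u = f * p[0] / p[2] + u0
v = f * p[1] / p[2] + v0
print(u, v)  # roughly (-379, -5363): far outside the image boundaries
```

This reproduces the numbers in the question, so the arithmetic itself is consistent; the question is whether the out-of-bounds result is a mistake or a genuinely invisible point.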

Now this result is clearly incorrect, since it should lie within the image boundaries. In fact, when I tried to use this information in my program, points on the ground plane in the 3D world were transformed incorrectly into 2D image coordinates.

I would really appreciate it if someone could give me any idea of how to apply the working correctly.

Thanks,

Upvotes: 0

Views: 803

Answers (2)

Milo

Reputation: 2171

I think there is nothing wrong with your calculation. If you get projections that are out of the image boundaries, that means the camera cannot see that point.

I made some plots for the camera position from the data in the webpage you mention. (X,Y,Z) are the axes of the world reference frame and (x,y,z) are the axes for the camera reference frame.

The following is for the first example in the webpage you mention, for which they do the projection of the point (0,0,0) to get (244, 253.8). Note the orientation of the z-axis: the camera is looking towards the origin of the world reference frame:

[plot: camera pose for the first example, with the z-axis pointing towards the world origin]

For dataset 2, camera 2, note the orientation of the z-axis: the camera cannot see the origin of the world reference frame:

[plot: camera pose for dataset 2, camera 2, with the z-axis pointing away from the world origin]

Whether this orientation makes sense or not depends on your application and the choice of (X, Y, Z) reference frame. Camera 1 for dataset 2 is oriented in a similar way.
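You can check this numerically without plotting (a sketch, again assuming the row-vector convention of the PETS page, where T maps camera coordinates to world coordinates): the last row of T is the camera centre in world coordinates, and the third row of the rotation part is the optical (z) axis expressed in the world frame.

```python
import numpy as np

# Homogeneous transform T for Dataset 2 Camera 2, as given on the PETS page.
T = np.array([
    [   -0.94194,     0.33537,    -0.01657, 0.0],
    [   -0.33152,    -0.93668,    -0.11278, 0.0],
    [   -0.05334,    -0.10073,     0.99348, 0.0],
    [11791.10000, 22920.20000,  6642.89000, 1.0],
])

# Camera centre: where the camera-frame origin lands in the world
# (this is exactly the last row of T).
centre = np.array([0.0, 0.0, 0.0, 1.0]) @ T

# Optical axis: the camera z-axis expressed in world coordinates
# (the third row of the rotation block).
z_axis = np.array([0.0, 0.0, 1.0]) @ T[:3, :3]

# The world origin in camera coordinates: a negative z-component means
# the origin lies behind the camera, so this camera cannot see it.
origin_cam = np.array([0.0, 0.0, 0.0, 1.0]) @ np.linalg.inv(T)
print(origin_cam[2] < 0)  # True for this camera
```

The negative camera-frame z of the world origin is exactly why the projection in the question lands far outside the image: the point is behind the image plane, not in front of it.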

Upvotes: 0

Cybercartel

Reputation: 12592

It sounds like you could use Tsai's calibration algorithm to map 2D (lat, lon) ground coordinates to 2D image (x, y) coordinates. Look here: Projection of 3D Coordinates onto a 2D image with known points.

Upvotes: 0
