Reputation: 4767
I've been trying to analyse Apple's pARk (augmented reality sample application), where I came across the function below.
The method is called with these parameters:
createProjectionMatrix(projectionTransform, 60.0f*DEGREES_TO_RADIANS, self.bounds.size.width*1.0f / self.bounds.size.height, 0.25f, 1000.0f);
void createProjectionMatrix(mat4f_t mout, float fovy, float aspect, float zNear, float zFar)
{
    // Standard OpenGL-style perspective projection matrix, stored column-major.
    float f = 1.0f / tanf(fovy/2.0f);

    // Column 0: scale x by f/aspect
    mout[0] = f / aspect;
    mout[1] = 0.0f;
    mout[2] = 0.0f;
    mout[3] = 0.0f;

    // Column 1: scale y by f
    mout[4] = 0.0f;
    mout[5] = f;
    mout[6] = 0.0f;
    mout[7] = 0.0f;

    // Column 2: map eye-space z into clip space; the -1 puts -z into w for the perspective divide
    mout[8] = 0.0f;
    mout[9] = 0.0f;
    mout[10] = (zFar+zNear) / (zNear-zFar);
    mout[11] = -1.0f;

    // Column 3: translation term of the depth mapping
    mout[12] = 0.0f;
    mout[13] = 0.0f;
    mout[14] = 2 * zFar * zNear / (zNear-zFar);
    mout[15] = 0.0f;
}
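For reference, written out this is the standard perspective projection matrix (the same one gluPerspective builds), stored in column-major order:

P =
\begin{pmatrix}
f/\mathrm{aspect} & 0 & 0 & 0 \\
0 & f & 0 & 0 \\
0 & 0 & \frac{z_{far}+z_{near}}{z_{near}-z_{far}} & \frac{2\, z_{far}\, z_{near}}{z_{near}-z_{far}} \\
0 & 0 & -1 & 0
\end{pmatrix},
\qquad f = \frac{1}{\tan(\mathrm{fovy}/2)}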
I see that this projection matrix is multiplied with the rotation matrix (obtained from the motionManager.deviceMotion API). What is the use of the projection matrix, and why should it be multiplied with the rotation matrix?
multiplyMatrixAndMatrix(projectionCameraTransform, projectionTransform, cameraTransform);
Why does the resulting matrix then have to be multiplied with the point-of-interest vector coordinates as well?
multiplyMatrixAndVector(v, projectionCameraTransform, placesOfInterestCoordinates[i]);
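The result v is then used to position the label on screen, roughly like this (paraphrasing the sample's drawing code from memory, not verbatim):

// v is the result of projectionCameraTransform * POI coordinate (clip space).
// Dividing by v[3] (the perspective divide) gives normalized device
// coordinates in [-1, 1]; remapping to [0, 1] gives a fraction of the view size.
float x = (v[0] / v[3] + 1.0f) * 0.5f;
float y = (v[1] / v[3] + 1.0f) * 0.5f;
// The POI label is then centred at
// (x * bounds.width, bounds.height - y * bounds.height),
// after checking that the point lies on the visible side of the camera.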
Appreciate any help here.
Sample code link here
Upvotes: 1
Views: 596
Reputation: 676
In computer vision and robotics, a typical task is to identify specific objects in an image and to determine each object's position and orientation (i.e. translation and rotation) relative to some coordinate system.
In augmented reality we normally calculate the pose of the detected object and then augment a virtual model on top of it. We can project the virtual model more realistically if we know that pose.
The joint rotation-translation matrix [R|t] is called the matrix of extrinsic parameters. It describes the camera's motion around a static scene, or equivalently, the rigid motion of an object in front of a still camera. That is, [R|t] transforms the coordinates of a point (X, Y, Z) into a coordinate system fixed with respect to the camera. This gives you the 6DOF pose (3 rotation + 3 translation) required for mobile AR.
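In equation form this is the usual pinhole camera model:

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A \, [R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}

where A is the matrix of intrinsic parameters (the projection). Roughly speaking, the sample's projection matrix plays the role of A and the rotation matrix from deviceMotion plays the role of [R|t]: the rotation brings each point of interest into camera coordinates, and the projection then maps it onto the screen plane, which is why the two are multiplied together and the product is applied to each point-of-interest vector.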
A good read if you want to learn more: http://games.ianterrell.com/learn-the-basics-of-opengl-with-glkit-in-ios-5/
Sorry, I have only worked with Android AR. Hope this helps :)
Upvotes: 2