Reputation: 339
I am trying to understand the OpenGL model-view matrix. I thought I understood it, but after testing it I am confused. Why, when I set the OpenGL MODELVIEW matrix with
glMatrixMode(GL_MODELVIEW);
GLfloat model[] = { 1, 0, 0, -1,
                    0, 1, 0, -1,
                    0, 0, 1,  0,
                    0, 0, 0,  1 };
glLoadMatrixf(model);
glBegin(GL_TRIANGLES);
glVertex4f(0, 0, 0, 1);
glVertex4f(1, 0, 0, 1);
glVertex4f(0, 1, 0, 1);
glEnd();
is everything I draw not translated by -1 on the x axis and -1 on the y axis? Instead I get a crazy result. I thought all the vertices I pass are multiplied by the model matrix.
Upvotes: 0
Views: 705
Reputation: 1206
From the site: http://www.opengl.org/archives/resources/faq/technical/transformations.htm
"The translation components occupy the 13th, 14th, and 15th elements of the 16-element matrix, where indices are numbered from 1 to 16 as described in section 2.11.2 of the OpenGL 2.1 Specification."
In your float array the translation components are the 4th and 8th elements, because glLoadMatrixf interprets the array in column-major order: your -1 values end up in the bottom row of the matrix, not in the translation column.
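For reference, glLoadMatrixf reads the sixteen floats column by column, so with 0-based C indices (the FAQ quote above counts from 1) the matrix is laid out like this:
m[0]  m[4]  m[8]   m[12]
m[1]  m[5]  m[9]   m[13]
m[2]  m[6]  m[10]  m[14]
m[3]  m[7]  m[11]  m[15]
Your -1 values land in m[3] and m[7], where they feed into the clip-space w component instead of translating x and y.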
You should change it to this:
GLfloat model[] = {  1,  0, 0, 0,
                     0,  1, 0, 0,
                     0,  0, 1, 0,
                    -1, -1, 0, 1 };
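As an aside (my addition, not something the answer depends on): you can avoid hand-writing column-major arrays entirely. A minimal sketch using the fixed-function API:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0f, -1.0f, 0.0f); /* builds the same matrix in the correct layout */
Or, if your context is OpenGL 1.3 or later, glLoadTransposeMatrixf accepts a row-major array, so the original model[] from the question would work unchanged:
glLoadTransposeMatrixf(model); /* transposes (row-major) while loading */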
The matrix you created causes the w component of the clip coordinates to be computed as:
gl_Position.w = (position.x * -1.0) + (position.y * -1.0) + 1
Since ndc = gl_Position / gl_Position.w, this causes the normalized device coordinates (ndc) to be:
ndc.x = position.x / ( (position.x * -1.0) + (position.y * -1.0) + 1 )
ndc.y = position.y / ( (position.x * -1.0) + (position.y * -1.0) + 1 )
ndc.z = position.z / ( (position.x * -1.0) + (position.y * -1.0) + 1 )
As you can imagine, that causes some weird results.
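To make that concrete, here is the arithmetic (mine, derived from the formulas above) for the three vertices in the question:
(0, 0, 0, 1): w = -0 - 0 + 1 = 1  ->  ndc = (0, 0, 0)
(1, 0, 0, 1): w = -1 - 0 + 1 = 0  ->  division by zero
(0, 1, 0, 1): w = -0 - 1 + 1 = 0  ->  division by zero
Two of the three vertices end up with clip-space w = 0, i.e. points at infinity, which is exactly the kind of "crazy result" the question describes.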
Upvotes: 1