Reputation: 105
OpenGL SuperBible, 4th Edition, page 164
To apply a camera transformation, we take the camera’s actor transform and flip it so that moving the camera backward is equivalent to moving the whole world forward. Similarly, turning to the left is equivalent to rotating the whole world to the right.
I can't understand why?
Upvotes: 2
Views: 619
Reputation: 2904
Mathematically there is only one correct answer. By definition, after transforming a world-space position to eye-space by multiplying it with the view-matrix, the resulting vector is interpreted relative to the origin of eye-space, which is where the camera is conceptually located.
What the SuperBible states is mathematically just a negation of a translation in some direction, which is exactly what you get automatically when using functions that compute a view-matrix, like gluLookAt() or glm::lookAt() (although GLU is a library layered on legacy GL, the two are mathematically identical).
Have a look at the API reference for gluLookAt(). You'll see that the first step is setting up an ortho-normal basis of eye-space, which results in a 4x4 matrix that essentially encodes only the upper 3x3 rotation. The second step is multiplying that matrix by a translation matrix. In terms of legacy functions, this can be expressed as
glMultMatrixf(M); // where M encodes the eye-space basis
glTranslated(-eyex, -eyey, -eyez);
You can see that the vector (eyex, eyey, eyez), which specifies where the camera is located in world-space, is simply multiplied by -1.

Now assume we don't rotate the camera at all, and that it is located at world-space position (5, 5, 5). The corresponding view-matrix View would be

[1 0 0 -5]
[0 1 0 -5]
[0 0 1 -5]
[0 0 0  1]
Now take a world-space vertex position P = (0, 0, 0, 1) and transform it by that matrix: P' = View * P. P' will then simply be P' = (-5, -5, -5, 1).
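That worked example can be verified with a few lines of plain C (the matrix is written row-major here purely so the layout matches the text; OpenGL itself expects column-major arrays):

```c
#include <assert.h>

/* View matrix from the example above: camera at (5, 5, 5), no rotation.
   Row-major storage so the initializer visually matches the text. */
static const float view[4][4] = {
    {1.f, 0.f, 0.f, -5.f},
    {0.f, 1.f, 0.f, -5.f},
    {0.f, 0.f, 1.f, -5.f},
    {0.f, 0.f, 0.f,  1.f},
};

/* out = m * v for a homogeneous column vector v, row-major m */
static void mat4_mul_vec4(const float m[4][4], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r][0]*v[0] + m[r][1]*v[1] + m[r][2]*v[2] + m[r][3]*v[3];
}
```

Multiplying P = (0, 0, 0, 1) by this matrix indeed yields (-5, -5, -5, 1).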
When thinking in world-space, the camera is at (5, 5, 5) and the vertex is at (0, 0, 0). When thinking in eye-space, the camera is at (0, 0, 0) and the vertex is at (-5, -5, -5).
So in conclusion: conceptually, it's a matter of how you look at things. You can either think of it as transforming the camera relative to the world, or as transforming the world relative to the camera.
Mathematically, and in terms of the OpenGL transformation pipeline, there is only one answer: the camera in eye-space (also called view-space or camera-space) is always at the origin, and world-space positions transformed to eye-space are always relative to the camera's coordinate system.
EDIT: Just to clarify: although the transformation pipeline and the vector spaces involved are well defined, you can still use the world-space positions of everything, even the camera, for instance in a fragment shader for lighting computations. The important thing is to never mix entities from different spaces, e.g. don't compute anything based on a world-space and an eye-space position.
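A toy illustration of that rule in plain C (the numbers are hypothetical, reusing the translation-only camera at (5, 5, 5) from the example above): a vertex-to-light vector is only meaningful if both endpoints live in the same space.

```c
#include <assert.h>

/* Translation-only view transform for a camera at (5, 5, 5):
   eye-space position = world-space position - (5, 5, 5). */
static void to_eye_space(const float world[3], float eye[3])
{
    eye[0] = world[0] - 5.f;
    eye[1] = world[1] - 5.f;
    eye[2] = world[2] - 5.f;
}

/* out = a - b, e.g. the vector from a vertex towards a light */
static void vec3_sub(const float a[3], const float b[3], float out[3])
{
    out[0] = a[0] - b[0];
    out[1] = a[1] - b[1];
    out[2] = a[2] - b[2];
}
```

Subtracting two world-space positions and subtracting the same two positions after both were moved to eye-space gives the same direction vector; subtracting an eye-space vertex from a world-space light gives garbage.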
EDIT2: Nowadays, in a time when we all use shaders *cough and roll-eyes*, you're pretty flexible, and in theory you can pass any position you like to gl_Position in a vertex shader (or in the geometry shader or tessellation stages). However, since the subsequent computations are fixed, i.e. clipping, perspective division and the viewport transformation, the resulting position will simply be clipped if it's not inside [-gl_Position.w, gl_Position.w] in x, y and z.
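That clip condition can be sketched in a few lines of plain C (the function name is my own; this models only the trivial accept/reject test, not actual primitive clipping):

```c
#include <assert.h>
#include <stdbool.h>

/* A clip-space position (x, y, z, w) survives the fixed-function clip
   test only if each of x, y and z lies within [-w, w]. */
static bool inside_clip_volume(float x, float y, float z, float w)
{
    return -w <= x && x <= w &&
           -w <= y && y <= w &&
           -w <= z && z <= w;
}
```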
There is a lot to digest here before it really sinks in. I suggest you read the entire article on the rendering pipeline in the official GL wiki.
Upvotes: 2
Reputation: 28982
Imagine yourself placed within a universe that also contains all other things. In order for your viewpoint to appear to move forward, you have two options: either move yourself forward through the universe, or keep yourself fixed and move the entire universe backward past you.
Because you define everything in OpenGL in terms of the viewer (you're ultimately rendering a 2D image of a particular viewpoint of the 3D world), it often makes more sense, both mathematically and programmatically, to take the 2nd approach.
Upvotes: 3