slimbo

Reputation: 2759

3D Graphics Algorithms (Hardware)

I am trying to design an ASIC graphics processor. I have done extensive research on the topic, but I am still somewhat fuzzy on how to translate and rotate points. I am using orthographic projection to rasterize the transformed points.

I have been using the following lecture on matrix multiplication with homogeneous coordinates: http://www.cs.kent.edu/~zhao/gpu/lectures/Transformation.pdf

Could someone please explain this in a little more depth? I am still somewhat shaky on the algorithm. I am passing in a camera position (x,y,z) and a camera vector (x,y,z) representing the camera angle, along with a point (x,y,z). What should go where within the matrices to transform the point to its new location?

Upvotes: 0

Views: 928

Answers (2)

user223264

Reputation:

For the first few years they were on the market, mass-market PC graphics processors didn't translate or rotate points at all. Are you required to implement this feature in hardware? If not, you may wish to let software do it; depending on your circumstances, software may be the more sensible route.

If you are required to implement the feature, I'll tell you how they did it in the early days.

The hardware has sixteen floating-point registers that represent a 4x4 matrix. The application developer loads these registers with the ModelViewProjection matrix just before rendering a mesh of triangles. In the row-vector convention (where a vertex multiplies the matrix from the left), the ModelViewProjection matrix is:

Model * View * Projection

Where "Model" is a matrix that brings vertices from "model" coordinates into "world" coordinates, "View" is a matrix that brings vertices from "world" coordinates into "camera" coordinates, and "Projection" is a matrix that brings vertices from "camera" coordinates to "screen" coordinates. Together they bring vertices from "model" coordinates - coordinates relative to the 3D model they belong to - into "screen" coordinates, where you intend to rasterize them as triangles.

Those are three different matrices, but they're multiplied together and the 4x4 result is written to the hardware registers. (With column vectors instead of row vectors, the same product is written in the reverse order: Projection * View * Model.)
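
As a rough sketch of where the numbers actually go, here is a minimal C++ illustration in the same row-vector convention. Mat4, makeTranslation, and makeRotationY are illustrative names for this sketch only, not any real hardware or library interface:

#include <cmath>

// Minimal 4x4 matrix, row-vector convention: a vertex v transforms as v * M.
struct Mat4 {
    double m[4][4];
};

Mat4 identity() {
    Mat4 r = {};
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0;
    return r;
}

// Translation by (tx, ty, tz): the offsets occupy the bottom row.
Mat4 makeTranslation(double tx, double ty, double tz) {
    Mat4 r = identity();
    r.m[3][0] = tx;
    r.m[3][1] = ty;
    r.m[3][2] = tz;
    return r;
}

// Rotation by `angle` radians about the Y axis: the sines and cosines
// occupy the upper-left 3x3 block.
Mat4 makeRotationY(double angle) {
    Mat4 r = identity();
    double c = std::cos(angle), s = std::sin(angle);
    r.m[0][0] = c;  r.m[0][2] = -s;
    r.m[2][0] = s;  r.m[2][2] = c;
    return r;
}

// Standard 4x4 product: out = a * b.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Row-vector order: Model * View * Projection.
// Mat4 mvp = mul(mul(model, view), projection);

The View matrix for your camera is the inverse of the camera's own placement: translate by the negated camera position, then apply the inverse (transposed) camera rotation.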

When a buffer of vertices is to be rendered as triangles, the hardware reads in vertices as [x,y,z] vectors from memory, and treats them as if they were [x,y,z,w] where w is always 1. It then multiplies each vector by the 4x4 ModelViewProjection matrix to get [x',y',z',w']. If there is perspective (you said there wasn't) then we divide by w' to get perspective [x'/w',y'/w',z'/w',w'/w'].
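
To make that concrete, here is a sketch of that fixed-function vertex path, reusing the illustrative Mat4 type from above (again row-vector convention; the division by w' is a no-op for an orthographic projection, where w' stays 1):

struct Vec4 { double x, y, z, w; };

// Promote [x,y,z] to [x,y,z,1], multiply by the combined matrix,
// then divide by w'. With an orthographic projection w' stays 1,
// so the division changes nothing.
Vec4 transformVertex(const Mat4& mvp, double x, double y, double z) {
    double in[4]  = { x, y, z, 1.0 };     // implicit w = 1
    double out[4] = { 0.0, 0.0, 0.0, 0.0 };
    for (int j = 0; j < 4; ++j)           // row vector times matrix
        for (int k = 0; k < 4; ++k)
            out[j] += in[k] * mvp.m[k][j];
    return { out[0] / out[3], out[1] / out[3], out[2] / out[3], 1.0 };
}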

The triangles are then rasterized with the newly computed vertices. This allows a model's vertices to sit in read-only memory if desired, even though the model and camera may be in motion.

Upvotes: 0

Stefan Monov

Reputation: 11732

Here's the complete transformation algorithm in pseudocode:

void project(Vec3d objPos, Matrix4d modelViewMatrix,
    Matrix4d projMatrix, Rect viewport, Vec3d& winCoords)
{
    Vec4d in(objPos.x, objPos.y, objPos.z, 1.0); // homogeneous w = 1
    in = projMatrix * modelViewMatrix * in;
    in /= in.w; // perspective division (a no-op for orthographic projection, where w stays 1)
    // "in" is now in normalized device coordinates, in the range [-1, 1].

    // Map coordinates from [-1, 1] to [0, 1]
    in.x = in.x / 2 + 0.5;
    in.y = in.y / 2 + 0.5;
    in.z = in.z / 2 + 0.5;

    // Map x and y to the viewport; z stays a [0, 1] depth value
    winCoords.x = in.x * viewport.w + viewport.x;
    winCoords.y = in.y * viewport.h + viewport.y;
    winCoords.z = in.z;
}

Then rasterize using winCoords.x and winCoords.y.
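
For example, a caller might look like this (a sketch in the same pseudocode style; rasterizeTriangle and the two matrices are hypothetical names assumed to be in scope):

// Hypothetical usage: project a triangle's three vertices, then hand
// the window-space results to the rasterizer.
Vec3d tri[3] = { Vec3d(-1, 0, 0), Vec3d(1, 0, 0), Vec3d(0, 1, 0) }; // object space
Vec3d win[3];
Rect viewport(0, 0, 640, 480); // x, y, width, height

for (int i = 0; i < 3; ++i)
    project(tri[i], modelViewMatrix, projMatrix, viewport, win[i]);

// win[i].x and win[i].y are pixel coordinates; win[i].z is a [0, 1] depth value.
rasterizeTriangle(win[0], win[1], win[2]);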

For an explanation of the stages of this algorithm, see question 9.011 from the OpenGL FAQ.

Upvotes: 1
