LivingRobot

Reputation: 913

OpenGL ray tracing using inverse transformations

I have a pipeline that uses model, view and projection matrices to render a triangle mesh.

I am trying to implement a ray tracer that will pick out the object I'm clicking on by projecting the ray origin and direction by the inverse of the transformations.

When I just had a model (no view or projection) in the vertex shader I had

Vector4f ray_origin = model.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * Vector4f(0, 0, -1, 0);

and everything worked perfectly. However, I added a view and projection matrix and then changed the code to be

Vector4f ray_origin = model.inverse() * view.inverse() *  projection.inverse() * Vector4f(xworld, yworld, 0, 1);
Vector4f ray_direction = model.inverse() * view.inverse() *  projection.inverse() * Vector4f(0, 0, -1, 0);

and nothing is working anymore. What am I doing wrong?

Upvotes: 2

Views: 853

Answers (1)

Rabbid76

Reputation: 210908

If you use perspective projection, then I recommend defining the ray by a point on the near plane and another one on the far plane, in normalized device space (NDC). The z coordinate of the near plane is -1 and the z coordinate of the far plane is 1. The x and y coordinates have to be the "click" position on the screen in the range [-1, 1], where the bottom left corner is (-1, -1) and the top right corner is (1, 1). Window (mouse) coordinates can be mapped linearly to the NDC x and y coordinates:

float x_ndc = 2.0 * mouse_x / window_width - 1.0;
float y_ndc = 1.0 - 2.0 * mouse_y / window_height; // flipped: window origin is top left

Vector4f p_near_ndc = Vector4f(x_ndc, y_ndc, -1, 1); // z near = -1
Vector4f p_far_ndc  = Vector4f(x_ndc, y_ndc,  1, 1); // z far = 1

A point in normalized device space can be transformed to model space by the inverse projection matrix, then the inverse view matrix and finally the inverse model matrix:

Vector4f p_near_h = model.inverse() * view.inverse() *  projection.inverse() * p_near_ndc;
Vector4f p_far_h  = model.inverse() * view.inverse() *  projection.inverse() * p_far_ndc;
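
Since (projection * view * model).inverse() equals model.inverse() * view.inverse() * projection.inverse(), the three inversions can also be collapsed into a single one, computed once per click (the name inv_pvm is just illustrative):

Matrix4f inv_pvm = (projection * view * model).inverse();
Vector4f p_near_h = inv_pvm * p_near_ndc;
Vector4f p_far_h  = inv_pvm * p_far_ndc;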

After this, the points are homogeneous coordinates, which can be transformed to Cartesian coordinates by the perspective divide (division by the w component):

Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
Vector3f p1 = p_far_h.head<3>()  / p_far_h.w();

The "ray" in model space, defined by point r and a normalized direction d finally is:

Vector3f r = p0;
Vector3f d = (p1 - p0).normalized();
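
Putting the steps together, here is a minimal sketch of the whole picking path (the names modelSpaceRay and intersectTriangle are illustrative, and the triangle test is the standard Möller–Trumbore algorithm, which is not part of the answer above). To pick an object, run the test against each of its triangles and keep the smallest positive t:

#include <Eigen/Dense>
#include <cmath>
#include <optional>

using Eigen::Matrix4f;
using Eigen::Vector3f;
using Eigen::Vector4f;

// Build the model-space picking ray from a mouse click,
// combining the steps above into one function.
void modelSpaceRay(float mouse_x, float mouse_y,
                   float window_width, float window_height,
                   const Matrix4f &model, const Matrix4f &view,
                   const Matrix4f &projection,
                   Vector3f &r, Vector3f &d)
{
    // window coordinates -> normalized device coordinates
    float x_ndc = 2.0f * mouse_x / window_width - 1.0f;
    float y_ndc = 1.0f - 2.0f * mouse_y / window_height;

    // near/far plane points in NDC -> model space (homogeneous)
    Matrix4f inv = (projection * view * model).inverse();
    Vector4f p_near_h = inv * Vector4f(x_ndc, y_ndc, -1.0f, 1.0f);
    Vector4f p_far_h  = inv * Vector4f(x_ndc, y_ndc,  1.0f, 1.0f);

    // perspective divide -> Cartesian points
    Vector3f p0 = p_near_h.head<3>() / p_near_h.w();
    Vector3f p1 = p_far_h.head<3>()  / p_far_h.w();

    r = p0;
    d = (p1 - p0).normalized();
}

// Möller–Trumbore ray/triangle test, so the ray can actually pick a
// triangle of the mesh. Returns the ray parameter t of the hit, or
// std::nullopt on a miss.
std::optional<float> intersectTriangle(const Vector3f &r, const Vector3f &d,
                                       const Vector3f &v0, const Vector3f &v1,
                                       const Vector3f &v2)
{
    const float eps = 1e-6f;
    Vector3f e1 = v1 - v0, e2 = v2 - v0;
    Vector3f p = d.cross(e2);
    float det = e1.dot(p);
    if (std::fabs(det) < eps)
        return std::nullopt;          // ray is parallel to the triangle
    float inv_det = 1.0f / det;
    Vector3f s = r - v0;
    float u = s.dot(p) * inv_det;     // first barycentric coordinate
    if (u < 0.0f || u > 1.0f)
        return std::nullopt;
    Vector3f q = s.cross(e1);
    float v = d.dot(q) * inv_det;     // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f)
        return std::nullopt;
    float t = e2.dot(q) * inv_det;    // distance along the ray
    return t > eps ? std::optional<float>(t) : std::nullopt;
}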

Upvotes: 4
