Reputation: 65
Almost all the answers I've found involve multiplying a vector of normalised device coordinates by an inverse(projection * view) matrix, however every example I've tried results in at least two invalid things: worldray.xy changes at varying ndc.z ranges, and worldray.z prevents me from generating a direction vector at varying near/far planes.
Can someone provide working generation of world ray from mouse coordinates?
Edit:
I've added the code I'm using. If I use inverse, z is completely off from where I expect it to be; at least with affineInverse I get an accurate z for near.
mat4 projection = perspective(radians(fov), (Floating)width / (Floating)height, 0.0001f, 10000.f);
vec3 position = { 0, 0, -2 };
vec3 direction = { 0, 0, 1 };
vec3 center = position + direction;
mat4 view = lookAt(position, center, up);
vec2 ndc = {
    -1.0f + 2.0f * mouse.x / width,
     1.0f - 2.0f * mouse.y / height
};
vec4 near = { ndc.x, ndc.y, 0, 1 };
vec4 far  = { ndc.x, ndc.y, -1, 1 };
mat4 invP = inverse(projection);
mat4 invV = inverse(view);
vec4 ray_eye_near = invP * near;
ray_eye_near.z = near.z;
vec4 ray_world_near = invV * ray_eye_near;
ray_world_near /= ray_world_near.w;
printf("ray_world_near x: %f, y: %f, z: %f, w: %f\n\r", ray_world_near.x, ray_world_near.y, ray_world_near.z, ray_world_near.w);
vec4 ray_eye_far = invP * far;
ray_eye_far.z = far.z;
vec4 ray_world_far = invV * ray_eye_far;
ray_world_far /= ray_world_far.w;
printf("ray_world_far x: %f, y: %f, z: %f, w: %f\n\r", ray_world_far.x, ray_world_far.y, ray_world_far.z, ray_world_far.w);
Here is a screenshot of what I'm experiencing
Edit 2: These are the numbers I get if using inverse instead of affineInverse and dividing by w
Upvotes: 1
Views: 1406
Reputation: 11
The first answer is almost correct, but it switches the forward direction on the fly, which might be why the vector's values are so small: because the code sets ray_eye.z to -1.0f, the x and y values are too small relative to it, and the ray points more towards the screen centre than towards the mouse.
The code should be changed into this:
vec3 rayCast(double xpos, double ypos, mat4 view, mat4 projection, unsigned SCR_WIDTH, unsigned SCR_HEIGHT)
{
    float x = (2.0f * xpos) / SCR_WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / SCR_HEIGHT;
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    // Change this part: keep the NDC z instead of forcing -1
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, ray_nds.z, 1.0f);
    vec4 ray_eye = inverse(projection) * ray_clip;
    // And this part: keep the eye-space z, only zero w to make it a direction
    ray_eye = vec4(ray_eye.x, ray_eye.y, ray_eye.z, 0.0f);
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
Upvotes: 0
Reputation: 1547
This is the function I use to generate a normalized ray from screen space into the scene:
vec3 rayCast(double xpos, double ypos, mat4 view, mat4 projection, unsigned SCR_WIDTH, unsigned SCR_HEIGHT) {
    // converts a position from the 2d xpos, ypos to a normalized 3d direction
    float x = (2.0f * xpos) / SCR_WIDTH - 1.0f;
    float y = 1.0f - (2.0f * ypos) / SCR_HEIGHT;
    // or (2.0f * ypos) / SCR_HEIGHT - 1.0f; depending on how you calculate ypos/lastY
    float z = 1.0f;
    vec3 ray_nds = vec3(x, y, z);
    vec4 ray_clip = vec4(ray_nds.x, ray_nds.y, -1.0f, 1.0f);
    // eye space to clip space is a multiply by projection, so
    // clip space to eye space is the inverse projection
    vec4 ray_eye = inverse(projection) * ray_clip;
    // convert the point to a forwards direction
    ray_eye = vec4(ray_eye.x, ray_eye.y, -1.0f, 0.0f);
    // world space to eye space is usually a multiply by view, so
    // eye space to world space is the inverse view
    vec4 inv_ray_wor = (inverse(view) * ray_eye);
    vec3 ray_wor = vec3(inv_ray_wor.x, inv_ray_wor.y, inv_ray_wor.z);
    ray_wor = normalize(ray_wor);
    return ray_wor;
}
For example,
// at the last update of the mouse cursor, `lastX, lastY`
vec3 rayMouse = rayCast(lastX, lastY, viewMatrix, projectionMatrix, SCR_WIDTH, SCR_HEIGHT);
This will give you back a normalized ray, from which you can get a parametric position along that ray into the scene with glm::vec3 worldPos = cameraPos + t * rayMouse; for example, when t = 1, worldPos would be 1 unit along the mouse cursor's ray into the scene. You can use a line-rendering class to better see what is happening.
Note: glm::unProject can be used to achieve the same result:
glm::vec3 worldPos = glm::unProject(glm::vec3(lastX, lastY, 1.0),
                                    viewMatrix, projectionMatrix,
                                    glm::vec4(0, 0, SCR_WIDTH, SCR_HEIGHT));
glm::vec3 rayMouse = glm::normalize(worldPos - cameraPos);
Note: These functions cannot be used to get an exact world-space position of a fragment at the mouse coordinates; for that you have three options AFAIK:
1. glReadPixels to get the depth value at the mouse/texture coordinate, which you can convert back from NDC to world space.
Extra:
4. If you are doing object picking, you can get pixel-perfect GPU mouse picking by using a buffer to tag each object in the scene with a unique colour, then using glReadPixels to ID the object from its colour tag.
I typically use option 1 for 3D math workflows and find it more than suffices for things like object picking, dragging, drawing 3D lines, etc.
Upvotes: 1