Reputation: 283133
I wrote a function for converting "screen" coordinates to "view" coordinates. It looks like this (examples in C#):
Vector2 ScreenToViewCoords(Point p)
{
    return new Vector2(
        _projMatrix.M11 * p.X + _projMatrix.M12 * p.Y + _projMatrix.M14,
        _projMatrix.M21 * p.X + _projMatrix.M22 * p.Y + _projMatrix.M24
    );
}
p is a point relative to the top-left corner of the OpenGL control/viewport, in pixels. _projMatrix is my projection matrix.
My render function looks like this:
private void Render()
{
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadMatrix(ref _projMatrix);
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();

    if (_texture != null)
    {
        _texture.Bind();
        GL.Color3(Color.White);
        GL.Begin(BeginMode.Quads);
        {
            GL.TexCoord2(0, 0); GL.Vertex2(0, 0);
            GL.TexCoord2(0, 1); GL.Vertex2(0, _texture.Height);
            GL.TexCoord2(1, 1); GL.Vertex2(_texture.Width, _texture.Height);
            GL.TexCoord2(1, 0); GL.Vertex2(_texture.Width, 0);
        }
        GL.End();
        _texture.Unbind();
    }

    glControl.SwapBuffers();
}
This uses the projection matrix and draws an image on the screen, always at the same location: (0,0) to (width, height).
Since the image is drawn at 1 view unit = 1 pixel, I'd expect all my projection matrix has to do is a little bit of translation to put the image in the right place. Thus, I'd think it would look like:
1 0 0 0
0 1 0 0
0 0 1 0
tx ty 0 1
But it actually looks like this:
(Note: It appears rotated because those are columns, not rows)
That part is already confusing me, but regardless, my math should be correct, unless I'm forgetting something?
Note that the render function works exactly as intended; I'm just trying to translate a mouse click into view coordinates. Also note that this is a 2D application, so the Z vector isn't really used (again, not sure why Column2 is coming out weird).
So what am I doing wrong here that the coordinates aren't coming out correctly?
For example, if I'm drawing my image at (0,0) to (w,h) as stated before, and I click on the bottom-right corner of a 700x500 image, I'd expect it to output (699, 499), but it instead comes out as (0.9959349, -0.9971989).
Edit: Found out why the Z coordinates were weird. I had set up my orthographic matrix between -1 and 11 instead of -1 and 1. This, however, doesn't really change anything, as the Z values don't matter.
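For reference (and as an assumption, since the matrix setup code isn't shown here): a standard glOrtho-style orthographic matrix with left = 0, right = width, bottom = height, top = 0, near = -1, far = 1 would come out as follows, written in the same row layout as the expected matrix above:
2/width    0           0    0
0          -2/height   0    0
0          0           -1   0
-1         1           0    1
So the diagonal holds 2/width and -2/height rather than 1s, and the forward transform lands in the -1 to 1 range rather than in pixels.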
Edit 2: It occurs to me now that the projection matrix transforms modelview coordinates to screen coordinates, and I essentially want to do the opposite -- do I need to invert the matrix? Inverting gives me more pixel-like numbers, but multiplying by that makes my coordinates way too big instead:
Upvotes: 0
Views: 183
Reputation: 16612
The projection matrix in GL transforms into a normalised coordinate space, where X and Y will be between -1 and 1 (after the perspective divide, which can be ignored in an orthographic projection as it's just a divide by 1).
To get pixel coordinates, you need to scale/offset by the viewport, such that the -1 to 1 ranges map onto the width, height, and left/bottom offsets specified in the viewport.
For example:
screenX = (viewWidth * (normX + 1) / 2) + viewLeft;
To transform from screen coordinates back to view coordinates, do everything in reverse - transform the screen coordinates into normalised coordinates, and then push them through the inverse transform.
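A minimal sketch of that reverse path in C#, assuming OpenTK's Matrix4/Vector4 math types, that _projMatrix is the same matrix loaded in Render(), and that viewWidth/viewHeight are placeholder names for the pixel size of the GL control:
// Sketch only: assumes OpenTK's Matrix4.Invert and Vector4.Transform,
// where Vector4.Transform treats the vector as a row vector (v * M),
// matching how GL.LoadMatrix consumes OpenTK matrices.
Vector2 ScreenToViewCoords(Point p)
{
    // Mouse pixels -> normalised device coordinates in -1..1.
    // Y is flipped because mouse Y grows downwards while NDC Y grows upwards.
    float ndcX = 2f * p.X / viewWidth - 1f;
    float ndcY = 1f - 2f * p.Y / viewHeight;

    // Undo the projection by pushing the normalised point through its inverse.
    Matrix4 invProj = Matrix4.Invert(_projMatrix);
    Vector4 view = Vector4.Transform(new Vector4(ndcX, ndcY, 0f, 1f), invProj);

    return new Vector2(view.X, view.Y);
}
With the 1 view unit = 1 pixel setup described in the question, a click at the bottom-right corner of the 700x500 image maps to roughly (0.997, -0.996) in normalised coordinates, and then back to roughly (699, 499) in view units.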
gluProject() and gluUnProject() will do this for you, or you can consult the source.
Upvotes: 1