Reputation: 1314
I'm writing a ray caster in C++, which outputs its results to regular image files. Now, I want to render the internal RGBA-representation (4 integers) to an OpenGL window, provided by GLUT.
I already know that I have to use glTexImage2D to generate a texture, assign that texture to a quad, and then render this quad right in front of the camera.
http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The problem I'm having is that I don't know how to present the data to glTexImage2D: how do I construct the GLubyte data chunk it expects from my simple 4-integer RGBA representation?
Upvotes: 1
Views: 3845
Reputation: 473174
How you store your data is, more or less, up to you. The last three parameters to glTexImage2D (together with the pixel-store state) describe how your data is formatted. There are limitations on how you can store it (rows can't have much padding beyond 4-byte alignment, and the components of a pixel have to be adjacent), but there is a lot of variation available.
"As an array of size width x height x 4, with an unsigned int for each component: RGBARGBARGBA...?"
That's one way of doing it, but it's not the only way.
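As one concrete possibility, here is a sketch of that tightly packed layout. The per-pixel struct is an assumption about the ray caster's internals, and the components are assumed to already fit in 0..255; the GL call is shown in a comment because it needs a live context:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-pixel result from the ray caster: four integers,
// assumed here to already be in the 0..255 range.
struct Pixel { int r, g, b, a; };

// Flatten into a tightly packed RGBARGBARGBA... byte buffer, which is
// one valid layout for glTexImage2D with GL_RGBA / GL_UNSIGNED_BYTE.
std::vector<std::uint8_t> packRGBA(const std::vector<Pixel>& pixels)
{
    std::vector<std::uint8_t> bytes;
    bytes.reserve(pixels.size() * 4);
    for (const Pixel& p : pixels) {
        bytes.push_back(static_cast<std::uint8_t>(p.r));
        bytes.push_back(static_cast<std::uint8_t>(p.g));
        bytes.push_back(static_cast<std::uint8_t>(p.b));
        bytes.push_back(static_cast<std::uint8_t>(p.a));
    }
    return bytes;
}

// Upload (requires a current GL context):
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
//              GL_RGBA, GL_UNSIGNED_BYTE, packRGBA(pixels).data());
```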
Upvotes: 1
Reputation: 233
The data chunk is just a pointer to your RGBA representation. You'll want to call glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_INT, dataBuffer);. Each row of your texture then has to occupy width * 4 * sizeof(GLuint) bytes (four GLuint components per pixel) for this to work.
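A sketch of that layout, with one 32-bit unsigned int per component (std::uint32_t stands in for GLuint, which has the same size; the GL call is in a comment since it needs a current context):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Tightly packed RGBA buffer with one 32-bit unsigned int per
// component, here filled with an opaque-white placeholder value.
std::vector<std::uint32_t> makeBuffer(int width, int height)
{
    return std::vector<std::uint32_t>(
        static_cast<std::size_t>(width) * height * 4, 0xFFFFFFFFu);
}

// With this layout, one row occupies width * 4 * sizeof(GLuint) bytes.
// Upload (requires a current GL context):
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
//              GL_RGBA, GL_UNSIGNED_INT, makeBuffer(width, height).data());
```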
Upvotes: 0
Reputation: 52084
Convert your 0-(2^32-1) range down to 0-(2^8-1), or pass GL_UNSIGNED_INT as the type parameter.
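A minimal sketch of that narrowing, keeping the most significant byte of each 32-bit component so the full range maps onto 0..255:

```cpp
#include <cstdint>

// Map a component from the full 0..2^32-1 range onto 0..255 by
// keeping its top 8 bits.
std::uint8_t narrowComponent(std::uint32_t v)
{
    return static_cast<std::uint8_t>(v >> 24);
}
```

The narrowed bytes can then be uploaded with GL_UNSIGNED_BYTE as the type instead.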
Upvotes: 0