Reputation: 1043
I am confused by the glDrawPixels() function. I know the function signature looks like this: gl.glDrawPixels(int width, int height, format, type, data);
I am not sure how to keep format, type and data consistent with each other. For instance, if I have to use format GL2.GL_RGB, what should data look like for type = GL2.GL_DOUBLE, GL2.GL_FLOAT or GL2.GL_BYTE, respectively? How should I wrap and format my data in Java before invoking the glDrawPixels() function?
Upvotes: 1
Views: 756
Reputation: 162164
First of all, you shouldn't really use glDrawPixels. It's a very slow function and not very well optimized in most OpenGL implementations. You should use a textured quad instead (there's a sketch of that at the end of this answer). But your question also applies to the parameters of glTexImage2D.
So what do these parameters mean? Let's have a look at the signature of glTexImage2D:
C SPECIFICATION
void glTexImage2D( GLenum target,
GLint level,
GLint internalformat,
GLsizei width,
GLsizei height,
GLint border,
GLenum format,
GLenum type,
const GLvoid *pixels )
internalformat
designates the format the data will have internally in OpenGL.
format
is the format of the data in pixels and has exactly the same meaning as the parameter of the same name in glDrawPixels. Essentially, format tells OpenGL how many elements make up a pixel of the data in pixels.
type
tells OpenGL the data type that contains a single pixel (or a single element of it). Now this is interesting, because it's nontrivial.
Let's look at some combinations:
format = GL_BGRA, type = GL_UNSIGNED_INT_8_8_8_8
This tells OpenGL that a pixel consists of 4 elements in the order blue, green, red and alpha, and that all four elements are packed into a single 32 bit unsigned integer, divided into 4 groups of 8 bits each. You may know the HTML color notation, e.g. #ffffffff for white. Well, this is essentially a 32 bit unsigned int written in hexadecimal notation.
You could have an array of 32 bit unsigned integers
uint32_t pixels[] = {
0xffffffff, 0x0000ffff, 0xffffffff,
0x0000ffff, 0xffffffff, 0x0000ffff,
0xffffffff, 0x0000ffff, 0xffffffff,
};
That would be a 3×3 pixel image of a red diamond on a white background.
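From JOGL you would hand such packed pixels over as an IntBuffer. Here is a minimal, untested sketch of what that could look like; it assumes the JOGL 2 package layout (com.jogamp.opengl and the Buffers helper from com.jogamp.common.nio) and that GL_BGRA and GL_UNSIGNED_INT_8_8_8_8 are exposed as constants on GL2:

import java.nio.IntBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL2;

class PackedPixelExample {
    // Draws the 3x3 red diamond from above; call this with a current GL context,
    // e.g. from a GLEventListener's display() method.
    static void drawDiamond(GL2 gl) {
        int[] pixels = {                   // one packed BGRA value per pixel
            0xffffffff, 0x0000ffff, 0xffffffff,
            0x0000ffff, 0xffffffff, 0x0000ffff,
            0xffffffff, 0x0000ffff, 0xffffffff,
        };
        IntBuffer data = Buffers.newDirectIntBuffer(pixels);
        gl.glRasterPos2i(0, 0);            // where the lower-left pixel of the image goes
        gl.glDrawPixels(3, 3, GL2.GL_BGRA, GL2.GL_UNSIGNED_INT_8_8_8_8, data);
    }
}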
format = GL_RGB, type = GL_UNSIGNED_BYTE
In this case we tell OpenGL that there are 3 elements, red, green and blue, to a pixel, and because the type carries no explicit subsizes, each element of that type holds exactly one color element of the pixel.
Our red diamond would then look like this:
uint8_t pixels[] = {
0xff, 0xff, 0xff, /**/ 0xff, 0x00, 0x00, /**/ 0xff, 0xff, 0xff,
0xff, 0x00, 0x00, /**/ 0xff, 0xff, 0xff, /**/ 0xff, 0x00, 0x00,
0xff, 0xff, 0xff, /**/ 0xff, 0x00, 0x00, /**/ 0xff, 0xff, 0xff
};
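The JOGL-side counterpart of that is a ByteBuffer. One thing to watch out for here: a row of this 3×3 RGB image is 9 bytes, which is not a multiple of 4, so the default unpack alignment of 4 would skew the image and you have to set it to 1. Again just an untested sketch under the same assumptions as above:

import java.nio.ByteBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL2;

class RgbBytePixelExample {
    static void drawDiamond(GL2 gl) {
        final byte O = (byte) 0xff;        // 255; the cast is needed because Java bytes are signed
        byte[] pixels = {                  // three bytes (red, green, blue) per pixel
            O, O, O,   O, 0, 0,   O, O, O,
            O, 0, 0,   O, O, O,   O, 0, 0,
            O, O, O,   O, 0, 0,   O, O, O,
        };
        ByteBuffer data = Buffers.newDirectByteBuffer(pixels);
        gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1); // 9-byte rows are not 4-byte aligned
        gl.glRasterPos2i(0, 0);
        gl.glDrawPixels(3, 3, GL2.GL_RGB, GL2.GL_UNSIGNED_BYTE, data);
    }
}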
format = GL_RGB, type = GL_FLOAT
In this case each element of a pixel is an individual float in the range [0; 1]. The diamond would then look like this:
float pixels[] = {
    1., 1., 1., /**/ 1., 0., 0., /**/ 1., 1., 1.,
    1., 0., 0., /**/ 1., 1., 1., /**/ 1., 0., 0.,
    1., 1., 1., /**/ 1., 0., 0., /**/ 1., 1., 1.
};
You might expect GL_DOUBLE to work exactly the same way, just with double instead of float. However, GL_DOUBLE is not among the types glDrawPixels and glTexImage2D accept for pixel data, so if your values are doubles, convert them to float and use GL_FLOAT.
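On the Java side a float image like that goes into a FloatBuffer; same disclaimer, just a sketch:

import java.nio.FloatBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL2;

class RgbFloatPixelExample {
    static void drawDiamond(GL2 gl) {
        float[] pixels = {                 // three floats (red, green, blue) per pixel, each in [0; 1]
            1f, 1f, 1f,   1f, 0f, 0f,   1f, 1f, 1f,
            1f, 0f, 0f,   1f, 1f, 1f,   1f, 0f, 0f,
            1f, 1f, 1f,   1f, 0f, 0f,   1f, 1f, 1f,
        };
        FloatBuffer data = Buffers.newDirectFloatBuffer(pixels);
        gl.glRasterPos2i(0, 0);
        gl.glDrawPixels(3, 3, GL2.GL_RGB, GL2.GL_FLOAT, data);
    }
}

If your source data comes as double[], convert it to float[] before wrapping it.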
I hope you now get the basic gist.
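And to tie this back to the advice at the top: rather than calling glDrawPixels every frame, you would upload the image once with glTexImage2D and then draw a textured quad. A rough, untested sketch using the fixed-function pipeline, under the same JOGL assumptions as the snippets above (note that the 3×3 size relies on non-power-of-two texture support, which is core since OpenGL 2.0):

import java.nio.ByteBuffer;
import com.jogamp.common.nio.Buffers;
import com.jogamp.opengl.GL2;

class TexturedQuadExample {
    private int texture;

    // Create and fill the texture once, e.g. in GLEventListener.init().
    void init(GL2 gl) {
        final byte O = (byte) 0xff;
        byte[] pixels = {                  // the same 3x3 red diamond, GL_RGB / GL_UNSIGNED_BYTE
            O, O, O,   O, 0, 0,   O, O, O,
            O, 0, 0,   O, O, O,   O, 0, 0,
            O, O, O,   O, 0, 0,   O, O, O,
        };
        ByteBuffer data = Buffers.newDirectByteBuffer(pixels);

        int[] names = new int[1];
        gl.glGenTextures(1, names, 0);
        texture = names[0];
        gl.glBindTexture(GL2.GL_TEXTURE_2D, texture);
        gl.glPixelStorei(GL2.GL_UNPACK_ALIGNMENT, 1);
        gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
        gl.glTexParameteri(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
        gl.glTexImage2D(GL2.GL_TEXTURE_2D, 0, GL2.GL_RGB, 3, 3, 0,
                        GL2.GL_RGB, GL2.GL_UNSIGNED_BYTE, data);
    }

    // Draw a quad covering [-1, 1] x [-1, 1] with the texture on it,
    // e.g. in GLEventListener.display().
    void draw(GL2 gl) {
        gl.glEnable(GL2.GL_TEXTURE_2D);
        gl.glBindTexture(GL2.GL_TEXTURE_2D, texture);
        gl.glBegin(GL2.GL_QUADS);
        gl.glTexCoord2f(0f, 0f); gl.glVertex2f(-1f, -1f);
        gl.glTexCoord2f(1f, 0f); gl.glVertex2f( 1f, -1f);
        gl.glTexCoord2f(1f, 1f); gl.glVertex2f( 1f,  1f);
        gl.glTexCoord2f(0f, 1f); gl.glVertex2f(-1f,  1f);
        gl.glEnd();
    }
}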
Upvotes: 1