Reputation: 31
I am currently writing a raycaster and I wanted to add textures to my walls, but a color error is holding me back.
I use floating-point numbers in my C code to represent colors, but the graphic I am using is in the RGB byte format. The error occurs when I use the normalized floating-point values. Loaded with the following code, everything works correctly (but it uses the undesired RGB byte functions):
//... glfw window initialization...
for (int y = 0; y < 32; y++)
{
    for (int x = 0; x < 32; x++)
    {
        int p = (y * 32 + x) * 3;
        float r = rgb[p];
        float g = rgb[p + 1];
        float b = rgb[p + 2];
        glColor3ub(r, g, b);
        glPointSize(16);
        glBegin(GL_POINTS);
        glVertex2i(x * 16, y * 16);
        glEnd();
    }
}
Resulting in:
The error occurs if I try to normalize the bytes (one by one) with the following code:
//... glfw window initialization...
for (int y = 0; y < 32; y++)
{
    for (int x = 0; x < 32; x++)
    {
        int p = (y * 32 + x) * 3;
        float r = ((float)rgb[p]) / 255.0f;
        float g = ((float)rgb[p + 1]) / 255.0f;
        float b = ((float)rgb[p + 2]) / 255.0f;
        glColor3f(r, g, b);
        glPointSize(16);
        glBegin(GL_POINTS);
        glVertex2i(x * 16, y * 16);
        glEnd();
    }
}
The picture loaded this way has different colors:
Upvotes: 0
Views: 204
Reputation: 34585
Suppose the array rgb[] is signed, and consider a red component value of 255.
In the first example the 255 is actually -1 and is converted to -1.000000. This is then passed to the function, which converts it to type GLubyte, an unsigned 8-bit type. The conversion rules make this value 255, which is what you thought you had.
In the second example the 255 is again converted to -1.000000, but scaling it gives -0.003922 instead of the expected 1.000000, so the result is very different from what you expected.
The solution is to define the array as unsigned char or uint8_t.
Upvotes: 4