yba

Reputation: 1

How can I convert a byte in textureImage to a float?

I have to convert a TextureImage[][][] whose element type is GLubyte (each element represents one color channel of a pixel in the texture image) to GLfloat variables that represent a color, without using any OpenGL commands. Here is my code:

GLubyte TextureImage[TEXTURE_SIZE][TEXTURE_SIZE][3];
GLfloat pixColor1, pixColor2, pixColor3;

pixColor1 = (GLfloat)(GLint)TextureImage[t][s][0];
pixColor2 = (GLfloat)(GLint)TextureImage[t][s][1];
pixColor3 = (GLfloat)(GLint)TextureImage[t][s][2];
pixColor1 /= 255.0;
pixColor2 /= 255.0;
pixColor3 /= 255.0;

Upvotes: 0

Views: 415

Answers (2)

yba

Reputation: 1

Thanks. I've figured out the problem. I had to copy the GLubyte value into an unsigned char variable first, and only then convert it to an integer and a float.

GLubyte TextureImage[TEXTURE_SIZE][TEXTURE_SIZE][3];
GLfloat pixColor1, pixColor2, pixColor3;
unsigned int temp1, temp2, temp3;
unsigned char r, g, b;

r = TextureImage[s][t][0];
g = TextureImage[s][t][1];
b = TextureImage[s][t][2];
temp1 = r; /* plain value copy; adding '0' here would offset the value by 48 */
temp2 = g;
temp3 = b;
pixColor1 = temp1 / 255.0;
pixColor2 = temp2 / 255.0;
pixColor3 = temp3 / 255.0;
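
For reference, here is a minimal, self-contained sketch of the same conversion that compiles without the OpenGL headers (the GLubyte/GLfloat typedefs and the sample values are stand-ins for illustration, not part of the original code):

#include <stdio.h>

#define TEXTURE_SIZE 2

typedef unsigned char GLubyte;  /* stand-in so the sketch builds without GL headers */
typedef float GLfloat;

int main(void)
{
    GLubyte TextureImage[TEXTURE_SIZE][TEXTURE_SIZE][3] = {{{255, 128, 0}}};
    int s = 0, t = 0;

    /* Every value 0..255 is exactly representable as a float,
       so the implicit conversion and the division are lossless. */
    GLfloat pixColor1 = TextureImage[s][t][0] / 255.0f;
    GLfloat pixColor2 = TextureImage[s][t][1] / 255.0f;
    GLfloat pixColor3 = TextureImage[s][t][2] / 255.0f;

    printf("%f %f %f\n", pixColor1, pixColor2, pixColor3);
    /* prints: 1.000000 0.501961 0.000000 */
    return 0;
}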

Upvotes: 0

ryyker

Reputation: 23218

"...convert a byte in textureImage to a float?"

First an aside:

According to the OpenGL spec:

...GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation must use exactly the number of bits indicated in the table to represent a GL type.

So although we do not know exactly which C type GLint is equivalent to, we do know that it is 32 bits (4 bytes) wide, not 8 bits (1 byte) as would be true for a byte (native type unsigned char).

That makes the question How can I convert a byte in textureImage to a float? a bit suspicious as to what you are really doing, because GLint is not equivalent to a byte. This notwithstanding:

Because OpenGL is a library rooted in C, the type conversion is done like any other: with a type cast, used with caution.

In this case the standard specifies that GLint and GLfloat are each 32 bits wide, but width alone does not eliminate the risks, as @Margaret Bloom mentions in comments:

Beware that integers cannot be converted to float safely. One valid implementation of a float (actually the one used on almost all architectures) is through the IEEE754 binary32 format. This only has 23 bits of mantissa. A number like 0xFFFFFFFF cannot be converted exactly to float.
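
To see that rounding in action, this small sketch (assuming an IEEE754 binary32 float, as on almost all current hardware) converts 0xFFFFFFFF to float and prints the rounded result:

#include <stdio.h>

int main(void)
{
    unsigned int big = 0xFFFFFFFFu;   /* 4294967295 */
    float f = (float)big;             /* rounds to the nearest representable float */

    printf("%u -> %.1f\n", big, f);   /* typically prints: 4294967295 -> 4294967296.0 */
    return 0;
}

Note that this risk does not apply to 8-bit texel values: every integer from 0 to 255 fits comfortably in float's 23-bit mantissa, so the conversion in the question is exact.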

With that said, the typecast is syntactically identical to any other typecast in C.

The statement:

pixColor1 = (GLfloat)(GLint)TextureImage[t][s][0];
            //       ^^^^^^^ this part is not necessary but does not change the assigned value

pixColor1 = (GLfloat)TextureImage[t][s][0]; // modified to eliminate (GLint)

effectively casts the value contained in TextureImage[t][s][0] to a GLfloat, just as the pure C example below does:

float someFloatVal = (float)someIntegerArray[a][b][c];
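
As a hedged usage sketch (the variable names are made up for illustration), casting a single byte and then normalizing it to the [0, 1] range looks like this:

#include <stdio.h>

int main(void)
{
    unsigned char byteVal = 200;            /* one texel channel, stand-in for a GLubyte */
    float norm = (float)byteVal / 255.0f;   /* cast, then normalize to [0, 1] */

    printf("%d -> %f\n", byteVal, norm);    /* prints: 200 -> 0.784314 */
    return 0;
}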

Upvotes: 1
