I am using DirectX 9 with 64-bit render targets, and I need to read the data on the render target surfaces. Each color component (a, r, g, b) is encoded with 2 bytes (16 bits × 4 = 64). How do I convert each 16-bit color component to a 32-bit floating-point variable? Here is what I've tried:
BYTE *pData = ( BYTE* )renderTargetData;
for( UINT y = 0; y < Height; ++y )
{
    for( UINT x = 0; x < width; ++x )
    {
        // declare a 4-component vector to hold the 4 floats
        D3DXVECTOR4 vColor;
        // convert the pixel color from 16 to 32 bits
        D3DXFloat16To32Array( ( FLOAT* )&vColor, ( D3DXFLOAT16* )&pData[ y + 8 * x ], 4 );
    }
}
For some reason this is incorrect. In one case, where the actual renderTargetData for one pixel is ( 0, 0, 0, 65535 ), the conversion gives ( 0, 0, 0, -131008.00 ).
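(For reference, each pixel in this layout occupies 8 bytes, i.e. four consecutive 16-bit words. A struct view of one pixel would look something like the sketch below; the channel order shown is only a guess, since it depends on the exact surface format:)

struct Pixel64
{
    WORD r, g, b, a;   // four 16-bit channels, 8 bytes per pixel
};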
Upvotes: 2
Views: 2387
In general, converting an integer v in the range [0..n] to a float in the range [0.0..1.0] is:

float f = v / (float)n;
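For example, with n = 65535 this maps 0 to 0.0f, 32768 to roughly 0.5f, and 65535 to exactly 1.0f.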
The -131008.00 you got is the giveaway: D3DXFloat16To32Array reinterprets each WORD's bit pattern as a 16-bit float, so the integer 65535 (0xFFFF) decodes to -131008 instead of 65535. So, in your case, a loop that does:

vColor.x = pData[ 4 * ( y * width + x ) ] / 65535.0f;
vColor.y = pData[ 4 * ( y * width + x ) + 1 ] / 65535.0f;
// ... etc.

should work, if we change the BYTE *pData = ( BYTE* )renderTargetData; into WORD *pData = ( WORD* )renderTargetData;. Note that the index is now in 16-bit words, four per pixel, and assumes the rows are tightly packed.
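Here is a fuller sketch of the read-back loop, assuming the surface format is D3DFMT_A16B16G16R16 (unsigned 16-bit integer channels) and that the data comes from LockRect on a system-memory copy of the render target; pSurface, lr and vColor are illustrative names, and D3DLOCKED_RECT::Pitch is honored because a row can be wider than width * 8 bytes:

D3DLOCKED_RECT lr;
if( SUCCEEDED( pSurface->LockRect( &lr, NULL, D3DLOCK_READONLY ) ) )
{
    for( UINT y = 0; y < Height; ++y )
    {
        // Pitch is in bytes, so step the row pointer byte-wise,
        // then view the row as 16-bit words (4 per pixel).
        const WORD *pRow = ( const WORD* )( ( const BYTE* )lr.pBits + y * lr.Pitch );
        for( UINT x = 0; x < width; ++x )
        {
            const WORD *pPixel = pRow + 4 * x;
            // For D3DFMT_A16B16G16R16 the words run R, G, B, A in memory.
            D3DXVECTOR4 vColor( pPixel[ 0 ] / 65535.0f,
                                pPixel[ 1 ] / 65535.0f,
                                pPixel[ 2 ] / 65535.0f,
                                pPixel[ 3 ] / 65535.0f );
            // ... use vColor here ...
        }
    }
    pSurface->UnlockRect();
}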
But there may be some clever way for DX to do this for you that I don't know of, since I don't really work with DirectX myself.
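(One related case worth noting: if the render target were actually D3DFMT_A16B16G16R16F, i.e. half-float channels rather than integers, then D3DXFloat16To32Array would be the right tool after all, and only the addressing would need fixing. A hypothetical per-pixel snippet, reusing the lr row/pitch handling from the sketch above:)

const D3DXFLOAT16 *pHalf =
    ( const D3DXFLOAT16* )( ( const BYTE* )lr.pBits + y * lr.Pitch ) + 4 * x;
D3DXVECTOR4 vColor;
D3DXFloat16To32Array( ( FLOAT* )&vColor, pHalf, 4 );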
Upvotes: 2