Reputation: 111
I've been working through Frank D. Luna's book "Introduction to 3D Game Programming with DirectX 10" and one of the problems is to switch from using
D3DXCOLOR color (128 bits)
to
UINT color (32 bits)
Presumably the format code to use is: DXGI_FORMAT_R8G8B8A8_UNORM.
In my mind this means you have a variable whose bytes describe the channels in the exact order RGBA. (Is this the correct interpretation? I'm asking because I'm sure I've read that when you want RGBA you really need a code like A#R#G#B#, where the alpha channel is specified first.)
Anyway, I opted (there's probably a better way) to do:
UINT color = (UINT)WHITE;
where WHITE is defined: const D3DXCOLOR WHITE(1.0f, 1.0f, 1.0f, 1.0f);
This cast is defined in the extension to D3DXCOLOR.
However, when DXGI_FORMAT_R8G8B8A8_UNORM is used with the UINT color variable you get the wrong results. Luna attributes this to endianness.
Is this because the cast from D3DXCOLOR produces a UINT of the form RGBA, but because Intel x86 is little-endian you really get ABGR at the byte level? So when this variable actually gets interpreted, the shader sees ABGR instead of RGBA? Shouldn't it just know, when interpreting the bytes, that the higher-order bits are at the smaller address? And the last question: since the format is specified as DXGI_FORMAT_R8G8B8A8_UNORM, does this mean that R should be at the smallest address and A at the largest? I'm sure there are a ton of misconceptions here, so please feel free to dispel them.
Upvotes: 3
Views: 3580
Reputation: 41077
When you use the statement "UINT color = (UINT)WHITE" it invokes the D3DXCOLOR operator DWORD () conversion. Since legacy D3DX9Math was designed for Direct3D 9, that's a BGRA color (equivalent to DXGI's DXGI_FORMAT_B8G8R8A8_UNORM). You can see the layout in d3dx9math.inl; here is the inverse conversion, the constructor that unpacks a DWORD:
D3DXINLINE
D3DXCOLOR::D3DXCOLOR( DWORD dw )
{
    CONST FLOAT f = 1.0f / 255.0f;
    r = f * (FLOAT) (unsigned char) (dw >> 16);
    g = f * (FLOAT) (unsigned char) (dw >> 8);
    b = f * (FLOAT) (unsigned char) (dw >> 0);
    a = f * (FLOAT) (unsigned char) (dw >> 24);
}
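For reference, the forward conversion that the cast actually invokes, operator DWORD (), packs the channels the same way, with alpha in the top byte and blue in the bottom byte. Roughly (paraphrased from d3dx9math.inl, not a verbatim quote):

D3DXINLINE
D3DXCOLOR::operator DWORD () const
{
    // Clamp each float channel to [0, 1] and scale to 0..255.
    DWORD dwR = r >= 1.0f ? 0xff : r <= 0.0f ? 0x00 : (DWORD) (r * 255.0f + 0.5f);
    DWORD dwG = g >= 1.0f ? 0xff : g <= 0.0f ? 0x00 : (DWORD) (g * 255.0f + 0.5f);
    DWORD dwB = b >= 1.0f ? 0xff : b <= 0.0f ? 0x00 : (DWORD) (b * 255.0f + 0.5f);
    DWORD dwA = a >= 1.0f ? 0xff : a <= 0.0f ? 0x00 : (DWORD) (a * 255.0f + 0.5f);

    // A in bits 24-31, R in 16-23, G in 8-15, B in 0-7: a D3DFMT_A8R8G8B8 value.
    return (dwA << 24) | (dwR << 16) | (dwG << 8) | dwB;
}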
Since the original Direct3D 10 version of DXGI (v1.0) did not define DXGI_FORMAT_B8G8R8A8_UNORM (it was added with DXGI 1.1), the 'default' color to use for Direct3D 10+ is RGBA.
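So if you want to keep a 32-bit UINT vertex color declared as DXGI_FORMAT_R8G8B8A8_UNORM, pack the bytes yourself with red in the least significant byte instead of relying on the D3DXCOLOR cast. A minimal sketch (the PackRGBA helper is just for illustration, not part of D3DX or the book):

#include <cstdint>

// Packs four floats in [0, 1] in the order DXGI_FORMAT_R8G8B8A8_UNORM expects:
// R in bits 0-7, G in 8-15, B in 16-23, A in 24-31. On a little-endian CPU
// those bytes land in memory as R, G, B, A.
inline uint32_t PackRGBA(float r, float g, float b, float a)
{
    auto to8 = [](float v) -> uint32_t
    {
        if (v <= 0.0f) return 0u;
        if (v >= 1.0f) return 255u;
        return static_cast<uint32_t>(v * 255.0f + 0.5f);
    };
    return to8(r) | (to8(g) << 8) | (to8(b) << 16) | (to8(a) << 24);
}

// White is 0xFFFFFFFF either way, but opaque red becomes 0xFF0000FF
// (A = 0xFF in the top byte, R = 0xFF in the bottom byte).
uint32_t white = PackRGBA(1.0f, 1.0f, 1.0f, 1.0f);
uint32_t red   = PackRGBA(1.0f, 0.0f, 0.0f, 1.0f);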
All DXGI formats are Little-Endian because that's true of all Direct3D 10+ Microsoft platforms. For Xbox 360, some special Big-Endian versions of Direct3D 9 D3DFORMATs were introduced, but that's not really what's at work here. The issue is more basic: BGRA and RGBA are both valid options, but Direct3D 9 preferred BGRA and Direct3D 10+ prefers RGBA.
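You can see the little-endian byte order directly by dumping the bytes of a packed value; this standalone snippet is just a demonstration, not code from the book:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    // Bit layout for DXGI_FORMAT_R8G8B8A8_UNORM: R in the low byte, A in the high byte.
    uint32_t rgba = 0x44332211u;   // A = 0x44, B = 0x33, G = 0x22, R = 0x11

    uint8_t bytes[4];
    std::memcpy(bytes, &rgba, sizeof(rgba));

    // On a little-endian machine this prints "11 22 33 44": the bytes sit in
    // memory as R, G, B, A, so R is at the smallest address and A at the largest.
    std::printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}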
To make things a little more confusing, the naming convention of Direct3D 9 D3DFMT colors is reversed for DXGI: "D3DFMT_A8R8G8B8" is the same thing as "DXGI_FORMAT_B8G8R8A8_UNORM". See this topic on MSDN. Historically, Windows graphics would call it a "32-bit ARGB" format, but the more natural way to describe it is "BGRA".
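A quick way to keep the two naming conventions straight: D3DFMT names read from the high bits of the packed DWORD down, while DXGI names read from the low byte up, which on a little-endian machine is increasing memory address. A small illustrative snippet (the variable name is just for the example):

#include <cstdint>

// D3DFMT_A8R8G8B8 lists channels from the most significant byte down, so the
// packed value is (A << 24) | (R << 16) | (G << 8) | B.
uint32_t argb = (0x44u << 24) | (0x11u << 16) | (0x22u << 8) | 0x33u;  // 0x44112233

// On a little-endian machine those bytes sit in memory as 33 22 11 44,
// i.e. B, G, R, A -- exactly the order the DXGI name
// DXGI_FORMAT_B8G8R8A8_UNORM describes, reading from the low byte up.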
Upvotes: 6