Reputation: 1237
I have read some articles about DXTn and BCn texture compression, and I learned how to encode these formats. But I still can't figure out the difference between the DXT2 and DXT3 formats, or between DXT4 and DXT5; I assume the difference is the same in both cases.
The D3D SDK documentation says:
The difference between DXT2 and DXT3 is that, in the DXT2 format, it is assumed that the color data has been premultiplied by alpha. In the DXT3 format, it is assumed that the color is not premultiplied by alpha. These two formats are needed because, in most cases, by the time a texture is used, just examining the data is not sufficient to determine if the color values have been multiplied by alpha. Because this information is needed at run time, the two FOURCC codes are used to differentiate these cases. However, the data and interpolation method used for these two formats are identical.
Does it mean this: if we have a pixel with color RGB 127,127,127 and alpha 0, then DXT2 will actually store RGB 0,0,0 instead, because 127 was multiplied by alpha 0?
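To make sure I understand, here is a rough sketch of that premultiply step (the Pixel struct and Premultiply function are just my own illustrative names, not anything from the SDK):

    #include <cstdint>

    struct Pixel { uint8_t r, g, b, a; };

    // 8-bit premultiply with rounding: round(c * a / 255).
    Pixel Premultiply(Pixel p)
    {
        auto mul = [&](uint8_t c) {
            return static_cast<uint8_t>((c * p.a + 127) / 255);
        };
        return { mul(p.r), mul(p.g), mul(p.b), p.a };
    }

    // Premultiply({127, 127, 127, 0}) gives {0, 0, 0, 0}: once alpha
    // is 0, the original RGB 127,127,127 is lost for good.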
If that's right, what benefit does it bring in actual game development? In my opinion, it's totally unnecessary.
Besides, what happened to the BC2 and BC3 formats? As far as I know, DXT2/3 maps to BC2 and DXT4/5 maps to BC3, so the distinction no longer exists. Was it abandoned? Why?
Any help would be appreciated, thanks!
Upvotes: 4
Views: 1532
Reputation: 1055
To the best of my knowledge, there really was no physical difference between DXT2 & 3 or between DXT4 & 5; GPUs decoded them in the same way. As per the quote from the documentation, I suspect the only reason for the separate FOURCC codes was to give the developer a hint as to whether the texture had been premultiplied, and thus which framebuffer blend mode to use, i.e. (1, 1-SrcAlpha) vs. (SrcAlpha, 1-SrcAlpha).
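In D3D11 terms, the two setups would look something like this (just a sketch; the DXT-era fixed-function path would set D3DRS_SRCBLEND/D3DRS_DESTBLEND instead, but the blend factors are the same):

    #include <d3d11.h>

    D3D11_BLEND_DESC MakeBlendDesc(bool premultiplied)
    {
        D3D11_BLEND_DESC desc = {};
        D3D11_RENDER_TARGET_BLEND_DESC& rt = desc.RenderTarget[0];
        rt.BlendEnable = TRUE;

        // Premultiplied (DXT2/DXT4):  (1, 1 - SrcAlpha)
        // Straight alpha (DXT3/DXT5): (SrcAlpha, 1 - SrcAlpha)
        rt.SrcBlend  = premultiplied ? D3D11_BLEND_ONE : D3D11_BLEND_SRC_ALPHA;
        rt.DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
        rt.BlendOp   = D3D11_BLEND_OP_ADD;

        rt.SrcBlendAlpha  = D3D11_BLEND_ONE;
        rt.DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
        rt.BlendOpAlpha   = D3D11_BLEND_OP_ADD;
        rt.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
        return desc;
    }

Either way the decoded texels are identical; only the blend factors chosen at draw time change.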
Upvotes: 4