I have this DDS file. I wrote a simple DDS reader to read its header and print the details based on the MSDN specification. It says that it's an RGB DDS with a 32 bits-per-pixel bit depth and that the alpha is ignored, i.e. the pixel format is X8R8G8B8 (or A8R8G8B8). To verify this, I also opened the file in a hex editor, which shows the first 4 bytes (from the start of the data) as `BB GG RR 00` (where `BB`, `GG` and `RR` stand for the first pixel's actual blue, green and red hex values). I've read that OpenGL's texture copy functions act on bytes (at least conceptually), and thus from its viewpoint this data is B8G8R8A8. Please correct me if my understanding is wrong here.
Now, to `glTexImage2D`: for the internal format I pass `GL_RGBA8`, and for the external format and type I pass `GL_BGRA` and `GL_UNSIGNED_BYTE`. This leads to a blue tint in the rendered output. In my fragment shader, just to verify, I did a swizzle to swap R and B, and it renders correctly.

I reverted the shader code and then replaced the type `GL_UNSIGNED_BYTE` with `GL_UNSIGNED_INT_8_8_8_8_REV` (based on this suggestion), and it still renders with the blue tint. Now, changing the external format to `GL_RGBA`, with either type (`GL_UNSIGNED_BYTE` or `GL_UNSIGNED_INT_8_8_8_8_REV`), it renders fine!
- Since OpenGL doesn't support ARGB, passing `GL_BGRA` is understandable. But how come `GL_RGBA` works correctly here? This seems wrong.
- Why does the type have no effect on the ordering of the channels?
- Does `GL_UNPACK_ALIGNMENT` have a bearing on this? I left it at the default (4). If I read the manual right, it should have no effect on how the client memory is read.
Details
- OpenGL version 3.3
- Intel HD Graphics that supports up to OpenGL 4.0
- Used GLI to load the DDS file and get the data pointer
I finally found the answers myself! Posting them here so they may help someone in the future.
To the first question: by inspecting the memory pointed to by the `void* data` that GLI returns when asked for a pointer to the image's binary data, it can be seen that GLI had already reordered the bytes while transferring the data from the file to client memory. The memory window shows, from lower to higher addresses, data in the form `RR GG BB AA`. This explains why passing `GL_RGBA` works. The fault on GLI's part, however, is that when queried for the external format it returns `GL_BGRA` instead of `GL_RGBA`. A bug to address this has been raised.

To the second question: the type does have an effect; my premise was wrong. The machine I'm running this experiment on is an Intel x86_64, which is little-endian, and the OpenGL Wiki clearly states that client pixel data is always in client byte ordering. Now, when
`GL_UNSIGNED_BYTE` is passed, each component is an individual byte, read in memory-address order; endianness never enters the picture, so `RR GG BB AA` in RAM is seen as the components R, G, B, A, and `GL_RGBA` renders correctly. When `GL_UNSIGNED_INT_8_8_8_8_REV` is passed, the underlying base type (not the component type) is a whole `unsigned int`; reading an `int` from `data` on a little-endian machine means the variable in the register ends up with the bytes swapped, i.e. `RR GG BB AA` in RAM becomes `AA BB GG RR` in the register. The `_REV` variant assigns the format's components starting from the least significant byte, which undoes exactly this swap, so it also renders correctly. However, if the type is passed as `GL_UNSIGNED_INT_8_8_8_8`, the components are assigned from the most significant byte downwards; a texture addressed as `GL_RGBA` then reads `AA` where `RR` is expected, and the rendering is screwed up.

To the third question: `GL_UNPACK_ALIGNMENT` does have a bearing on the unpacking of texture data from client memory to server memory; however, that is to account for the padding bytes present at the end of an image's rows, so that the stride (pitch) is computed correctly. It has no bearing on this issue specifically, since the pixels are 4 bytes wide (so every row is already a multiple of 4 bytes) and the DDS file in question has no padding anyway (its pitch flag is 0).
Related material: https://www.opengl.org/wiki/Pixel_Transfer#Pixel_type