I'm just learning how to texture in OpenGL and am a bit confused by some of the results I'm getting.

I'm using stb_image to load the following checkerboard png image:

Black and white checkerboard image

When I saved the png image I explicitly chose to save it as 32-bit. That led me to believe that each component (RGBA) would be stored as 8 bits, for a total of 32 bits (the size of an unsigned int). However, using the following code:

unsigned char * texture_data = 
  stbi_load("resources/graphics-scene/tut/textures/checker.png", &w, &h, nullptr, 4);

// ...

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
  GL_RGBA, GL_UNSIGNED_INT_8_8_8_8,  texture_data);

yields:

Black and white checkerboard in engine

If I instead use GL_UNSIGNED_BYTE for the type parameter, I get the proper results.

Also, just in case it helps, I tried the following image:

Colored checkerboard image

which yields

Colored checkerboard in engine

GL_UNSIGNED_BYTE gives the correct result in this case as well.

I'm not sure if this is a case of me misunderstanding glTexImage2D or stb_image (does it convert the loaded data to 8-bit? That would seem unlikely to me).

EDIT: I just finally found a related post (I had already searched some but had no luck). However, the answer (https://stackoverflow.com/a/4191875/2507444) confuses me. If that is the case - that the type parameter specifies how many bytes per component - then what exactly do things like GL_UNSIGNED_BYTE_3_3_2 and GL_UNSIGNED_INT_8_8_8_8 mean???

Answer by Nicol Bolas (accepted):

If that is the case - that the type parameter specifies how many bytes per component - then what exactly do things like GL_UNSIGNED_BYTE_3_3_2 and GL_UNSIGNED_INT_8_8_8_8 mean???

It does both, depending on what the actual type is.

If the pixel transfer type is just a data type, then it specifies the data size per-component. If it has numbers in it, then the type specifies the size of the data per-pixel; the numbers specify the individual component sizes within that data type.

GL_UNSIGNED_INT_8_8_8_8 means that OpenGL will interpret each pixel as an unsigned integer. The first component will be the high 8 bits, the next will be the next 8 bits, and so forth.
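For example, here is a rough sketch (not OpenGL's actual code) of how one GL_UNSIGNED_INT_8_8_8_8 pixel is pulled apart; the field widths come straight from the 8_8_8_8 in the name:

GLuint pixel   = 0xFF8040C0u;           // one whole pixel, as a native unsigned int
GLubyte first  = (pixel >> 24) & 0xFF;  // 0xFF - first component, high 8 bits
GLubyte second = (pixel >> 16) & 0xFF;  // 0x80
GLubyte third  = (pixel >>  8) & 0xFF;  // 0x40
GLubyte fourth = (pixel >>  0) & 0xFF;  // 0xC0 - last component, low 8 bits

GL_UNSIGNED_BYTE_3_3_2 works the same way, except the pixel is a single byte and the fields are 3, 3, and 2 bits wide (bits 7..5, 4..2, and 1..0).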

However, your problem comes from the fact that stb_image does not work with unsigned integers. Each pixel is written as 4 separate bytes, in RGBA order. Basically, it does this:

GLubyte arr[4] = {red, green, blue, alpha};  // four separate bytes, in memory order R, G, B, A

Now, that may sound like the same thing. But it isn't. The reason why has to do with endianness.

When you do this in C/C++:

GLuint foo = 0;
foo |= (red   << 24);  // red goes into the high 8 bits
foo |= (green << 16);
foo |= (blue  <<  8);
foo |= (alpha <<  0);  // alpha goes into the low 8 bits

OpenGL's data types require GLuint to be an unsigned integer exactly 32 bits in size. And assuming that red, green, blue, and alpha are all GLubytes (8-bit unsigned integers), C and C++ say that this will pack the red bits into the high byte, the green into the next one, and so on. The C and C++ standards require this to work.

However, the C and C++ standards do not require this to work:

GLubyte *ptr = (GLubyte*)&foo;
ptr[0] == ((foo >> 24) & 0xFF);

That is, the first byte of foo in memory does not have to be the red component.

In little endian byte ordering, the low byte of a 32-bit integer is stored first, not last.
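You can see this directly (assuming a little-endian machine, such as x86):

GLuint foo = 0x11223344u;
GLubyte *ptr = (GLubyte*)&foo;
// On a little-endian CPU: ptr[0] == 0x44 (the LOW byte comes first in memory)
//                         ptr[3] == 0x11 (the high byte comes last)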

When OpenGL sees GL_UNSIGNED_INT, that means it will interpret those four bytes exactly the way your CPU does. So GL_UNSIGNED_INT_8_8_8_8 will do the equivalent of foo above. The first byte of memory it sees will be interpreted, on a little endian machine, as the low byte, not the high byte.
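Concretely, with the RGBA byte order that stb_image writes, this is roughly what happens on a little-endian machine:

GLubyte arr[4] = {red, green, blue, alpha};  // what stb_image stores
GLuint as_int = *(GLuint*)arr;               // what GL_UNSIGNED_INT_8_8_8_8 reads

// Little-endian: as_int == (alpha << 24) | (blue << 16) | (green << 8) | red,
// so OpenGL's "first component" (the high 8 bits) is actually your alpha,
// the next is your blue, and so on - the channels come out reversed.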

stb_image does not output GL_UNSIGNED_INT_8_8_8_8. It treats each pixel as a 4-byte array, like arr above. Therefore, you must tell OpenGL that this is how your data is stored. So you say that each component is one byte. Which is what GL_UNSIGNED_BYTE does.
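So for the 4-component data stb_image returns, the upload call from the question only needs the type changed (a sketch, reusing the question's variables):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
  GL_RGBA, GL_UNSIGNED_BYTE, texture_data);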