In my OpenGL program, I'm loading a 24 BPP image with a width of 501. The GL_UNPACK_ALIGNMENT parameter is set to 4. From what I've read, this shouldn't work, because the size of each row being uploaded (501 * 3 = 1503 bytes) is not divisible by 4. However, I see a normal texture without artifacts when I display it.
So my code works, and I'm trying to understand why, so I can be sure it won't introduce bugs into the rest of the project.
Maybe it works because I'm not calling glTexImage2D with the image data directly. Instead, I first create a blank texture whose dimensions are powers of two, and then upload the pixels with glTexSubImage2D.
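Roughly, the upload path looks like this (the 512x512 size is only an example, and height/pixels stand in for my real variables):

// Create a blank power-of-two texture, then upload the
// 501-pixel-wide image into it at offset (0, 0).
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 512, 512, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);      // no pixel data yet
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 501, height,
                GL_RGB, GL_UNSIGNED_BYTE, pixels); // the actual upload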
EDIT:
But do you think it makes sense to write code like this?
// w     - the width of the image in pixels
// depth - the number of bytes per pixel
bool change_alignment = false;
if (depth != 4 && (w * depth) % 4 != 0) // row size is not a multiple of 4
{
    change_alignment = true;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}
// ... now use glTexImage2D
if (change_alignment) glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // restore the default
Would that prevent the application from crashing or malfunctioning?
It depends on where your image data is coming from.
The Windows BMP format, for example, enforces a 4-byte row alignment. Formats like this are exactly why OpenGL has a row-alignment setting in the first place.
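A quick sanity check (assuming your loader keeps the BMP padding in memory): BMP pads each row up to a multiple of 4 bytes, so a 501-pixel-wide, 24 BPP image occupies 1504 bytes per row rather than 1503, which is exactly the stride OpenGL expects with GL_UNPACK_ALIGNMENT set to 4. That would explain why your upload looks correct.

// Padded BMP row size for width 501 at 3 bytes per pixel:
size_t row_size = ((501 * 3 + 3) / 4) * 4; // 1504 bytes, not 1503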
So whether a 4-byte row alignment is correct for your data depends entirely on how that data is laid out in memory. Some image loaders will align rows to 4 bytes, and some will not.
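If your loader tells you the row stride it used, you can derive the unpack alignment from that instead of hard-coding it. A minimal sketch (the stride variable is hypothetical, standing in for whatever your loader reports):

// Pick the largest unpack alignment (8, 4, 2, or 1) that the
// actual row stride of the pixel data satisfies.
GLint alignment_for_stride(size_t stride)
{
    if (stride % 8 == 0) return 8;
    if (stride % 4 == 0) return 4;
    if (stride % 2 == 0) return 2;
    return 1;
}

// 'stride' is the byte distance between the starts of consecutive rows.
glPixelStorei(GL_UNPACK_ALIGNMENT, alignment_for_stride(stride));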