Convert X8B8G8R8 to R8G8B8 C++ code


I would like to convert a hardware pixel buffer that is in X8B8G8R8 format into a 24-bit R8G8B8 memory buffer.

Here is my attempt:

    // pixels is a uint32_t* buffer
    src.pixels = new pixel_t[src.width * src.height];

    readbuffer->lock( Ogre::HardwareBuffer::HBL_DISCARD );
    const Ogre::PixelBox &pb = readbuffer->getCurrentLock();

    // Image data starts at pb.data and has format pb.format
    uint32 *data = static_cast<uint32*>(pb.data);
    size_t height = pb.getHeight();
    size_t width = pb.getWidth();
    size_t pitch = pb.rowPitch; // Skip between rows of the image, in pixels
    for ( size_t y = 0; y < height; ++y )
    {
        for ( size_t x = 0; x < width; ++x )
        {
            src.pixels[pitch*y + x] = data[pitch*y + x];
        }
    }
    readbuffer->unlock();
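If the target really needs to be a tightly packed 24-bit buffer (3 bytes per pixel rather than one uint32_t per pixel), the copy loop has to split each pixel into bytes. A minimal sketch, assuming little-endian X8B8G8R8 where R occupies the low byte of the 32-bit word and `pitch` is measured in pixels, as in the Ogre code above (`packXBGRtoRGB24` is a hypothetical helper name):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Pack X8B8G8R8 pixels (R in the low byte) into a tightly packed
// 3-bytes-per-pixel R8G8B8 buffer, dropping the X8 byte and any row padding.
std::vector<uint8_t> packXBGRtoRGB24(const uint32_t* data,
                                     size_t width, size_t height, size_t pitch) {
    std::vector<uint8_t> out;
    out.reserve(width * height * 3);
    for (size_t y = 0; y < height; ++y) {
        for (size_t x = 0; x < width; ++x) {
            uint32_t px = data[pitch * y + x];
            out.push_back(static_cast<uint8_t>(px      ));  // R (bits 0-7)
            out.push_back(static_cast<uint8_t>(px >>  8));  // G (bits 8-15)
            out.push_back(static_cast<uint8_t>(px >> 16));  // B (bits 16-23)
        }
    }
    return out;
}
```

Adjust the shift amounts if your platform or pixel format places the channels differently.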

There is 1 answer

N00byEdge (accepted answer):

This should do it:

uint32_t BGRtoRGB(uint32_t col) {
    return (col & 0x0000ff00) | ((col & 0x000000ff) << 16) | ((col & 0x00ff0000) >> 16);
}

With

src.pixels[pitch*y + x] = BGRtoRGB(data[pitch*y + x]);

Note: BGRtoRGB converts in either direction if you want it to, but keep in mind it throws away whatever is in the X8 bits (alpha?); the channel values themselves are preserved.

To convert the other way around, with an alpha of 0xff:

uint32_t RGBtoXBGR(uint32_t col) {
    return 0xff000000 | (col & 0x0000ff00) | ((col & 0x000000ff) << 16) | ((col & 0x00ff0000) >> 16);
}