I have an 8 MP (3296 x 2472) GigE camera that is capable of running at around 25 fps with 14-bit Bayer-encoded frames.
I have code (using the Vimba API) that can capture frames at full rate and write the raw data to disk. However, we also wish to implement a network stream of this video feed, and we don't have the bandwidth to broadcast the frames in their raw format.
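To put a number on that bandwidth problem, here is a rough back-of-the-envelope calculation, assuming each 14-bit Bayer sample is delivered unpacked in 2 bytes (how most GigE Vision drivers hand over 14-bit data):

```python
# Rough data-rate estimate for the raw feed described above.
# Assumption: 14-bit samples are stored unpacked, padded to 16 bits.
width, height, fps = 3296, 2472, 25
bytes_per_sample = 2                 # 14-bit sample in a 16-bit container

pixels_per_frame = width * height    # ~8.1 million pixels
bytes_per_frame = pixels_per_frame * bytes_per_sample
bytes_per_second = bytes_per_frame * fps

print(f"{bytes_per_frame / 1e6:.1f} MB per frame")
print(f"{bytes_per_second / 1e6:.0f} MB/s "
      f"(~{bytes_per_second * 8 / 1e9:.2f} Gbit/s)")
```

That works out to roughly 16 MB per frame and around 400 MB/s, which is far beyond what you would want to push over a typical network link uncompressed.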
Ultimately, I think I want to create an H.264 network stream from the frames, but I'm not sure a normal computer is going to be able to encode at the data rates I'm running. So I'm thinking I might need to use a GPU or some other hardware accelerator.
Does anyone have any specific advice on where to start?
GPUs or FPGAs are where to go. Which is best depends on your specific needs, but I think GPUs make more sense for most people.
Take a look at NVENC, Nvidia's dedicated hardware block and API for real-time video encoding. You may not have to get your hands too dirty.
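If you'd rather not program against the NVENC API directly, a common shortcut is to pipe raw frames into FFmpeg and let its `h264_nvenc` encoder drive the hardware. A minimal sketch, assuming an FFmpeg build with NVENC support, frames already debayered to 8-bit BGR, and an illustrative multicast destination:

```python
# Sketch: feed raw frames to FFmpeg's NVENC-backed H.264 encoder over a pipe.
# Assumptions: ffmpeg is on PATH and built with NVENC; frames are debayered
# 8-bit BGR; the destination URL is only an example.
import subprocess

WIDTH, HEIGHT, FPS = 3296, 2472, 25

cmd = [
    "ffmpeg",
    "-f", "rawvideo",              # raw, headerless input
    "-pix_fmt", "bgr24",
    "-s", f"{WIDTH}x{HEIGHT}",
    "-r", str(FPS),
    "-i", "-",                     # frames arrive on stdin
    "-c:v", "h264_nvenc",          # NVIDIA hardware encoder
    "-f", "mpegts",
    "udp://239.0.0.1:5000",        # illustrative multicast address
]

def start_encoder():
    # Each captured frame is then pushed with proc.stdin.write(frame_bytes).
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)
```

This keeps your capture code simple: the Vimba callback just writes each frame's bytes to the encoder's stdin.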
Note that the debayer operation can also be done on the GPU, though I'm not sure you can avoid a round-trip transfer to host memory when handing the result to the NVENC API; at these data rates it may be insignificant anyway.
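For reference, the debayer step itself is cheap per-pixel work. Here is a minimal CPU sketch of the simplest variant, collapsing each 2x2 quad into one RGB pixel at half resolution; the RGGB layout is an assumption (check your camera's pixel format), and a GPU version would do the same arithmetic in a kernel or shader:

```python
# Minimal half-resolution debayer: one RGB pixel per 2x2 RGGB quad.
# Assumption: the sensor pattern is RGGB; adjust the slices for GRBG/BGGR etc.
import numpy as np

def debayer_rggb_half(raw):
    """raw: 2-D array of Bayer samples (even width/height), RGGB pattern."""
    r  = raw[0::2, 0::2]                       # red sites
    g1 = raw[0::2, 1::2]                       # green sites, even rows
    g2 = raw[1::2, 0::2]                       # green sites, odd rows
    b  = raw[1::2, 1::2]                       # blue sites
    g = (g1.astype(np.uint32) + g2) // 2       # average the two greens
    return np.dstack([r, g, b]).astype(raw.dtype)
```

This halves the resolution, which may even be desirable for a network preview stream; full-resolution bilinear interpolation is the usual next step up.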
Update: AMD now has a comparable encoding library as well (AMF).