I am following the FFmpeg video encoding example here, but it generates dummy YUV420P frames, whereas I already have a BGR image captured from a camera.
I am not sure how to fill frame->data[] and frame->linesize[] with my BGR image instead, so I can encode an H264 video.
EDIT:
After Ronald's answer, I have the following code (it's called for every new picture the camera sends):
.............
AVFrame *bgrFrame = av_frame_alloc();
bgrFrame->width = originalBGRImage.cols;
bgrFrame->height = originalBGRImage.rows;
ret = av_image_alloc(bgrFrame->data, bgrFrame->linesize, bgrFrame->width, bgrFrame->height, AV_PIX_FMT_BGR24, 32);
/////////////////////////////////////
// This works and prevents the memory leak... if I remove it, it consumes all the RAM.
// But I can't free the memory here, since I need it later...
av_freep(&bgrFrame->data[0]);
av_frame_free(&bgrFrame);
return;
/////////////////////////////////////
ret = av_image_fill_pointers(bgrFrame->data, AV_PIX_FMT_BGR24, bgrFrame->height, originalBGRImage.data, bgrFrame->linesize);
/////////////////////////////////////
// Here I am done using the memory, so I want to free it... but this same code crashes the program.
av_freep(&bgrFrame->data[0]);
av_frame_free(&bgrFrame);
return;
/////////////////////////////////////
So if I remove the av_freep(&bgrFrame->data[0]); at the end of the code, I have a memory leak... but leaving it there crashes. What's the correct way to free the used memory?
Use av_image_fill_linesizes() to fill the linesizes (unless they're padded, in which case you should specify them manually), and after that use av_image_fill_pointers() to fill the data pointers. As pix_fmt, use AV_PIX_FMT_BGR24.
Now, that gives you a BGR24 picture. You can encode that as an RGB H264 stream using the libx264rgb encoder. If you want to encode YUV H264, you will first need to convert the RGB image to YUV, which you can do using libswscale. Google "libswscale example" to find example code for that.