FFMPEG: Mapping YUV data to output buffer of decode function


I am modifying a video decoder in FFMPEG. The decoded output is in YUV420 format, and I have three pointers, one per plane:

yPtr -> pointer to the luma (Y) plane
uPtr -> pointer to the chroma Cb (U) plane
vPtr -> pointer to the chroma Cr (V) plane

However, the output pointer to which I need to map my YUV data is of type void, and there is only one of it.

i.e.: void *data is the output pointer to which I need to map yPtr, uPtr, and vPtr. How should I do this?

One approach I have tried is to allocate a new buffer whose size equals the sum of the Y, U, and V plane sizes, copy the contents of yPtr, uPtr, and vPtr into it, and assign the pointer to this buffer to the *data output pointer.

However, this approach is not preferred because of the extra memcpy and its performance cost.

Can anyone please suggest an alternative? This may not be directly related to FFMPEG, but since I'm modifying decoder code in FFMPEG's libavcodec, I'm tagging it as FFMPEG.

Edit: What I'm trying to do:

Actually, my understanding is that if I make this pointer point to the void *data pointer of the decode function of any decoder, and set *got_frame_ptr to 1, the framework will take care of dumping this data into the YUV file. Is my understanding right?

The function prototype of my custom video decoder, or of any video decoder in ffmpeg, is shown below:

static int myCustomDec_decode_frame(AVCodecContext *avctx,
            void *data, int *data_size,
            uint8_t *buf, int buf_size) {

I'm referring to this post, FFMPEG: Explain parameters of any codecs function pointers, and assuming that I need to point *data to my YUV data pointer, after which ffmpeg will take care of the dumping. Please provide suggestions regarding the same.

1 Answer

Answered by szatmary:

You are doing something wrong. You don't explain why you need to map the planes to a single pointer. What are you trying to accomplish by doing this? There is no reason to ever need to do this. Are you trying to convert from a planar to a non-planar format (like YUYV, or RGB)? If so, it is more complicated than a memcpy: the bytes need to be interleaved. You can do this with libswscale.

Update:

First, your prototype is wrong. The correct one is:

static int cook_decode_frame(AVCodecContext *avctx, void *data,
                             int *got_frame_ptr, AVPacket *avpkt)

That is answered in the question you linked. Next, void *data points to an AVFrame.

static int decode_frame(AVCodecContext *avctx, void *data,
                        int *got_frame, AVPacket *avpkt)
{
    ...
    AVFrame *pict      = data;
    ...
}

And in vorbisdec.c:

static int vorbis_decode_frame(AVCodecContext *avctx, void *data,
                               int *got_frame_ptr, AVPacket *avpkt)
{
    ...
    AVFrame *frame     = data;
    ...
}

And in the example linked in the question you referenced:

static int cook_decode_frame(AVCodecContext *avctx, void *data,
                             int *got_frame_ptr, AVPacket *avpkt)
{
    AVFrame *frame     = data;
    ...
}