Pass ffmpeg OpenCL filter output to NVENC without hwdownload?


I'm trying to do tonemapping (and resizing) of a UHD HDR video stream with ffmpeg. The following command:

ffmpeg -vsync 0 -hwaccel cuda -init_hw_device opencl=ocl -filter_hw_device ocl 
    -threads 1 -extra_hw_frames 3 -c:v hevc_cuvid -resize 1920x1080 -i "INPUT.hevc" 
    -vf "hwupload,
         tonemap_opencl=tonemap=mobius:param=0.01:desat=0:r=tv:p=bt709:t=bt709:m=bt709:format=nv12,
         hwdownload,format=nv12,hwupload_cuda" 
    -c:v hevc_nvenc -b:v 8M "OUTPUT.hevc"

seems to work (around 200 FPS on an RTX 3080). However, I notice that it still uses one CPU core, and GPU usage is reported as only 60-70%. When I only resize, without any filters, I get around 400 FPS with 100% GPU usage.
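For comparison, a sketch of the resize-only baseline (the same command without the OpenCL device and filter chain; an approximation of what I ran, not an exact copy):

    ffmpeg -vsync 0 -hwaccel cuda -threads 1 -c:v hevc_cuvid -resize 1920x1080 -i "INPUT.hevc" 
        -c:v hevc_nvenc -b:v 8M "OUTPUT.hevc"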

I suspect that the final hwdownload,format=nv12,hwupload_cuda steps are the problem, because they add a detour through main memory. I tried using hwupload_cuda on its own, without the hwdownload (as suggested in the filter example near the end of this answer: https://stackoverflow.com/a/55747785/929037).
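Roughly, the filter graph I attempted looked like this (a sketch of that variant, keeping the same tonemap_opencl options as above):

    -vf "hwupload,
         tonemap_opencl=tonemap=mobius:param=0.01:desat=0:r=tv:p=bt709:t=bt709:m=bt709:format=nv12,
         hwupload_cuda"

That attempt, however, produced the following error: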

Impossible to convert between the formats supported by the filter 'Parsed_tonemap_opencl_1' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0

Trying to use hwmap resulted in

Assertion dst->format == AV_PIX_FMT_OPENCL failed at C:/code/ffmpeg/src/libavutil/hwcontext_opencl.c:2814

Is it possible to avoid this additional hwdownload?

nyanmisaka (accepted answer):

Edit in 2022:

For those using Nvidia cards who want zero-copy HDR-to-SDR tone-mapping: you can now use the powerful Vulkan-based libplacebo filter, introduced in FFmpeg 5.0, to achieve this without needing the OpenCL filter.

libplacebo is the next-generation video renderer of the mpv player; it can perform high-quality video processing on your GPU, including tone-mapping of HDR10 and Dolby Vision (DV) content. Since deriving from CUDA memory to Vulkan has been implemented, you can chain the libplacebo filter with NVDEC, NVENC and other CUDA filters to get the best performance.

To get the additional Vulkan and libplacebo support, you must use an ffmpeg build configured with --enable-vulkan --enable-libshaderc --enable-libplacebo.

Prebuilt binaries can be obtained from https://github.com/BtbN/FFmpeg-Builds/releases
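One quick way to check that a build has the required pieces (assuming a POSIX shell; the grep patterns are just examples):

    ./ffmpeg -hide_banner -buildconf | grep -E "vulkan|libplacebo|shaderc"
    ./ffmpeg -hide_banner -filters | grep libplacebo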

A command snippet that does this:

./ffmpeg -threads 1 -hwaccel cuda -hwaccel_output_format cuda -i HDR.mp4 \
-vf "scale_cuda=w=1920:h=1080:interp_algo=bilinear,hwupload=derive_device=vulkan, \
libplacebo=tonemapping=auto:colorspace=bt709:color_primaries=bt709:color_trc=bt709:format=yuv420p:upscaler=none:downscaler=none:peak_detect=0, \
hwupload=derive_device=cuda" \
-c:v h264_nvenc -preset medium -profile:v high -b:v 8M -y SDR.mp4

What it does, step by step:

  1. Decode the video with the NVDEC HW accelerator into CUDA memory
  2. Scale the video to 1080p with the CUDA filter (bilinear algorithm)
  3. Derive from CUDA to Vulkan memory with hwupload
  4. Apply automatic tone-mapping from HDR to SDR 8-bit yuv420p, skipping libplacebo's built-in scalers for performance
  5. Derive from Vulkan back to CUDA memory with hwupload
  6. Encode to 1080p SDR H.264 at 8 Mb/s with the NVENC encoder

Note that hwupload here does not mean copying back to system memory. In this specific CUDA-Vulkan pipeline it does the same thing as hwmap: the whole video filtering pipeline runs on your GPU and in VRAM.

upscaler=none:downscaler=none:peak_detect=0 - These three options disable some high-quality up/down-scaling algorithms and the HDR peak-detection function, trading quality for performance. You can remove them for the best quality.
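For example, the quality-first variant of the libplacebo part of the chain would look roughly like this (a sketch with only those three options dropped; everything else unchanged):

    libplacebo=tonemapping=auto:colorspace=bt709:color_primaries=bt709:color_trc=bt709:format=yuv420p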

For more fine-tuning options in the libplacebo filter, see http://ffmpeg.org/ffmpeg-all.html#libplacebo


Original answer in 2021:

Nope, at least not for now.

Zero-copy texture sharing (i.e. the hwmap filter) between CUDA and OpenCL devices is not available in ffmpeg, and won't be until Nvidia releases an interop method for them.

https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__INTEROP.html

Intel and AMD have OpenCL extensions for D3D11/VAAPI<->OpenCL interop that can split one shared image (e.g. NV12) into separate planes (e.g. the Y and UV planes), such as cl_intel_va_api_media_sharing and cl_intel_d3d11_nv12_media_sharing from Intel, and cl_amd_planar_yuv from AMD.

As for Nvidia, they do have cl_nv_d3d11_sharing for D3D11<->OpenCL interop, but I don't think it will work well when it comes to CUDA.

Another solution is to port the tone-mapping algorithm as a CUDA filter, but that will take some time. A huge speed improvement can be expected once it is finished; you would then use it as easily as the scale_cuda or overlay_cuda filters.

Intel already supports the tonemap_vaapi filter through a hardware function in their latest iGPUs. I'm not sure whether Nvidia has a similar fixed-function unit in their NVENC ASIC.
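For reference, a rough sketch of what that looks like on Intel hardware (assuming a VAAPI-capable iGPU and driver; file names and bitrate are placeholders):

    ./ffmpeg -hwaccel vaapi -hwaccel_output_format vaapi -i HDR.mp4 \
    -vf "tonemap_vaapi=format=nv12:t=bt709:m=bt709:p=bt709" \
    -c:v h264_vaapi -b:v 8M -y SDR.mp4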