Unable to get decoding capabilities with cuvidGetDecoderCaps (NVIDIA Video Codec SDK)


I have a server with a Tesla T4 GPU and I am trying to decode an H.264 video on it. I am using the NVIDIA Video Codec SDK to query CUVIDDECODECAPS (the GPU's decoding capabilities), but the call returns 0 for nMinWidth, nMinHeight, nMaxWidth and nMaxHeight, and false for bIsSupported, i.e. the hardware supposedly does not support GPU decoding. But according to this link the T4 does support video decoding.

Below is the code snippet.

CUVIDDECODECAPS decodeCaps = {};
decodeCaps.eCodecType      = _codec;                        // e.g. cudaVideoCodec_H264
decodeCaps.eChromaFormat   = _chromaFormat;                 // e.g. cudaVideoChromaFormat_420
decodeCaps.nBitDepthMinus8 = videoFormat.nBitDepthMinus8;   // 0 for 8-bit content

cuSafeCall(cuCtxPushCurrent(ctx_));
cuSafeCall(cuvidGetDecoderCaps(&decodeCaps));
cuSafeCall(cuCtxPopCurrent(NULL));
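
For diagnosis it may help to dump everything the driver reports back, not just bIsSupported. A minimal sketch, assuming the decodeCaps variable filled in above and <cstdio> available:

// Print the raw capability fields returned by cuvidGetDecoderCaps so the
// driver's answer is visible even when bIsSupported is 0.
printf("bIsSupported=%d min=%ux%u max=%ux%u maxMBCount=%u outputFormatMask=0x%x\n",
       (int)decodeCaps.bIsSupported,
       (unsigned)decodeCaps.nMinWidth,  (unsigned)decodeCaps.nMinHeight,
       (unsigned)decodeCaps.nMaxWidth,  (unsigned)decodeCaps.nMaxHeight,
       (unsigned)decodeCaps.nMaxMBCount,
       (unsigned)decodeCaps.nOutputFormatMask);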

Below are the driver and CUDA versions:

NVIDIA-SMI 440.118.02, Driver Version: 440.118.02, CUDA Version: 10.2
NVIDIA Video Codec SDK: 11.0.10

Does anyone have any idea what I am doing wrong here?


1 Answer

Michael IV (accepted answer)

Each NVIDIA Video Codec SDK release has minimum requirements on the CUDA toolkit and graphics driver versions. If you open the SDK web page you will find this info:

NVIDIA Windows display driver 456.71 or newer
NVIDIA Linux display driver 455.28 or newer
DirectX SDK (Windows only)
CUDA 11.0 Toolkit

At least on Linux, the NVENC and NVDEC libraries ship as part of the driver distribution, so the latest SDK headers cannot work with the old libraries that come with your driver version. You can either upgrade the driver to meet the requirements above, or download an older version of the Video Codec SDK if you must stay on that specific driver.
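
If it helps, here is a minimal sketch of a pre-flight check: it asks the installed driver which CUDA API version it implements (cuDriverGetVersion reports e.g. 10020 for a CUDA 10.2 driver) and compares it against the CUDA 11.0 minimum quoted above for Video Codec SDK 11.x. The threshold constant is my assumption based on that requirements list, not something the SDK exports.

#include <cuda.h>
#include <cstdio>

int main()
{
    // Initialize the CUDA driver API.
    if (cuInit(0) != CUDA_SUCCESS) {
        fprintf(stderr, "cuInit failed\n");
        return 1;
    }

    // Ask the installed driver which CUDA API version it supports.
    // The value is encoded as 1000*major + 10*minor, e.g. 10020 = CUDA 10.2.
    int driverVersion = 0;
    if (cuDriverGetVersion(&driverVersion) != CUDA_SUCCESS) {
        fprintf(stderr, "cuDriverGetVersion failed\n");
        return 1;
    }

    // Assumed minimum for Video Codec SDK 11.x, per the requirements above.
    const int kMinRequired = 11000;  // CUDA 11.0
    printf("Driver supports CUDA %d.%d\n",
           driverVersion / 1000, (driverVersion % 1000) / 10);
    if (driverVersion < kMinRequired) {
        printf("Driver is too old for Video Codec SDK 11.x headers; "
               "upgrade the driver or use an older SDK.\n");
    }
    return 0;
}

Build with the CUDA driver library (e.g. g++ check.cpp -lcuda). On your setup this should report CUDA 10.2, which is below the SDK 11.0 minimum and consistent with bIsSupported coming back as 0.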