Increasing achieved occupancy doesn't enhance computation speed linearly


I had a CUDA program in which the kernel's register usage was limiting theoretical occupancy to 50%. So I decided to use shared memory instead of registers for those variables that were constant across the threads of a block and almost read-only throughout the kernel's run. I cannot provide source code here; what I did was conceptually like this:

My initial program:

__global__ void GPU_Kernel (...) {
    __shared__ int sharedData[N]; // N: maximum size that doesn't limit occupancy
    int r_1 = A; // except for this initialization, these registers never change
    int r_2 = B;
    ...
    int r_m = Y;

    ...   // rest of the kernel
}

I changed the above program to:

__global__ void GPU_Kernel (...) {
    __shared__ int sharedData[N-m]; // shrunk by m so total shared usage stays within the occupancy budget
    __shared__ int r_1, r_2, ..., r_m;
    if ( threadIdx.x == 0 ) {
        r_1 = A;
        r_2 = B;
        ...
        r_m = Y; // last of them
    }
    __syncthreads();
    ...   // rest of the kernel
}

Now the threads of every warp inside a block perform broadcast reads to access the newly created shared-memory variables. At the same time, the threads no longer use so many registers that occupancy is limited.
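
For reference, the broadcast mechanism relied on above: when every thread of a warp reads the same shared-memory word, the hardware services the whole warp in a single transaction with no bank conflict. A minimal, self-contained sketch (the kernel name and the value 42 are hypothetical, not from my program):

    __global__ void BroadcastDemo (int *out) {
        __shared__ int s_value;
        if ( threadIdx.x == 0 )
            s_value = 42;   // a single thread initializes the shared word
        __syncthreads();
        // All threads of each warp read the same address; the hardware
        // broadcasts the word in one transaction, without bank conflicts.
        out[blockIdx.x * blockDim.x + threadIdx.x] = s_value;
    }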

The second program has a theoretical occupancy of 100%. In actual runs, the average achieved occupancy for the first program was ~48%, and for the second one it is around ~80%. But the issue is that the net speedup is only around 5% to 10%, much less than I was anticipating given the gain in occupancy. Why isn't this correlation linear?
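
Incidentally, the theoretical occupancy of each version can also be queried at runtime rather than computed by hand. A minimal sketch of the host-side query, assuming a 256-thread block and no dynamic shared memory, in the same .cu file as GPU_Kernel:

    int numBlocks;        // resident blocks per SM for this kernel/launch config
    int blockSize = 256;  // assumed launch configuration
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocks, GPU_Kernel,
                                                  blockSize, 0 /*dynamic smem*/);
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    float theoreticalOccupancy =
        (float)(numBlocks * blockSize) / prop.maxThreadsPerMultiProcessor;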

Considering the image below from an NVIDIA whitepaper, my thinking was that when achieved occupancy is 50%, for example, half of the SMX cores (in newer architectures) are idle at any given time, because the excessive resources requested by the other threads keep them from being active. Is my understanding flawed? Or is it incomplete as an explanation of the above phenomenon? Or is the cost of the added __syncthreads() and the shared-memory accesses to blame?

[image: SMX diagram from the NVIDIA whitepaper]


1 Answer

Answered by Robert Crovella:

Why isn't this correlation linear?

If you are already memory-bandwidth bound or compute bound, and that bound is near the theoretical peak of the device, improving occupancy may not help much. Improving occupancy usually helps when neither of these is the limiter to performance for your code (i.e. you are not at or near peak memory bandwidth utilization or peak compute throughput). Since you haven't provided any code or any metrics for your program, nobody can tell you why it didn't speed up more. The profiling tools can help you find the limiters to performance.
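
To illustrate the first point: a purely streaming kernel can saturate DRAM bandwidth well before reaching 100% occupancy, and once the memory pipeline is full, additional resident warps have no further latency to hide. A hypothetical sketch of such a bandwidth-bound kernel (not your code):

    // Streaming copy: one global read + one global write per element, no reuse.
    // Once enough warps are resident to keep the memory pipeline busy
    // (often well under 100% occupancy), raising occupancy further cannot
    // push throughput past the device's DRAM bandwidth.
    __global__ void StreamCopy (const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if ( i < n )
            out[i] = in[i];
    }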

You might be interested in a couple webinars:

CUDA Optimization: Identifying Performance Limiters by Dr Paulius Micikevicius

CUDA Warps and Occupancy Considerations + Live with Dr Justin Luitjens, NVIDIA

In particular, review slide 10 from the second webinar.