OpenGL Compute shader atomic operations


I'm trying to write a tiled deferred renderer along the lines of the one DICE used for BF3, and either I'm not understanding what I'm doing or GLSL is pulling a fast one on me.

The first part of the kernel calculates the minimum and maximum depth per tile, which I'm doing with this code:

#version 430

#define MAX_LIGHTS_PER_TILE 1024
#define WORK_GROUP_SIZE 16

struct PointLight
{
    vec3 position;
    float radius;
    vec3 color;
    float intensity;
};

layout (binding = 0, rgba32f) uniform writeonly image2D outTexture;
layout (binding = 1, rgba32f) uniform readonly image2D normalDepth;
layout (binding = 2, rgba32f) uniform readonly image2D diffuse;
layout (binding = 3, rgba32f) uniform readonly image2D specular;
layout (binding = 4, rgba32f) uniform readonly image2D glowMatID;

layout (std430, binding = 5) buffer BufferObject
{
    PointLight pointLights[];
};

uniform mat4 inv_proj_view_mat;

layout (local_size_x = WORK_GROUP_SIZE, local_size_y = WORK_GROUP_SIZE) in;

shared uint minDepth = 0xFFFFFFFF;
shared uint maxDepth = 0;


void main()
{
        ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);

        // Sample the G-buffer
        vec4 normalColor = imageLoad(normalDepth, pixel);

        float d = (normalColor.w + 1) / 2.0f;

        uint depth = uint(d * 0xFFFFFFFF);

        atomicMin(minDepth, depth);
        atomicMax(maxDepth, depth);

        groupMemoryBarrier();

        imageStore(outTexture, pixel, vec4(float(float(minDepth) / float(0xFFFFFFFF))));
}

The scene looks like this if I draw the depth for each fragment.

[screenshot: the scene's per-fragment depth]

Trying to draw minDepth results in a purely white screen, and drawing maxDepth produces a black screen. Have I got my memory management/atomic functions wrong, or are my drivers/GPU/unicorn borked?

As a note, I have tried

atomicMin(minDepth, 0)

which ALSO produces a completely white image and makes me very suspicious of what is really going on.


1 Answer

Answered by Bentebent:

It turns out that using barrier() instead of groupMemoryBarrier() solved the problem. Why, I do not know, so if anyone can enlighten me, that would be much appreciated.
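A likely explanation: groupMemoryBarrier() only orders memory accesses within the work group; it does not make invocations wait for one another, so some invocations can reach the imageStore before others have executed their atomics. barrier() additionally synchronizes execution, so every invocation's atomicMin/atomicMax has completed before anyone reads the result. Note also that, per the GLSL spec, shared variables may not have initializers, so portable code resets them from one invocation and synchronizes before the atomics. A sketch of the reduction with both fixes applied (same names and bindings as the shader in the question; untested on my end):

```glsl
shared uint minDepth;
shared uint maxDepth;

void main()
{
    ivec2 pixel = ivec2(gl_GlobalInvocationID.xy);

    // Shared variables cannot have initializers in GLSL, so let one
    // invocation reset them, then synchronize before the atomics.
    if (gl_LocalInvocationIndex == 0u)
    {
        minDepth = 0xFFFFFFFF;
        maxDepth = 0;
    }
    barrier();

    float d = (imageLoad(normalDepth, pixel).w + 1.0) / 2.0;
    uint depth = uint(d * 0xFFFFFFFF);

    atomicMin(minDepth, depth);
    atomicMax(maxDepth, depth);

    // barrier() makes every invocation wait here until all the atomics
    // above have run; groupMemoryBarrier() alone only orders memory
    // accesses and does not synchronize execution.
    barrier();

    imageStore(outTexture, pixel, vec4(float(minDepth) / float(0xFFFFFFFF)));
}
```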