Getting world position for deferred rendering light pass

I have recently begun building a deferred rendering pipeline for the engine I am working on, but I am stuck at reconstructing the world position from depth. I have looked at quite a few examples which explain that you need either a world-position texture or a depth texture to calculate the correct distance and direction of the light.

My problem is that the so-called position texture, which I assume holds the world position, doesn't seem to give me correct data. I have therefore tried to find alternative ways of getting a world position, and some suggest using a depth texture instead, but then what?

To make it all clearer, this picture shows the textures I currently have stored:

Deferred rendering textures: Position (top left), Normal (top right), Diffuse (bottom left) and Depth (bottom right).

For the light pass I am trying to use a method that works fine when used in the first (geometry) pass. When I try the same method in the light pass, with the exact same variables, it stops working.

Here's my Geometry Vertex Shader:

#version 150

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

in vec4 in_Position;
in vec3 in_Normal;
in vec2 in_TextureCoord;

out vec3 pass_Normals;
out vec4 pass_Position;
out vec2 pass_TextureCoord;
out vec4 pass_Diffuse;

void main(void) {

    pass_Position = viewMatrix * modelMatrix * in_Position;
    pass_Normals = (viewMatrix * modelMatrix * vec4(in_Normal, 0.0)).xyz;
    pass_Diffuse = vec4(1,1,1,1);
    gl_Position = projectionMatrix * pass_Position;

}

Geometry Fragment shader:

#version 150 core

uniform sampler2D texture_diffuse;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

in vec4 pass_Position;
in vec3 pass_Normals;
in vec2 pass_TextureCoord;
in vec4 pass_Diffuse;

out vec4 out_Diffuse;
out vec4 out_Position;
out vec4 out_Normals;

void main(void) {
    out_Position = pass_Position;
    out_Normals = vec4(pass_Normals, 1.0);
    out_Diffuse = pass_Diffuse;
}

Light Vertex Shader:

#version 150

in vec4 in_Position;
in vec2 in_TextureCoord;

out vec2 pass_TextureCoord;

void main( void )
{
    gl_Position = in_Position;
    pass_TextureCoord = in_TextureCoord;

}

Light Fragment Shader:

#version 150 core

uniform sampler2D texture_Diffuse;
uniform sampler2D texture_Normals; 
uniform sampler2D texture_Position;
uniform vec3 cameraPosition;
uniform mat4 viewMatrix;

in vec2 pass_TextureCoord;

out vec4 frag_Color;

void main( void )
{
    frag_Color = vec4(1,1,1,1);
    vec4 image = texture(texture_Diffuse,pass_TextureCoord);
    vec3 position = texture( texture_Position, pass_TextureCoord).rgb;
    vec3 normal = texture( texture_Normals, pass_TextureCoord).rgb;
    frag_Color = image;


    vec3 LightPosition_worldspace = vec3(0,2,0);

    vec3 vertexPosition_cameraspace = position;
    vec3 EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;

    vec3 LightPosition_cameraspace = ( viewMatrix * vec4(LightPosition_worldspace,1)).xyz;
    vec3 LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;

    vec3 n = normal;
    vec3 l = normalize( LightDirection_cameraspace );

    float cosTheta = max( dot( n,l ), 0);

    float lightDistance = distance(LightPosition_cameraspace, vertexPosition_cameraspace);

    frag_Color = vec4((vec3(10,10,10) * cosTheta) / (lightDistance * lightDistance), 1.0);
}

And finally, here's the current result (screenshot showing the unexpected output).

So my question is: can anyone explain this result, or tell me how to get a correct one? I would also appreciate good resources on the subject.

1 Answer

Andon M. Coleman:

Yes, using the depth buffer to reconstruct position is your best bet. This will significantly cut down on memory bandwidth / storage requirements. Modern hardware is biased towards doing shader calculations rather than memory fetches (this was not always the case), and the instructions necessary to reconstruct position per-fragment will always finish quicker than if you were to fetch the position from a texture with adequate precision. Now, you just have to realize what the hardware depth buffer stores (understand how depth range and perspective distribution work) and you will be good to go.

I do not see any attempt at reconstructing the world/view-space position from the depth buffer in the code your question lists. You are simply sampling from a buffer that stores the position in view space. Since you are not performing reconstruction here, the problem has to do with sampling the view-space position. Can you update your question to include the internal formats of the G-Buffer textures? In particular, are you using a format that can represent negative values? That is necessary to express position; otherwise negative values are clamped to 0.
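For reference, here is a minimal sketch of that reconstruction in the light pass. It assumes you bind the depth buffer as a sampler (called texture_Depth here) and pass in the inverse of your projection matrix (inverseProjectionMatrix); neither of these exists in your posted code, so the names are just placeholders:

#version 150 core

uniform sampler2D texture_Depth;        // hardware depth buffer, values in [0, 1]
uniform mat4 inverseProjectionMatrix;   // inverse of the geometry-pass projection matrix

in vec2 pass_TextureCoord;

out vec4 frag_Color;

// Reconstruct the view-space position of the fragment covered by this texel.
vec3 reconstructViewPosition(vec2 texCoord)
{
    float depth = texture(texture_Depth, texCoord).r;

    // Map texcoord and depth from [0, 1] back into NDC [-1, 1]
    // (this assumes the default depth range of [0, 1]).
    vec4 ndc = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);

    // Undo the projection; the divide by w restores view space.
    vec4 view = inverseProjectionMatrix * ndc;
    return view.xyz / view.w;
}

void main(void)
{
    vec3 vertexPosition_cameraspace = reconstructViewPosition(pass_TextureCoord);

    // ...the rest of your lighting code stays the same, it just uses this
    // reconstructed position instead of the one sampled from texture_Position.
    frag_Color = vec4(vertexPosition_cameraspace * 0.5 + 0.5, 1.0);
}

If your lighting code expects world-space positions instead, multiplying by the inverse of the combined view-projection matrix (rather than just the inverse projection) gives you that directly.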

On a final note, your position is in view space, not world space; a trained eye can tell this immediately from the way the colors in your position buffer are black in the lower-left corner. If you want to debug your position/normal buffers, you should bias/scale the sampled colors into the visible range:

([-1.0, 1.0] -> [0.0, 1.0])  // Vec = Vec * 0.5 + 0.5

You may need to do this when you output some of the buffers if you want to store the normal G-Buffer more efficiently (e.g. in an 8-bit fixed-point texture instead of floating-point).
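As a sketch of that bias/scale, with helper functions whose names are made up here (they are not part of your current shaders):

// Geometry pass: pack a unit-length view-space normal into [0, 1] so it
// survives an 8-bit fixed-point render target (e.g. GL_RGB8).
vec3 encodeNormal(vec3 n)
{
    return normalize(n) * 0.5 + 0.5;
}

// Light pass (or debug view): unpack the sampled value back into [-1, 1].
vec3 decodeNormal(vec3 stored)
{
    return normalize(stored * 2.0 - 1.0);
}

The same Vec * 0.5 + 0.5 trick is what you would apply when dumping the position or normal buffer to the screen for debugging, as mentioned above.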