I finally got my GBuffer working (well, not really), but now I have some strange issues with it and I can't figure out why.
When I draw the normal texture to the screen, the normals always face me (the blue color is always pointing towards the camera). I don't know how to explain it properly, so here are some screenshots:
(I think this is why my lighting pass looks so strange.)
Here is how I create the GBuffer:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
// generate texture object
glGenTextures(GBUFFER_NUM_TEXTURES, textures);
for (unsigned int i = 0; i < 4; i++)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, GL_TEXTURE_2D, textures[i], 0);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
// generate depth texture object
glGenTextures(1, &depthStencilTexture);
glBindTexture(GL_TEXTURE_2D, depthStencilTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT,NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthStencilTexture, 0);
// generate output texture object
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT4, GL_TEXTURE_2D, outputTexture, 0);
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
assert(Status == GL_FRAMEBUFFER_COMPLETE);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
Here is the geometry pass:
glEnable(GL_DEPTH_TEST);
glDepthMask(true);
glCullFace(GL_BACK);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);
GLenum DrawBuffers[] = {GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3};
glDrawBuffers(GBUFFER_NUM_TEXTURES, DrawBuffers);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
camera.Render();
geoProgram->Use();
GLfloat mat[16];
glPushMatrix();
glTranslatef(0,0,-20);
glRotatef(rot, 1.0, 0, 0);
glGetFloatv(GL_MODELVIEW_MATRIX,mat);
glUniformMatrix4fv(worldMatrixLocation, 1, GL_FALSE, mat);
glutSolidCube(5);
glPopMatrix();
glPushMatrix();
glTranslatef(0,0,0);
glGetFloatv(GL_MODELVIEW_MATRIX,mat);
glUniformMatrix4fv(worldMatrixLocation, 1, GL_FALSE, mat);
gluSphere(sphere, 3.0, 20, 20);
glPopMatrix();
glDepthMask(false);
glDisable(GL_DEPTH_TEST);
And here is the geometry pass shader:
[Vertex]
varying vec3 normal;
varying vec4 position;
uniform mat4 worldMatrix;
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    position = worldMatrix * gl_Vertex;
    normal = (worldMatrix * vec4(gl_Normal, 0.0)).xyz;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
[Fragment]
varying vec3 normal;
varying vec4 position;
void main( void )
{
    gl_FragData[0] = vec4(0.5, 0.5, 0.5, 1); // gl_Color;
    gl_FragData[1] = position;
    gl_FragData[2] = vec4(normalize(normal), 0);
    gl_FragData[3] = vec4(gl_TexCoord[0].st, 0, 0);
}
Sorry for the long question / code fragments, but I don't know what to do next; I checked everything against other GBuffer implementations but couldn't find the error.
//Edit:
OK, it seems you are right: the problem is not the GBuffer but the lighting pass. I have played around with it a lot but can't get it working :(
Here is the lighting pass:
[vs]
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
[fs]
uniform sampler2D colorMap;
uniform sampler2D positionMap;
uniform sampler2D normalMap;
uniform sampler2D texcoordMap;
uniform vec2 screenSize;
uniform vec3 pointLightPostion;
uniform vec3 pointLightColor;
uniform float pointLightRadius;
void main( void )
{
    float lightDiffuse = 0.5;
    float lightSpecular = 0.7;
    vec3 lightAttenuation = vec3(1.4, 0.045, 0.00075);
    vec2 TexCoord = gl_FragCoord.xy / screenSize.xy;
    vec3 WorldPos = texture2D(positionMap, TexCoord).xyz;
    vec3 Color = texture(colorMap, TexCoord).xyz;
    vec3 normal = texture(normalMap, TexCoord).xyz;
    normal = normalize(normal);
    vec3 lightVector = WorldPos - pointLightPostion;
    float dist = length(lightVector);
    lightVector = normalize(lightVector);
    float nDotL = max(dot(normal, lightVector), 0.0);
    vec3 halfVector = normalize(lightVector - WorldPos);
    float nDotHV = max(dot(normal, halfVector), 0.0);
    vec3 lightColor = pointLightColor;
    vec3 diffuse = lightDiffuse * nDotL;
    vec3 specular = lightSpecular * pow(nDotHV, 1.0) * nDotL;
    lightColor += diffuse + specular;
    float attenuation = clamp(1.0 / (lightAttenuation.x + lightAttenuation.y * dist + lightAttenuation.z * dist * dist), 0.0, 1.0);
    gl_FragColor = vec4(vec3(Color * lightColor * attenuation), 1.0);
}
If necessary I could also post the stencil pass and/or all the other processing.
Looking at your solution, from the link you posted in the comments, the first thing that strikes my eye is: in the lighting pass vertex shader, you do
Then in the fragment shader you do
This makes no sense at all.
Now, you are doing deferred shading, which is a technique typical of modern OpenGL (3.2+) that doesn't perform well on ancient hardware, and I think you used material from this tutorial, which is also modern OpenGL, so why do you use
glPushMatrix
and that kind of old stuff? Too bad, I've never learned old OpenGL, so I'm not always sure I understand your code correctly. By the way, back to the geometry pass. In the vertex shader, you do
but then you have the position in view space and the normal in model space (if the modelMatrix you pass to the shader really is the model matrix, because from your screenshot the normals seem to be in view space). Also, be careful: if the normals are not in view space but in model space, you'll have to bias and scale them, for example
normal = 0.5 * (modelMatrix * vec4(gl_Normal, 0.0)).xyz + 0.5; // maps [-1, 1] into [0, 1]
I'd just go for one or the other and stay consistent. Remember, the important thing is that you have both position and normal in the same space. You can use either world space or view space, but then stick to your choice. In the lighting pass, just do something along these lines:
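(This is only a sketch of mine, not code taken from your post; it assumes the position map, the normal map and pointLightPostion all hold world-space values.)
vec2 TexCoord = gl_FragCoord.xy / screenSize.xy;
vec3 WorldPos = texture2D(positionMap, TexCoord).xyz;
vec3 normal = normalize(texture2D(normalMap, TexCoord).xyz);
// direction from the surface towards the light, both in the same (world) space
vec3 lightVector = normalize(pointLightPostion - WorldPos);
float nDotL = max(dot(normal, lightVector), 0.0);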
and be sure that
pointLightPostion
is in the same space you decided on, by transforming it in your application, on the CPU side, and then passing it to OpenGL already transformed. Also, I don't understand why you do
lightColor += diffuse + specular;
instead of multiplying the diffuse and specular by the light color. The way you have it now you get an emissive component in your lighting with the color of your light, while the diffuse and specular are not tinted by it. That doesn't seem like a nice choice, especially in deferred shading, where you can easily perform an ambient pass on the whole frame.
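A more usual combination would be something like this (again just a sketch; 16.0 is an arbitrary shininess exponent I made up, use whatever fits your material):
vec3 diffuse = pointLightColor * lightDiffuse * nDotL;
vec3 specular = pointLightColor * lightSpecular * pow(nDotHV, 16.0);
gl_FragColor = vec4(Color * (diffuse + specular) * attenuation, 1.0);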
Hope I helped. Too bad I don't use GLUT, so I can't build your code.
EDIT
To transform
pointLightPostion
(which I assume is already in world space) to view space, just multiply it by the view matrix on the CPU side before uploading the uniform, for example along these lines:
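(Again only a sketch built on top of your fixed-function setup; pointLightPos and pointLightPositionLocation are placeholder names for whatever variables you actually have. The point is that right after camera.Render(), before any glTranslatef/glRotatef, the modelview matrix contains exactly the view matrix.)
GLfloat view[16];
glGetFloatv(GL_MODELVIEW_MATRIX, view); // column-major view matrix
// transform the world-space light position (w = 1) into view space
GLfloat viewSpaceLight[3] = {
    view[0] * pointLightPos[0] + view[4] * pointLightPos[1] + view[8]  * pointLightPos[2] + view[12],
    view[1] * pointLightPos[0] + view[5] * pointLightPos[1] + view[9]  * pointLightPos[2] + view[13],
    view[2] * pointLightPos[0] + view[6] * pointLightPos[1] + view[10] * pointLightPos[2] + view[14]
};
// with the lighting program bound:
glUniform3fv(pointLightPositionLocation, 1, viewSpaceLight);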