Shape depending on view angle in point-light vertex or fragment shader


After messing around a lot with basic tutorials, I am now trying to implement OpenGL lighting in a vertex or fragment shader. The problem is that, depending on the camera view angle, a shape appears in the background; the result is the same whether I do the lighting in the vertex shader or in the fragment shader:

[screenshot of the artifact]

Depending on the view angle it looks a bit like a trapezoid towards the front, and to the right lit surfaces only show up when they are almost out of view (it somehow reminds me of a frustum).

My understanding of a point light is that, unlike a spotlight, it lights everything in every direction and should not depend on the camera either.

I am mostly following these tutorials at the moment: http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Diffuse_Reflection and http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Smooth_Specular_Highlights; the code below is now closer to the second one.

Please don't be confused that I am passing almost everything to the fragment shader right now; that is just for experimenting. As said, the result is the same if I pass nothing to the fragment shader except the diffuse color and do everything in the vertex shader as in the first tutorial. Also note that I added a texture, which works fine with the coordinates. The light position is currently in world space, but I also tried eye space. Most of the matrix math is done in C++ using GLM and only passed over to the shaders:

float ViewMatrix[16] =
{
    1.f,  0.f,  0.f, 0.f,
    0.f, -1.f,  0.f, 0.f,
    0.f,  0.f, -1.f, 0.f,
    0.f,  0.f,  0.f, 1.f
};
viewMat = glm::make_mat4(ViewMatrix);

// Model matrix: identity
modelMat = glm::mat4(1.0f);

// Normal matrix (gl_NormalMatrix equivalent): inverse transpose of the upper 3x3 of the modelview matrix
modelviewMat = viewMat * modelMat; // view * model (identical here since modelMat is the identity)
m_3x3_inv_transp = glm::inverseTranspose(glm::mat3(modelviewMat));

// Inverse of the view matrix
viewMatinv = glm::inverse(viewMat);

projMat = glm::frustum(-RProjZ, +RProjZ, -Aspect * RProjZ, +Aspect * RProjZ, 1.0f, 32768.0f);
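
For reference, a minimal sketch of how these matrices get handed over to the shaders (assuming program is the linked shader program, lightLocation is a glm::vec4 with w = 1.0 for the point light, and <glm/gtc/type_ptr.hpp> is included for glm::value_ptr; the real code does a bit more):

#include <glm/gtc/type_ptr.hpp>

// Combined matrix used for gl_Position in the vertex shader below
glm::mat4 modelviewprojMat = projMat * viewMat * modelMat;

glUseProgram(program);
glUniformMatrix4fv(glGetUniformLocation(program, "modelviewprojMat"), 1, GL_FALSE, glm::value_ptr(modelviewprojMat));
glUniformMatrix3fv(glGetUniformLocation(program, "m_3x3_inv_transp"), 1, GL_FALSE, glm::value_ptr(m_3x3_inv_transp));
glUniform4fv(glGetUniformLocation(program, "gLight.LightLocation"), 1, glm::value_ptr(lightLocation));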

My current vertex shaders look like this (leaving out any ambient or specular at the moment):

#version 330 core

uniform sampler2D Texture0;                 // unused here; sampling happens in the fragment shader

layout (location = 0) in vec3 v_coord;      // == gl_Vertex
layout (location = 1) in vec3 v_normal;
layout (location = 2) in vec2 TexCoords;       // texture UV (location assumed)
layout (location = 3) in vec2 LightTexCoords;  // lightmap UV (location assumed)


uniform mat3 m_3x3_inv_transp;              // == gl_NormalMatrix inverse transpose of modelviewMat
uniform mat4 projMat;
uniform mat4 viewMat;
uniform mat4 modelMat;
uniform mat4 modelviewMat;
uniform mat4 modelviewprojMat;              // == gl_ModelViewProjectionMatrix == projMat*viewMat*modelMat;
uniform mat4 viewMatinv;                    // == ViewMatrix inverse

out vec2 vTexCoords;
out vec2 vLightTexCoords;
out vec3 vv_coord;
out vec3 vv_normal;
out vec3 vNormalDirection;

out mat3 vm_3x3_inv_transp;
out mat4 vprojMat;
out mat4 vviewMat;
out mat4 vmodelMat;
out mat4 vmodelviewMat;
out mat4 vmodelviewprojMat;
out mat4 vviewMatinv;

void main(void)
{
    //pass matrices also to fragment shader
    vprojMat            = projMat;
    vviewMat            = viewMat;
    vmodelMat           = modelMat;
    vmodelviewMat       = modelviewMat;
    vmodelviewprojMat   = modelviewprojMat;
    vm_3x3_inv_transp   = m_3x3_inv_transp;
    vviewMatinv         = viewMatinv;
    vv_coord            = v_coord;
    vv_normal           = v_normal;

    //Texture UV to fragment
    vTexCoords=TexCoords;

    //Texture UV Lightmap to fragment
    vLightTexCoords = LightTexCoords;

    vNormalDirection = m_3x3_inv_transp * v_normal;
    gl_Position = modelviewprojMat * vec4(v_coord, 1.0);
}
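
For completeness, a minimal sketch of how the attribute locations above could be fed, assuming a hypothetical interleaved layout of position, normal, texture UV and lightmap UV (ten floats per vertex) in an already-filled VBO, with a VAO bound:

// Hypothetical interleaved layout: vec3 position, vec3 normal, vec2 uv, vec2 lightmap uv
const GLsizei stride = 10 * sizeof(float);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);                   // v_coord
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, (void*)(3 * sizeof(float))); // v_normal
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, stride, (void*)(6 * sizeof(float))); // TexCoords
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 2, GL_FLOAT, GL_FALSE, stride, (void*)(8 * sizeof(float))); // LightTexCoords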

The fragment shader currently looks like this:

#version 330 core

uniform sampler2D Texture0;

in vec2 vTexCoords;
in vec2 vLightTexCoords;
in vec3 vv_coord;
in vec3 vv_normal;
in vec3 vNormalDirection;

in mat3 vm_3x3_inv_transp;
in mat4 vprojMat;
in mat4 vviewMat;
in mat4 vmodelMat;
in mat4 vmodelviewMat;
in mat4 vmodelviewprojMat;
in mat4 vviewMatinv;

struct LightInfo                                                           
{  
    vec4 LightLocation;                                                                    
    vec3 DiffuseLightColor;
    vec3 AmbientLightColor;
    vec3 SpecularLightColor;
    vec3 spotDirection;
    float AmbientLightIntensity;
    float SpecularLightIntensity;
    float constantAttenuation;
    float linearAttenuation;
    float quadraticAttenuation;
    float spotCutoff;
    float spotExponent;
};
uniform LightInfo gLight; 


struct material
{
  vec4 diffuse;
};
material mymaterial = material(vec4(1.0, 0.8, 0.8, 1.0));       


out vec4 FragColor; 

void main (void)  
{  
    vec4 color = texture(Texture0, vTexCoords); // texture2D() is deprecated in the core profile

    vec3 normalDirection = normalize(vNormalDirection);
    vec3 lightDirection;
    float attenuation;
    vec3 positionToLightSource;

    if (gLight.LightLocation.w == 0.0) // directional light?
    {
        attenuation = 1.0; // no attenuation
        lightDirection = normalize(vec3(gLight.LightLocation));
    } 
    else // point light or spotlight (or other kind of light)
    {
        positionToLightSource = vec3(gLight.LightLocation - vec4(vv_coord, 1.0));
        lightDirection = normalize(positionToLightSource);
        attenuation = 1.0; // attenuation left out for now to keep it simple
    }
    vec3 diffuseReflection =  vec3(gLight.DiffuseLightColor) * max(0.0, dot(normalDirection, lightDirection));

    FragColor = color*vec4(diffuseReflection,1.0);             
}

As said, I left out attenuation to simplify things, but with attenuation it doesn't work either. Set up as a directional light it seems fine: the wall to the left is completely lit, and the light position also seems correct, as noticeable on the sphere. positionToLightSource seems to be the culprit, but since LightLocation is fixed it must be vv_coord, and I tried every imaginable transformation with every available matrix, whether it made sense or not, just to see how it behaves. I read in some questions here that writing vectors out as colors can help debugging, but I can't figure it out. What is a normal transformed by the inverse transpose of modelviewMat supposed to look like? In any case the artifact doesn't seem to care about the view angle...
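
For reference, with attenuation included the point-light branch follows the pattern from the tutorials above, roughly like this (a sketch using the gLight fields declared in the fragment shader):

    else // point light or spotlight
    {
        positionToLightSource = vec3(gLight.LightLocation - vec4(vv_coord, 1.0));
        float dist = length(positionToLightSource);
        lightDirection = normalize(positionToLightSource);
        attenuation = 1.0 / (gLight.constantAttenuation
                             + gLight.linearAttenuation * dist
                             + gLight.quadraticAttenuation * dist * dist);
    }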

My guesses are that not everything is in the same space (which would be silly, but then I am surprised it works as a directional light), or that for some reason the normals / normalDirection are not right. I am not entirely sure about them, since I get the values from an old game engine for which this is eventually supposed to work; plainly following the tutorials and sample code, it works flawlessly.

To summarize: this is not only about the current problem, for which there may be no solution here since its source may lie outside the scope of these shaders (although I still hope someone has an idea even if that is the case), but also about how to debug things like this properly. I tried glsldevil as well, but without experience of what is right or wrong I still feel quite helpless when debugging without any printf or similar. Outputting vectors as colors is also very cool, but as said I don't know what is "supposed to look right". Is there an archive of some sort of "valid debug outputs"? And please only suggestions for OpenGL 3 or newer :)

1 Answer

Answered by Gnampf:

It turned out that the normal information I was getting from the engine required a transform to screen coordinates first, so it was definitely not a problem of the shader. After multiplying vv_coord and LightLocation with the model matrix to get them into the same space, everything looks as expected. Nevertheless I would still be interested in archives or docs of some kind for this "debug by color" from inside GLSL. Thanks anyway ;)

Later edit, for anyone looking for shader debugging hints: http://antongerdelan.net/opengl/debugshaders.html
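
As a small example of that "debug by color" idea (a sketch, not specific to any engine): remap a direction vector from [-1, 1] to [0, 1] and write it out as the fragment color, e.g. in the fragment shader above:

    // Visualize a vector as a color: X/Y/Z map to R/G/B after remapping from [-1,1] to [0,1].
    FragColor = vec4(normalize(vNormalDirection) * 0.5 + 0.5, 1.0);

Flat surfaces should then show one constant color and curved surfaces a smooth gradient; hard color seams usually point at broken or wrongly transformed normals.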