Michael IV

Reputation: 11424

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles and found that the lit area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side, and in the vertex shader I transform the vertex position with the model-view matrix; that camera-space position is then interpolated and used in the fragment shader.

My vertex shader looks like this:

#version 420 

layout(location = 0)  in vec4 position;
layout(location = 1)  in vec2 uvs;
layout(location = 2)  in vec3 normal;

uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;

uniform vec4 DIFFUSE_COLOR;

//=======  OUTS  ============//
smooth out vec2 uvsOut;
flat   out vec4 diffuseOut;

out        vec3 Position;
smooth out vec3 Normal;


out gl_PerVertex
{
   vec4 gl_Position;
};

void main()
{
    uvsOut     = uvs;
    diffuseOut = DIFFUSE_COLOR;

    Normal   = normal;                              // passed through unchanged (NORMAL_MATRIX is not applied here)
    Position = vec3(MODEL_VIEW_MATRIX * position);  // vertex position in camera space

    gl_Position = MVP_MATRIX * position;
}

The relevant part of the fragment shader:

//====================  Uniforms  ===============================
struct LightInfo {
    vec4 Lp; /// light position
    vec3 Li; /// light intensity
    vec3 Lc; /// light color
    int  Lt; /// light type
};

const int MAX_LIGHTS = 5;

uniform LightInfo lights[1];


// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;

//// ADS (ambient/diffuse/specular) lighting for a point light:

vec3 pointlightType(int lightIndex, vec3 position, vec3 normal) {

    vec3 n = normalize(normal);
    vec4 lMVPos = lights[lightIndex].Lp;          // light position (expected in camera space)
    vec3 s = normalize(lMVPos.xyz - position);    // surface-to-light direction
    vec3 v = normalize(-position);                // surface-to-camera direction (camera sits at the origin in camera space)
    vec3 r = normalize(-reflect(s, n));           // reflection vector (unused below; the half vector is used instead)
    vec3 h = normalize(v + s);                    // half vector

    float sDotN = max(0.0, dot(s, n));

    vec3 diff = KD * lights[lightIndex].Lc * sDotN;
    diff = clamp(diff, 0.0, 1.0);

    vec3 spec = vec3(0.0);

    if (sDotN > 0.0) {
        spec = KS * pow(max(0.0, dot(n, h)), SHININESS);
        spec = clamp(spec, 0.0, 1.0);
    }

    return lights[lightIndex].Li * (spec + diff);
}
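
On the client side the light struct uniforms are looked up and set roughly like this (a simplified sketch; a GL loader such as GLEW/glad and GLM are assumed, and the values and location variable names are only illustrative):

// Struct array members are addressed by their full name, e.g. "lights[0].Lp".
GLint lightPosLoc   = glGetUniformLocation(program, "lights[0].Lp");
GLint lightIntLoc   = glGetUniformLocation(program, "lights[0].Li");
GLint lightColorLoc = glGetUniformLocation(program, "lights[0].Lc");
GLint lightTypeLoc  = glGetUniformLocation(program, "lights[0].Lt");

glUseProgram(program);
glUniform3fv(lightIntLoc,   1, glm::value_ptr(glm::vec3(1.0f))); // intensity (value is illustrative)
glUniform3fv(lightColorLoc, 1, glm::value_ptr(glm::vec3(1.0f))); // color (value is illustrative)
glUniform1i (lightTypeLoc,  0);                                  // light type (value is illustrative)
glUniform1i (glGetUniformLocation(program, "num_lights"), 1);
// lights[0].Lp is set every frame after transforming the light into camera space (see the sketch further down).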

I have studied a lot of tutorials, but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex positions into. In my case the view matrix is created with

  glm::lookAt()

which always negates the "eye" vector, so the view matrix in my shaders ends up with a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how this is done the right way in the programmable pipeline? My shaders are based on the book "OpenGL 4.0 Shading Language Cookbook". The author also appears to use camera space, but it doesn't work right for me, unless this really is the way it is supposed to work...
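
For reference, this is roughly how the matrices are built and the light is moved into camera space before uploading (a simplified sketch; cameraPos, cameraTarget, aspect, lightPosWorld and the 45° FOV are illustrative, not my actual values; a GL loader header is assumed to be included as well):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::lookAt, glm::perspective
#include <glm/gtc/type_ptr.hpp>           // glm::value_ptr

void updatePerFrameUniforms(GLuint program, const glm::vec3& cameraPos, const glm::vec3& cameraTarget,
                            float aspect, const glm::vec4& lightPosWorld)
{
    glm::mat4 model(1.0f);                                   // the plane's model matrix
    glm::mat4 view  = glm::lookAt(cameraPos, cameraTarget, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj  = glm::perspective(glm::radians(45.0f), aspect, 0.1f, 100.0f);

    glm::mat4 modelView = view * model;
    glm::mat3 normalMat = glm::transpose(glm::inverse(glm::mat3(modelView)));
    glm::vec4 lightPosEye = view * lightPosWorld;            // light position into camera space

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "MVP_MATRIX"),        1, GL_FALSE, glm::value_ptr(proj * modelView));
    glUniformMatrix4fv(glGetUniformLocation(program, "MODEL_VIEW_MATRIX"), 1, GL_FALSE, glm::value_ptr(modelView));
    glUniformMatrix4fv(glGetUniformLocation(program, "VIEW_MATRIX"),       1, GL_FALSE, glm::value_ptr(view));
    glUniformMatrix3fv(glGetUniformLocation(program, "NORMAL_MATRIX"),     1, GL_FALSE, glm::value_ptr(normalMat));
    glUniform4fv      (glGetUniformLocation(program, "lights[0].Lp"),      1, glm::value_ptr(lightPosEye));
}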

I just moved the calculations into world space. Now the point light stays put. But how do I achieve the same thing using camera space?

Upvotes: 0

Views: 3611

Answers (1)

Michael IV

Reputation: 11424

I nailed down the bug, and it was a pretty stupid one. But it may be helpful to others who are too "math friendly". The light position in my shaders is defined as a vec3, while on the client side it is represented as a vec4. I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. Because of that, I believe, the light position vector wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
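
In GLM terms the difference looks like this (a minimal sketch; viewMatrix, lightPosWorld and their values are illustrative). With w == 0 the vector is treated as a direction, so the translation part of the view matrix is ignored; with w == 1 it is treated as a point and gets translated as well:

glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0.0f, 2.0f, 5.0f), glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec3 lightPosWorld(2.0f, 4.0f, 2.0f);

// Wrong: w == 0 -> the translation part of the view matrix is ignored,
// so the uploaded value is not the light's camera-space position.
glm::vec4 lightPosEyeWrong = viewMatrix * glm::vec4(lightPosWorld, 0.0f);

// Right: w == 1 -> the light position is rotated AND translated into camera space.
glm::vec4 lightPosEye = viewMatrix * glm::vec4(lightPosWorld, 1.0f);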

Upvotes: 2
