wan

Reputation: 190

Why are the zebra stripes so bad when analyzing using isophotes?

What I want to do is apply isophotes to a surface. See the picture below (generated by SolidWorks). Check this link for more information.

[Image: zebra-stripe isophote analysis on a surface, rendered by SolidWorks]

I use OpenGL shaders to implement this effect, but I cannot get a result as good as the picture above. This is my method: first I create a texture made of black and white stripes. Then, for each fragment I render, I take the interpolated normal, compute the reflected view ray, and use its coordinates to look up the color in that stripe texture.

Vertex Shader:

#version 150

uniform mat4 zwuProjModelViewMat;
uniform mat4 zwuModelViewMat;

in vec3 zwaPosition;   // position of vertex
in vec3 zwaNormal;     // normal of vertex

out vec3 zwvEcNormal;
out vec3 zwvEcVertex;

void main()
{
    gl_Position = zwuProjModelViewMat * vec4(zwaPosition, 1.0);

    // Eye-space position (divide by w to be safe).
    vec4 ecVertex = zwuModelViewMat * vec4(zwaPosition, 1.0);
    zwvEcVertex = ecVertex.xyz / ecVertex.w;

    // Eye-space normal; using the upper 3x3 of the modelview matrix
    // assumes it contains no non-uniform scaling.
    zwvEcNormal = normalize(mat3x3(zwuModelViewMat) * zwaNormal);

}

Fragment Shader:

#version 150

uniform sampler2D zwuEnvMap;  // the look-up texture

in vec3 zwvEcNormal;
in vec3 zwvEcVertex;

out vec4 outputColor;

void main()
{
    // Reflect the (normalized) view vector about the surface normal.
    vec3 reflectDir = reflect(normalize(zwvEcVertex), normalize(zwvEcNormal));

    // Classic sphere-map lookup: map the reflected ray into [0, 1] x [0, 1].
    float m = 2.0 * sqrt(reflectDir.x * reflectDir.x +
                         reflectDir.y * reflectDir.y +
                        (reflectDir.z + 1.0) * (reflectDir.z + 1.0));

    vec2 index;
    index.t = (reflectDir.y / m) + 0.5;
    index.s = (reflectDir.x / m) + 0.5;

    // texture2D() is deprecated in #version 150 core; use texture() instead.
    vec3 envColor = texture(zwuEnvMap, index).rgb;

    outputColor = vec4(envColor, 1.0);
}

The picture below is what I get. The zebra stripes are not very smooth. Also, if I apply the shaders to a cube, I do not get any zebra stripes at all from some viewing angles.

[Image: my result]

How can I improve this method to get a better result, or is there a better way to implement this effect?

EDIT 2014-05-27: The texture I used looks something like this.

[Image: black-and-white stripe texture]

And the texture parameters I use:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

Upvotes: 3

Views: 903

Answers (1)

Bartvbl

Reputation: 2928

I don't think that using a texture is the right approach here, because you are throwing away a lot of image quality in the process. I'll explain in a bit why that is, as well as how I think you should implement the fragment shader.

A virtual cylinder around a model

Imagine your (already transformed) model inside a huge cylinder; here it is represented by a single square. You are looking through this cylinder with a camera located at C.

When the model is rendered, the camera shoots rays that are reflected about the surface normals of your scene. These then hit either a dark area (at A) or a light area (at B). This is what you seem to be doing currently.

Next, what you do in your fragment shader is find out where on the texture each ray "lands". In this setup the texture is wrapped around the inside of our virtual cylinder. In theory this approach is sound; in practice it does not quite hit the mark, for several reasons:

  • You have to generate the stripe texture in advance.
  • You are dependent on the texture sampler to return the colour that you want your dark shade to have.
  • Texels have to be sampled, which is a slow process.
  • Textures have pixels. As such you are limited in controlling how far apart stripes are spaced and how wide each stripe is.

All these problems can go away by throwing out the texture and only using the fragment shader.

Let's take another look at the reflection ray from some piece of geometry, from the perspective of the camera:

A vector r inside a circle

Now the r vector is the reflected ray. At this point we are only looking at the 2D plane, which means that the Z coordinate of this ray is set to 0. Notice that the vector makes an angle with the horizontal through the starting point of the vector; it is marked theta in the drawing. If we calculate this angle, we get in principle a value between 0 and 360 degrees.

Now note the pattern at the side of the cylinder. A new stripe starts every P degrees, and each stripe spans D degrees. If we want to know whether a certain reflected ray points into a dark or a light area, all we have to do is calculate (theta mod P) < D.

In this way you can calculate which pixels should be considered dark without the need for a texture in the first place! You can vary the P and D variables to define the spacing and width of stripes, respectively.
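
For illustration, here is a minimal fragment shader sketch of this idea, reusing the zwvEcVertex and zwvEcNormal inputs from the question's vertex shader. The P and D values and the choice of measuring theta in the eye-space XY plane (Z ignored, as in the drawing) are assumptions you would tune for your own setup:

#version 150

in vec3 zwvEcNormal;
in vec3 zwvEcVertex;

out vec4 outputColor;

// Stripe layout in degrees: a new stripe starts every P degrees and each
// dark stripe spans D degrees. Example values; tune to taste.
const float P = 15.0;
const float D = 7.5;

void main()
{
    // Reflect the normalized view vector about the surface normal.
    vec3 reflectDir = reflect(normalize(zwvEcVertex), normalize(zwvEcNormal));

    // Angle of the reflected ray in the eye-space XY plane (Z ignored),
    // converted to a value in [0, 360) degrees.
    float theta = mod(degrees(atan(reflectDir.y, reflectDir.x)) + 360.0, 360.0);

    // (theta mod P) < D  ->  dark stripe, otherwise light.
    float shade = (mod(theta, P) < D) ? 0.0 : 1.0;

    outputColor = vec4(vec3(shade), 1.0);
}

Because the stripe test is pure arithmetic, you can later soften the hard edges analytically (for example with smoothstep on the distance to the nearest stripe boundary) instead of relying on texture filtering.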

Upvotes: 3
