C.d.

Reputation: 9995

Discarding some voxels in ray casting

I have a volume rendering implementation in shaders which uses the GPU ray-casting technique. Basically I have a unit cube at the center of my scene. I render the vertices of the unit cube in my vertex shader and pass the texture coordinates to the fragment shader like this:

in vec3 aPosition;
uniform mat4 uMVPMatrix;
smooth out vec3 vUV;
void main() {
   gl_Position = uMVPMatrix * vec4(aPosition.xyz,1);
   vUV = aPosition + vec3(0.5);
}

Since the unit cube's coordinates go from -0.5 to 0.5, I map them to texture coordinates in the range 0.0 to 1.0 by adding 0.5.

In the fragment shader I get the texture coordinate, interpolated by the rasterizer:

...
smooth in vec3 vUV; // Position of the data interpolated by the rasterizer
...
void main() {
    ...
    vec3 dataPos = vUV;
    ...
    for (int i = 0; i < MAX_SAMPLES; i++) {
        dataPos = dataPos + dirStep;
        ...
        float density = texture(volume, dataPos).r; // note: "sample" is a reserved word in GLSL 4.x, so avoid it as a variable name
        ...//Some more operations on the sampled color
        float prev_alpha = transferedColor.a * (1.0 - fragColor.a);
        fragColor.rgb += prev_alpha * transferedColor.rgb; 
        fragColor.a += prev_alpha; //final color
        if(fragColor.a>0.99)
            break;
    }
}

My rendering works well.

Now I have implemented a selection algorithm, which is working fine with particles (real vertices in the world coordinates).

My question is: how can I make it work with the volumetric dataset? The only vertices I have are those of the unit cube. Since the sample positions are interpolated by the rasterizer, I don't know the real (world) coordinates of the voxels.

It would be enough for me to get the center coordinates of the voxels and treat them like particles, so I can discard or include the necessary voxels (via their vUV coordinates, I guess?) in the fragment shader.

Upvotes: 1

Views: 316

Answers (1)

StarShine

Reputation: 2060

First you have to work out your sampled voxel coordinate (I'm assuming that volume is your 3D texture). To find it, you have to de-linearize the sample's position along the ray into the three axis components of your 3D texture (w x h x d). If a sample's linear index is computed as ((z * h) + y) * w + x, then the coordinate can be recovered by:

z = floor(index / (w * h))

y = floor((index - (z * w * h)) / w)

x = index - (z * w * h) - (y * w)

The floor operation is important to retrieve the integer index.
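The recovery above can be sketched as a small function. This is a minimal illustration assuming the `((z * h) + y) * w + x` layout; the `Voxel` type and function name are made up for the example, and integer division plays the role of `floor`:

```c
#include <assert.h>

/* Recover (x, y, z) voxel coordinates from a linear index, assuming
   the volume is laid out as index = ((z * h) + y) * w + x.
   The Voxel type and function name are hypothetical. */
typedef struct { int x, y, z; } Voxel;

Voxel delinearize(int index, int w, int h) {
    Voxel v;
    v.z = index / (w * h);                  /* integer division == floor */
    v.y = (index - v.z * w * h) / w;
    v.x = index - v.z * w * h - v.y * w;
    return v;
}
```

For example, with a hypothetical 8 x 16 x 32 volume, the voxel (2, 3, 5) linearizes to ((5 * 16) + 3) * 8 + 2 and de-linearizes back to the same triple.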

This is the (integer) coordinate of your sample. Divide it by the texture dimensions to get back to normalized [0, 1] coordinates, subtract vec3(0.5) to undo the shift your vertex shader applies (maybe offset by half a voxel first if you want the voxel center), and then multiply by the model matrix you used for the 8 cube vertices (the model part alone, not the full MVP). That gives you the position of your sample in world space.
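As a sketch of that mapping, assuming the question's `vUV = aPosition + 0.5` convention and an OpenGL-style column-major model matrix (the `Vec3` type, function name, and matrix values here are all hypothetical):

```c
#include <assert.h>

/* Map a texture coordinate in [0,1]^3 back to world space, assuming the
   vertex shader computed vUV = aPosition + vec3(0.5) for a unit cube
   centered at the origin.  `model` is the cube's model matrix alone
   (not the full MVP), stored column-major as OpenGL expects. */
typedef struct { float x, y, z; } Vec3;

Vec3 uvw_to_world(Vec3 uvw, const float model[16]) {
    /* Undo the +0.5 shift to get the cube's local (model-space) position. */
    float lx = uvw.x - 0.5f, ly = uvw.y - 0.5f, lz = uvw.z - 0.5f;
    Vec3 w;
    w.x = model[0]*lx + model[4]*ly + model[8]*lz  + model[12];
    w.y = model[1]*lx + model[5]*ly + model[9]*lz  + model[13];
    w.z = model[2]*lx + model[6]*ly + model[10]*lz + model[14];
    return w;
}
```

For instance, with a model matrix that scales the cube by 2 and translates it by (1, 0, 0), the corner uvw = (1, 1, 1) lands at world position (2, 1, 1).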

That said, consider whether you can rewrite your selection algorithm so that you don't have to jump through all these computations, and do the selection in screen space rather than world space.
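As a minimal illustration of the screen-space idea: instead of reconstructing world positions, test the fragment's window coordinates (gl_FragCoord.xy in a shader) against the user's selection rectangle. The `Rect` type and names here are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Screen-space selection sketch: a fragment at window coordinates
   (px, py) is selected when it falls inside the selection rectangle.
   In a shader this test would run against gl_FragCoord.xy. */
typedef struct { float x0, y0, x1, y1; } Rect;

bool in_selection(float px, float py, Rect r) {
    return px >= r.x0 && px <= r.x1 && py >= r.y0 && py <= r.y1;
}
```

A fragment that fails the test can simply skip (or zero-weight) its samples, which sidesteps the index-to-world conversion entirely.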

Upvotes: 2
