Reputation: 3
I am trying to render a 3D object so it looks like a hologram or x-ray in WebGL using the three.js library. It needs to be transparent in the center (to see the background, and maybe later some objects placed inside this volume) and have a bright, opaque color at the edges. Faces of the hidden back side shouldn't be rendered. I am very new to web-based graphics, so I don't know whether I should work with a GLSL shader or play with blending options. Sorry for the silly question.
I can get a similar result using a custom glow shader from this tutorial, but it doesn't solve the problem with the back faces. I got a satisfactory appearance in Blender by creating a shader that eliminates such faces by restricting the light path where the transparent depth is greater than 0.5. Here are the nodes of my Blender material. Is there a way to do a similar thing in WebGL? Screenshots of the current result and the expected one (second row) are here.
Currently I use OBJLoader, WebGLRenderer and ShaderMaterial from the three.js library. The material is defined as follows. CustomShader.js:
const customBrainShader = () => {
  return {
    uniforms: {
      "c": { type: "f", value: 1.0 },
      "p": { type: "f", value: 1.9 },
      glowColor: { type: "c", value: new THREE.Color(0xcfdfff) },
      viewVector: { type: "v3", value: new THREE.Vector3(0, 100, 400) }
    },
    vertexShader: vertexShaderSource,
    fragmentShader: fragmentShaderSource,
    side: THREE.FrontSide,
    blending: THREE.AdditiveBlending,
    depthTest: true,
    depthWrite: true,
    opacity: 0.5
  };
};
export { customBrainShader };
Fragment shader:
uniform vec3 glowColor;
varying float intensity;
void main()
{
  vec3 glow = glowColor * intensity;
  gl_FragColor = vec4( glow, 1.0 );
}
Vertex shader:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;
void main()
{
  vec3 vNormal = normalize( normalMatrix * normal );
  vec3 vCamera = vec3(0.0,0.0,1.0);
  intensity = pow( c - dot(vNormal, vCamera), p );
  gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
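The material is then built from this config and assigned to the loaded OBJ roughly like this (a simplified sketch of my setup; the model path, the variable names, and the existing scene are just placeholders):

import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
import { customBrainShader } from './CustomShader.js';

// Build the custom material from the exported config.
const material = new THREE.ShaderMaterial(customBrainShader());

const loader = new OBJLoader();
loader.load('models/brain.obj', (object) => {   // placeholder path
  object.traverse((node) => {
    if (node.isMesh) node.material = material;  // apply the custom shader to every mesh
  });
  scene.add(object);                            // 'scene' is the usual THREE.Scene created elsewhere
});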
Upvotes: 0
Views: 2665
Reputation: 1735
As it turns out, my comment idea wasn't nearly as bad as I thought. JSFiddle example
In order to perform depth-based occlusion culling inside your fragment shader, you need to create a dedicated WebGLRenderTarget with its depthBuffer enabled and a depthTexture attached.
target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.texture.format = THREE.RGBFormat;
target.stencilBuffer = false;
target.depthBuffer = true;
target.depthTexture = new THREE.DepthTexture();
target.depthTexture.type = THREE.UnsignedShortType;
Then, inside your animation loop, you need to render the regular scene, with its meshes using a placeholder MeshBasicMaterial, to the WebGLRenderTarget, and then update your custom ShaderMaterial's depth-texture uniform to use the newly rendered depthTexture. Finally, you can assign the ShaderMaterial back to the mesh and render the scene normally.
function animate() {
  requestAnimationFrame( animate );

  // render scene into target
  mesh.material = basicMaterial;
  renderer.setRenderTarget( target );
  renderer.render( scene, camera );

  // update custom shader uniform
  shaderMaterial.uniforms.tDepth.value = target.depthTexture;

  // render scene to the screen
  mesh.material = shaderMaterial;
  renderer.setRenderTarget( null );
  renderer.render( scene, camera );
}
Inside your fragment shader, you need to compare the current fragment's z-position with the corresponding pixel in the depthTexture. This requires a few manipulations to get both z-values into the same coordinate space.
#include <packing>

uniform sampler2D tDepth;
uniform vec3 glowColor;
uniform vec2 viewportSize;
uniform float cameraNear;
uniform float cameraFar;
varying float intensity;

float readDepth( sampler2D depthSampler, vec2 coord ) {
  float fragCoordZ = texture2D( depthSampler, coord ).x;
  float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
  return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}

void main() {
  float zDepth = readDepth( tDepth, gl_FragCoord.xy / viewportSize.xy );
  float fragDepth = gl_FragCoord.z / gl_FragCoord.w / cameraFar;

  if ( fragDepth > zDepth + 0.001 ) discard; // 0.001 offset to prevent self-culling.

  vec3 glow = glowColor * intensity;
  gl_FragColor = vec4( glow, 1.0 );
}
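For completeness, the placeholder material and the custom ShaderMaterial referenced above could be set up roughly like this (a sketch only; the uniform names come from the fragment shader above, the shader source variable names are borrowed from the question, and the initial values are illustrative):

// Placeholder material used for the depth pre-pass.
const basicMaterial = new THREE.MeshBasicMaterial();

// Custom material; uniform names match the fragment shader above.
const shaderMaterial = new THREE.ShaderMaterial({
  uniforms: {
    c: { value: 1.0 },
    p: { value: 1.9 },
    glowColor: { value: new THREE.Color(0xcfdfff) },
    tDepth: { value: null },  // filled in each frame from target.depthTexture
    viewportSize: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) },
    cameraNear: { value: camera.near },
    cameraFar: { value: camera.far }
  },
  vertexShader: vertexShaderSource,
  fragmentShader: fragmentShaderSource
});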
Upvotes: 0
Reputation: 8886
If I'm not mistaken, the effect below is what you are trying to achieve.
There are a couple of interesting things going on here.
First, I'm setting renderer.autoClear = false, which prevents the renderer from clearing its buffers between calls to renderer.render. This allows calling that function multiple times to write to the buffers multiple times.
Next, I'm doing just that. I'm rendering the same scene twice. But you'll notice that the first time I render it, I'm setting a scene.overrideMaterial, which replaces all of the materials in the scene with the override. I need to do this for reasons within the override material.
In the override material, I'm setting colorWrite: false. This means that while the object will be "rendered," it won't draw any colors, so there is no visible effect (yet). It does write to the depth buffer, which is what we want, because the object is going to hide things behind it. It's like hiding something behind a magic piece of glass. (I also set the polygon offset here to avoid z-fighting, which is another topic entirely, so I won't go into any detail in this answer.)
Finally, I render the scene again using the shader material you defined. The noColor render is occluding shapes that should be occluded, so you don't get unwanted bleed-through when a front face is behind another part of the mesh. Your shader handles the rest, creating the glow effect.
// Your shader code
const fragmentShaderSource = `
  uniform vec3 glowColor;
  varying float intensity;
  void main()
  {
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
  }
`

const vertexShaderSource = `
  uniform vec3 viewVector;
  uniform float c;
  uniform float p;
  varying float intensity;
  void main()
  {
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vCamera = vec3(0.0,0.0,1.0);
    intensity = pow( c - dot(vNormal, vCamera), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
  }
`

const customBrainShader = new THREE.ShaderMaterial({
  uniforms: {
    c: { value: 1.0 },
    p: { value: 1.9 },
    glowColor: { value: new THREE.Color(0xcfdfff) },
    viewVector: { value: new THREE.Vector3(0, 100, 400) }
  },
  vertexShader: vertexShaderSource,
  fragmentShader: fragmentShaderSource,
  side: THREE.FrontSide,
  opacity: 0.5
})

// male02 model from the three.js examples
const modelPath = "https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/models/obj/male02/male02.obj"

const renderer = new THREE.WebGLRenderer({
  antialias: true
})
renderer.autoClear = false
renderer.setSize(200, 200)
document.body.appendChild(renderer.domElement)

const scene = new THREE.Scene()

const camera = new THREE.PerspectiveCamera(28, 1, 1, 1000)
camera.position.set(0, 90, 500)
const cameraTarget = new THREE.Vector3(0, 90, 0)
camera.lookAt(cameraTarget)

const light = new THREE.PointLight(0xffffff, 1)
camera.add(light)
scene.add(camera)

function render() {
  renderer.clear()
  scene.overrideMaterial = noColor
  renderer.render(scene, camera)
  scene.overrideMaterial = null
  renderer.render(scene, camera)
}

const axis = new THREE.Vector3(0, 1, 0)

const noColor = new THREE.MeshBasicMaterial({
  colorWrite: false,
  polygonOffset: true,
  polygonOffsetUnits: 1,
  polygonOffsetFactor: 1
})

function animate() {
  requestAnimationFrame(animate)
  camera.position.applyAxisAngle(axis, 0.0025)
  camera.lookAt(cameraTarget)
  render()
}
animate()

const loader = new THREE.OBJLoader()
loader.load(modelPath, (results) => {
  results.traverse(node => {
    if (node instanceof THREE.Mesh) {
      node.material = customBrainShader
    }
  })
  scene.add(results)
})
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/104/three.js"></script>
<script src="https://threejs.org/examples/js/loaders/OBJLoader.js"></script>
Upvotes: 1