Oganexon

Reputation: 13

How to render directly to a 3D texture efficiently in three.js / WebGL?

I'm currently working on a fluid simulation. I am working in 3D and so are the inputs and outputs. Each shader takes one or more 3D samples and should ideally output 3D data.

Currently, I'm slicing the 3D cube and running the shader on each plane. This method works but then I need to copy the data from each 2D texture to the CPU to reconstruct a 3D texture and send it back to the GPU. The copying step is terribly slow and I think this method is not optimal.

const vertexShaderPlane = `#version 300 es

precision highp float;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

in vec3 position;

out vec3 vPosition;

void main() {
    vPosition = position;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position.xy, 0., 1. );
}
`

const fragmentShaderPlane = `#version 300 es

precision highp float;
precision highp sampler3D;

uniform float uZ;
    
in vec3 vPosition;

out vec4 out_FragColor;
    
void main() {
    out_FragColor = vec4(vPosition.xy, uZ, 1.);
}`

const vertexShaderCube = `#version 300 es

precision highp float;

uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;

in vec3 position;

out vec3 vPosition;

void main() {
    vPosition = position;

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
`

const fragmentShaderCube = `#version 300 es

precision highp float;
precision highp sampler3D;

uniform sampler3D sBuffer;

in vec3 vPosition;

out vec4 out_FragColor;
    
void main() {
    vec4 data = texture(sBuffer, vPosition);

    out_FragColor = data;
}
`

const canvas = document.createElement('canvas')
const context = canvas.getContext('webgl2', { alpha: false, antialias: false })

const scene = new THREE.Scene()
const renderer = new THREE.WebGLRenderer({ canvas, context })

const cameras = {
  perspective: new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.1, 50000),
  texture: new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0, 1)
}

renderer.autoClear = false
renderer.setPixelRatio(window.devicePixelRatio)
renderer.setSize(window.innerWidth, window.innerHeight)

// cameras.perspective.position.set(2, 2, 2)

document.body.appendChild(renderer.domElement)

// Uniforms

const planeUniforms = { uZ: { value: 0.0 } }
const cubeUniforms = { sBuffer: { value: null } }

// Plane (2D)

const materialPlane = new THREE.RawShaderMaterial({
  uniforms: planeUniforms,
  vertexShader: vertexShaderPlane,
  fragmentShader: fragmentShaderPlane,
  depthTest: true,
  depthWrite: true
})

const planeGeometry = new THREE.BufferGeometry()
const vertices = new Float32Array([
  0, 0, 0,
  1, 0, 0,
  1, 1, 0,
  1, 1, 0,
  0, 1, 0,
  0, 0, 0
])
planeGeometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3))

const plane = new THREE.Mesh(planeGeometry, materialPlane)
plane.position.set(-0.5, -0.5, -0.5)
scene.add(plane)

// Cube (3D)

const materialCube = new THREE.RawShaderMaterial({
  uniforms: cubeUniforms,
  vertexShader: vertexShaderCube,
  fragmentShader: fragmentShaderCube,
  depthTest: true,
  depthWrite: true,
  visible: false
})

const cube = new THREE.Group()
for (let x = 0; x < 32; x++) {
  const offset = x / 32
  const geometry = new THREE.BufferGeometry()
  const vertices = new Float32Array([
    0, 0, offset,
    1, 0, offset,
    1, 1, offset,
    1, 1, offset,
    0, 1, offset,
    0, 0, offset
  ])
  geometry.setAttribute('position', new THREE.BufferAttribute(vertices, 3))
  const mesh = new THREE.Mesh(geometry, materialCube)
  cube.add(mesh)
}

cube.position.set(-0.5, 0, -2)
cube.scale.set(0.5, 0.5, 0.5)
cube.rotation.set(1, 1, 1)
scene.add(cube)

// Computing Step

const texture2D = new THREE.WebGLRenderTarget(32, 32, { type: THREE.FloatType })
const planeSize = (32 ** 2 * 4)
const pixelBuffers = Array.from(Array(32), () => new Float32Array(planeSize))

const data = new Float32Array(planeSize * 32)
renderer.setRenderTarget(texture2D)
for (let i = 0; i < 32; i++) {
  materialPlane.uniforms.uZ.value = i / 32

  renderer.render(scene, cameras.texture)

  renderer.readRenderTargetPixels(texture2D, 0, 0, 32, 32, pixelBuffers[i]) // SLOW PART
  data.set(pixelBuffers[i], i * planeSize)
}

const texture3D = new THREE.DataTexture3D(data, 32, 32, 32)
texture3D.format = THREE.RGBAFormat
texture3D.type = THREE.FloatType
texture3D.unpackAlignment = 1

materialPlane.visible = false

// Display Step

materialCube.visible = true
cubeUniforms.sBuffer.value = texture3D
renderer.setRenderTarget(null)
renderer.render(scene, cameras.perspective)
<script src="https://threejs.org/build/three.min.js"></script>

I emphasize that the rendering works. It's just extremely slow because I have to run a shader pass for every slice and read each result back.

The potential solutions I found are the following:

EDIT:

I'm just looking for a way to speed up the process of copying the 2D data to the CPU to build a 3D texture back on the GPU.

The real bottleneck is renderer.readRenderTargetPixels, which really slows down my render.

Upvotes: 1

Views: 2241

Answers (1)

user128511


As @ScieCode mentioned, you can't write to a 3D texture in WebGL/WebGL2, but you can use a 2D texture as 3D data. Imagine we have a 4x4x4 3D texture. We can store that in a 2D texture as 4 slices of 4x4. We might arrange those slices like this:

00001111
00001111
00001111
00001111
22223333
22223333
22223333
22223333

To get a pixel from that 2D texture being used as 3D data:

   ivec3 src = ...              // some 3D coord
   int cubeSize = 4;            // could pass in as uniform
   ivec2 size = textureSize(some2DSampler, 0);
   ivec2 slices = size / cubeSize;
   ivec2 src2D = ivec2(
      src.x + (src.z % slices.x) * cubeSize,
      src.y + (src.z / slices.x) * cubeSize);
   vec4 color = texelFetch(some2DSampler, src2D, 0);
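The same index math can be checked on the CPU. Here is a plain JS mirror of the GLSL lookup above (`to2D` is a hypothetical helper, not a three.js API), assuming the 4x4x4 cube from the diagram stored as 2x2 slices in an 8x8 texture:

```javascript
// Map a 3D texel coordinate into the tiled 2D atlas.
// cubeSize: edge length of the cubic volume; textureWidth: atlas width in texels.
function to2D(src, cubeSize, textureWidth) {
  const slicesAcross = textureWidth / cubeSize; // tiles per atlas row
  return {
    x: src.x + (src.z % slicesAcross) * cubeSize,
    y: src.y + Math.floor(src.z / slicesAcross) * cubeSize,
  };
}

// texel (1, 2) of slice z=3 lands in the bottom-right 4x4 tile
console.log(to2D({ x: 1, y: 2, z: 3 }, 4, 8)); // { x: 5, y: 6 }
```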

If we render a single quad across the entire texture, we can compute which 3D pixel is currently being written with:

  // assume size is the same as the texture above, otherwise pass it in
  // as a uniform

  int cubeSize = 4;            // could pass in as uniform
  ivec2 slices = size / cubeSize;
  ivec2 dst2D = ivec2(gl_FragCoord.xy);
  ivec3 dst = ivec3(
      dst2D.x % cubeSize,
      dst2D.y % cubeSize,
      dst2D.x / cubeSize + (dst2D.y / cubeSize) * slices.x);
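The inverse mapping can likewise be sketched in plain JS (`to3D` is a hypothetical helper): recover the 3D texel a given atlas pixel represents, again assuming the 4x4x4 cube in an 8x8 texture:

```javascript
// Recover the 3D texel coordinate that a 2D atlas pixel represents.
function to3D(dst2D, cubeSize, textureWidth) {
  const slicesAcross = textureWidth / cubeSize; // tiles per atlas row
  return {
    x: dst2D.x % cubeSize,
    y: dst2D.y % cubeSize,
    z: Math.floor(dst2D.x / cubeSize) +
       Math.floor(dst2D.y / cubeSize) * slicesAcross,
  };
}

// The inverse of the lookup in the previous example
console.log(to3D({ x: 5, y: 6 }, 4, 8)); // { x: 1, y: 2, z: 3 }
```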

The code above assumes each dimension of the cube is the same size. For something more generic, say we had a 5x4x6 cube. We might lay that out as 3x2 slices:

000001111122222
000001111122222
000001111122222
000001111122222
333334444455555
333334444455555
333334444455555
333334444455555

   ivec3 src = ...                   // some 3D coord
   ivec3 cubeSize = ivec3(5, 4, 6);  // could pass in as uniform
   ivec2 size = textureSize(some2DSampler, 0);
   int slicesAcross = size.x / cubeSize.x;
   ivec2 src2D = ivec2(
      src.x + (src.z % slicesAcross) * cubeSize.x,
      src.y + (src.z / slicesAcross) * cubeSize.y);
   vec4 color = texelFetch(some2DSampler, src2D, 0);

  ivec3 cubeSize = ivec3(5, 4, 6);  // could pass in as uniform
  int slicesAcross = size.x / cubeSize.x;
  ivec2 dst2D = ivec2(gl_FragCoord.xy);
  ivec3 dst = ivec3(
      dst2D.x % cubeSize.x,
      dst2D.y % cubeSize.y,
      dst2D.x / cubeSize.x + (dst2D.y / cubeSize.y) * slicesAcross);
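To sanity-check the generic layout, here is a JS sketch (hypothetical `to2DGeneric`/`to3DGeneric` helpers) that round-trips every texel of a 5x4x6 volume laid out as 3x2 slices in a 15x8 texture, as in the diagram above:

```javascript
// 3D -> 2D for a non-cubic volume. cubeSize is {x, y, z}.
function to2DGeneric(src, cubeSize, textureWidth) {
  const slicesAcross = Math.floor(textureWidth / cubeSize.x);
  return {
    x: src.x + (src.z % slicesAcross) * cubeSize.x,
    y: src.y + Math.floor(src.z / slicesAcross) * cubeSize.y,
  };
}

// 2D -> 3D, the inverse of the above.
function to3DGeneric(dst2D, cubeSize, textureWidth) {
  const slicesAcross = Math.floor(textureWidth / cubeSize.x);
  return {
    x: dst2D.x % cubeSize.x,
    y: dst2D.y % cubeSize.y,
    z: Math.floor(dst2D.x / cubeSize.x) +
       Math.floor(dst2D.y / cubeSize.y) * slicesAcross,
  };
}

// Round-trip every texel of the 5x4x6 volume.
const cubeSize = { x: 5, y: 4, z: 6 };
for (let z = 0; z < cubeSize.z; z++)
  for (let y = 0; y < cubeSize.y; y++)
    for (let x = 0; x < cubeSize.x; x++) {
      const back = to3DGeneric(to2DGeneric({ x, y, z }, cubeSize, 15), cubeSize, 15);
      if (back.x !== x || back.y !== y || back.z !== z)
        throw new Error('round-trip failed');
    }
console.log('round-trip ok');
```

The round trip only works when every atlas tile boundary lines up with `cubeSize`, which is why `slicesAcross` is derived from `cubeSize.x` rather than a cubic edge length.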

Upvotes: 3
