Reputation: 14534
Here is the documentation for them:
InstancedMesh: https://threejs.org/docs/#api/en/objects/InstancedMesh
InstancedBufferGeometry: https://threejs.org/docs/#api/en/core/InstancedBufferGeometry
There are also some examples here: https://github.com/mrdoob/three.js/blob/dev/examples/webgl_buffergeometry_instancing.html
I know the basic concept of "instancing" in three.js and WebGL in general.
My current understanding is that a Mesh is made of a Geometry and a Material (e.g. const plane = new THREE.Mesh(geometry, material)). The geometry does not contain color, but the material does.
From the example above, I see they put a color attribute on the InstancedBufferGeometry, which is very confusing... geometry should not have color, right? Am I wrong?
geometry.setAttribute( 'offset', new THREE.InstancedBufferAttribute( new Float32Array( offsets ), 3 ) );
geometry.setAttribute( 'color', new THREE.InstancedBufferAttribute( new Float32Array( colors ), 4 ) );
geometry.setAttribute( 'orientationStart', new THREE.InstancedBufferAttribute( new Float32Array( orientationsStart ), 4 ) );
geometry.setAttribute( 'orientationEnd', new THREE.InstancedBufferAttribute( new Float32Array( orientationsEnd ), 4 ) );
My question is: if I want to render thousands of square planes that each have a different color and move independently, should I use InstancedMesh or InstancedBufferGeometry? Why? Can they be used together?
Upvotes: 6
Views: 3957
Reputation: 43427
Well, I have made my way through @pailhead's trilogy of Medium articles; thanks for that walkthrough, by the way!
The following is my understanding, at this point, of when InstancedMesh is a good fit:
If some or all of your instances will change often, and if you have fewer than ~50K instances, and the differences between the instances can be described by a TRS transform (or color), then InstancedMesh should be a good fit for you.
If most of your instances are going to be static and you'll only update a few at a time, then your number of instances could be potentially unlimited (until you reach GPU rendering bottlenecks or VRAM size).
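To make the first case concrete, here is a minimal sketch of the InstancedMesh path (the instance count and placement are arbitrary example values, and scene is assumed to already exist; this is not code from the sandbox linked below):
// Minimal InstancedMesh sketch: one draw call, per-instance TRS transform and color.
const geometry = new THREE.PlaneGeometry( 1, 1 );
const material = new THREE.MeshBasicMaterial();
const mesh = new THREE.InstancedMesh( geometry, material, 10000 );

const dummy = new THREE.Object3D();
const color = new THREE.Color();

for ( let i = 0; i < mesh.count; i ++ ) {
    // Each instance gets its own transform and its own color.
    dummy.position.set( Math.random() * 100 - 50, Math.random() * 100 - 50, 0 );
    dummy.updateMatrix();
    mesh.setMatrixAt( i, dummy.matrix );
    mesh.setColorAt( i, color.setHex( Math.random() * 0xffffff ) );
}
mesh.instanceMatrix.needsUpdate = true;
if ( mesh.instanceColor ) mesh.instanceColor.needsUpdate = true;

scene.add( mesh );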
As for those numbers, I arrived at them by observing the behavior of several examples, and by poking around with scaling up the instance count in this sandbox: https://codesandbox.io/s/r3f-instanced-colors-8fo01
I figure this R3F sample has sufficient "extra stuff" involved to introduce enough overhead to more closely resemble a real app.
What I found by peeking at the Performance tab in Chrome is that you'll spend most of your time on the main CPU thread (though keep in mind it may be possible to fork this work off to worker threads), crunching through the updateMatrix calls and the like. Basically, if you're trying to rotate each of more than roughly 50,000 objects on your main thread, your app will not run nicely at all on a typical system; that's a lot of trig functions for your CPU to evaluate every frame. This will be hugely CPU bound due to the InstancedMesh interface, which gives you the freedom to assign the transform (and optionally the color) of each instance.
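Roughly, that per-frame main-thread work looks like the following (a sketch only, reusing the mesh and dummy from the snippet above; renderer, scene, and camera are assumed):
function animate( time ) {

    for ( let i = 0; i < mesh.count; i ++ ) {
        dummy.rotation.set( 0, 0, time * 0.001 + i );
        dummy.updateMatrix();                 // sin/cos plus a matrix compose, per instance, per frame
        mesh.setMatrixAt( i, dummy.matrix );  // copies 16 floats into the instance buffer
    }
    mesh.instanceMatrix.needsUpdate = true;   // the whole buffer gets re-uploaded to the GPU

    renderer.render( scene, camera );
    requestAnimationFrame( animate );
}
requestAnimationFrame( animate );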
If you really need to transform each instance every frame, you may want to consider whether you can offload those transformation computations to the GPU itself; the standard way to do that is with vertex shaders. The amount of additional computation that can be done in vertex shaders, compared to one CPU thread, is staggering.
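Here is a rough sketch of that idea (the offset attribute, the uTime uniform, and the offsets array are my own illustrative names, not from any particular example): per-instance data is uploaded once, and the only per-frame CPU cost is a single uniform write, because the rotation math now runs in the vertex shader.
const base = new THREE.PlaneGeometry( 1, 1 );
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = base.index;
geometry.setAttribute( 'position', base.getAttribute( 'position' ) );

geometry.instanceCount = offsets.length / 3;
geometry.setAttribute( 'offset', new THREE.InstancedBufferAttribute( new Float32Array( offsets ), 3 ) );

const material = new THREE.ShaderMaterial( {
    uniforms: { uTime: { value: 0 } },
    vertexShader: `
        attribute vec3 offset;   // per-instance, uploaded once
        uniform float uTime;
        void main() {
            float a = uTime + offset.x;                        // the trig now runs on the GPU
            mat2 rot = mat2( cos( a ), sin( a ), -sin( a ), cos( a ) );
            vec3 p = position;
            p.xy = rot * p.xy;
            gl_Position = projectionMatrix * modelViewMatrix * vec4( p + offset, 1.0 );
        }
    `,
    fragmentShader: `
        void main() { gl_FragColor = vec4( 1.0 ); }
    `
} );

const instanced = new THREE.Mesh( geometry, material );
instanced.frustumCulled = false; // the default bounding sphere only covers a single plane
scene.add( instanced );

// Per frame, on the CPU, just: material.uniforms.uTime.value = time * 0.001;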
Eliminating rotations will allow for a good deal more instances with InstancedMesh.
I'll give you a sense of what my current application is, so you can see why this discussion is relevant. I want to render a 128^3 voxel space by instancing cubes, as that is one of the more straightforward methods for rendering voxels. Note that if all of those voxels are set, that is just shy of 2.1 million cubes. At 64 bytes per instance for a transform matrix, that's ~134 MB of bus bandwidth consumed per frame. That simply will not do.
So when either the sheer volume of per-instance transform data or the computational overhead of manipulating it becomes too much, you'll have to fall back to the lower-level InstancedBufferGeometry approach and get down and dirty with the shaders.
This is the other thing about InstancedMesh. It allows you to avoid touching shaders.
With the voxels, there are a large number of techniques for bringing the performance back to a very manageable place; probably the ideal one is greedy voxel meshing (sweet animation!). But even without getting that complicated, it's clearly possible to drive the bandwidth cost of streaming that data down by a factor of 20 by using only 3 bytes to encode the position of each voxel rather than a full 64-byte matrix. You'll just need to use InstancedBufferGeometry, like I did, to do it, along the lines of the sketch below.
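To sketch what I mean (the voxelPos attribute name and the shaders are illustrative, and positions is assumed to be a flat Uint8Array of packed x/y/z grid coordinates you've already built; this is not my actual app code):
// 3 bytes of per-instance data: an ( x, y, z ) grid coordinate in 0..127,
// instead of a 64-byte world matrix per cube.
const box = new THREE.BoxGeometry( 1, 1, 1 );
const geometry = new THREE.InstancedBufferGeometry();
geometry.index = box.index;
geometry.setAttribute( 'position', box.getAttribute( 'position' ) );

geometry.instanceCount = positions.length / 3;
geometry.setAttribute( 'voxelPos', new THREE.InstancedBufferAttribute( new Uint8Array( positions ), 3 ) );

const material = new THREE.ShaderMaterial( {
    vertexShader: `
        attribute vec3 voxelPos; // one byte per axis, 0..127
        void main() {
            // Translate the unit cube to its grid cell; no per-instance matrix needed.
            gl_Position = projectionMatrix * modelViewMatrix * vec4( position + voxelPos, 1.0 );
        }
    `,
    fragmentShader: `
        void main() { gl_FragColor = vec4( 0.8, 0.8, 0.8, 1.0 ); }
    `
} );

const voxels = new THREE.Mesh( geometry, material );
voxels.frustumCulled = false; // the base cube's bounding sphere doesn't cover the whole grid
scene.add( voxels );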
Upvotes: 7