Reputation: 1525
EDIT: I've narrowed the problem down to something exposed by React's double render in strict mode in development. Adding more details below, and also adding the "React" tag, since more general React knowledge might help in figuring this out.
To do animation of "lots" of particles, there's a technique which involves doing the animation on the GPU (GPGPU). In the world of ThreeJS, this involves encoding particle positions as texels, running computations in the shaders, and then reading the computed texture back for the updated position. There seems to be a well-known issue with reading and writing to the same texture in the same pass of the renderer, and a generally accepted solution of doing a "flip flop", where you read from one frame buffer, and write to the next one, and then flip them on the next frame.
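(As a rough sketch of that flip-flop outside of React, with placeholder names like uPrevPositions, simScene and simMaterial standing in for whatever your simulation actually uses:)

import * as THREE from 'three';

// two identical float render targets to ping-pong between
const size = 256;
const targets = [0, 1].map(() => new THREE.WebGLRenderTarget(size, size, {
  type: THREE.FloatType,
  minFilter: THREE.NearestFilter,
  magFilter: THREE.NearestFilter
}));
let read = 0;

// one simulation step: read from targets[read], write into the *other* target
function step(renderer, simScene, simCamera, simMaterial) {
  const write = 1 - read;
  simMaterial.uniforms.uPrevPositions.value = targets[read].texture;
  renderer.setRenderTarget(targets[write]);
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null); // back to the default framebuffer
  read = write;                   // swap roles for the next frame
}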
ThreeJS has an example of this, "webgl_gpgpu_birds". Under the covers, it uses the GPUComputationRenderer helper (also shipped with the three.js examples), which provides a generic way to do this with an arbitrary number of "variables". I'm trying to replicate that functionality, simplified for two specific variables, but done in R3F (and, specifically, in NextJS, though I don't think that matters for my current problem).
I have it basically working ... except for the flip-flop :)
Everything is in a dynamically-loaded Drei View to make it work in NextJS:
import { forwardRef, Suspense } from 'react';
import { View } from '@react-three/drei';
// FboScene is the component shown below

export const Model = forwardRef((props, ref) => {
  // removed prop and ref code, as not relevant
  return (
    <View>
      <Suspense fallback={null}>
        <FboScene />
      </Suspense>
    </View>
  )
})
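(The dynamic loading itself is the usual Next.js pattern; the import path here is just a stand-in for wherever Model actually lives:)

import dynamic from 'next/dynamic';

// load the Three/WebGL code only on the client
const Model = dynamic(
  () => import('components/three/Model').then((mod) => mod.Model),
  { ssr: false }
);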
Then, in FboScene, create the FBO targets and pass them to a child (where the magic will hopefully happen):
'use client';
import { forwardRef } from 'react';
import { useFBO } from '@react-three/drei';
import { FloatType, NearestFilter, RepeatWrapping, RGBAFormat } from 'three';
// HERO_THREE_SIZE and FboTargetWrapper come from elsewhere in the app

function SetFbo(size, options) {
  const fbo = useFBO(size, size, options);
  return fbo;
}

export const FboScene = forwardRef(({ heros, ...wrapperProps }, ref) => {
  const { height, width, multiplier } = HERO_THREE_SIZE; // 9, 16, 16
  const posTargets = [];
  const velTargets = [];
  const fboOptions = {
    minFilter: NearestFilter,
    magFilter: NearestFilter,
    format: RGBAFormat,
    type: FloatType,
    wrapS: RepeatWrapping,
    wrapT: RepeatWrapping
  };
  // calculate final (screen) size in points
  const sizeX = width * multiplier;
  const sizeY = height * multiplier;
  // find the smallest power-of-two square texture that can hold all the points
  const fboDimension = 2 ** ((Math.ceil(Math.log2(sizeX * sizeY))) / 2);
  [0, 1].forEach((i) => {
    posTargets[i] = SetFbo(fboDimension, fboOptions);
    velTargets[i] = SetFbo(fboDimension, fboOptions);
  })
  return (
    <FboTargetWrapper
      ref={ref}
      targets={{ posTargets, velTargets }}
      width={sizeX}
      height={sizeY}
      fboDimension={fboDimension}
    />
  )
})
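(With the HERO_THREE_SIZE values from the comment above — 9, 16, and 16 — that works out to sizeX = 256 and sizeY = 144, i.e. 36,864 points; ceil(log2(36864)) = 16, so fboDimension = 2^8 = 256, and each FBO is a 256 x 256 float texture, the smallest power-of-two square that can hold all the points.)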
In FboTargetWrapper, I create the scenes for the two "variables" (position and velocity), initialize everything, and loop through renders in R3F useFrame:
'use client';
import {
  forwardRef,
  useEffect,
  useImperativeHandle,
  useMemo,
  useRef,
  useState
} from 'react';
import { createPortal, useFrame, useThree } from '@react-three/fiber';
import { Color, OrthographicCamera, Scene } from 'three';
import { FboPositionModel, FboVelocityModel, Points } from 'components/three/models';

export const FboTargetWrapper = forwardRef((props, ref) => {
  const { targets, width, height, fboDimension, ...otherProps } = props;
  const posRef = useRef();
  const velRef = useRef();
  const mainRef = useRef();
  const [ fadeInDone, setFadeInDone ] = useState(false);
  // expose API to parent modules
  useImperativeHandle(ref, () => ({
    position: posRef.current,
    velocity: velRef.current,
    main: mainRef.current
  }));
  // set up the FBO scenes and camera
  const PosScene = useMemo(() => {
    const scene = new Scene();
    return scene;
  }, []);
  const VelScene = useMemo(() => {
    const scene = new Scene();
    return scene;
  }, []);
  const fboCamera = useMemo(() => {
    const camera = new OrthographicCamera(-1, 1, 1, -1, 0, 1);
    return camera;
  }, []);
  // initialize the textures
  const [ initPositions, ipColors, ipSizes, referenceCoords ] = useMemo(() => {
    // create some points ... basically does the same thing as Drei `Stars`
    return [
      new Float32Array(starPoints),
      new Float32Array(colors),
      new Float32Array(sizes),
      new Float32Array(coords)
    ]
  }, [fboDimension, height, width]);
  // initialize the FBOs
  const { gl } = useThree();
  useEffect(() => {
    if (!(posRef.current || velRef.current)) return;
    const rts = [0, 1];
    rts.forEach((i) => {
      gl.setRenderTarget(targets.posTargets[i]);
      gl.render(PosScene, fboCamera);
      gl.setRenderTarget(targets.velTargets[i]);
      gl.render(VelScene, fboCamera);
      gl.setRenderTarget(null);
    })
  }, [ fboCamera, gl, posRef, PosScene, targets, velRef, VelScene ]);
  // You can't feed back from one shader into the same shader, so we flip
  // between the two FBOs each frame
  let currentTargetIndex = 0;
  let nextTargetIndex;
  useFrame((state) => {
    nextTargetIndex = currentTargetIndex === 0 ? 1 : 0;
    // XXX here's where I need the help
    currentTargetIndex = nextTargetIndex;
  });
  return (
    <>
      {createPortal(<FboPositionModel
        ref={posRef}
        size={fboDimension}
        initState={initPositions}
      />, PosScene)}
      {createPortal(<FboVelocityModel
        ref={velRef}
        size={fboDimension}
        initState={initPositions}
      />, VelScene)}
      <Points
        ref={mainRef}
        positions={initPositions}
        colors={ipColors}
        sizes={ipSizes}
        referenceCoords={referenceCoords}
        xSize={width}
        ySize={height}
      />
    </>
  )
})
Now, that // XXX here's where I need the help comment is ... where I need help :)
I can confirm that the "initialization" is working, and that the first pass works to read the texture and render the Points material, because the following works:
useFrame((state) => {
  nextTargetIndex = currentTargetIndex === 0 ? 1 : 0;
  // update the main positions
  mainRef.current.points.material.uPositions = targets
    .posTargets[currentTargetIndex].texture;
  currentTargetIndex = nextTargetIndex;
});
Now, what the ThreeJS example does (probably best seen in the three-stdlib GPUComputationRenderer) is, in each frame:

Compute!
gpuCompute.compute();

Update texture uniforms in your visualization materials with the gpu renderer output:
myMaterial.uniforms.myTexture.value = gpuCompute.getCurrentRenderTarget( posVar ).texture;

Do your rendering:
renderer.render( myScene, myCamera );

and the gpuCompute.compute() function iterates through each "variable", updating it from the render target texture at currentTextureIndex, then renders into nextTextureIndex.
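Put together (outside R3F), that upstream flow looks roughly like this; positionShader / velocityShader are GLSL strings, and pointsMaterial / uPositions are placeholder names for the visualization material, not anything from my code:

import { GPUComputationRenderer } from 'three/examples/jsm/misc/GPUComputationRenderer.js';

// setup, done once
const gpuCompute = new GPUComputationRenderer(SIZE, SIZE, renderer);
const dtPosition = gpuCompute.createTexture(); // fill .image.data with initial positions
const dtVelocity = gpuCompute.createTexture(); // fill .image.data with initial velocities
const posVar = gpuCompute.addVariable('texturePosition', positionShader, dtPosition);
const velVar = gpuCompute.addVariable('textureVelocity', velocityShader, dtVelocity);
gpuCompute.setVariableDependencies(posVar, [posVar, velVar]);
gpuCompute.setVariableDependencies(velVar, [posVar, velVar]);
gpuCompute.init();

// every frame
gpuCompute.compute(); // flip-flops each variable's pair of render targets internally
pointsMaterial.uniforms.uPositions.value =
  gpuCompute.getCurrentRenderTarget(posVar).texture;
renderer.render(scene, camera);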
If I try and replicate that inside of useFrame, e.g.,
useFrame((state) => {
  nextTargetIndex = currentTargetIndex === 0 ? 1 : 0;
  // Do "gpu compute", update from current then render into next
  posRef.current.material.dtPosition = targets
    .posTargets[currentTargetIndex].texture;
  velRef.current.material.dtPosition = targets
    .posTargets[currentTargetIndex].texture;
  posRef.current.material.dtVelocity = targets
    .velTargets[currentTargetIndex].texture;
  velRef.current.material.dtVelocity = targets
    .velTargets[currentTargetIndex].texture;
  // next position
  state.gl.setRenderTarget(targets.posTargets[nextTargetIndex]);
  state.gl.render(PosScene, fboCamera);
  // next velocity
  state.gl.setRenderTarget(targets.velTargets[nextTargetIndex]);
  state.gl.render(VelScene, fboCamera);
  // return rendering to the main target
  state.gl.setRenderTarget(null);
  // update the main positions
  mainRef.current.points.material.uPositions = targets
    .posTargets[currentTargetIndex].texture;
  currentTargetIndex = nextTargetIndex;
});
it no longer works. On both Safari and Chrome it just renders ... nothing, and Chrome warns "GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture." On the one hand, that kind of makes sense, but avoiding exactly that feedback loop is the whole point of the "flip-flop". I feel like I must be missing something very obvious, but can't find it. How am I supposed to actually do the flip-flop?
**EDIT: Additional info** — through a series of judicious (read: "accidental") console.logs, I've determined that the problem happens 100% of the time immediately after the 2nd render caused by being in strict mode in development. If I add a log to the initialization code in the useEffect, and another in the useFrame loop that outputs the current frame, like so
// initialize the FBOs
const gl = useThree((state) => state.gl);
useEffect(() => {
  if (!(posRef.current || velRef.current)) return;
  console.log('initializing targets')
  // ...

const [ frame, setFrame ] = useState(1);
useFrame((state) => {
  if (frame < 5) {
    console.log(`doing frame ${frame}`)
  }
  nextTargetIndex = currentTargetIndex === 0 ? 1 : 0;
  // ...
  currentTargetIndex = nextTargetIndex;
  setFrame(frame + 1);
});
I pretty consistently get something like this:
initializing targets
FboTargetWrapper.js:236 doing frame 1
FboTargetWrapper.js:236 doing frame 2
FboTargetWrapper.js:112 initializing targets
FboTargetWrapper.js:236 doing frame 3
FboTargetWrapper.js:236 doing frame 4
localhost/:1 [.WebGL-0x1041ed1aa00] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
localhost/:1 [.WebGL-0x1041ed1aa00] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
"Pretty consistently" because it might be after the 3rd frame, or the 5th, but that seems to be some timing issue. If I let it run hundreds of frames, it never happens again.
I can hack a work-around, but given the "we render twice to catch errors" paradigm ... I suspect there's an error and would rather find it. Any React/R3F experts see what I'm doing wrong?
Upvotes: 0
Views: 59
Reputation: 11
Having a similar issue, except I'm using GPUComputationRenderer from 'three/examples/jsm/misc/GPUComputationRenderer.js' rather than the drei hook. Disabling StrictMode doesn't seem to fix it in my case. Only the first gpgpu variable texture actually updates, and then the compute chain seems to break. It worked perfectly in the plain three.js version, but something breaks in r3f.
Can't seem to find a solution at the moment. I did find this implementation, though: https://codesandbox.io/p/sandbox/admiring-christian-nnxq97?file=%2Fsrc%2FuseGPGPU.ts, and followed it. That structure works in the sandbox version, but not in my app. Maybe the example can help you.
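For what it's worth, the general shape of a hook like that (this is just a rough sketch of wrapping GPUComputationRenderer for R3F, not the sandbox's actual code; positionShader and velocityShader are placeholder GLSL strings) is roughly:

import { useMemo } from 'react';
import { useFrame, useThree } from '@react-three/fiber';
import { GPUComputationRenderer } from 'three/examples/jsm/misc/GPUComputationRenderer.js';

export function useGPGPU(size, positionShader, velocityShader) {
  const gl = useThree((state) => state.gl);
  // create the compute renderer and its two variables once
  const { gpuCompute, posVar, velVar } = useMemo(() => {
    const gpuCompute = new GPUComputationRenderer(size, size, gl);
    const posVar = gpuCompute.addVariable('texturePosition', positionShader, gpuCompute.createTexture());
    const velVar = gpuCompute.addVariable('textureVelocity', velocityShader, gpuCompute.createTexture());
    gpuCompute.setVariableDependencies(posVar, [posVar, velVar]);
    gpuCompute.setVariableDependencies(velVar, [posVar, velVar]);
    gpuCompute.init();
    return { gpuCompute, posVar, velVar };
  }, [gl, size, positionShader, velocityShader]);

  // advance the simulation every frame
  useFrame(() => {
    gpuCompute.compute();
  });

  return { gpuCompute, posVar, velVar };
}

It would presumably be subject to the same strict-mode double-initialization behaviour discussed in the question, though.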
Upvotes: 1