Reputation: 97
I use requestAnimationFrame
in a JS WebGL project where I change the pixel color from black to white on each frame by accessing a texture that contains the content of the last frame, but the framerate exceeds the Hz of my monitor and the flickering is inconsistent.
I also calculate the FPS with the help of window.performance.now(),
and the framerate values look like:
n_fps: 200
n_fps: 333.3333333333333
n_fps: 250
I thought requestAnimationFrame
should sync the function call with the refresh rate of my monitor, which is set to 239.96 Hz, but the flickering is inconsistent and the framerate sometimes exceeds 240 FPS. I can't figure out why, but I suspect it has to do with V-sync.
Here are some specs:
OS
Distributor ID: Ubuntu
Description: Pop!_OS 22.04 LTS
GPU
lshw -c video
WARNING: you should run this program as super-user.
*-display
description: VGA compatible controller
product: Ellesmere [Radeon RX 470/480/570/570X/580/580X/590]
Monitor settings
xrandr --verbose
DisplayPort-0 connected primary 1920x1080+0+1200
...
TearFree: on supported: off, on, auto
...
The important part of my JS code looks like this:
//...
const gl = canvas.getContext(
'webgl2',
{
desynchronized: false, //trying to force vsync?
}
);
///...
function render() {
nid = requestAnimationFrame(render);
let n_ts_ms_now = window.performance.now();
let n_ms_delta = n_ts_ms_now - n_ts_ms;
// console.log(n_ms_delta)
console.log(`n_fps: ${1000/n_ms_delta}`);
n_ts_ms = n_ts_ms_now;
n += 1;
// if(n_ms_delta > n_ms_max){
const nextTextureIndex = 1 - currentTextureIndex;
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[nextTextureIndex]);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.useProgram(program);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionAttributeLocation);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, textures[currentTextureIndex]);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
// Swap textures
currentTextureIndex = nextTextureIndex;
// Render to the canvas
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.bindTexture(gl.TEXTURE_2D, textures[currentTextureIndex]);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
console.log(n)
// }
}
What I tried:
xrandr --output DisplayPort-0 --set TearFree on
vblank_mode=1 google-chrome
When I manually throttle the FPS (the commented-out code), the flickering looks consistent.
Upvotes: 0
Views: 122
Reputation: 717
FPS means frames per second: the number of drawn frames in the last second. You don't count the number of frames, but estimate it based on the delta time. That works fine if the delta time is the same for every frame, but this is usually not the case. That's why we multiply, for instance, movement by delta time: it can be different in each frame. For example, if your 1st frame takes 3 ms and the 2nd frame takes 5 ms, you estimate about 333 FPS (1000/3) in the 1st frame, then 200 FPS (1000/5) in the 2nd. The 1st is higher than your monitor's refresh rate and the 2nd is lower, but both can't be true at the same time: maybe you render 333 frames in the first second, maybe 200, maybe 240, or perhaps something else. You can't tell from 2 delta times. You have to actually count it: increment a variable on each frame, and reset it to 0 after each second.
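The counting approach described above could be sketched like this (a minimal sketch; `makeFpsCounter` is an illustrative helper, and `nowMs` would come from `performance.now()` inside the render loop):

```javascript
// Count frames actually rendered per wall-clock second instead of
// inverting a single delta time.
function makeFpsCounter() {
  let frames = 0;
  let windowStart = 0;
  let fps = 0;
  return function tick(nowMs) {
    frames += 1;
    if (nowMs - windowStart >= 1000) {
      fps = frames;        // frames counted during the last full second
      frames = 0;
      windowStart = nowMs;
    }
    return fps;            // last completed measurement, 0 until one exists
  };
}
```

Calling `tick(performance.now())` once per `requestAnimationFrame` callback gives a stable FPS number that updates once per second, instead of a noisy per-frame estimate.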
Most monitors have a fixed refresh rate, let's say 60Hz. This means that every 1/60 second the monitor refreshes the screen: it draws the image by drawing the top row of pixels, then the row below, all the way down to the last row. The short pause between two refreshes is called the vertical blank. It draws the rows pretty fast, so you can't see the individual rows appearing, but it still takes some time. The GPU, on the other hand, doesn't have this fixed rate: when it finishes drawing (again, delta time changes from frame to frame), it switches between the old and the new image. If this switch happens while the monitor is in the middle of drawing rows, your monitor draws some rows from one image and then some from the other, which is called screen tearing (if your GPU is fast, you can have multiple tears). To prevent screen tearing you can enable vertical synchronization (V-sync), which makes the GPU wait for the vertical blank and switch images during it (that's why it's called vertical synchronization: it syncs with the vertical blank). If your GPU misses a vertical blank, it waits for the next one. This means that if your delta time is about 18ms, you'll have about 55 FPS without V-sync, but 30 FPS with V-sync on a 60Hz screen. V-sync gets rid of screen tearing, but it can reduce FPS. So when V-sync is enabled, you can't render more frames than your screen's refresh rate.
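The "snap to a divisor of the refresh rate" effect above can be written down as a small calculation (a sketch; `vsyncFps` is an illustrative name, and it assumes the simple model where a missed blank always costs one full extra interval):

```javascript
// With V-sync on, a frame that misses a vertical blank waits for the next
// one, so the displayed rate snaps down to refreshHz / k for an integer k.
function vsyncFps(frameTimeMs, refreshHz) {
  const blankIntervalMs = 1000 / refreshHz;                 // ~16.7 ms at 60 Hz
  const blanksWaited = Math.ceil(frameTimeMs / blankIntervalMs);
  return refreshHz / blanksWaited;
}
```

With an 18 ms frame on a 60 Hz screen this gives 30 FPS (two blank intervals per frame), matching the example in the answer, while the same frame time without V-sync would display at roughly 1000/18 ≈ 55 FPS.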
Another thing to note is that WebGL tries to hide the async nature of the GPU. When you call drawArrays, you might think that by the time the function returns, the drawing is finished. However, this is not the case: the driver doesn't execute these calls immediately, it only puts the commands into a queue and starts executing them later. In more modern APIs, like Vulkan or WebGPU, you have an explicit queue (or even queues) where you record commands and start the execution, but WebGL hides this from you. This means that when the browser calls the render function to create the next frame on the CPU side, the GPU might still be working on the last frame. So frames are interleaved on the CPU and the GPU.
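One way to observe this asynchrony directly in WebGL2 is a fence sync object; a sketch (the `gl` parameter is whatever WebGL2 context you already have, and `fenceFrame` is an illustrative helper name):

```javascript
// Insert a fence after issuing draw calls, then poll it to find out when
// the GPU has actually finished executing this frame's commands (WebGL2).
function fenceFrame(gl) {
  const sync = gl.fenceSync(gl.SYNC_GPU_COMMANDS_COMPLETE, 0);
  return function isDone() {
    const status = gl.clientWaitSync(sync, 0, 0); // timeout 0: non-blocking poll
    return status === gl.ALREADY_SIGNALED || status === gl.CONDITION_SATISFIED;
  };
}
```

Calling `fenceFrame(gl)` right after `drawArrays` and polling the returned function on later frames typically shows the GPU finishing well after the JS function has returned, which is exactly the interleaving described above.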
Update based on the minimal reproducible example:
Your code has 2 main problems. First, you create your textures by calling setupTexture and then initialize texture 0 with random numbers. However, just before calling requestAnimationFrame, you call resizeCanvas, which calls setupTexture again. In setupTexture, when you call texImage2D, the last parameter (the data) is null, so you override the initialized data with null. Null is fine if you write to the texture, but it leads to undefined behavior if you read from it. In my case, the driver creates a completely black texture, but on other systems it can be different. One solution could be to move the initialization into setupTexture.
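A sketch of that fix, based on the question's setupTexture (here `gl` and `canvas` are passed in explicitly for clarity; in the original they are closed-over globals):

```javascript
// Initialize the texture with defined data inside setupTexture itself, so
// resizeCanvas() can no longer replace the random state with a null upload.
function setupTexture(gl, canvas, texture, framebuffer) {
  const data = new Uint8Array(canvas.width * canvas.height * 4);
  for (let i = 0; i < data.length; i++) data[i] = Math.random() * 256;
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, data); // defined data, never null
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
}
```

With this version, the separate random-initialization block after texture creation becomes unnecessary, since every (re)allocation of the storage starts from defined data.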
The second problem is the way you render. We have 2 black (undefined) textures. In the 1st frame, you render 2 triangles by reading from texture 0 and drawing to texture 1. In the shader, you write 1.0 to texture 1 if the read value is below 0.5, which is always true since texture 0 is black. Now texture 0 is black, while texture 1 is white. Then you draw texture 1 to the canvas with the same shader, which produces a black canvas. In the 2nd frame, you read a white texture, which makes the other texture black, and drawing the black texture to the canvas makes the canvas white. So in odd frames the canvas will be black, while in even frames it'll be white; that's why you see flickering. And that flickering will be different on different systems: on your screen there will be 120 black and 120 white frames a second, on my screen there will be 30 black and 30 white, but if I unplug the charger, only 15 black and 15 white. So it depends on the FPS.
I'm not sure what you want to achieve, but if you want to animate something, you have to use time; you can't flip things to their opposite in every frame. You can do this by computing the new value in the shader based on the current time (regardless of the previous value), or by changing the previous value based on the delta time. Maybe this video about procedural noises can help you.
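As a sketch of the delta-time approach: instead of flipping a value every frame, move it toward a target at a fixed rate per second, so the visible speed is the same at any framerate (`stepToward` is an illustrative name, not part of any API):

```javascript
// Move `value` toward `target` at `ratePerSec` units per second, so the
// animation looks identical whether the page runs at 60 or 240 FPS.
function stepToward(value, target, ratePerSec, deltaMs) {
  const step = ratePerSec * (deltaMs / 1000);
  return value < target
    ? Math.min(value + step, target) // clamp so we never overshoot the target
    : Math.max(value - step, target);
}
```

In a render loop you would call this once per frame with the measured delta time, e.g. fading a cell's brightness from 0 to 1 over one second regardless of how many frames that second contains.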
A little side note: it's not that important in a simple example like this, but you use a 4-channel texture to represent a grayscale value, so you store 4 times more data than necessary. Storing less data not only reduces VRAM usage but, perhaps more importantly, reduces bandwidth usage. Also, when you want to change a texture's pixels from the CPU side without creating a new texture, you should use texSubImage2D instead of texImage2D, because it's faster.
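A sketch combining both suggestions, using WebGL2's single-channel R8 format (`updateGrayTexture` is a hypothetical helper; the shader would then read the value from the red channel, which the question's `prevState.x` already does):

```javascript
// Allocate a single-channel R8 texture once, then update its pixels in
// place with texSubImage2D instead of reallocating storage via texImage2D.
function updateGrayTexture(gl, width, height, pixels, firstUpload) {
  if (firstUpload) {
    // 1 byte per pixel instead of 4 (RGBA): a quarter of the VRAM/bandwidth.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.R8, width, height, 0,
                  gl.RED, gl.UNSIGNED_BYTE, pixels);
  } else {
    // Same storage, new contents: cheaper than a full reallocation.
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, width, height,
                     gl.RED, gl.UNSIGNED_BYTE, pixels);
  }
}
```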
Upvotes: 0
Reputation: 97
Minimal reproducible example; the flickering is consistent when setting my display to 144 Hz and using the google-chrome browser.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>WebGL Cellular Automata</title>
<style>
body { margin: 0; overflow: hidden; }
canvas { display: block; }
</style>
</head>
<body>
<canvas id="glCanvas"></canvas>
<script>
const canvas = document.getElementById('glCanvas');
const gl = canvas.getContext(
'webgl2',
{
desynchronized: false, //
}
);
if (!gl) {
alert('Unable to initialize WebGL. Your browser may not support it.');
}
const vertexShaderSource = `
attribute vec2 a_position;
varying vec2 v_texCoord;
void main() {
gl_Position = vec4(a_position, 0.0, 1.0);
v_texCoord = (a_position + 1.0) / 2.0;
}
`;
const fragmentShaderSource = `
precision highp float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
// Sample the previous frame
vec4 prevState = texture2D(u_texture, v_texCoord);
// TODO: Implement your cellular automata rules here
// This is a placeholder rule (inverting the color)
float newState = (prevState.x > .5) ? 0.0: 1.0;
gl_FragColor = vec4(vec3(newState), 1.);
}
`;
function createShader(gl, type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error('An error occurred compiling the shaders: ' + gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
const vertexShader = createShader(gl, gl.VERTEX_SHADER, vertexShaderSource);
const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSource);
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error('Unable to initialize the shader program: ' + gl.getProgramInfoLog(program));
}
const positionAttributeLocation = gl.getAttribLocation(program, 'a_position');
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
const positions = [
-1, -1,
1, -1,
-1, 1,
1, 1,
];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
// Create and set up textures
const textures = [gl.createTexture(), gl.createTexture()];
const framebuffers = [gl.createFramebuffer(), gl.createFramebuffer()];
function setupTexture(texture, framebuffer) {
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
}
setupTexture(textures[0], framebuffers[0]);
setupTexture(textures[1], framebuffers[1]);
// Initialize with random data
const initialData = new Uint8Array(canvas.width * canvas.height * 4);
for (let i = 0; i < initialData.length; i++) {
initialData[i] = Math.random() * 256;
}
gl.bindTexture(gl.TEXTURE_2D, textures[0]);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, canvas.width, canvas.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, initialData);
let currentTextureIndex = 0;
let n = 0;
let n_ms_last = 0;
let n_fps = 240;
let n_ms_max = 1000/n_fps;
function render(n_ms) {
// console.log(`n_ms:${n_ms}`)
nid = requestAnimationFrame(render);
let n_ms_delta = n_ms-n_ms_last;
console.log(`n_fps: ${1000/n_ms_delta}`);
n_ms_last = n_ms;
n += 1;
// if(n_ms_delta > n_ms_max){
const nextTextureIndex = 1 - currentTextureIndex;
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[nextTextureIndex]);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.useProgram(program);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionAttributeLocation);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, textures[currentTextureIndex]);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
// Swap textures
currentTextureIndex = nextTextureIndex;
// Render to the canvas
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, canvas.width, canvas.height);
gl.bindTexture(gl.TEXTURE_2D, textures[currentTextureIndex]);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
// }
}
function resizeCanvas() {
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
setupTexture(textures[0], framebuffers[0]);
setupTexture(textures[1], framebuffers[1]);
gl.viewport(0, 0, canvas.width, canvas.height);
}
window.addEventListener('resize', resizeCanvas);
resizeCanvas();
let nid = 0;
nid = requestAnimationFrame(render);
</script>
</body>
</html>
I also have another question: in the end I want to make a cellular automaton, which means I have to be able to access information from the last frame in the current frame. I know I can use framebuffers, and as far as I understand, this would be the rendering process:
Upvotes: 0