Reputation: 129
I am running an interactive, browser-based WebGL volume renderer. Whenever I interact with the mouse, the function drawVolume() is called. To measure rendering time, I take a time stamp at the start and at the end of the function; the difference gives me the rendering time. Currently I am running it on two machines, a server and a client.
Following are the observations:
Server machine [NVIDIA GeForce GTX 970]: render time is roughly 2 ms or less. Interaction is fast.
Client machine [NVIDIA Quadro K600]: render time is approximately 10-11 ms. The interactive display is slower and frames are updated slowly. Sometimes the display drivers stop working and the display goes off, and I need to restart the system.
I do not know whether this method is right for measuring rendering time. I suspect that even though the JavaScript code has executed, at the hardware level the rendered image has not yet been displayed in the browser window. How can I find out when the frame for a given interaction has actually been updated? If I could get that status, then maybe I could calculate the rendering time correctly.
drawVolume = function()
{
    // Time stamp at the start of command submission
    start3 = new Date().getTime();

    gl.clearColor(0.0, 0.0, 0.0, 0.0);
    gl.enable(gl.DEPTH_TEST);

    // Pass 1: render the cube's back-face coordinates into an FBO
    gl.bindFramebuffer(gl.FRAMEBUFFER, gl.fboBackCoord);
    gl.shaderProgram = gl.shaderProgram_BackCoord;
    gl.useProgram(gl.shaderProgram);
    gl.clearDepth(-50.0);
    gl.depthFunc(gl.GEQUAL);
    drawCube(gl, cube);

    // Pass 2: ray casting into the default framebuffer
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.shaderProgram = gl.shaderProgram_RayCast;
    gl.useProgram(gl.shaderProgram);
    gl.clearDepth(50.0);
    gl.depthFunc(gl.LEQUAL);

    // Bind the back-coordinate, volume data and transfer-function textures
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, gl.fboBackCoord.tex);
    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, gl.vol_tex);
    gl.activeTexture(gl.TEXTURE2);
    gl.bindTexture(gl.TEXTURE_2D, gl.tf_tex);
    gl.uniform1i(gl.getUniformLocation(gl.shaderProgram, "uBackCoord"), 0);
    gl.uniform1i(gl.getUniformLocation(gl.shaderProgram, "uVolData"), 1);
    gl.uniform1i(gl.getUniformLocation(gl.shaderProgram, "uTransferFunction"), 2);

    drawCube(gl, cube);

    // Time stamp after command submission
    end3 = new Date().getTime();
    render_time = end3 - start3;
    console.log(render_time);
}
Upvotes: 0
Views: 790
Reputation: 8123
CPU/GPU interaction is asynchronous: the driver buffers all issued commands in internal command buffers, so there is no guarantee that your commands have actually been executed by the time you call new Date().getTime() (using Date.now() and caching the uniform locations would be better, by the way).
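For illustration, a rough sketch of what caching the uniform locations could look like, reusing the names from the question's code (gl.shaderProgram_RayCast, uBackCoord, etc.) and assuming the programs are already linked:

// Look the uniform locations up once after linking, not on every frame.
var uBackCoordLoc, uVolDataLoc, uTransferFunctionLoc;

initUniformLocations = function() {
    var prog = gl.shaderProgram_RayCast;
    uBackCoordLoc        = gl.getUniformLocation(prog, "uBackCoord");
    uVolDataLoc          = gl.getUniformLocation(prog, "uVolData");
    uTransferFunctionLoc = gl.getUniformLocation(prog, "uTransferFunction");
}

// In drawVolume(), reuse the cached locations:
// gl.uniform1i(uBackCoordLoc, 0);
// gl.uniform1i(uVolDataLoc, 1);
// gl.uniform1i(uTransferFunctionLoc, 2);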
In most scenarios it is not desirable to force or wait for the execution of given commands, but it is possible using finish
(man page), which introduces a sync point and blocks CPU-side execution until all previously issued commands have been executed. However, as pointed out in gman's answer here, this measures not only the execution time of the previously issued commands but also the time it takes to stall the pipeline.
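A minimal sketch of timing around such a sync point (keeping in mind that the measured value includes the stall, as noted above):

var start = Date.now();
drawVolume();                      // submit all rendering commands
gl.finish();                       // block until the GPU has executed everything queued so far
var elapsed = Date.now() - start;  // rendering time + pipeline stall, in milliseconds
console.log(elapsed + " ms");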
Ideally one would use the EXT_disjoint_timer_query
extension, which allows measuring the execution time of given commands without stalling the pipeline; sadly, as of now (Oct 2016) this extension isn't available/exposed in pretty much any browser.
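For completeness, a sketch of how the extension would be used where it is exposed (names follow the extension spec; the result has to be polled on a later frame because it arrives asynchronously):

var ext = gl.getExtension("EXT_disjoint_timer_query");
if (ext) {
    var query = ext.createQueryEXT();
    ext.beginQueryEXT(ext.TIME_ELAPSED_EXT, query);
    drawVolume();                              // the commands to be timed
    ext.endQueryEXT(ext.TIME_ELAPSED_EXT);

    // Poll on later frames until the result is available.
    var poll = function() {
        var available = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_AVAILABLE_EXT);
        var disjoint  = gl.getParameter(ext.GPU_DISJOINT_EXT);
        if (available && !disjoint) {
            var ns = ext.getQueryObjectEXT(query, ext.QUERY_RESULT_EXT);
            console.log("GPU time: " + (ns / 1e6) + " ms");
        } else if (!available) {
            requestAnimationFrame(poll);
        }
    };
    requestAnimationFrame(poll);
}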
Upvotes: 2