Reputation: 741
I am currently working on a project that involves rendering LWJGL game scenes to a video stream instead of a window. I believe I can achieve that if I render a game scene to an intermediate format, such as a ByteBuffer. I am trying to extend the LWJGL VoxelGame demo as a proof of concept.
I have found a similar SO question and a forum post, but I was not able to make either work. I am a beginner at OpenGL and LWJGL and am struggling to find comprehensible documentation on this.
At the start of the render loop (runUpdateAndRenderLoop) the function glBindFramebuffer is called. To my understanding, it binds the FBO to the current context so that any rendering is directed to it.
I have tried using glGetTexImage and glReadPixels to populate a ByteBuffer, but neither worked. I have also tried reading after glBlitFramebuffer, since I want to get the full rendered image into the ByteBuffer.
How can I render the current game scene to a ByteBuffer? Or is there a better way to get rendered game scenes into a video stream than going through an intermediate ByteBuffer?
private void runUpdateAndRenderLoop() {
// ...
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
ByteBuffer buffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
// Attempt 1: read the pixels back from a texture
glGetTexImage(GL_TEXTURE_BUFFER, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// Attempt 2: read the pixels back from the framebuffer
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
glfwSwapBuffers(window);
}
Upvotes: 1
Views: 580
Reputation: 5797
The answer by @Blindy is 100% correct and you should accept it as an answer.
However, if you want code for a directly working solution, insert the following after glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST) but before glfwSwapBuffers(window):
ByteBuffer bb = org.lwjgl.system.MemoryUtil.memAlloc(width * height * 4); // off-heap buffer
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0); // read from the default framebuffer we just blitted into
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, bb);
// Test with stb_image_write to write as JPEG:
// org.lwjgl.stb.STBImageWrite.stbi_flip_vertically_on_write(true);
// org.lwjgl.stb.STBImageWrite.stbi_write_jpg("frame.jpg", width, height, 4, bb, 50);
org.lwjgl.system.MemoryUtil.memFree(bb); // memAlloc'd memory must be freed explicitly
After this code runs, the ByteBuffer bb holds the pixel data of the current frame.
However, as also noted by @Blindy, this causes a severe GPU stall: the call blocks until the current frame has been fully rendered, and then forces a GPU->CPU transfer into your ByteBuffer. Nvidia's drivers will also warn you about this when you enable debug message output:
Pixel-path performance warning: Pixel transfer is synchronized with 3D rendering.
Depending on your actual use case, other approaches might be more useful, such as encoding the video directly from GPU memory (e.g. with NVENC).
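For a concrete (if simplistic) illustration of getting frames into a stream, not part of the original answer: one common pattern is to pipe raw RGBA frames into an external ffmpeg process and let it do the encoding. The ffmpeg arguments below are assumptions you would tune for your setup, and exception handling is omitted:
// Hypothetical sketch: stream raw RGBA frames to an ffmpeg subprocess.
Process ffmpeg = new ProcessBuilder(
        "ffmpeg",
        "-f", "rawvideo", "-pixel_format", "rgba",
        "-video_size", width + "x" + height,
        "-framerate", "60",
        "-i", "-",              // read frames from stdin
        "-vf", "vflip",         // OpenGL rows are bottom-up
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "out.mp4")
    .redirectErrorStream(true)
    .start();
java.io.OutputStream video = ffmpeg.getOutputStream();
// per frame, after glReadPixels into a ByteBuffer bb (position still 0):
byte[] frameBytes = new byte[bb.remaining()];
bb.get(frameBytes);
video.write(frameBytes);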
Upvotes: 1
Reputation: 67449
There's a bunch of problems in your code, and that's without even seeing the core of the problem yet, since it's in the // ... part:
You don't use glfwSwapBuffers when you're trying to render in a headless setup; that's only needed when you render to the context's back buffer and want to swap it to the screen.
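For illustration (my sketch, not from the answer): you can keep GLFW purely for context creation by making the window invisible and never swapping.
// Sketch: create an invisible window just to get a GL context; never swap buffers.
// Assumes: import static org.lwjgl.glfw.GLFW.*; import org.lwjgl.opengl.GL;
//          import static org.lwjgl.system.MemoryUtil.NULL;
glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);
long window = glfwCreateWindow(width, height, "offscreen", NULL, NULL);
glfwMakeContextCurrent(window);
GL.createCapabilities();
// render into your FBO, read the pixels back, but do not call glfwSwapBuffers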
You don't want both glGetTexImage and glReadPixels; they both read data from graphics memory into system memory. In your case you're reading the texture data into buffer, then overwriting it with the back buffer's data, which should be empty since you never wrote to it.
You definitely do not want glBlitFramebuffer; that's for copying from video memory to video memory. You seriously need to stop and read the documentation; throwing random functions at your video driver is a sure way to make it crash.
You need to enable OpenGL's debug layer validation, especially while trying all these random functions. Related: you need to check your framebuffer setup (glCheckFramebufferStatusEXT). I'm willing to bet money you didn't, but you don't show that code.
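As a sketch of both points, using LWJGL's built-in debug helper and the core glCheckFramebufferStatus; the fbo variable is assumed from your own setup:
// Request a debug context and install LWJGL's debug message callback.
// Assumes: import static org.lwjgl.glfw.GLFW.*; import static org.lwjgl.opengl.GL43.*;
glfwWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, GLFW_TRUE);  // before glfwCreateWindow
// ... after GL.createCapabilities():
org.lwjgl.system.Callback debugCb = org.lwjgl.opengl.GLUtil.setupDebugMessageCallback();
// Verify the FBO is complete after attaching all its targets:
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    throw new IllegalStateException("Framebuffer is incomplete");
}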
You don't show your framebuffer bindings, so I can't tell for sure, but you need to bind the framebuffer for reading (GL_READ_FRAMEBUFFER) before reading from it. I only see a "draw" framebuffer binding, and you're only randomly clearing it.
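Concretely, a minimal sketch, with fbo and buffer assumed from your own setup:
// Bind the FBO for reading, pick the attachment, then read back.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);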
And lastly, you should never stall the CPU on a read operation from video memory, especially when you have things you should be doing, like video encoding. Use pixel buffer objects in a round-robin fashion to double- or triple-buffer the transfer, so there's no stall; see the sketch below.
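A sketch of that round-robin idea, assuming a GL 3.x context; NUM_PBOS, pbos, and frame are illustrative names:
// Round-robin PBO readback. Assumes: import java.nio.ByteBuffer;
// import static org.lwjgl.opengl.GL33.*;
static final int NUM_PBOS = 3;
int[] pbos = new int[NUM_PBOS];
long frame = 0;

void initPbos(int width, int height) {
    glGenBuffers(pbos);
    for (int pbo : pbos) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, (long) width * height * 4, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void readFrameAsync(int width, int height) {
    // Start an asynchronous transfer into this frame's PBO; glReadPixels
    // returns immediately because the target is a buffer object, not client memory.
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[(int) (frame % NUM_PBOS)]);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0L);

    // Map the oldest PBO; its transfer was started NUM_PBOS - 1 frames ago
    // and should have completed by now, so mapping it no longer stalls.
    if (frame >= NUM_PBOS - 1) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbos[(int) ((frame + 1) % NUM_PBOS)]);
        ByteBuffer pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
        if (pixels != null) {
            // hand `pixels` to the encoder here
            glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        }
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    frame++;
}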
Upvotes: 2