Reputation: 19110
I'm trying to read a 3D texture I rendered using an FBO. This texture is so large that glGetTexImage results in a GL_OUT_OF_MEMORY error, due to the nvidia driver failing to allocate memory for intermediate storage* (needed, I suppose, to avoid changing the destination buffer in case of error).
So I then thought of getting this texture layer by layer, using glReadPixels after I render each layer. But glReadPixels doesn't take a layer index as a parameter. The only place where a layer index actually appears as something that directs I/O to a particular layer is the gl_Layer output in the geometry shader, and that is for the writing stage, not for reading.
When I tried simply calling glReadPixels anyway after rendering each layer, I only got the texels of layer 0, so glReadPixels at least doesn't fail to return something.
But the question is: can I get an arbitrary layer of a 3D texture using glReadPixels? And if not, what should I use instead, given the memory constraints described above? Do I have to sample the layer from the 3D texture in a shader, render the result to a 2D texture, and read that 2D texture afterwards?
*It's not a guess: I've actually tracked it down to a failing malloc call (with the size of the texture as its argument) inside the nvidia driver's shared library.
Upvotes: 2
Views: 888
Reputation: 19110
Yes, glReadPixels can read other slices of the 3D texture. One just has to use glFramebufferTextureLayer to attach the current slice to the FBO, instead of attaching the full 3D texture as the color attachment. Here's the replacement code for glGetTexImage (a special FBO for this, fboForTextureSaving, should be generated beforehand):
GLint origReadFramebuffer=0, origDrawFramebuffer=0;
// Save the current framebuffer bindings so they can be restored afterwards
gl.glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &origReadFramebuffer);
gl.glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &origDrawFramebuffer);
gl.glBindFramebuffer(GL_FRAMEBUFFER, fboForTextureSaving);
for(int layer=0; layer<depth; ++layer)
{
    // Attach the current slice of the 3D texture as the color attachment
    gl.glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                 texture, 0, layer);
    checkFramebufferStatus("framebuffer for saving textures");
    // Read this slice into its portion of the output buffer
    gl.glReadPixels(0,0,w,h,GL_RGBA,GL_FLOAT, subpixels+layer*w*h*4);
}
// Restore the original framebuffer bindings
gl.glBindFramebuffer(GL_READ_FRAMEBUFFER, origReadFramebuffer);
gl.glBindFramebuffer(GL_DRAW_FRAMEBUFFER, origDrawFramebuffer);
Anyway, this is not a long-term solution to the problem. The main reason for GL_OUT_OF_MEMORY errors with large textures is actually not a lack of RAM or VRAM. It's subtler: each texture allocated on the GPU is mapped into the process' address space (at least on Linux/nvidia). So even if your process mallocs less than half of the RAM available to it, its address space may already be used up by these large mappings. Add to this a bit of memory fragmentation, and you get either GL_OUT_OF_MEMORY, a malloc failure, or std::bad_alloc somewhere even earlier than expected.
The proper long-term solution is to embrace the 64-bit reality and compile your app as 64-bit code. This is what I ended up doing, ditching all this layer-by-layer kludge and simplifying the code quite a bit.
Upvotes: 2
Reputation: 473262
If you have access to GL 4.5 or ARB_get_texture_sub_image, you can employ glGetTextureSubImage. As the function name suggests, it's for querying a sub-section of a texture's image data. This allows you to read slices of the texture without having to get the whole thing in one go.
The extension seems fairly widely supported, available on any implementation that's still being supported by its IHV.
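For instance, a minimal sketch of a slice-by-slice readback with glGetTextureSubImage could look like this (texture, w, h, depth and the float buffer subpixels are placeholders for the texture object, its dimensions and the destination storage):
GLsizei sliceSize = w*h*4*sizeof(GLfloat); // bytes in one RGBA float slice
for(GLint layer=0; layer<depth; ++layer)
{
    // Offsets (0,0,layer) and extents (w,h,1) select exactly one slice of the 3D texture
    glGetTextureSubImage(texture, 0,
                         0, 0, layer,
                         w, h, 1,
                         GL_RGBA, GL_FLOAT,
                         sliceSize,
                         subpixels + layer*w*h*4);
}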
Upvotes: 4
Reputation: 51835
So once you've got your 3D texture, you can do this:
for (z=0;z<z_resolution_of_your_txr;z++)
{
    render_textured_quad(using z slice of 3D texture);
    glReadPixels(...);
}
It's best to match the QUAD size to your 3D texture's x,y resolution and to use GL_NEAREST filtering...
This will be slow, so if you are not on Intel and want it to be faster, you can render to a 2D texture instead and use glGetTexImage on the target 2D texture instead of glReadPixels (see the sketch below).
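For that variant, only the readback part would change, roughly like this (colorTex2D, pixels, z, w and h are placeholders; colorTex2D stands for the 2D texture attached as the FBO's color attachment):
// after rendering the slice into the FBO backed by colorTex2D,
// fetch the whole 2D render target in one call instead of glReadPixels:
glBindTexture(GL_TEXTURE_2D, colorTex2D);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, pixels + z*w*h*4);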
Here are example shaders for rendering slice z:
Vertex:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform float aspect;
layout(location=0) in vec2 pos;
out smooth vec2 vpos;
//------------------------------------------------------------------
void main(void)
{
vpos=pos;
gl_Position=vec4(pos.x,pos.y*aspect,0.0,1.0);
}
//------------------------------------------------------------------
Fragment:
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform float slice=0.25; // <0,1> slice of txr
in smooth vec2 vpos;
uniform sampler3D vol_txr; // 3D texture unit used
out layout(location=0) vec4 frag_col;
void main()
{
frag_col=texture(vol_txr,vec3(0.5*(vpos+1.0),slice));
}
//---------------------------------------------------------------------------
So you need to change the slice uniform before rendering each slice. The rendering itself is just a single QUAD covering the screen <-1,+1> while the viewport matches the texture's x,y resolution...
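A host-side sketch of that per-slice loop, assuming the shaders above are linked into a program prog, a fullscreen quad sits in quadVAO, the 3D texture is bound to texture unit 0, and pixels is a float buffer big enough for all slices (all of these names are placeholders), could look like:
glUseProgram(prog);
glUniform1f(glGetUniformLocation(prog,"aspect"),1.0f);  // keep the quad unscaled
glUniform1i(glGetUniformLocation(prog,"vol_txr"),0);    // 3D texture on unit 0
glViewport(0,0,w,h);                                    // match the texture's x,y resolution
glBindVertexArray(quadVAO);
for (int z=0; z<depth; z++)
{
    // map the integer layer to the <0,1> range expected by the slice uniform
    glUniform1f(glGetUniformLocation(prog,"slice"),(z+0.5f)/(float)depth);
    glDrawArrays(GL_TRIANGLE_STRIP,0,4);                // render the fullscreen quad
    glReadPixels(0,0,w,h,GL_RGBA,GL_FLOAT,pixels+z*w*h*4);
    // (or, for the glGetTexImage variant above, read the 2D render target here instead)
}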
Upvotes: 0