user4028648

Reputation:

How to blend many textures/buffers into one texture/buffer in OpenGL?

I have one big buffer (object) containing the MNIST dataset: many (tens of thousands) small (28x28) grayscale images, stored one-by-one in row-wise order as floats indicating pixel intensity. I would like to efficiently (i.e. somewhat interactively) blend these many images into one "average" image, where each pixel in the blended image is the average of all the pixels at that same position. Is this possible?

The options I considered are:

  1. Using a compute shader directly on the buffer object. I would spawn imgWidth * imgHeight compute shader invocations/threads, with each invocation looping over all images (see the sketch after this list). This doesn't seem very efficient, as each invocation has to loop over all images, but doing it the other way around (i.e. spawning numImages invocations and walking over the pixels) would instead have invocations waiting on each other.

  2. Using the graphics pipeline to draw the textures one-by-one to a framebuffer, blending them all over each other. This would still result in linear time, as each image has to be rendered to the framebuffer in turn. I'm not very familiar with framebuffers, though.

  3. Doing it all linearly on the CPU, which seems easier and not much slower than doing it on the GPU. I would only be missing out on the parallel processing of the pixels.
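For reference, here is roughly what I imagine option 1 would look like. This is only a sketch; the buffer layout and uniform names are my own assumptions:

#version 430
layout(local_size_x = 16, local_size_y = 16) in;

// All images packed contiguously: image i, row y, column x lives at
// index i * imgWidth * imgHeight + y * imgWidth + x.
layout(std430, binding = 0) readonly buffer Images { float pixels[]; };
layout(std430, binding = 1) writeonly buffer Result { float average[]; };

uniform uint imgWidth;   // 28
uniform uint imgHeight;  // 28
uniform uint numImages;

void main() {
    uvec2 p = gl_GlobalInvocationID.xy;
    if (p.x >= imgWidth || p.y >= imgHeight) return;
    float sum = 0.0;
    for (uint i = 0u; i < numImages; ++i)
        sum += pixels[i * imgWidth * imgHeight + p.y * imgWidth + p.x];
    average[p.y * imgWidth + p.x] = sum / float(numImages);
}

A single glDispatchCompute(2, 2, 1) covers the 28x28 image with 16x16 work groups, but each of the 784 active invocations still walks all numImages images, which is the linear cost I would like to avoid.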

Are there other possibilities I'm missing? Is there an optimal way? And if not, what do you think would be the easiest?

Upvotes: 1

Views: 764

Answers (2)

tuket

Reputation: 3941

Most of the time we want to parallelize at the pixel level, because there are many pixels.

However, in your case there are not that many pixels (28x28).

The biggest number you have seems to be the number of images (thousands of images). So we would like to leverage that.

Using a compute shader, instead of iterating through all the images, you could blend the images in pairs. After each pass you would halve the number of images. Once the number of images gets very small, you might want to change the strategy, but that's something you need to experiment with to see what works best.

Note that compute shader dispatches can have 3 dimensions. You could have X and Y index the pixel of the image, and Z index the pair of images in a texture array. So for index Z, you would blend textures 2*Z and 2*Z+1.
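A minimal sketch of one such reduction pass (my own names; it assumes the images live in r32f texture array layers, and it accumulates sums rather than averages, so you divide by the image count only once at the very end):

#version 430
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;

layout(binding = 0, r32f) readonly uniform image2DArray srcImages;
layout(binding = 1, r32f) writeonly uniform image2DArray dstImages;

// Number of layers that still hold valid data in this pass.
uniform uint srcCount;

void main() {
    ivec3 id = ivec3(gl_GlobalInvocationID);
    if (any(greaterThanEqual(id.xy, imageSize(srcImages).xy))) return;
    int a = id.z * 2;
    int b = a + 1;
    float va = imageLoad(srcImages, ivec3(id.xy, a)).r;
    // When srcCount is odd, the last pair has no partner: add nothing.
    float vb = (uint(b) < srcCount) ? imageLoad(srcImages, ivec3(id.xy, b)).r : 0.0;
    imageStore(dstImages, ivec3(id.xy, id.z), vec4(va + vb, 0.0, 0.0, 0.0));
}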

Some implementation details you need to take into account:

  • Most likely, the number of images won't be a power of two, so at some point the number of images to reduce will be odd.
  • Since you are working with lots of images, you could run into float precision issues. You might need to use float textures, or adapt the strategy so this is not a problem.
  • Usually compute shaders work best when the threads process tiles of 2x2 pixels instead of individual pixels.
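On the host side, a ping-pong loop between two texture arrays could drive the passes. Again just a sketch, reusing the assumed names from the shader above:

GLuint tex[2];   // two 28x28 GL_TEXTURE_2D_ARRAY objects with numImages layers, format GL_R32F
GLuint prog;     // the compute program sketched above
int src = 0, dst = 1;
GLuint count = numImages;

glUseProgram(prog);
while (count > 1) {
    GLuint pairs = (count + 1) / 2;  // an odd count leaves one unpaired layer
    glUniform1ui(glGetUniformLocation(prog, "srcCount"), count);
    glBindImageTexture(0, tex[src], 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32F);
    glBindImageTexture(1, tex[dst], 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_R32F);
    glDispatchCompute((28 + 7) / 8, (28 + 7) / 8, pairs);
    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
    std::swap(src, dst);
    count = pairs;
}
// Layer 0 of tex[src] now holds the per-pixel sum; divide by numImages to get the average.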

Upvotes: 1

Summit

Reputation: 2268

This is how I do it.

Render all the textures to the framebuffer, which can also be the default framebuffer.
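For the render pass itself, the blend state would look something like this (a sketch; imageTex and drawTexturedQuad are placeholder names, and the color attachment should be a float format so summing thousands of images doesn't clamp at 1.0):

// Additive blending: each drawn fragment is added to the color already in the framebuffer.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glBlendEquation(GL_FUNC_ADD);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);

for (int i = 0; i < numImages; ++i) {
    glBindTexture(GL_TEXTURE_2D, imageTex[i]);
    drawTexturedQuad();  // placeholder: draws one image as a full-viewport quad
}
// The framebuffer now holds the per-pixel sum; scale by 1.0 / numImages
// (for example in one last fullscreen pass) to get the average.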

Once rendering is completed, read the data back from the framebuffer:

glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[w_writeIndex]);
// Copy from the framebuffer into the PBO asynchronously; it will be ready in the NEXT frame.
glReadPixels(0, 0, SCR_WIDTH, SCR_HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
// Now map the other PBO, whose contents should already have arrived in CPU memory.
glBindBuffer(GL_PIXEL_PACK_BUFFER, w_pbo[w_readIndex]);
unsigned char* Data = (unsigned char*)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (Data) {
    // ... use the pixel data ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);

Upvotes: 0
