w.brian

Reputation: 17397

Reading custom texture format in a fragment shader

I'm creating a NES emulator, and I'm experimenting with the idea of offloading functionality (where appropriate) to the GPU. One idea that struck me is to output screen pixels to a buffer in a custom format in a way that encodes properties for each pixel such that the shader can determine how to display them. This would offload quite a bit of logic/branching from one of the hottest functions in the emulator.

Here's an example of what the encoding might look like for a single pixel, using 2 bytes per pixel:

Byte 1 (palette indices): bbbb ssss

Byte 2 (pixel properties): bbss rgbp

I'm relatively new to writing shaders, so my first question is: is this even possible? From the research I've done, it seems that textures need to be in a particular format in order to read color information from them at each pixel, and my custom 2-byte "color" format doesn't represent a color in and of itself. If it is possible, what is the high-level approach for accomplishing this? I plan on using Vulkan, but any approach that applies more generally (as it relates to shaders) is welcome.

Upvotes: 1

Views: 1103

Answers (1)

Nicol Bolas

Reputation: 473537

Graphics has changed since 2000. Shaders are (mostly) arbitrary programs that compute values. That these values may sometimes be interpreted as colors is irrelevant. Shaders are programs; they do what you want them to.

Similarly, textures do not contain colors unless your shader chooses to interpret them as colors. Textures are nothing more than lookup tables for values. Again, these values might be colors, but they might not. It all depends on how your shader uses those values.

You can create textures which use integer formats (as opposed to normalized integers). You can then do whatever bit-manipulation you like within your shader. Nothing you described above would be particularly difficult for shaders to do.

In your case, the texture would probably be a 2 channel format, with 8-bits per channel, using unsigned integer values. In OpenGL, this format would be spelled GL_RG8UI: red/green (the name of the two channels), 8-bits per channel, and unsigned integer values. OpenGL may call these channels "red" and "green", but what matters is what your shader does with them, not what they're called.
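For instance, creating such a texture might look like the following rough sketch in C (it assumes an OpenGL 4.2+ context with a loader such as glad or GLEW already initialized, and uses the NES's 256x240 output resolution). Note that integer textures cannot be linearly filtered, so you must use nearest filtering:

```c
/* Rough sketch: allocate a 256x240 GL_RG8UI texture (2 bytes per pixel,
 * unsigned integer channels). Assumes a GL 4.2+ context and a loader
 * (glad/GLEW) already set up. */
#include <GL/glew.h>

GLuint create_nes_screen_texture(void)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Immutable storage: 1 mip level, two 8-bit unsigned integer channels. */
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG8UI, 256, 240);

    /* Integer formats cannot be linearly filtered; nearest is required. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}
```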

In Vulkan, this format is spelled VK_FORMAT_R8G8_UINT: two 8-bit channels of unsigned integer values.
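In the fragment shader, you declare the texture as an unsigned-integer sampler and unpack the bits yourself. Here is a minimal GLSL sketch; the variable names, the placeholder output, and the meaning assigned to the individual bits are illustrative assumptions, not part of your format:

```glsl
#version 450

// The emulator's screen texture (VK_FORMAT_R8G8_UINT). The 'u' prefix makes
// this an unsigned-integer sampler: no filtering or normalization is applied.
layout(set = 0, binding = 0) uniform usampler2D uScreen;

layout(location = 0) in vec2 vUV;
layout(location = 0) out vec4 outColor;

void main()
{
    // texelFetch reads the exact integer texel at the given coordinate.
    ivec2 coord = ivec2(vUV * vec2(textureSize(uScreen, 0)));
    uvec2 data  = texelFetch(uScreen, coord, 0).rg;

    // Byte 1 "bbbb ssss": two 4-bit palette indices.
    uint bgPalette     = bitfieldExtract(data.x, 4, 4); // high nibble
    uint spritePalette = bitfieldExtract(data.x, 0, 4); // low nibble

    // Byte 2 "bbss rgbp": per-pixel property bits, e.g. the low 'p' bit.
    uint pBit = bitfieldExtract(data.y, 0, 1);

    // Placeholder output: in the real shader these fields would drive the
    // palette lookup and priority logic.
    outColor = vec4(vec3(float(bgPalette) / 15.0), 1.0);
}
```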

The principal issue you're going to have is not the image format or the shader. It's getting the data to the GPU in an efficient way.

In OpenGL, you have no choice but to constantly DMA data from CPU-accessible memory to GPU-accessible textures. In Vulkan, you may not have to do this.
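The OpenGL upload path would look roughly like this each frame (a sketch; `frame` is a hypothetical buffer of 256*240*2 bytes written by the emulator, and `tex` is the GL_RG8UI texture from above):

```c
#include <GL/glew.h>
#include <stdint.h>

/* Sketch: re-upload the emulator's 2-bytes-per-pixel frame into the texture.
 * A pixel buffer object could be added to make the transfer asynchronous. */
void upload_frame(GLuint tex, const uint8_t *frame)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* GL_RG_INTEGER (not GL_RG) is required when the internal format is
     * an integer format like GL_RG8UI. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 240,
                    GL_RG_INTEGER, GL_UNSIGNED_BYTE, frame);
}
```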

Vulkan implementations can allow linear-tiled textures to be stored in memory that is both CPU- and GPU-accessible. They aren't required to provide this, but many can do so. On such implementations, there is no need for a DMA; the GPU can directly read what the CPU has written. You'll still need to double-buffer such images to minimize GPU/CPU synchronization (you write to one image while the GPU reads the previous frame's data), but this ought to improve performance over the DMA version.
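A rough sketch of that path in C (error handling, memory-type selection, and the second image for double-buffering are omitted; the physical-device and device handles are assumed to already exist):

```c
#include <vulkan/vulkan.h>
#include <stdbool.h>

/* Sketch: check whether linear-tiled R8G8_UINT images can be sampled, then
 * create one intended to be bound to host-visible memory and mapped so the
 * CPU can write pixels into it directly. */
bool create_cpu_writable_image(VkPhysicalDevice phys, VkDevice dev, VkImage *out)
{
    /* Not every implementation can sample linear images of this format. */
    VkFormatProperties props;
    vkGetPhysicalDeviceFormatProperties(phys, VK_FORMAT_R8G8_UINT, &props);
    if (!(props.linearTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT))
        return false; /* fall back to the staging/DMA path */

    VkImageCreateInfo info = {
        .sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO,
        .imageType     = VK_IMAGE_TYPE_2D,
        .format        = VK_FORMAT_R8G8_UINT,
        .extent        = { 256, 240, 1 },        /* NES output resolution */
        .mipLevels     = 1,
        .arrayLayers   = 1,
        .samples       = VK_SAMPLE_COUNT_1_BIT,
        .tiling        = VK_IMAGE_TILING_LINEAR, /* row-major, CPU-writable layout */
        .usage         = VK_IMAGE_USAGE_SAMPLED_BIT,
        .sharingMode   = VK_SHARING_MODE_EXCLUSIVE,
        .initialLayout = VK_IMAGE_LAYOUT_PREINITIALIZED,
    };
    if (vkCreateImage(dev, &info, NULL, out) != VK_SUCCESS)
        return false;

    /* Next: pick a memory type with VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT
     * (ideally also HOST_COHERENT), vkAllocateMemory, vkBindImageMemory,
     * then vkMapMemory once and write each frame's pixels into the mapping. */
    VkMemoryRequirements reqs;
    vkGetImageMemoryRequirements(dev, *out, &reqs);
    (void)reqs;
    return true;
}
```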

Of course, you'll still need a codepath for implementations where this is not possible.

Upvotes: 5
