Mike5050

Reputation: 605

DirectX 11 Rendering Pipeline Using Different Shader Programs

I am currently reading Practical Rendering and Computation with Direct3D 11. As someone with minimal DirectX/OpenGL experience, I am finding it very informative, and it is giving me some great guidance. Everything makes sense and seems straightforward (although very heavy).

However, I am currently on Chapter 3 (130+ pages in), and I am still confused about one simple thing.

Shader programs, for example ... what if I want some resources to use one specific shader program, and others to use a different shader program? Or does every single resource get processed the same way? By resources I mean vertex/index data, or Texture1/2/3, or ... say I have a model I am rendering and I want it to use specific shader programs in the pipeline.

Clearly there must be some way to have some input go down a different path for different rendering results.

But thus far, I have not seen it in the book (maybe I will after another 100 pages).

Apologies if this question has already been answered somewhere; I am not too sure how to frame the Google search for something like this.

EDIT: For clarity

Let's say I have a vertex shader called ShaderA_vs.hlsl, and another called ShaderB_vs.hlsl. Now, I have two object models loaded: ModelA and ModelB. I want ModelA to use ShaderA_vs, and ModelB to use ShaderB_vs.

Upvotes: 1

Views: 1639

Answers (1)

Chuck Walbourn

Reputation: 41127

There are really two distinct pathways for GPU shader programming:

Programmable rendering

This model was introduced with Direct3D 8 and was an evolution of the older fixed-function legacy model, which consisted of a "Transform & Light" stage and a "Texture blend cascade". The Vertex Shader was a shader program that replaced "Transform & Light", and the Pixel Shader replaced the "Texture blend cascade".

To render a single object (point, line, or triangle), you set a Vertex Shader, which is run once per vertex of the original data from a vertex buffer (perhaps indexed by an index buffer). A hardware rasterizer then executes a Pixel Shader for every single pixel that is touched by the object as it is drawn on the 2D screen. For that particular combination of shaders and related state, you can draw a single pixel or 4 billion triangles at one time.

If you want to change any state or any aspect of the shader programs themselves, you have to submit a new draw. You can change some or all of the state between each draw, and in some cases you may need to explicitly clear some state to avoid validation warnings from the debug layer.
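To take the question's concrete example: in Direct3D 11 you bind whichever shaders you want on the device context before each draw. Here is a minimal sketch, assuming the shader objects, input layouts, and vertex/index buffers for the two models were already created at load time; all of the names here (shaderA_VS, modelA_VB, and so on) are placeholders, not anything from the book.

```cpp
#include <d3d11.h>

// Created once at load time (placeholders; creation code omitted):
ID3D11VertexShader* shaderA_VS = nullptr;   // compiled from ShaderA_vs.hlsl
ID3D11VertexShader* shaderB_VS = nullptr;   // compiled from ShaderB_vs.hlsl
ID3D11PixelShader*  pixelShader = nullptr;  // pixel shader shared by both models
ID3D11InputLayout*  inputLayoutA = nullptr;
ID3D11InputLayout*  inputLayoutB = nullptr;
ID3D11Buffer*       modelA_VB = nullptr;
ID3D11Buffer*       modelA_IB = nullptr;
ID3D11Buffer*       modelB_VB = nullptr;
ID3D11Buffer*       modelB_IB = nullptr;
UINT strideA = 0, strideB = 0;
UINT modelA_IndexCount = 0, modelB_IndexCount = 0;

void DrawScene(ID3D11DeviceContext* context)
{
    UINT offset = 0;
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // ModelA: bind ShaderA_vs and ModelA's buffers, then draw.
    context->IASetInputLayout(inputLayoutA);
    context->IASetVertexBuffers(0, 1, &modelA_VB, &strideA, &offset);
    context->IASetIndexBuffer(modelA_IB, DXGI_FORMAT_R32_UINT, 0);
    context->VSSetShader(shaderA_VS, nullptr, 0);
    context->PSSetShader(pixelShader, nullptr, 0);
    context->DrawIndexed(modelA_IndexCount, 0, 0);

    // ModelB: rebind whatever differs (here the vertex shader and buffers),
    // then submit a second draw.
    context->IASetInputLayout(inputLayoutB);
    context->IASetVertexBuffers(0, 1, &modelB_VB, &strideB, &offset);
    context->IASetIndexBuffer(modelB_IB, DXGI_FORMAT_R32_UINT, 0);
    context->VSSetShader(shaderB_VS, nullptr, 0);
    context->DrawIndexed(modelB_IndexCount, 0, 0);
}
```

Each draw uses whatever shaders and state are bound at that moment, so per-model shader selection is just a matter of binding before the corresponding draw call.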

Efficient state management across the whole scene is a challenge, so be sure to avoid assuming state, relying on default state you didn't set, and assuming states persist from frame-to-frame between Present calls.
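As an illustration of not relying on persisted state, here is a per-frame sketch that re-sets the render target, viewport, and clears at the top of every frame rather than assuming they survived the previous Present; the function and parameter names are placeholders.

```cpp
#include <d3d11.h>

// Re-set and clear the state this frame depends on instead of assuming it
// is still bound from last frame. All names are placeholders.
void BeginFrame(ID3D11DeviceContext* context,
                ID3D11RenderTargetView* backBufferRTV,
                ID3D11DepthStencilView* depthDSV,
                const D3D11_VIEWPORT& viewport)
{
    const float clearColor[4] = { 0.f, 0.f, 0.f, 1.f };
    context->OMSetRenderTargets(1, &backBufferRTV, depthDSV);
    context->RSSetViewports(1, &viewport);
    context->ClearRenderTargetView(backBufferRTV, clearColor);
    context->ClearDepthStencilView(depthDSV,
        D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
}
```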

Even the modern DirectX pipeline, with all the additions of hardware tessellation, geometry shaders, and per-pixel physically based lighting, is an extension of this same basic concept.

For graphics rendering, this is generally a good conceptual model, and it maps well for people used to thinking in the old T&L + texture-blending model. It can also be extended in numerous ways, and it makes very efficient use of fixed-function hardware at key points to do things that are expensive in general computation, like polygon scanline conversion, z-buffer depth culling, and alpha blending.

Compute

People have used the previous model to do interesting non-graphics work with the traditional programmable pipeline, where you don't really care about the rasterization or what the vertex shader is doing: you just draw a big rectangle over the whole render target and do a bunch of work in the pixel shader. You then treat the resulting render target as some kind of 'generic buffer'.

The problem here is that it's kind of a pain to map problems to this "drawing but doing something else" model, which is where we get "Compute Shaders" (DirectX calls this DirectCompute). In this model, you don't use vertex buffers, index buffers, points/lines/triangles, the rasterizer, or the texture blending at all. You just run a single shader that writes out a buffer. You control how many instances of the shader are run directly via Dispatch.
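A minimal sketch of that model, assuming a compute shader and an unordered access view over an output buffer already exist, and that the shader was compiled with [numthreads(256, 1, 1)]; all names here are placeholders.

```cpp
#include <d3d11.h>

// DirectCompute sketch: no vertex/index buffers and no rasterizer, just a
// compute shader writing into a buffer through a UAV.
void RunCompute(ID3D11DeviceContext* context,
                ID3D11ComputeShader* computeShader,
                ID3D11UnorderedAccessView* outputUAV,
                UINT elementCount)
{
    context->CSSetShader(computeShader, nullptr, 0);
    context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);

    // You control how many shader invocations run directly via Dispatch:
    // here, one thread group per 256 elements.
    const UINT threadsPerGroup = 256;
    context->Dispatch((elementCount + threadsPerGroup - 1) / threadsPerGroup, 1, 1);

    // Unbind the UAV so the buffer can be bound for reading elsewhere
    // without validation warnings.
    ID3D11UnorderedAccessView* nullUAV = nullptr;
    context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
}
```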

Vertex shaders, pixel shaders, compute shaders, etc. can all do basically the same things: looking up parts of resources (textures, or resources treated as generic buffers), doing computation on them, and writing the result to some other video memory. The top-level invocation of the shaders differs because of the implied inputs, but that's about it.

The one thing that compute shaders can do that other kinds of shaders cannot is (limited) communication with other shader instances via a small amount of shared memory. This breaks the extremely powerful multithreading benefit you get from the previous model, but it makes certain problems a lot easier to solve. This is tricky to do without causing major stalls, and you typically have to figure out all the synchronization manually to get valid results.

I should mention that there's a third pathway which is what CUDA does, which is a system that hides all the low-level shader stuff and pretends to be C. It's still doing the same kind of thing as "compute" under the covers.

Upvotes: 1
