v4vendeta

Reputation: 63

Why do more renderers put direct lighting in a compute shader instead of a pixel shader?

In recent years, many rendering teams have moved their direct-lighting stage from a pixel shader to a compute shader. Does that make rendering faster? Is it because async compute lets some compute work run in parallel with raster operations?

Upvotes: 1

Views: 479

Answers (1)

Nicol Bolas

Reputation: 473387

In deferred rendering, you have two passes. The geometry pass generates the material parameters (position, normal, albedo, etc.), while the lighting pass performs lighting computations based on those material parameters, for each light affecting that point on the surface.

When using the graphics pipeline, the lighting pass is "rendered" by drawing either a full-screen quad (or smaller quads that tile the screen, for performance reasons) or bounding volumes such as spheres that cover the screen area affected by the light in question. Either way, these draws are just excuses for executing the fragment shader over some portion of the screen.
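To make this concrete, here is a minimal GLSL sketch of such a fragment-shader lighting pass over a full-screen quad. The G-buffer layout, the `Light` struct, and the attenuation model are illustrative assumptions, not a fixed convention:

```glsl
// Minimal sketch of a deferred lighting pass as a fragment shader.
// G-buffer bindings and the Light layout are assumptions for illustration.
#version 450

layout(binding = 0) uniform sampler2D gPosition; // world-space position
layout(binding = 1) uniform sampler2D gNormal;   // world-space normal
layout(binding = 2) uniform sampler2D gAlbedo;   // surface color

struct Light { vec4 positionRadius; vec4 color; }; // xyz = position, w = radius
layout(std430, binding = 3) readonly buffer Lights { Light lights[]; };
uniform int lightCount;

in vec2 uv;
out vec4 fragColor;

void main() {
    vec3 P      = texture(gPosition, uv).xyz;
    vec3 N      = normalize(texture(gNormal, uv).xyz);
    vec3 albedo = texture(gAlbedo, uv).rgb;

    vec3 result = vec3(0.0);
    for (int i = 0; i < lightCount; ++i) {
        vec3 L = lights[i].positionRadius.xyz - P;
        float dist = length(L);
        if (dist < lights[i].positionRadius.w) {
            // Simple linear falloff; a real renderer would use a proper BRDF.
            float atten = 1.0 - dist / lights[i].positionRadius.w;
            result += albedo * lights[i].color.rgb
                    * max(dot(N, L / dist), 0.0) * atten;
        }
    }
    fragColor = vec4(result, 1.0);
}
```

Note that every fragment walks the entire light list independently; there is no way for neighboring fragments to share work.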

Using a compute shader to do the lighting pass gives a similar effect. However, it also lets you exploit the benefits of being a compute shader. Invocations within a work group can share data storage, for example. This can be particularly useful with SSAO techniques (while not technically direct illumination, SSAO only uses a bounded set of local data to do its computations). A work group can cooperatively load the information needed to do SSAO for all of its invocations, then read that data back from fast local storage to do its job.
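The same cooperative-loading trick applies to the light list itself. In this GLSL sketch (bindings, layout, and lighting model are again illustrative assumptions), the work group stages the light buffer through shared memory in chunks, so each light is fetched from global memory once per 16x16 tile rather than once per pixel:

```glsl
// Sketch of the lighting pass as a compute shader using shared memory.
// Each 16x16 work group stages the light list in chunks; bindings and
// the Light layout are assumptions matching the fragment-shader version.
#version 450
layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 0) uniform sampler2D gPosition;
layout(binding = 1) uniform sampler2D gNormal;
layout(binding = 2) uniform sampler2D gAlbedo;
layout(binding = 4, rgba16f) writeonly uniform image2D outColor;

struct Light { vec4 positionRadius; vec4 color; };
layout(std430, binding = 3) readonly buffer Lights { Light lights[]; };
uniform int lightCount;

shared Light tileLights[256]; // one chunk of lights, staged per work group

void main() {
    ivec2 pix   = ivec2(gl_GlobalInvocationID.xy);
    vec3 P      = texelFetch(gPosition, pix, 0).xyz;
    vec3 N      = normalize(texelFetch(gNormal, pix, 0).xyz);
    vec3 albedo = texelFetch(gAlbedo, pix, 0).rgb;
    vec3 result = vec3(0.0);

    // Process the light list in work-group-sized chunks of 256.
    for (int base = 0; base < lightCount; base += 256) {
        int idx = base + int(gl_LocalInvocationIndex);
        if (idx < lightCount)
            tileLights[gl_LocalInvocationIndex] = lights[idx]; // one load each
        barrier(); // wait until the chunk is fully in shared memory

        int chunkSize = min(256, lightCount - base);
        for (int s = 0; s < chunkSize; ++s) {
            vec3 L = tileLights[s].positionRadius.xyz - P;
            float dist = length(L);
            if (dist < tileLights[s].positionRadius.w) {
                float atten = 1.0 - dist / tileLights[s].positionRadius.w;
                result += albedo * tileLights[s].color.rgb
                        * max(dot(N, L / dist), 0.0) * atten;
            }
        }
        barrier(); // don't overwrite the chunk while others still read it
    }
    imageStore(outColor, pix, vec4(result, 1.0));
}
```

Real tiled-deferred renderers go further and cull the light list per tile in shared memory, so each pixel only shades against lights that can actually reach its tile.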

Note however that this is no guarantee of performance; it is simply one thing that might be helpful. Also, consider that tile-based renderers on mobile chips will perform quite poorly with such a renderer. TBRs gain their advantage when everything can live in tile memory; forcing intermediate data out to main memory so that a compute shader can operate on it basically throws that advantage away.

Upvotes: 2
