Reputation: 8866
I'm raytracing in the WebGL fragment shader, planning on using a dynamically generated fragment shader that contains the objects in my scene. As I add an object to my scene, I will be adding some lines to the fragment shader, so it could get pretty large. How big can it get and still work? Is it dependent on the graphics card?
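Roughly the kind of generation I have in mind (a sketch, not my real code; `buildFragmentShader` and the sphere fields are made up for illustration):

```js
// Sketch: build fragment shader source from a scene description.
// One line of GLSL is appended per object, so the source grows with the scene.
function buildFragmentShader(spheres) {
  const perObjectCode = spheres
    .map(s => `d = min(d, length(p - vec3(${s.x.toFixed(3)}, ${s.y.toFixed(3)}, ${s.z.toFixed(3)})) - ${s.r.toFixed(3)});`)
    .join('\n    ');
  return `
precision mediump float;
uniform vec2 resolution;

float sceneDistance(vec3 p) {
  float d = 1e9;
  ${perObjectCode}
  return d;
}

void main() {
  // ... actual raytracing/shading against sceneDistance() would go here ...
  gl_FragColor = vec4(1.0);
}`;
}

// e.g. buildFragmentShader([{x: 0, y: 0, z: 5, r: 1}, {x: 2, y: 1, z: 6, r: 0.5}])
```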
Upvotes: 4
Views: 1277
Reputation:
There are all kinds of limits on how big shaders can be. It's up to the driver/GPU, but it's also up to the machine and WebGL. Some WebGL implementations (Chrome) run an internal timer; if a single GL command takes too long they'll kill WebGL ("Rats, WebGL hit a snag"). This can happen when a shader takes too long to compile.
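WebGL doesn't expose a queryable maximum source length, but you can at least check the related limits it does report, and you should always check the compile status so an oversized shader fails cleanly instead of silently. Something like this (a sketch):

```js
// Sketch: query shader-related limits and compile defensively.
// There is no queryable "max source length"; these are the closest limits WebGL exposes.
const gl = document.createElement('canvas').getContext('webgl');

console.log('MAX_FRAGMENT_UNIFORM_VECTORS:', gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS));
console.log('MAX_VARYING_VECTORS:', gl.getParameter(gl.MAX_VARYING_VECTORS));
console.log('MAX_TEXTURE_IMAGE_UNITS:', gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS));

function compileShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    // On a too-big shader this is typically where the driver gives up.
    console.error(gl.getShaderInfoLog(shader));
    gl.deleteShader(shader);
    return null;
  }
  return shader;
}
```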
Here's an example of a silly 100k shader where I ran a model through a script to generate a mesh in the shader itself. It runs for me on macOS on my MacBook Pro, and it also runs on my iPhone 6+. But when I try it on Windows, the DirectX driver takes too long trying to optimize it and Chrome kills it.
It's fun to goof off putting geometry in your shader, and it's fun to play with signed distance fields and ray-marching-style stuff, but those techniques are more of a puzzle/plaything. They are not in any way performant. As a typical example, this SDF-based shader runs at about 1 frame per second fullscreen on my laptop, and yet my laptop is capable of displaying all of Los Santos from GTA5 at a reasonable framerate.
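If you haven't seen one, the core of an SDF ray marcher looks roughly like this (a toy sketch with a single sphere and no shading, written as a shader-source string):

```js
// Toy SDF ray marcher (fragment shader source), just to show the shape of the technique.
const raymarchFS = `
precision mediump float;
uniform vec2 resolution;

// Signed distance to a unit sphere at the origin.
float sceneDistance(vec3 p) {
  return length(p) - 1.0;
}

void main() {
  vec2 uv = (gl_FragCoord.xy * 2.0 - resolution) / resolution.y;
  vec3 rayOrigin = vec3(0.0, 0.0, -3.0);
  vec3 rayDir = normalize(vec3(uv, 1.5));

  // March along the ray; every pixel runs this loop, which is why it's slow.
  float t = 0.0;
  float hit = 0.0;
  for (int i = 0; i < 64; i++) {
    float d = sceneDistance(rayOrigin + rayDir * t);
    if (d < 0.001) { hit = 1.0; break; }
    t += d;
    if (t > 20.0) break;
  }
  gl_FragColor = vec4(vec3(hit), 1.0);
}`;
```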
If you want performance, you should really be using more standard techniques: put your geometry in buffers and do forward or deferred rendering.
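For comparison, the standard path is only a handful of calls: upload the geometry once, draw it every frame. A sketch, assuming `gl` is your context and `program` is an already-linked program with an `a_position` attribute:

```js
// Sketch: the standard path. Geometry lives in a buffer, not in the shader source.
const positions = new Float32Array([
  -1, -1,   1, -1,   0, 1,  // one triangle
]);

const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

const loc = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.useProgram(program);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```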
Upvotes: 1
Reputation: 34498
The "safe" answer is that it depends on your hardware and drivers, but in practical terms they can be pretty crazy big. What JustSid said about performance does apply (bigger shader == slower shader) but it sounds like you're not exactly aiming for 60FPS here.
For a more in-depth breakdown of shader limits, check out http://en.wikipedia.org/wiki/High_Level_Shader_Language. The page is about DirectX shaders, but the shader constraints apply to GLSL as well.
Upvotes: 3
Reputation: 25318
You don't want to create massive fragment shaders, since they tend to drain performance very quickly. If possible, do your calculations on the CPU or in the vertex shader rather than once per pixel in the fragment shader.
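For example, anything that varies smoothly across a triangle can be computed once per vertex and interpolated, instead of once per pixel. A rough sketch (diffuse brightness moved into the vertex shader):

```js
// Sketch: compute per-vertex, let the GPU interpolate, read the result per-pixel.
const vertexShaderSource = `
attribute vec3 a_position;
attribute vec3 a_normal;
uniform vec3 u_lightDir;
varying float v_brightness;

void main() {
  // Done once per vertex instead of once per pixel.
  v_brightness = max(dot(normalize(a_normal), normalize(u_lightDir)), 0.0);
  gl_Position = vec4(a_position, 1.0);
}`;

const fragmentShaderSource = `
precision mediump float;
varying float v_brightness;

void main() {
  // The expensive part was already paid for in the vertex shader.
  gl_FragColor = vec4(vec3(v_brightness), 1.0);
}`;
```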
Upvotes: 0