Charlie

Reputation: 4329

What are the key differences between Open Shading Language and GLSL?

I know a little about how GLSL works, but Open Shading Language (OSL) is new to me.

As I understand it, OSL is used as a shading language for film-production-grade CG renderers, and is intended to be a rival to the equivalent language (RSL) in the pioneering RenderMan software suite.

From what little I understand, OSL, like GLSL, is a terse, C-based DSL with a number of primitive types and built-in functions (BIFs) written specifically to handle operations on computer-graphics primitives (surfaces, pixels, vertices, etc.).

But at first glance there appear to be a large number of differences. OSL appears to be much easier to read than GLSL; for a start, named functions appear to be a key feature of OSL.

Another observation is that in GLSL, shaders are meant to be applied only at the point when vertices are drawn (vertex shaders) and when the scene is rasterised (fragment/pixel shaders). OSL, however, appears to allow a great deal of freedom in when shader programs are invoked. To make the vertex/fragment split concrete, a minimal GLSL pair of the kind I mean is sketched below.
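(The uMVP, uLightDir, aPosition and vNormal names here are just made up for illustration; the point is only that each stage is a separate program tied to a fixed point in the pipeline.)

    // Vertex shader: invoked once per vertex while geometry is drawn.
    #version 330 core
    layout(location = 0) in vec3 aPosition;   // per-vertex attribute
    layout(location = 1) in vec3 aNormal;
    uniform mat4 uMVP;                        // constant for the whole draw call
    out vec3 vNormal;                         // interpolated into the fragment stage
    void main() {
        vNormal = aNormal;
        gl_Position = uMVP * vec4(aPosition, 1.0);
    }

    // Fragment shader (a separate program): invoked once per rasterised fragment.
    #version 330 core
    in vec3 vNormal;
    uniform vec3 uLightDir;
    out vec4 fragColor;
    void main() {
        float diff = max(dot(normalize(vNormal), normalize(uLightDir)), 0.0);
        fragColor = vec4(vec3(diff), 1.0);
    }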

There are probably many other differences (there appears to be no reference to 'uniform' variables in OSL, but are they even needed in OSL programs?), but what I'm really after is an erudite comparison of the scope of both languages in relation to each other, and an explanation of what, if anything, could practically be shared between programs in either language.

I've searched a little for information online, but a clear comparison of the scope of the two languages in relation to each other appears to be elusive.

I would appreciate any guidance.

Upvotes: 7

Views: 5361

Answers (1)

user755921

GLSL usually runs in real time on the GPU, while OSL usually runs offline on the CPU. That is a very rough statement, but it explains why no vendor has ever implemented the GLSL noise() primitive on consumer-grade hardware, yet OSL has function calls that effectively integrate over a hemisphere.

Your points about function naming and per-primitive operation in GLSL seem wrong. You can have named functions in GLSL, and with compute shaders[0] you can operate on arbitrary buffer data; a rough sketch follows.
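For example, a compute shader is a plain GLSL program with ordinary named functions that reads and writes whatever buffers you bind. This is only a sketch; the Positions/Results buffer names and the workgroup size are illustrative, not from any particular engine:

    // GLSL 4.3+ compute shader: a named helper function plus arbitrary SSBO data.
    #version 430
    layout(local_size_x = 64) in;

    layout(std430, binding = 0) buffer Positions { vec4 positions[]; };
    layout(std430, binding = 1) buffer Results   { float results[]; };

    // An ordinary named function, just as in C.
    float lengthSquared(vec3 v) {
        return dot(v, v);
    }

    void main() {
        uint i = gl_GlobalInvocationID.x;
        if (i >= uint(positions.length())) return;   // guard against over-dispatch
        results[i] = lengthSquared(positions[i].xyz);
    }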

Note that RSL is now obsolete, as the new RenderMan architecture uses OSL for pattern definitions and C++ for integrator and Bxdf plugins. There are many other renderers, both real-time and offline, that use C++ as their shading language... Arnold and Metal come to mind.

In general, though, I learned the most about shading languages from reading Gritz's and Cortes's books on RSL. They all have similar syntax; it is the fundamentals that are important. You might enjoy the first paper on programmable shading, by Rob Cook:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.93.9645&rep=rep1&type=pdf

Another good book is The Cg Tutorial; Cg is much more similar to GLSL than to OSL:

http://developer.download.nvidia.com/CgTutorial/cg_tutorial_chapter01.html

VEX is a similar vector language which is used for geometric and shading operations in the Houdini suite:

http://www.sidefx.com/docs/houdini/vex/_index

Lastly, now that both Vulkan and OpenGL support the SPIR-V intermediate representation, there will probably be more people designing their own shading languages for real-time use:

https://www.khronos.org/news/press/khronos-releases-opengl-4.6-with-spir-v-support

[0] You can also operate on arbitrary data by using transform feedback (available even in WebGL 2) or simply by treating input textures as global data. This technique was called GPGPU.
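A minimal sketch of the texture-as-data approach, assuming you render a full-screen quad into an offscreen framebuffer and read the result back (uData and uTexelSize are placeholder uniform names):

    // Fragment shader used purely for computation: the input texture is data,
    // the output colour is the result, captured via a framebuffer object.
    #version 330 core
    uniform sampler2D uData;     // values packed into a texture
    uniform vec2 uTexelSize;     // 1.0 / texture resolution
    out vec4 outValue;

    void main() {
        vec2 uv = gl_FragCoord.xy * uTexelSize;
        vec4 sum = vec4(0.0);
        // Example computation: average each value with its 8 neighbours.
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
                sum += texture(uData, uv + vec2(dx, dy) * uTexelSize);
        outValue = sum / 9.0;
    }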

Upvotes: 8
