Reputation: 3381
I want to implement a physical raytracer (i.e. one that traces actual photons with a given wavelength), restricting myself to small scenes (like two spheres and an enclosing box) so I can run experiments. It's not meant to be fast; I'll optimize it later.
I'm currently gathering everything I know about how photons interact with surfaces: a photon either reflects (is absorbed, then emitted again) or refracts, with probabilities based on the surface's absorption spectrum and its reflectivity/refractivity indices, and refraction depends on the wavelength (which naturally results in dispersion), and so on.
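For concreteness, here is a minimal sketch of how I currently picture one such interaction. The material representation, the numbers, and the Cauchy coefficients are all assumptions of mine, not an established API:

```python
import random

# Hypothetical material description: reflectance/transmittance as functions
# of wavelength in nm; whatever probability is left over is absorption.
# The Cauchy-style ior() is what makes refraction wavelength-dependent
# (and therefore produces dispersion).
red_paint = {
    "reflectance":   lambda w: 0.8 if 620 <= w <= 750 else 0.05,
    "transmittance": lambda w: 0.0,
    "ior":           lambda w: 1.5 + 4200.0 / (w * w),  # n(lambda) = A + B/lambda^2
}

def interact(wavelength_nm, surface):
    """Decide one photon's fate at a surface, probabilistically."""
    r = surface["reflectance"](wavelength_nm)
    t = surface["transmittance"](wavelength_nm)
    u = random.random()
    if u < r:
        return ("reflect", None)                 # re-emitted off the surface
    if u < r + t:
        return ("refract", surface["ior"](wavelength_nm))
    return ("absorb", None)                      # the photon's path ends here

print(interact(680, red_paint))  # a "red" photon usually reflects
print(interact(480, red_paint))  # a "blue" photon is usually absorbed
```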
I understand how shooting photons out of emissive materials (like "lights") and letting them bounce around the scene until they happen to land in the camera produces an accurate result but is unacceptably slow, hence the need to do it backwards (shoot photons from the camera).
But I'm having trouble understanding how surface interactions can be modelled "backwards". For instance, if a photon coming from the camera hits the side of a red box, and the photon has a wavelength corresponding to red, it will be reflected, while all other wavelengths are absorbed, which produces a red colour. But is the intensity of that colour decided by taking many samples of very close photons and checking which of them eventually collide with a light and which don't? Because ultimately, a photon either hits a light or it doesn't (after a given number of bounces); there is no notion of a partial collision.
So basically my question is: is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?
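To make the idea concrete, this is roughly the estimator I'm imagining; `trace_reaches_light` and `MAX_BOUNCES` are hypothetical stand-ins for the actual backwards trace:

```python
import random

MAX_BOUNCES = 5  # hypothetical cutoff

def trace_reaches_light(pixel):
    """Stand-in for a real backwards trace: in my scheme, each photon either
    reaches a light within MAX_BOUNCES or it doesn't (a binary outcome)."""
    return random.random() < 0.3  # placeholder probability

def pixel_intensity(pixel, samples=1000):
    # Intensity = fraction of samples whose photon eventually hits a light.
    hits = sum(trace_reaches_light(pixel) for _ in range(samples))
    return hits / samples

print(pixel_intensity(pixel=(120, 80)))
```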
Upvotes: 1
Views: 823
Reputation: 1374
It sounds like you want to do something called path tracing (http://en.wikipedia.org/wiki/Path_tracing), which is like raytracing except that it does not directly sample light sources when a ray from the camera hits a surface (which makes it quite slow, though not as slow as shooting rays "forwards" from the light sources).
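To give you a feel for the structure, here is a toy, grayscale sketch of the path-tracing recursion. The scene, the hit probabilities, and every name in it are placeholder assumptions of mine, not a reference implementation:

```python
import random
from dataclasses import dataclass

MAX_DEPTH = 4

@dataclass
class Hit:
    emission: float  # light emitted at the hit point (grayscale toy value)
    albedo: float    # fraction of incoming light the surface reflects

class ToyScene:
    """Stand-in scene: a random bounce finds the 'lamp' 10% of the time,
    otherwise it hits a grey 'wall'. Purely illustrative."""
    def intersect(self, ray):
        if random.random() < 0.1:
            return Hit(emission=1.0, albedo=0.0)  # hit the lamp
        return Hit(emission=0.0, albedo=0.7)      # hit a wall

def radiance(ray, scene, depth=0):
    """Path-tracing core: follow ONE random bounce per hit; light is only
    picked up when the path happens to reach an emitter."""
    if depth > MAX_DEPTH:
        return 0.0
    hit = scene.intersect(ray)
    if hit.albedo == 0.0:
        return hit.emission  # reached an emitter: stop here
    # emitted light + attenuated light arriving from one sampled direction
    return hit.emission + hit.albedo * radiance(None, scene, depth + 1)

# One pixel = the average of many noisy path samples:
scene = ToyScene()
samples = [radiance(None, scene) for _ in range(10000)]
print(sum(samples) / len(samples))
```

The point is that a single path is an extremely noisy (often zero) estimate; the pixel value only becomes meaningful as the average over many such paths.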
However, you seem to be confusing yourself by thinking of "reverse photons" coming from the camera which you assume already have the properties ("the photon has a wavelength corresponding to red") that you are actually trying to determine in the first place. To wrap your mind around this, you might want to read up on "regular" raytracing first: think of rays from the camera that bounce through the scene up to a certain bounce depth or until they hit an object, at which point they directly sample the light sources to see whether those lights illuminate the object.
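That "directly sample light sources" step at a hit point is a shadow-ray test plus Lambert's cosine term. A minimal sketch, where the vector helpers and the `occluded` visibility query are stand-ins for whatever your scene provides:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def direct_light(hit_point, normal, light_pos, light_intensity, occluded):
    """Shoot a shadow ray from the hit point toward the light; if nothing
    blocks it, shade with Lambert's cosine term. `occluded(origin, dir)`
    stands in for the scene's visibility query."""
    to_light = normalize(sub(light_pos, hit_point))
    # offset the origin slightly along the normal to avoid self-intersection
    origin = tuple(p + 1e-4 * n for p, n in zip(hit_point, normal))
    if occluded(origin, to_light):
        return 0.0  # the hit point is in shadow
    return light_intensity * max(0.0, dot(normal, to_light))

# Example: an unoccluded point on a floor, lit from straight above.
print(direct_light((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 5.0, 0.0), 1.0,
                   occluded=lambda o, d: False))
```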
As for your final question, "Is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?", I'll refer you to http://en.wikipedia.org/wiki/Rendering_equation, where you will find the rendering equation (the general mathematical problem that all 3D graphics algorithms like raytracing try to solve; it is written out after the list below) and a list of its limitations, which answers your question in the negative, i.e. besides the light source, these effects are also involved in deciding the ultimate colour and intensity of a pixel:
- phosphorescence, which occurs when light is absorbed at one moment in time and emitted at a different time,
- fluorescence, where the absorbed and emitted light have different wavelengths,
- interference, where the wave properties of light are exhibited, and
- subsurface scattering, where the spatial locations for incoming and departing light are different. Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque.
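For reference, the equation itself, in the spectral, time-dependent form given on that page (where L_o is outgoing radiance, L_e emitted radiance, L_i incoming radiance, and f_r the BRDF):

```latex
L_o(x, \omega_o, \lambda, t) = L_e(x, \omega_o, \lambda, t)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o, \lambda, t)\,
    L_i(x, \omega_i, \lambda, t)\,(\omega_i \cdot n)\,\mathrm{d}\omega_i
```

Note that the integral runs over the whole hemisphere of incoming directions, so a single binary "did this photon reach a light?" sample is only ever one term of a Monte Carlo estimate of it.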
Upvotes: 2