noOne

Reputation: 70

What is the focal length and image plane distance from this raytracing formula

I have a 4x4 camera matrix built from right, up, forward and position vectors. I raytrace the scene with the following code, which I found in a tutorial but don't entirely understand:

for (int i = 0; i < m_imageSize.width; ++i)
{
    for (int j = 0; j < m_imageSize.height; ++j)
    {
        // Map the pixel center to normalized coordinates in roughly [-0.5, 0.5].
        float u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
        float v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;

        // Build the primary ray through this pixel.
        Ray ray(cameraPosition, normalize(u * cameraRight + v * cameraUp
                + 1 / tanf(m_verticalFovAngleRadian) * cameraForward));
    }
}

I have a couple of questions:

  1. How can I find the focal length of my raytracing camera?
  2. Where is my image plane?
  3. Why does cameraForward need to be multiplied by 1 / tanf(m_verticalFovAngleRadian)?

Upvotes: 1

Views: 1669

Answers (1)

Nico Schertler

Reputation: 32667

Focal length is a property of lens systems. The camera model that this code uses, however, is a pinhole camera, which does not use lenses at all. So, strictly speaking, the camera does not really have a focal length. The corresponding optical properties are instead expressed as the field of view (the angle that the camera can observe; usually the vertical one). You could calculate the focal length of a camera that has an equivalent field of view with the following formula (see Wikipedia):

FOV = 2 * arctan(x / (2 * f))

FOV  diagonal field of view
x    diagonal of the film; by convention 24x36 mm -> x = 43.266 mm
f    focal length

There is no unique image plane. Any plane that is perpendicular to the view direction can be seen as the image plane. In fact, the projected images differ only in their scale.

For your last question, let's take a closer look at the code:

u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;

These formulas calculate u/v coordinates between -0.5 and 0.5 for every pixel, assuming that the entire image fits in the box between -0.5 and 0.5.

u*cameraRight + v*cameraUp 

... is just placing the x/y coordinates of the ray on the pixel.

... + 1 / tanf(m_verticalFovAngleRadian) * cameraForward

... is defining the depth component of the ray and ultimately the depth of the image plane you are using. Basically, this is making the ray steeper or shallower. Assume that you have a very small field of view, then 1/tan(fov) is a very large number. So, the image plane is very far away, which produces exactly this small field of view (when keeping the size of the image plane constant since you already set the x/y components). On the other hand, if the field of view is large, the image plane moves closer. Note that this notion of image plane is only conceptual. As I said, all other image planes are equally valid and would produce the same image. Another way (and maybe a more intuitive one) to specify the ray would be

u * tanf(m_verticalFovAngleRadian) * cameraRight 
+ v * tanf(m_verticalFovAngleRadian) * cameraUp 
+ 1 * cameraForward

As you see, this is exactly the same ray (just scaled). The idea here is to set the conceptual image plane to a depth of 1 and scale the x/y components to adapt the size of the image plane. tan(fov) (with fov being half the field of view) is exactly half the height of the image plane at a depth of 1. Just draw a right triangle to verify that. Note that this code can only produce square image planes. If you want to allow rectangular ones, you need to take the aspect ratio (the ratio of the side lengths) into account.

Upvotes: 3
