yaku

Reputation: 3101

Calculating frame and aspect ratio guides to match cameras

I'm trying to visualize film camera crop and aspect ratio in Three.js. Please bear with me, it's a math problem, and I can't describe it in fewer words...

Instead of just using CameraHelper, I'm using three slightly modified CameraHelper objects, one for each camera. The helper lines can be seen when looking at a camera (as a cone), and when looking through a camera they effectively act as guide lines for that camera.

So based on our actual preferred camera (the frame helper), we need to calculate the monitor camera and adjust its fov so that it exactly fits inside the frame camera. Then we also need to calculate the screen camera and adjust its fov so that the frame camera exactly fits inside it.

My current solution appears almost correct, but there is something wrong. With long lenses (small fov, long focal length) it seems correct.

But with wide lenses (large fov, short focal length) the solution starts to break; there is extra space around the white monitor helper, for example.

So I think I'm calculating the various cameras wrong, although the result seems almost "close enough".

Here's the code that returns the vertical FOV, the horizontal FOV (HFOV) and the aspect ratio, which are then used to configure the cameras and helpers:

// BLUE camera fov, based on physical camera settings (sensor dimensions and focal length)
var getFOVFrame = function() {
  var fov = 2 * Math.atan( sensor_height / ( focal_length * 2 ) ) * ( 180 / Math.PI );
  return fov;
}
var getHFOVFrame = function() {
  return getFOVFrame() * getAspectFrame();
}

// PURPLE screen fov, should be able to contain the frame
var getFOVScreen = function() {
  var fov = getFOVFrame();
  var hfov = fov * getAspectScreen();
  if (hfov < getHFOVFrame()) {
    hfov = getHFOVFrame();
    fov = hfov / getAspectScreen();
  }  
  return fov;
}
var getHFOVScreen = function() {
  return getFOVScreen() * getAspectScreen();
}

// WHITE crop area fov, should fit inside blue frame camera
var getFOVMonitor = function() {
  var fov = getFOVFrame();      
  var hfov = fov * getAspectMonitor();
  if (hfov > getHFOVFrame())   {
    hfov = getHFOVFrame();
    fov = hfov / getAspectMonitor();
  }
  return fov;
}
var getHFOVMonitor = function() {
  return getFOVMonitor() * getAspectMonitor();
}

var getAspectScreen = function() {
  return  screen_width / screen_height;
}

var getAspectFrame = function() {
  return  sensor_width / sensor_height;
}

var getAspectMonitor = function() {
  return monitor_aspect;
}

Why does this produce incorrect results when using large FOV / wide lenses? getFOVScreen and especially getFOVMonitor are the suspects.

Upvotes: 0

Views: 2438

Answers (1)

WestLangley

Reputation: 104763

Your equation var hfov = fov * getAspectScreen(); is not correct.

The relationship between the vertical FOV (vFOV) and the horizontal FOV (hFOV) is given by the following equations:

hFOV = 2 * Math.atan( Math.tan( vFOV / 2 ) * aspectRatio );

and likewise,

vFOV = 2 * Math.atan( Math.tan( hFOV / 2 ) / aspectRatio );

In these equations, vFOV and hFOV are in radians; aspectRatio = width / height.

In three.js, the PerspectiveCamera.fov is the vertical one, and is in degrees.
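Applied to the code in the question, a sketch of the screen camera calculation might look like this (reusing sensor_height, focal_length, getAspectFrame() and getAspectScreen() from the question; screenCamera is just a placeholder for whichever PerspectiveCamera you configure):

// Vertical FOV of the frame (sensor) camera, in radians
var getFOVFrameRad = function () {
  return 2 * Math.atan( sensor_height / ( 2 * focal_length ) );
};

// Horizontal FOV from a vertical FOV and an aspect ratio (width / height)
var vToH = function ( vfov, aspect ) {
  return 2 * Math.atan( Math.tan( vfov / 2 ) * aspect );
};

// Vertical FOV from a horizontal FOV and an aspect ratio
var hToV = function ( hfov, aspect ) {
  return 2 * Math.atan( Math.tan( hfov / 2 ) / aspect );
};

// PURPLE screen camera: keep the frame's vertical FOV unless the frame
// would overflow horizontally; in that case derive the vertical FOV
// from the frame's horizontal FOV instead
var getFOVScreenRad = function () {
  var vfov = getFOVFrameRad();
  var hfovFrame = vToH( vfov, getAspectFrame() );
  var hfovScreen = vToH( vfov, getAspectScreen() );
  if ( hfovScreen < hfovFrame ) {
    vfov = hToV( hfovFrame, getAspectScreen() );
  }
  return vfov;
};

// PerspectiveCamera.fov expects the vertical FOV in degrees (screenCamera is a placeholder)
screenCamera.fov = getFOVScreenRad() * 180 / Math.PI;
screenCamera.updateProjectionMatrix();

The monitor camera works the same way with the comparison reversed. Because Math.tan is nonlinear, the exact conversion and the linear approximation fov * aspect agree closely for small FOVs but diverge for wide ones, which is why the original code only breaks with wide lenses.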

three.js r.59

Upvotes: 6
