wouldnotliketo

Reputation: 323

Three.js: split camera frustum (like in Cascade Shadow Mapping)

There's this technique called Cascade Shadow Mapping (a tutorial I've found useful: link), and I'm trying to do something similar in Three.js, except my case is even easier because the 'light' source is where the camera is.

CSM suggests splitting the camera frustum into several parts and creating a separate orthographic projection matrix for each part. I've been struggling with this. From what I've understood, first I need to find frustum corners, so I tried this:

function cameraToWorld(point, camera) {
    camera.updateWorldMatrix();
    return point.applyMatrix4(camera.matrixWorld);
}

function calculateCameraFrustumCorners(camera) {
    // this camera is an instance of PerspectiveCamera
    const hFOV = 2 * Math.atan(Math.tan(THREE.Math.degToRad(camera.fov) / 2) * camera.aspect);
    const xNear = Math.tan(hFOV / 2) * camera.near;
    const xFar = Math.tan(hFOV / 2) * camera.far;

    const yNear = Math.tan(THREE.Math.degToRad(camera.fov) / 2) * camera.near;
    const yFar = Math.tan(THREE.Math.degToRad(camera.fov) / 2) * camera.far;

    // Note: in three.js camera space the camera looks down -Z,
    // so the near and far planes sit at z = -near and z = -far
    let arr = [new THREE.Vector3( xNear,  yNear, -camera.near),
               new THREE.Vector3(-xNear,  yNear, -camera.near),
               new THREE.Vector3( xNear, -yNear, -camera.near),
               new THREE.Vector3(-xNear, -yNear, -camera.near),
               new THREE.Vector3( xFar,   yFar,  -camera.far),
               new THREE.Vector3(-xFar,   yFar,  -camera.far),
               new THREE.Vector3( xFar,  -yFar,  -camera.far),
               new THREE.Vector3(-xFar,  -yFar,  -camera.far)];

    return arr.map(function (val) {
        return cameraToWorld(val, camera);
    });
}

and then I create a bounding box around frustum corners and use it to create an OrthographicCamera:

function getOrthographicCameraForPerspectiveCamera(camera) {
    const frustumCorners = calculateCameraFrustumCorners(camera);

    // Initialize with +/-Infinity; Number.MIN_VALUE is the smallest
    // positive double, not the most negative number, so it would break
    // the max accumulators below
    let minX = Infinity;
    let maxX = -Infinity;
    let minY = Infinity;
    let maxY = -Infinity;
    let minZ = Infinity;
    let maxZ = -Infinity;
    for (const corner of frustumCorners) {
        minX = Math.min(corner.x, minX);
        maxX = Math.max(corner.x, maxX);
        minY = Math.min(corner.y, minY);
        maxY = Math.max(corner.y, maxY);
        minZ = Math.min(corner.z, minZ);
        maxZ = Math.max(corner.z, maxZ);
    }

    // OrthographicCamera parameters are (left, right, top, bottom, near, far)
    return new THREE.OrthographicCamera(minX, maxX, maxY, minY, minZ, maxZ);
}

and then I'd pass the orthographic camera matrix to the shader and use it for mapping the texture (the 'shadow map' texture, which I'm rendering using the orthographic camera that I get from the function above).

But this doesn't work: the bounding box coordinates I get don't make good parameters for an OrthographicCamera, which usually takes something like width / -2, width / 2, height / 2, height / -2, 0.1, 1000, and that's not at all what this code produces. Do I need to apply another transformation to the bounding box corners, for example getting their coordinates on screen rather than in world space? I'm still not familiar enough with all the coordinate systems at play. Or am I calculating the frustum corners wrong?

Upvotes: 1

Views: 369

Answers (1)

Rabbid76

Reputation: 210918

The viewing volume of an orthographic projection is a cuboid; with a perspective projection it is a frustum (a truncated pyramid).

Regardless of the projection type, in normalized device space the viewing volume is a cube whose left, lower, near corner is (-1, -1, -1) and whose right, top, far corner is (1, 1, 1).

If you want the corner points of the viewing volume, define the points in normalized device space and transform them to world space. The inverse projection matrix (.projectionMatrixInverse) transforms from normalized device space to view space, and the inverse view matrix (.matrixWorld) transforms from view space to world space. Note that Vector3.applyMatrix4 performs the perspective divide, so the w component is handled automatically:

let ndc_corners = [
    [-1,-1,-1], [1,-1,-1], [-1,1,-1], [1,1,-1],
    [-1,-1, 1], [1,-1, 1], [-1,1, 1], [1,1, 1]];

let view_corners = [];
for (let i = 0; i < ndc_corners.length; ++i) {
    let ndc_v = new THREE.Vector3(...ndc_corners[i]);
    view_corners.push(ndc_v.applyMatrix4(camera.projectionMatrixInverse));
}

let world_corners = [];
for (let i = 0; i < view_corners.length; ++i) {
    let view_v = view_corners[i].clone();
    world_corners.push(view_v.applyMatrix4(camera.matrixWorld));
}

In three.js this can be simplified considerably with Vector3.unproject(camera), which applies the inverse projection matrix and the camera's world matrix in one step:

let ndc_corners = [
    [-1,-1,-1], [1,-1,-1], [-1,1,-1], [1,1,-1],
    [-1,-1, 1], [1,-1, 1], [-1,1, 1], [1,1, 1]];

let world_corners = [];
for (let i = 0; i < ndc_corners.length; ++i) {
    let ndc_v = new THREE.Vector3(...ndc_corners[i]);
    world_corners.push(ndc_v.unproject(camera));
}
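To connect this back to the original goal: once you have the eight world-space corners, the usual CSM step is to transform them into the light's view space and fit an axis-aligned box around them there; that box supplies the OrthographicCamera bounds. Below is a minimal sketch of that fitting step, written with plain arrays and a bare column-major 4×4 multiply (the element layout matches THREE.Matrix4.elements) so the math is visible without library types. The name lightViewMatrix is an assumption here, standing for the inverse of the light's matrixWorld:

```javascript
// Apply a column-major 4x4 matrix (same element layout as
// THREE.Matrix4.elements) to a point [x, y, z], including the
// perspective divide by w.
function applyMatrix4(m, [x, y, z]) {
    const w = m[3] * x + m[7] * y + m[11] * z + m[15];
    return [
        (m[0] * x + m[4] * y + m[8]  * z + m[12]) / w,
        (m[1] * x + m[5] * y + m[9]  * z + m[13]) / w,
        (m[2] * x + m[6] * y + m[10] * z + m[14]) / w,
    ];
}

// Fit an axis-aligned box around frustum corners in light view space.
// lightViewMatrix is assumed to be the inverse of the light's matrixWorld.
function lightSpaceBounds(worldCorners, lightViewMatrix) {
    const min = [Infinity, Infinity, Infinity];
    const max = [-Infinity, -Infinity, -Infinity];
    for (const corner of worldCorners) {
        const p = applyMatrix4(lightViewMatrix, corner);
        for (let i = 0; i < 3; ++i) {
            min[i] = Math.min(min[i], p[i]);
            max[i] = Math.max(max[i], p[i]);
        }
    }
    return { min, max };
}
```

With three.js objects you would instead write corner.clone().applyMatrix4(lightViewMatrix) for each corner and then build the camera roughly as new THREE.OrthographicCamera(min[0], max[0], max[1], min[1], -max[2], -min[2]); top comes before bottom, and since view space looks down -z the depth bounds are negated to get near and far distances.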

Upvotes: 1
