Reputation: 5305
I want to augment the image of a stationary webcam with location-based markers. This is to be added to an existing React app that already uses three.js (through react-three-fiber) in other parts, so these technologies should be reused.
While it is quite easy to calculate the positions of the markers (locations known) relative to the camera (location known), I'm struggling with the camera configuration needed to get a good visual match between the "real" objects and the AR markers.
I have created a codesandbox with an artificial example that illustrates the challenge.
Here's my attempt at configuring the camera:
const camera = {
  position: [0, 1.5, 0],
  fov: 85,      // vertical field of view in degrees (three.js PerspectiveCamera)
  near: 0.005,  // near clipping plane
  far: 1000     // far clipping plane
};
const bearing = 109; // degrees
<Canvas camera={camera}>
  <Scene bearing={bearing}/>
</Canvas>
Further down in the scene component I’m rotating the camera according to the bearing of the webcam like so:
...
// Compass bearings increase clockwise from north, while a positive rotation.y
// turns the camera counter-clockwise (seen from above), hence the negation.
const rotation = { x: 0, y: bearing * -1, z: 0 };
// Convert degrees to radians before assigning to the camera's Euler angles.
camera.rotation.x = (rotation.x * Math.PI) / 180;
camera.rotation.y = (rotation.y * Math.PI) / 180;
camera.rotation.z = (rotation.z * Math.PI) / 180;
...
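For reference, the same conversion can also be written with three.js's built-in degree helper; this is just a minimal sketch, assuming camera here is the actual camera instance (e.g. obtained via useThree()) and bearing is the value defined above:

import * as THREE from "three";

// Apply only a yaw of -bearing; pitch and roll stay at zero.
camera.rotation.set(0, THREE.MathUtils.degToRad(-bearing), 0);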
Any tips or thoughts on how to configure the camera so that the three.js boxes line up with the real-life objects?
Upvotes: 0
Views: 2354
Reputation: 76
As a GIS developer, I can give a few hints on this issue:
Project the geographic coordinates of the webcam and of each marker from longitude/latitude (EPSG:4326) into a metric projection such as Web Mercator (EPSG:3857), for example with proj4:
var xy = proj4( "EPSG:4326", "EPSG:3857", [ lon, lat ] );
With both points in meters, the marker's position relative to the camera is the difference of the projected coordinates, mapped onto the three.js axes:
var position = new THREE.Vector3( camXY[ 0 ] - markerXY[ 0 ], 0.0, markerXY[ 1 ] - camXY[ 1 ] );
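Put together, a minimal sketch of these two hints could look like the following (the camLonLat / markerLonLat coordinates are hypothetical placeholders, not taken from the question):

import proj4 from "proj4";
import * as THREE from "three";

const camLonLat = [7.0, 51.0];      // hypothetical webcam location [lon, lat]
const markerLonLat = [7.001, 51.0]; // hypothetical marker location [lon, lat]

// Project both points from WGS84 lon/lat (EPSG:4326) into Web Mercator
// meters (EPSG:3857) so their difference can be used as a scene offset.
const camXY = proj4("EPSG:4326", "EPSG:3857", camLonLat);
const markerXY = proj4("EPSG:4326", "EPSG:3857", markerLonLat);

// Offset of the marker from the camera, mapped onto three.js axes as in the
// hint above; y = 0 keeps the marker at ground level.
const markerPosition = new THREE.Vector3(
  camXY[0] - markerXY[0],
  0.0,
  markerXY[1] - camXY[1]
);

The resulting vector can then be used as the position of a marker mesh in the scene.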
Upvotes: 6