Reputation:
So I am trying to wrap my head around the transformation from a frame taken via the MediaFrameReference class. What I have is an image with an associated spatial coordinate system and the intrinsic camera parameters. To calculate the real-world coordinates (X, Y, Z) of a pixel (U, V) in the image, I thought it would be sufficient to get hold of the 4x4 matrix that represents the transformation from the RGB camera to the world.
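To make clear what I am after, here is a rough sketch of the calculation I have in mind (just a pinhole-model sketch; fx, fy, cx, cy stand for the intrinsics, cameraToWorld for the transform I am still missing, and the depth of the pixel is assumed to be known):

using UnityEngine;

public static class PixelToWorld
{
    // Back-project a pixel (u, v) with a known depth into world space.
    // fx, fy, cx, cy are the camera intrinsics (focal lengths and principal point),
    // cameraToWorld is the 4x4 camera-to-world transform I am trying to obtain.
    public static Vector3 Unproject(float u, float v, float depth,
                                    float fx, float fy, float cx, float cy,
                                    Matrix4x4 cameraToWorld)
    {
        // Pinhole model: camera-space position of the pixel at the given depth.
        // (Sign conventions may need adjusting depending on the camera's coordinate frame.)
        Vector3 cameraPoint = new Vector3(
            (u - cx) / fx * depth,
            (v - cy) / fy * depth,
            depth);

        // Transform the camera-space point into world space.
        return cameraToWorld.MultiplyPoint(cameraPoint);
    }
}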
To get that transform I found the TryGetTransformTo(SpatialCoordinateSystem target) method. If I understand the documentation correctly, the coordinate system is attached to the image, so I would call
image.CoordinateSystem.TryGetTransformTo(worldCoordinateSystem)
But I just can't find the proper way to get hold of the worldCoordinateSystem.
Apparently there was a part of the locatable camera documentation covering this, but it has since been removed.
I checked the example from Microsoft (HolographicFaceTracking), but I am horribly bad at interpreting C++ code, so it was not much use to me.
At Sketchy Experiments with HoloLens the author used
var unityWorldCoordinateSystem = Marshal.GetObjectForIUnknown(WorldManager.GetNativeISpatialCoordinateSystemPtr()) as SpatialCoordinateSystem;
, but that API is declared obsolete.
Using this information and some fairly thorough googling, I found this link, with a similar question that was never answered.
The question I have is: what is the most efficient way to acquire the spatial coordinate system I need? I am using Unity 2018.4.22f1 and Visual Studio 2019.
Any help is very much appreciated. Kind regards.
Upvotes: 3
Views: 2698
Reputation: 2900
I recommend using the PhotoCapture class to capture the picture; the TryGetCameraToWorldMatrix method of PhotoCaptureFrame makes it easy to acquire the Matrix4x4 populated with the camera-to-world matrix at the time the photo was captured. You can refer to this link to learn more about how to use it: UnityEngine.Windows.WebCam.PhotoCapture
Following that guide: when you call the async method TakePhotoAsync(onCapturedPhotoToMemoryCallback), you pass in a callback that is invoked once the photo has been stored to memory. In that callback you receive a PhotoCaptureFrame instance as a parameter. Finally, call PhotoCaptureFrame.TryGetCameraToWorldMatrix on it to get the matrix populated with the camera-to-world transform at the time the photo was captured.
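A rough sketch of that flow could look like this (untested and with error handling omitted; depending on your Unity version the namespace may be UnityEngine.Windows.WebCam or UnityEngine.XR.WSA.WebCam):

using System.Linq;
using UnityEngine;
using UnityEngine.Windows.WebCam;

public class CameraToWorldExample : MonoBehaviour
{
    PhotoCapture photoCapture;

    void Start()
    {
        // Create the PhotoCapture object (false = do not render holograms into the photo).
        PhotoCapture.CreateAsync(false, OnPhotoCaptureCreated);
    }

    void OnPhotoCaptureCreated(PhotoCapture captureObject)
    {
        photoCapture = captureObject;

        // Pick the highest supported resolution.
        Resolution resolution = PhotoCapture.SupportedResolutions
            .OrderByDescending(r => r.width * r.height).First();

        CameraParameters parameters = new CameraParameters
        {
            hologramOpacity = 0.0f,
            cameraResolutionWidth = resolution.width,
            cameraResolutionHeight = resolution.height,
            pixelFormat = CapturePixelFormat.BGRA32
        };

        photoCapture.StartPhotoModeAsync(parameters, OnPhotoModeStarted);
    }

    void OnPhotoModeStarted(PhotoCapture.PhotoCaptureResult result)
    {
        // Take the photo; the callback receives the frame in memory.
        photoCapture.TakePhotoAsync(OnCapturedPhotoToMemory);
    }

    void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame frame)
    {
        // Camera-to-world transform at the moment the photo was taken.
        if (frame.TryGetCameraToWorldMatrix(out Matrix4x4 cameraToWorld))
        {
            Debug.Log("Camera to world: " + cameraToWorld);
        }

        // The projection matrix carries the intrinsics needed to map pixels to rays.
        if (frame.TryGetProjectionMatrix(out Matrix4x4 projection))
        {
            Debug.Log("Projection: " + projection);
        }

        frame.Dispose();
        photoCapture.StopPhotoModeAsync(OnPhotoModeStopped);
    }

    void OnPhotoModeStopped(PhotoCapture.PhotoCaptureResult result)
    {
        photoCapture.Dispose();
        photoCapture = null;
    }
}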
Besides, an earlier version of the MR documentation demonstrated how to find or draw at a specific 3D location on a camera image with shader code; you can find it in the GitHub commit history: mixed-reality-docs/locatable-camera.md.
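If you want to turn a pixel from that photo into a world-space ray yourself (the same idea the old docs illustrate), a sketch could look like the following; the image-space y-flip and the handling of the principal point terms are assumptions you may need to adjust for your setup:

using UnityEngine;

public static class CameraImageRays
{
    // Convert a pixel (u, v) in image coordinates (origin top-left) into a world-space ray,
    // using the projection and camera-to-world matrices from PhotoCaptureFrame.
    public static Ray PixelToWorldRay(float u, float v, int width, int height,
                                      Matrix4x4 projection, Matrix4x4 cameraToWorld)
    {
        // Normalized device coordinates in [-1, 1]; image y grows downwards, NDC y upwards.
        float ndcX = 2.0f * (u + 0.5f) / width - 1.0f;
        float ndcY = 1.0f - 2.0f * (v + 0.5f) / height;

        // Unproject to a camera-space direction. The camera looks down -Z in Unity's convention;
        // m00/m11 are the focal terms, m02/m12 the principal point offsets of the projection matrix.
        Vector3 dirCamera = new Vector3(
            (ndcX + projection.m02) / projection.m00,
            (ndcY + projection.m12) / projection.m11,
            -1.0f);

        Vector3 origin = cameraToWorld.MultiplyPoint(Vector3.zero);
        Vector3 direction = cameraToWorld.MultiplyVector(dirCamera).normalized;
        return new Ray(origin, direction);
    }
}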
Upvotes: 2