Reputation: 1201
I have one pixel in a 1920×1080 color frame, and I need to know its location in camera space in meters. I know I should use the CoordinateMapper class, but the method CoordinateMapper.MapColorFrameToCameraSpace
documented here takes a depth frame as input. I'm confused: shouldn't the input be a color frame? After all, I want to map between the color frame and camera space.
I think something eludes me here; I'd appreciate it if anyone could clear this up. Thank you!
Upvotes: 1
Views: 2962
Reputation: 1387
The reason it doesn't ask for the color frame is that it doesn't need it. This method maps every possible pixel position in a color frame to its corresponding 3D coordinate, and for that it needs the depth frame: that's the one containing the 3D depth information that lets the software figure out where in 3D space each point of the 2D image lies (I don't know exactly how they do it, but I imagine it could be done with raycasting). If you think about it, there is no way to reconstruct the 3D world from a simple image, which only contains color information at each point. If there were, there would be no need for Kinect at all, right? We could get depth information from ordinary cameras :)
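To make this concrete, here's a rough sketch of how the call looks with the Kinect for Windows SDK v2 (I'm assuming the standard `Microsoft.Kinect` API here; `depthData` stands in for wherever you copy your latest depth frame, and the pixel coordinates are just an example):

```csharp
using Microsoft.Kinect;

KinectSensor sensor = KinectSensor.GetDefault();
CoordinateMapper mapper = sensor.CoordinateMapper;

// depthData: the ushort[] you copied from the latest DepthFrame
// via DepthFrame.CopyFrameDataToArray (depth frames are 512x424).
ushort[] depthData = new ushort[512 * 424];

// One CameraSpacePoint per color pixel (1920 x 1080).
CameraSpacePoint[] cameraPoints = new CameraSpacePoint[1920 * 1080];
mapper.MapColorFrameToCameraSpace(depthData, cameraPoints);

// Look up the 3D position (in meters) of color pixel (x, y).
int x = 960, y = 540;
CameraSpacePoint p = cameraPoints[y * 1920 + x];
// p.X, p.Y, p.Z are floats in meters; they come back as
// negative infinity when no depth reading maps to that color pixel.
```

Note that the method fills in camera-space points for *all* color pixels in one call, so you index into the result array with your pixel of interest rather than mapping pixels one at a time.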
Hope my answer helped you understand; if something isn't clear, feel free to ask.
Upvotes: 0
Reputation: 164
This is more a comment than an answer (but I don't have the rep to comment):
I believe the reason it requires a depth frame and not just a color frame is that camera space is three-dimensional, and a 2D pixel location alone can't tell you where a point sits in 3D — it needs the depth.
Upvotes: 0
Reputation: 765
Check this out... This code is something I built for Halloween. It demonstrates (sort of) what you're looking for. The comments in the code help too.
http://github.com/IntStarFoo/KinectAStare
http://github.com/IntStarFoo/KinectAStare/blob/master/ViewModels/KinectBaseViewModel.cs
TrackedHead = body.Joints[JointType.Head].Position;
//This is an 'aproxometery' http://trailerpark.wikia.com/wiki/Rickyisms
// of the tracking direction to be applied to the eyeballs on
// the screen.
TrackedHeadX = (int)(TrackedHead.X * 10);
TrackedHeadY = (int)(TrackedHead.Y * -10);
// Really, one should map the CameraSpacePoint to
// the angle between the location of the eyes on
// the physical screen and the tracked point. And stuff.
//This is the TrackedHead Position (in Meters)
//The origin (x=0, y=0, z=0) is located at the center of the IR sensor on Kinect
//X grows to the sensor’s left
//Y grows up (note that this direction is based on the sensor’s tilt)
//Z grows out in the direction the sensor is facing
//1 unit = 1 meter
//Body
//body.Joints[JointType.Head].Position.X;
//body.Joints[JointType.Head].Position.Y;
//body.Joints[JointType.Head].Position.Z;
//Kinect (0,0,0)
//Screen Eyes (?,?,?)
Upvotes: 0