Reputation: 318
I'm building an interactive floor. The main idea is to match the detections made with an Xtion camera to objects I draw in a floor projection, and have them follow the person.
I also detect the projection area on the floor, which translates to a polygon. The camera can detect people outside this "screen" area.
The problem is that the algorithm detects the topmost part of the person under it using depth data, and because of the angle between that point and the camera, that point isn't directly above the person's feet.
I know the distance to the floor and the height of the person detected. I also know that the camera is not perpendicular to the floor, but I don't know its tilt angle.
My question is how can I project that 3D point onto the polygon on the floor?
I'm hoping someone can point me in the right direction. I've been reading about camera projections but I'm not seeing how to use it in this particular problem.
Thanks in advance
UPDATE:
With the answer from Diego O.d.L I was able to get an almost perfect detection. I'll write down the steps I used, for those who might be looking for the same solution (I won't go into much detail on how the detection is made):
Step 1 : Calibration
Here I grab some color and depth frames from the camera, using OpenNI, with the projection area cleared.
The projection area is detected on the color frames.
I then convert the detected points to real-world coordinates (using OpenNI's CoordinateConverter). With these real-world points I find the plane that best fits them.
Step 2: Detection
I run the detection algorithm on the depth frames to get new person detections and to track them.
These detection points are converted to real world coordinates and projected to the plane previously computed. This corrects the offset between the person's height and the floor.
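The plane fit and the projection in the two steps above can be sketched with NumPy (the function names are my own illustration, not code from my project; real code would feed in the converted real-world points):

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points.
    Returns (centroid, unit normal): the normal is the singular
    vector with the smallest singular value of the centered cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

def project_to_plane(point, centroid, normal):
    """Orthogonal projection of a 3D point onto the plane:
    subtract the point's offset along the plane normal."""
    p = np.asarray(point, dtype=float)
    return p - np.dot(p - centroid, normal) * normal
```

With the floor plane fitted once during calibration, every tracked head point can be dropped onto the floor with a single `project_to_plane` call per frame.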
Hope this helps. Thank you again for the answers.
Upvotes: 2
Views: 765
Reputation: 183
Work with the camera coordinate system initially. I'm assuming you don't have problems converting from (row, column, distance) to a real-world system aligned with the camera axes (x, y, z):
Calculate the floor plane from three or more points (more points for robustness) in camera coordinates (x, y, z), using your favorite fitting algorithm.
Then find the projection of your head point onto the floor plane.
Finally, you can convert it to the floor coordinate system, or just keep it in the camera system.
From the description of your intended application, recovering the image coordinates is probably the more useful option.
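A minimal sketch of that last step, converting a camera-space point into a floor-aligned coordinate system (NumPy, with names of my own choosing; I assume the plane normal is already unit length):

```python
import numpy as np

def floor_basis(normal):
    """Build an orthonormal basis (u, v, n) where u and v span
    the floor plane and n is its unit normal."""
    n = np.asarray(normal, dtype=float)
    # Seed with any axis not (nearly) parallel to the normal.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, n)) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    u = seed - np.dot(seed, n) * n   # remove the normal component
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    return u, v, n

def camera_to_floor(point, plane_point, u, v, n):
    """Express a camera-space point in floor coordinates:
    (position along u, position along v, height above the plane)."""
    d = np.asarray(point, dtype=float) - np.asarray(plane_point, dtype=float)
    return np.array([np.dot(d, u), np.dot(d, v), np.dot(d, n)])
```

Zeroing the last component of the result gives the ground projection in floor coordinates.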
Upvotes: 1
Reputation: 179930
This type of problem usually benefits from clearly defining the variables.
In this case, you have a head at physical position {x, y, z} and you want the ground projection {x, y, 0}. That's trivial, but your camera gives you {u, v, d} (d being depth) and you need to transform that to {x, y, z}.
The easiest way to find the transform for a given camera placement may be to simply put known markers on the floor at {0,0,0}, {1,0,0} and {0,1,0}, and see where they show up in your camera.
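That marker approach can be sketched like this (NumPy; `calibrate_from_markers` and the sample marker positions are my own illustration of the idea, not tested against a real Xtion):

```python
import numpy as np

def calibrate_from_markers(m000, m100, m010):
    """Recover the floor frame from camera-space observations of
    markers at floor coordinates {0,0,0}, {1,0,0} and {0,1,0}.
    Returns (origin, R) with the floor axes as the rows of R."""
    origin = np.asarray(m000, dtype=float)
    x = np.asarray(m100, dtype=float) - origin
    x /= np.linalg.norm(x)
    y = np.asarray(m010, dtype=float) - origin
    y -= np.dot(y, x) * x            # re-orthogonalize against x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    return origin, np.vstack([x, y, z])

def to_floor(point, origin, rot):
    """Map a camera-space point to floor coordinates; the ground
    projection of a head at (x, y, z) is then just (x, y, 0)."""
    return rot @ (np.asarray(point, dtype=float) - origin)
```

Once `origin` and `rot` are known, every head point from the depth camera maps to floor coordinates in one line per frame.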
Upvotes: 0