Reputation: 81
I'm working with the new face tracking SDK for Kinect (the official Microsoft one), and I noticed a difference in detection between the C++ and the C#/WPF examples: the first is much faster at recognition than the second (the one I actually want to use). In the C++ version face tracking starts almost immediately, while in the WPF one it starts ONLY when my entire body (so the entire skeleton) is in the Kinect's FOV.
Has anyone figured out why? I also noticed that the provided SkeletonFrame shows the property TrackingMode = Default, even though I set the Kinect skeleton stream to Seated.
colorImageFrame.CopyPixelDataTo(this.colorImage);
depthImageFrame.CopyPixelDataTo(this.depthImage);
skeletonFrame.CopySkeletonDataTo(this.skeletonData);

// Update the list of trackers and the trackers with the current frame information
foreach (Skeleton skeleton in this.skeletonData)
{
    if (skeleton.TrackingState == SkeletonTrackingState.Tracked
        || skeleton.TrackingState == SkeletonTrackingState.PositionOnly)
    {
        // We want to keep a record of any skeleton, tracked or untracked.
        if (!this.trackedSkeletons.ContainsKey(skeleton.TrackingId))
        {
            this.trackedSkeletons.Add(skeleton.TrackingId, new SkeletonFaceTracker());
        }

        // Give each tracker the updated frame.
        SkeletonFaceTracker skeletonFaceTracker;
        if (this.trackedSkeletons.TryGetValue(skeleton.TrackingId, out skeletonFaceTracker))
        {
            skeletonFaceTracker.OnFrameReady(this.Kinect,
                                             colorImageFormat,
                                             colorImage,
                                             depthImageFormat,
                                             depthImage,
                                             skeleton);
            skeletonFaceTracker.LastTrackedFrame = skeletonFrame.FrameNumber;
        }
    }
}
The code is the one provided by Microsoft with the 1.5 SDK.
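For completeness, this is roughly how I enable the streams and the seated mode (a minimal sketch, not the sample's exact initialization code; "sensor" is just a placeholder for my KinectSensor instance, and the stream formats are simply the ones I happen to use):

using Microsoft.Kinect;

// Enable the streams the face tracker needs and switch the skeleton stream
// to seated mode (this is the setting mentioned above).
sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
sensor.DepthStream.Enable(DepthImageFormat.Resolution320x240Fps30);
sensor.SkeletonStream.Enable();
sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
sensor.Start();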
Upvotes: 2
Views: 2390
Reputation: 41
I finally figured it out and made a post on the MSDN forums about what else needs to be done to get this working.
Hope that helps!
Upvotes: 1
Reputation: 81
I found some information on other forums, specifically here (thanks to this guy (blog)):
Basically, in the C++ example all the available face tracking methods are used, both color+depth and color+depth+skeleton, while in the C# one only the latter is used. So it only starts tracking when you stand up.
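To make the difference concrete, these are the two Track overloads involved (just a sketch: faceTracker, the image buffers, and skeletonOfInterest are placeholders, but both overloads live in Microsoft.Kinect.Toolkit.FaceTracking):

// Color + depth + skeleton: what the WPF sample uses. It needs a tracked
// skeleton, so face tracking cannot start before the skeleton stream sees you.
FaceTrackFrame frameWithSkeleton = faceTracker.Track(
    colorImageFormat, colorImage, depthImageFormat, depthImage, skeletonOfInterest);

// Color + depth only (with an optional region of interest): no skeleton is
// required, so tracking can start as soon as a face is in the FOV.
FaceTrackFrame frameColorDepthOnly = faceTracker.Track(
    colorImageFormat, colorImage, depthImageFormat, depthImage,
    Microsoft.Kinect.Toolkit.FaceTracking.Rect.Empty);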
I did some tests, but the other method is still not working for me. I made some modifications to the code, but with no luck. Here is my modification:
internal void OnFrameReady(KinectSensor kinectSensor, ColorImageFormat colorImageFormat, byte[] colorImage, DepthImageFormat depthImageFormat, short[] depthImage)
{
    if (this.faceTracker == null)
    {
        try
        {
            this.faceTracker = new Microsoft.Kinect.Toolkit.FaceTracking.FaceTracker(kinectSensor);
        }
        catch (InvalidOperationException)
        {
            // During some shutdown scenarios the FaceTracker
            // is unable to be instantiated. Catch that exception
            // and don't track a face.
            //Debug.WriteLine("AllFramesReady - creating a new FaceTracker threw an InvalidOperationException");
            this.faceTracker = null;
        }
    }

    if (this.faceTracker != null)
    {
        FaceTrackFrame frame = this.faceTracker.Track(
            colorImageFormat,
            colorImage,
            depthImageFormat,
            depthImage,
            Microsoft.Kinect.Toolkit.FaceTracking.Rect.Empty);
            //new Microsoft.Kinect.Toolkit.FaceTracking.Rect(100,100,500,400));

        this.lastFaceTrackSucceeded = frame.TrackSuccessful;
        if (this.lastFaceTrackSucceeded)
        {
            if (faceTriangles == null)
            {
                // only need to get this once. It doesn't change.
                faceTriangles = frame.GetTriangles();
            }

            this.facePointsProjected = frame.GetProjected3DShape();
            this.rotationVector = frame.Rotation;
            this.translationVector = frame.Translation;
            this.faceRect = frame.FaceRect;
            this.facepoints3D = frame.Get3DShape();
        }
    }
}
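For context, I call this overload from my AllFramesReady handler roughly like this (again just a sketch of my own wiring, not part of the SDK sample; skeletonFaceTracker is a single tracker instance I keep around instead of the per-skeleton dictionary):

private void OnAllFramesReady(object sender, AllFramesReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (colorFrame == null || depthFrame == null)
        {
            return;
        }

        colorFrame.CopyPixelDataTo(this.colorImage);
        depthFrame.CopyPixelDataTo(this.depthImage);

        // Call the skeleton-less overload so tracking can run even before
        // the skeleton stream reports a tracked body.
        this.skeletonFaceTracker.OnFrameReady(
            this.Kinect,
            colorFrame.Format,
            this.colorImage,
            depthFrame.Format,
            this.depthImage);
    }
}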
frame.TrackSuccessful is always false. Any idea?
Upvotes: 1