Reputation: 1247
Is there any documentation explaining how I should use DTW (dynamic time warping) with the Kinect? I need to record a gesture (as in this demo) and later use the recorded gesture to trigger a command via SimpleOpenNI. I've downloaded the KinectSpace code (a .pde file), but I'm having trouble understanding how it is supposed to work.
From Wikipedia:
int DTWDistance(char s[1..n], char t[1..m], int w) {
    declare int DTW[0..n, 0..m]
    declare int i, j, cost

    w := max(w, abs(n-m)) // adapt window size (*)

    for i := 0 to n
        for j := 0 to m
            DTW[i, j] := infinity
    DTW[0, 0] := 0

    for i := 1 to n
        for j := max(1, i-w) to min(m, i+w)
            cost := d(s[i], t[j])
            DTW[i, j] := cost + minimum(DTW[i-1, j  ],   // insertion
                                        DTW[i  , j-1],   // deletion
                                        DTW[i-1, j-1])   // match

    return DTW[n, m]
}
What is the meaning of return DTW[n, m]?
Should all the gestures be evaluated during the draw() method call? Can any optimisation be applied here?
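For reference, this is how I currently read that pseudocode as a Processing/Java function (just a sketch of my understanding, not a confirmed implementation; the pseudocode leaves the distance function d() unspecified, so I'm assuming each gesture frame is a single joint position stored as a PVector and using Euclidean distance between frames):

// Sketch: a direct Processing/Java reading of the Wikipedia pseudocode.
// Assumption: each frame of a gesture is one joint position (PVector),
// and d() is the Euclidean distance between two frames.
float dtwDistance(PVector[] s, PVector[] t, int w) {
  int n = s.length;
  int m = t.length;
  float[][] DTW = new float[n + 1][m + 1];

  w = max(w, abs(n - m)); // adapt window size

  for (int i = 0; i <= n; i++) {
    for (int j = 0; j <= m; j++) {
      DTW[i][j] = Float.POSITIVE_INFINITY;
    }
  }
  DTW[0][0] = 0;

  for (int i = 1; i <= n; i++) {
    for (int j = max(1, i - w); j <= min(m, i + w); j++) {
      float cost = PVector.dist(s[i - 1], t[j - 1]); // d(s[i], t[j]), arrays are 0-indexed here
      DTW[i][j] = cost + min(DTW[i - 1][j],      // insertion
                             DTW[i][j - 1],      // deletion
                             DTW[i - 1][j - 1]); // match
    }
  }

  // DTW[n][m] is the total cost of the best alignment between the two
  // sequences: the lower it is, the more similar the gestures.
  return DTW[n][m];
}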
Upvotes: 0
Views: 1943
Reputation: 1247
An implementation using the Kinect and DTW with Processing:
gh/jonathansp/KinectRemoteControl
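The general pattern is to record each gesture as a sequence of joint positions and, at runtime, compare the most recent frames against every recorded template with DTW; the template with the lowest cost below a threshold wins and triggers the command. Regarding the draw() question: comparing on every frame works but is wasteful, so it is cheaper to compare only every few frames or when the joint stops moving. Below is a minimal sketch of that loop; it is not the actual code from the repository above, and names such as recordedGestures, liveBuffer and dtwDistance() are illustrative:

// Illustrative sketch, not the repository's actual code.
ArrayList<PVector[]> recordedGestures = new ArrayList<PVector[]>(); // templates captured earlier
ArrayList<PVector> liveBuffer = new ArrayList<PVector>();           // rolling window of live joint positions
int maxFrames = 90;     // keep roughly the last 3 seconds at 30 fps
float threshold = 50;   // maximum DTW cost accepted as a match

void draw() {
  // update the SimpleOpenNI context and read the tracked joint here
  PVector joint = new PVector(); // fill with the x, y, z of the tracked joint
  liveBuffer.add(joint);
  if (liveBuffer.size() > maxFrames) {
    liveBuffer.remove(0);
  }

  // checking only every 10th frame keeps the cost of the O(n*m) DTW comparisons down
  if (frameCount % 10 == 0 && !recordedGestures.isEmpty()) {
    PVector[] live = liveBuffer.toArray(new PVector[0]);
    int best = -1;
    float bestCost = threshold;
    for (int g = 0; g < recordedGestures.size(); g++) {
      float cost = dtwDistance(recordedGestures.get(g), live, 10);
      if (cost < bestCost) {
        bestCost = cost;
        best = g;
      }
    }
    if (best >= 0) {
      println("matched gesture " + best + " with cost " + bestCost);
      // trigger the corresponding command here
    }
  }
}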
Upvotes: 0
Reputation: 2854
amnon.owed just posted this great tutorial on the Processing forum. Maybe it can help you:
http://www.creativeapplications.net/processing/kinect-physics-tutorial-for-processing/
Here is part of his post:
My latest tutorial for CreativeApplications.net has just gone live. It's about using a Kinect to interact with geometry on your screen. Several Processing libraries are used (SimpleOpenNI, v3ga, Toxiclibs & PBox2D) to achieve this effect. It's a hands-on tutorial, so the main content is made up of three fully commented code examples. These examples will also show you how you can turn a silhouette blob into a polygon, which is useful for many things, even besides 2D physics interaction.
Upvotes: 1