dazz

Reputation: 21

Kinect & Processing - Convert position of joint to mouse x and mouse y?

I'm currently using an Xbox Kinect model 1414 and Processing 2.2.1. I'm hoping to use the right hand as a mouse to guide a character through the screen.

I managed to draw an ellipse that follows the right hand joint on a Kinect skeleton. How can I get the position of that joint so I could use it in place of mouseX and mouseY if needed?

Below is the code that will track your right hand and draw a red ellipse over it:

import SimpleOpenNI.*;

SimpleOpenNI  kinect;

void setup()
{
  // instantiate a new context
  kinect = new SimpleOpenNI(this);
  kinect.setMirror(!kinect.mirror());
  // enable depthMap generation
  kinect.enableDepth();

  // enable skeleton generation for all joints
  kinect.enableUser();

  smooth();
  noStroke();

  // create a window the size of the depth information
  size(kinect.depthWidth(), kinect.depthHeight());
}



void draw()
{
  // update the camera...must do
  kinect.update();

  // clear the frame
  background(0);

  // draw depth image...optional
  image(kinect.depthImage(), 0, 0);

  // check if the skeleton is being tracked for user 1 (the first user that was detected)
  if (kinect.isTrackingSkeleton(1))
  {
    int joint = SimpleOpenNI.SKEL_RIGHT_HAND;

    // draw a dot on their joint, so they know what's being tracked
    drawJoint(1, joint);

    // (not used further in this sketch)
    PVector point1 = new PVector(-500, 0, 1500);
    PVector point2 = new PVector(500, 0, 700);
  }
}

///////////////////////////////////////////////////////

void drawJoint(int userID, int jointId) {
  // make a vector to store the joint position
  PVector jointPosition = new PVector();
  // put the position of the joint into that vector
  kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
  // convert the detected position to "projective" coordinates that will match the depth image
  PVector convertedJointPosition = new PVector();
  kinect.convertRealWorldToProjective(jointPosition, convertedJointPosition);
  // and display it
  fill(255, 0, 0);

  float ellipseSize = map(convertedJointPosition.z, 700, 2500, 50, 1);
  ellipse(convertedJointPosition.x, convertedJointPosition.y, ellipseSize, ellipseSize);
}

//////////////////////////// Event-based Methods

void onNewUser(SimpleOpenNI curContext, int userId)
{
  println("onNewUser - userId: " + userId);
  println("\tstart tracking skeleton");

  curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
  println("onLostUser - userId: " + userId);
}

Any links or help would be greatly appreciated, thanks!

Upvotes: 1

Views: 1599

Answers (2)

George Profenza

Reputation: 51867

It's slightly unclear what you're trying to achieve. If you simply need the position of the hand in 2D screen coordinates, the code you posted already includes this:

  1. kinect.getJointPositionSkeleton() retrieves the 3D coordinates
  2. kinect.convertRealWorldToProjective() converts them to 2D screen coordinates.
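
For reference, here is that conversion in isolation (a minimal sketch fragment, assuming kinect is an initialized SimpleOpenNI context and user 1 is being tracked):

// 3D joint position in real-world (millimetre) coordinates
PVector handWorld = new PVector();
kinect.getJointPositionSkeleton(1, SimpleOpenNI.SKEL_RIGHT_HAND, handWorld);

// 2D "projective" position matching the depth image / sketch window
PVector handScreen = new PVector();
kinect.convertRealWorldToProjective(handWorld, handScreen);

// handScreen.x and handScreen.y can now stand in for mouseX and mouseY
ellipse(handScreen.x, handScreen.y, 20, 20);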

If you want to be able to swap between Kinect-tracked hand coordinates and mouse coordinates, you can store the PVector used in the 2D conversion as a variable visible to the whole sketch, and update it either from the Kinect skeleton (if it is being tracked) or from the mouse otherwise:

import SimpleOpenNI.*;

SimpleOpenNI  kinect;

PVector user1RightHandPos = new PVector();
float ellipseSize;

void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation 
kinect.enableDepth();

// enable skeleton generation for all joints
kinect.enableUser();

smooth(); 
noStroke(); 

// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());

}



void draw()
{

    // update the camera...must do
    kinect.update();

    // clear the frame
    background(0);

    // draw depth image...optional
    image(kinect.depthImage(), 0, 0);


    // check if the skeleton is being tracked for user 1 (the first user that was detected)
    if (kinect.isTrackingSkeleton(1))
    {   
        updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
        ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
    }else{//if the skeleton isn't tracked, use the mouse
        user1RightHandPos.set(mouseX,mouseY,0);
        ellipseSize = 20;
    }

    //draw ellipse regardless of the skeleton tracking or mouse mode 
    fill(255, 0, 0);

    ellipse(user1RightHandPos.x, user1RightHandPos.y, ellipseSize, ellipseSize);
}

///////////////////////////////////////////////////////

void updateRightHand2DCoords(int userID, int jointId) {
    // make a vector to store the hand position
    PVector jointPosition = new PVector();
    // put the position of the hand into that vector
    kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
    // convert the detected hand position to "projective" coordinates that will match the depth image
    user1RightHandPos.set(0,0,0);//reset the 2D hand position before OpenNI conversion from 3D
    kinect.convertRealWorldToProjective(jointPosition, user1RightHandPos);
}

//////////////////////////// Event-based Methods

void onNewUser(SimpleOpenNI curContext, int userId)
{
    println("onNewUser - userId: " + userId);
    println("\tstart tracking skeleton");

    curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
    println("onLostUser - userId: " + userId);
}

Optionally, you can use a boolean to swap between mouse/kinect mode when testing.
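
For example, something like this (a minimal sketch fragment, where the flag name useMouse and the 'm' key binding are just illustrative choices) could replace the tracking check in draw():

boolean useMouse = false; // illustrative flag: true = mouse input, false = kinect input

void keyPressed() {
  if (key == 'm') useMouse = !useMouse; // toggle input mode while testing
}

// then, inside draw():
if (!useMouse && kinect.isTrackingSkeleton(1)) {
  updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
  ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
} else {
  user1RightHandPos.set(mouseX, mouseY, 0);
  ellipseSize = 20;
}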

If you only need the mouse coordinates to test without having to get in front of the Kinect all the time, I recommend having a look at the RecorderPlay example (via Processing > File > Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay). OpenNI has the ability to record a scene (including depth data), which makes testing much simpler: record an .oni file with the most common interactions you're aiming for, then re-use the recording while developing. All it takes to use the .oni file is a different constructor signature for OpenNI:

kinect = new SimpleOpenNI(this,"/path/to/yourRecordingHere.oni"); 

One caveat to keep in mind: the depth is stored at half the resolution, so the coordinates will need to be doubled to be on par with the realtime version.
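
A rough sketch of that adjustment, assuming a hypothetical playingBackOni flag that you set whenever the recording constructor is used:

// when playing back a half-resolution .oni recording, scale the projective coordinates
if (playingBackOni) {
  user1RightHandPos.x *= 2;
  user1RightHandPos.y *= 2;
}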

Upvotes: 1

jubueche

Reputation: 793

In your case I would recommend using the coordinates of the right hand joint. This is how you get them (with the official Kinect for Windows SDK, in C#):

// inside the skeleton frame event handler, after copying the frame's skeleton data into "skeletons"
foreach (Skeleton skeleton in skeletons) {
    Joint RightHand = skeleton.Joints[JointType.HandRight];

    double rightX = RightHand.Position.X;
    double rightY = RightHand.Position.Y;
    double rightZ = RightHand.Position.Z;
}

Be aware that we are working in three dimensions, so you will have x, y and z coordinates.

FYI: you will have to insert these lines of code in the SkeletonFrameReady event handler. If you still want the circle around it, have a look at the Skeleton Basics WPF example in the Kinect SDK.
Does this help you?

Upvotes: 1
