Reputation: 1450
I’m getting into the world of Mocap and rigging logic, and I’ve built my own Mocap rig (albeit not very accurate) that outputs the local position of all the main joints of a person.
The output is in very raw form: a stream of JSON with 29 joints in an array, each with x, y, z positional coordinates, frame by frame.
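To make the format concrete, a single frame looks roughly like the snippet below (trimmed to three of the 29 joints, and the joint names are just placeholders for illustration):

```python
import json

# Roughly one frame of my stream (trimmed; real array has 29 joints,
# and these joint names are placeholders, not my actual labels)
frame_json = '''
{
  "frame": 0,
  "joints": [
    {"name": "Hips",     "x": 0.00, "y": 0.95, "z": 0.00},
    {"name": "LeftKnee", "x": 0.12, "y": 0.48, "z": 0.03},
    {"name": "LeftFoot", "x": 0.13, "y": 0.08, "z": 0.05}
  ]
}
'''

frame = json.loads(frame_json)
positions = {j["name"]: (j["x"], j["y"], j["z"]) for j in frame["joints"]}
```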
My goal is to capture all frames, and thus all joint positions, as one file and pass it into a skeleton rig driver such as Blender, so I can start to manipulate an armature rig from my Mocap. However, the conversion from positional data to skeletal joint rotations (IK?) has thrown me.
The next logical step would be to convert the JSON into a standard Mocap file such as BVH (to later import into Blender or similar); however, my understanding of the BVH format is that it stores rotational data in a skeleton hierarchy. I am confident I can recreate the BVH hierarchy, since I already know each joint and where it sits within the skeleton, but I only know positions, not rotations. It seems I’m missing a step that processes raw positional data into joint rotations — roughly the kind of computation sketched below.
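From what I’ve read so far, I *think* the missing step is something like this: for each bone, take the direction from the parent joint to the child joint in the captured frame, compare it to that bone’s direction in the rest pose, and compute the rotation that maps one onto the other. A minimal sketch of what I mean (the bone names and rest-pose directions here are made up purely for illustration, and this ignores expressing rotations relative to the parent, which I assume BVH also needs):

```python
import numpy as np

def rotation_between(rest_dir, captured_dir):
    """Axis-angle rotation taking the rest-pose bone direction onto the
    captured bone direction. Returns (unit axis, angle in radians)."""
    a = rest_dir / np.linalg.norm(rest_dir)
    b = captured_dir / np.linalg.norm(captured_dir)
    axis = np.cross(a, b)
    norm = np.linalg.norm(axis)
    angle = np.arctan2(norm, np.dot(a, b))
    if norm < 1e-8:                      # vectors parallel or anti-parallel
        return np.array([1.0, 0.0, 0.0]), angle
    return axis / norm, angle

# Example: rest-pose thigh points straight down, captured thigh is bent forward.
# (Vectors made up purely for illustration.)
rest_thigh     = np.array([0.0, -1.0, 0.0])
captured_thigh = np.array([0.1, -0.9, 0.3])
axis, angle = rotation_between(rest_thigh, captured_thigh)
print(axis, np.degrees(angle))
```

Is this roughly the right idea, and is converting these per-bone rotations into parent-relative Euler channels the part I’m missing for BVH?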
Any advice would be greatly appreciated.
Upvotes: 1
Views: 174