We had to cut Gavit’s character-related features (for now), but since characters are somewhat important in games and stories, I’m looking into alternate means of showing figures and communicating intentions.
The most obvious solution is posing ragdolls, as seen in the following video:
I used the Razer Hydra in both hands to control pairs of reach targets for ragdoll bones: head and hips, left and right hands, left and right feet. (Those are the red/green wireframe blobs in the video.) The pairs were selected with buttons while the camera was rotated with the thumbstick.
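To make the mapping concrete, here is a minimal sketch of how a hand offset from the controller might become a world-space reach-target position. This is illustrative only: the function name, the yaw-only camera, and the axis conventions (x = right, y = forward, z = up) are my assumptions, not the actual project code.

```python
import math

def camera_relative_target(cam_yaw_deg, hand_offset, anchor):
    """Rotate a controller-space offset by the camera's yaw so the reach
    target moves relative to the current view, then add the rotated
    offset to the selected bone pair's anchor position."""
    yaw = math.radians(cam_yaw_deg)
    ox, oy, oz = hand_offset
    # Rotate the horizontal components around the up axis; height passes through.
    rx = ox * math.cos(yaw) - oy * math.sin(yaw)
    ry = ox * math.sin(yaw) + oy * math.cos(yaw)
    ax, ay, az = anchor
    return (ax + rx, ay + ry, az + oz)
```

With the camera turned 90 degrees, pushing the hand to the right (a pure x offset) moves the target along the world y axis instead, which is what makes the control feel view-relative.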
I learned the following lessons during this mini project:
- Input with 6 degrees of freedom is nice, although movement perpendicular to the screen is hard to judge.
- Controlling two things at once does not help posing. I ended up focusing on one hand while trying to keep the other in place.
- 6 control points for a human body are the absolute minimum. Controls for the shoulders, chest, elbows, and knees would’ve made the work faster and the result better. (Why didn’t I just copy MotionBuilder…)
- Clay-like, very tight joints would’ve made things easier, but I couldn’t set up the physics asset that way. (For some reason, a 0,0,0 velocity target with 9999999 force had no effect at all… o_O )
- Actor-relative positioning (as opposed to camera-relative) is confusing.
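On the joint-tightness point: the usual way a physics joint holds a pose is a spring-damper (PD) drive, sketched generically below. This is not the engine’s actual drive API, just the underlying math; note that a velocity-only drive with a zero target (stiffness = 0) merely resists motion and never pulls the joint back toward a pose, so pose-holding stiffness has to come from the spring term.

```python
def pd_joint_torque(angle, ang_vel, target_angle, stiffness, damping):
    """Spring-damper drive toward a pose target. High stiffness with
    matching damping makes a joint hold its target firmly while still
    yielding when pushed hard."""
    spring = stiffness * (target_angle - angle)  # restores the target pose
    damper = -damping * ang_vel                  # resists joint motion
    return spring + damper
```

Setting the target angle to the joint’s current pose whenever the user releases a limb is one plausible way to get the clay-like “stays where you put it” behavior.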
So it seems that ragdoll posing can be helpful, provided that the poser rig works like this:
- Only one control point is selected and manipulated at a time.
- One hand manipulates the control point while the other adjusts the view.
- The user can choose between local and camera-relative transformations.
(So it would be a poor man’s MotionBuilder.)
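The transformation-mode toggle from the list above could be as simple as choosing which frame a controller delta is interpreted in before applying it. Again a hypothetical sketch with yaw-only rotations and invented names:

```python
import math

def rotate_yaw(v, yaw_deg):
    """Rotate vector v = (x, y, z) around the up axis by yaw_deg degrees."""
    yaw = math.radians(yaw_deg)
    x, y, z = v
    return (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw),
            z)

def apply_delta(point, delta, mode, cam_yaw_deg, bone_yaw_deg):
    """Move a control point by a controller delta, interpreted either in
    the camera's frame ('camera') or the bone's own frame ('local')."""
    yaw = cam_yaw_deg if mode == "camera" else bone_yaw_deg
    dx, dy, dz = rotate_yaw(delta, yaw)
    return (point[0] + dx, point[1] + dy, point[2] + dz)
```

The same push then means “toward the screen’s right” in camera mode and “along the bone’s own axis” in local mode, which is exactly the choice the rig should expose.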
Animating a character in real time will need a different, more marionette-like setup. That will be the challenge for the next few weeks, after I finish one more prototype in the pipeline: an Arcade Volleyball homage.