Guided rigid bodies
I previously discussed replaying user input, and in this article I’ll cover a related feature: recording and replaying rigid body motion.
The basics are quite simple: dump the transformations of a physics-driven actor to a file, then convert that data into a matinee, just like before. However, simply replaying the motion track with a mover has an inherent problem: impact and slide effects, defined in physical materials, won’t happen because no actual collisions occur. All the special effects (impact dust puffs, damage decals, sparks, etc.) would need to be placed and timed manually, which would be a bad workflow. Fortunately the solution is rather straightforward:
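The recording step can be sketched like this (a minimal Python sketch; `get_transform`, `record_track`, and the sample layout are my own illustrative names, not actual engine API):

```python
def record_track(get_transform, duration, rate=60.0):
    """Sample get_transform(t) -> (position, rotation) at a fixed rate.

    Returns a list of (time, position, rotation) tuples -- the raw data
    that would later be converted into a matinee movement track.
    """
    n = int(duration * rate)
    return [(k / rate,) + tuple(get_transform(k / rate)) for k in range(n + 1)]
```

The fixed 60 Hz rate mirrors how the motion data in this article was captured; converting the samples into matinee keyframes is a separate step.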
Let’s constrain a KActor to a mover: the mover replays the recorded motion data while the constraint tries to keep the physics-driven actor on the same path. The guidance is gentle because it doesn’t fight the physics engine; it only corrects small inconsistencies, so the result is the same every time.
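Conceptually the guidance behaves like a weak spring-damper pulling the body toward the replayed position, so physics still dominates and the drive only corrects drift. Here is a 1D Python sketch of that idea (the gains and names are illustrative, not the engine’s actual drive parameters):

```python
def guided_step(pos, vel, target, dt, stiffness=50.0, damping=10.0):
    """One semi-implicit Euler step of a body pulled softly toward `target`."""
    force = stiffness * (target - pos) - damping * vel  # spring-damper drive
    vel += force * dt
    pos += vel * dt
    return pos, vel

# A body that starts 1 unit off the recorded path converges back onto it
# without any sudden snap:
pos, vel = 1.0, 0.0
for _ in range(600):  # 10 seconds at 60 Hz
    pos, vel = guided_step(pos, vel, target=0.0, dt=1.0 / 60.0)
```

In UE3 terms this role is played by the constraint’s position drives rather than a hand-rolled spring, but the correcting behavior is the same.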
In the following video I actually used an invisible mover to replay the matinee data and placed a constraint on the map to link it to the KActor, but later that won’t be necessary: our own physics-driven actor (GVKActor) will spawn its own constraint and adjust its LinearPositionDrive and AngularPositionDrive settings.
Anyway here is the proof of concept map:
However, there is something wrong: on hitting the wall, the balls sometimes don’t produce the dust puff impact effect. The source of the problem is the fixed rate at which the motion data was sampled: although transformations were recorded 60 times a second, roughly half of the collisions still happened between measurements.
The dashed line indicates the original path of an object, while the orange curve is the matinee data reconstructed from the samples. The collision happened while we weren’t looking, so we’re left with a flat section that keeps the object away from the surface it originally hit.
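The effect is easy to reproduce: sample an analytically bouncing ball at 60 Hz and the sampled track never actually reaches the floor, even though the true path touches it. A small Python sketch (drop height and units are illustrative):

```python
import math

G = 9.81
H0 = 1.0                           # drop height, illustrative
T_IMPACT = math.sqrt(2 * H0 / G)   # exact moment the ball hits the floor

def height(t):
    """True height of a ball dropped from H0 with a perfectly elastic bounce."""
    if t <= T_IMPACT:
        return H0 - 0.5 * G * t * t
    dt = t - T_IMPACT
    v1 = G * T_IMPACT              # upward speed right after the bounce
    return v1 * dt - 0.5 * G * dt * dt

# Sample the first second at 60 Hz, like the recorder does. The true path
# reaches height 0 at T_IMPACT, but no sample lands exactly on that moment,
# so the minimum sampled height stays above the floor.
samples = [height(k / 60.0) for k in range(61)]
```

This is exactly the flat section in the figure: the reconstructed curve hovers above the surface it originally hit.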
I came up with two ideas I have yet to implement. The first one is pretty obvious: let the physics actor report its status every time its RigidBodyCollision event fires. However, past experience shows that the event isn’t 100% reliable; I’ll have to see if it’s good enough.
The second potential solution is trying to guess the time and place of the collision based on the data we have.
First we generate velocity vectors from the samples, then we scan for a potential collision: we know that physics-driven actors generally move in a smooth, gradual fashion, so the difference between two consecutive samples will be small unless a collision happened. For example, vectors A and B are somewhat different, but not by much. Comparing B to C and D, though, we can be pretty sure that a collision happened there.
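The scan can be sketched as follows: build per-interval velocity vectors and flag the first interval where the direction changes more sharply than some threshold (a Python sketch with 2D points; the 45° threshold is an arbitrary illustration):

```python
import math

def velocities(points):
    """Difference vectors between consecutive 2D sample positions."""
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(points, points[1:])]

def find_collision(points, max_angle_deg=45.0):
    """Index of the first sample where the path bends sharply, or None."""
    vs = velocities(points)
    for i, (u, v) in enumerate(zip(vs, vs[1:])):
        nu, nv = math.hypot(*u), math.hypot(*v)
        if nu == 0 or nv == 0:
            continue  # stationary interval, nothing to compare
        cos_a = (u[0] * v[0] + u[1] * v[1]) / (nu * nv)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle > max_angle_deg:
            return i + 1  # the bend is around points[i + 1]
    return None
```

A path that descends and then suddenly ascends, like a bounce, trips the threshold at the sample nearest the impact, while a straight path never does.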
The next step is extrapolating two new velocity vectors, extensions of B and D in the example, and finding their intersection (or the midpoint of where they pass closest to each other). That point is our guess; it might not be perfectly precise, but it’s a lot better than what we had before.
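The closest-point construction is standard line-line geometry: extend the pre-collision velocity forward and the post-collision velocity backward as lines, then take the midpoint of the shortest segment between them. A Python sketch (the parallel-line cutoff is an arbitrary choice):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def closest_point_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between lines p1 + t*d1 and p2 + s*d2.

    For intersecting lines this is the intersection itself. Returns None
    for (near-)parallel lines, where no unique closest pair exists.
    """
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * dv for p, dv in zip(p1, d1))
    q2 = tuple(p + s * dv for p, dv in zip(p2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(q1, q2))
```

Feeding in the last pre-collision sample with direction B and the first post-collision sample with direction D reversed yields the estimated impact point.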
I’ll implement these solutions after I’m done with one of the earliest, yet still unfinished prototypes I have in the pipeline. Stay tuned.