
Precisely controlling velocity of rigidbodies that are using joints

Started by January 13, 2020 04:19 AM
8 comments, last by JoeJ 4 years, 5 months ago

Second time this week I've come here with a tough (for me!) physics problem.

I am working on a VR project where the player has physical hands that directly interact with the world (I am using Unity, but my issue is likely engine agnostic). Previously, the hands were a single rigidbody with several box colliders attached to roughly model out the shape of the hand (like a paddle). To make interactions more intuitive, the system was improved to instead be a static collider for the palm/wrist area, and then a series of rigidbodies for each digit, connected to the palm with ConfigurableJoint (for anyone unfamiliar, Unity uses PhysX, and a ConfigurableJoint is just the most customizable joint it offers). An overview of the results can be seen here (with issues).

To move the hand, its velocity and angular velocity are set every frame to interpolate towards the VR controller's position and orientation (the interpolation is for gameplay purposes; it is not physically accurate). When no joints are connected to the hand, this works correctly with no issues. However, when joints are connected, the calculated per-frame velocity is incorrect, as the extra mass from the jointed bodies is not taken into account. This causes the hand to sway around the goal instead of smoothly moving to it.
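(For reference, the tracking step amounts to something like this sketch — illustrative Python, not the project's actual code; the function name and the lerp factor are placeholders.)

```python
def tracking_velocity(current_pos, target_pos, lerp_factor, dt):
    """Velocity that covers a fraction of the remaining error each frame.

    Illustrative sketch of interpolation-based velocity tracking; the
    name and lerp_factor are assumptions, not the project's real code.
    """
    error = [t - c for t, c in zip(target_pos, current_pos)]
    # Close lerp_factor of the gap this frame (intentionally not
    # physically accurate -- it is a gameplay smoothing choice).
    return [e * lerp_factor / dt for e in error]
```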

https://gfycat.com/colossalremotekob

To resolve this, I set the mass scale of each joint to be very high. In Unity, this means the joint's parent body is treated as much heavier than the child body, so the digits' mass barely affects the hand. This resolves the swaying and looks excellent in many conditions, but because the digits are effectively treated as having very low mass, they become very unstable during collisions.

https://gfycat.com/unluckybogusbichonfrise

The ideal solution would have none of the swaying, but all of the stability. In order to do this, the code that sets the velocity of the hand (and digits) must take into account the effect the objects connected to them will have. Is it a tractable problem to calculate this? I don't think this requires IK of any sort, but I'm not entirely sure where to begin, so I was hoping someone here would have some insight.

Thanks for any help,

Erik

You could try a PID controller to calculate an opposing force/torque, which will produce the restricted velocity/angular velocity you want.

More specifically, you should use a PI controller for velocity (with the derivative coefficient set to 0, since velocity is already the derivative of position).
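A minimal sketch of such a PI controller (illustrative Python, not tied to any engine; the gains are placeholders you would tune for your rig):

```python
class PIController:
    """PI controller on velocity error: force = kp*error + ki*integral(error).

    Sketch only -- kp and ki are assumed values that need tuning;
    apply the returned force to the body each physics step.
    """
    def __init__(self, kp, ki):
        self.kp = kp
        self.ki = ki
        self.integral = 0.0

    def update(self, target_velocity, current_velocity, dt):
        error = target_velocity - current_velocity
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral
```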

Regards, Mikhail


You could also iterate over all bodies of the hand and sum up mass and inertia to get an approximate single body representing the multi body system. Then calculate forces using this imaginary single body.

Inertia is the complicated part here, because the usual approximation of representing it with 3 values on 3 orthogonal axes cannot do this exactly for multiple bodies. You would need to calculate those directions first so you get a best fit, maybe using some least squares method to get lines/planes representing the bodies' mass distribution, or maybe there is a tool function in PhysX to do it. (I never tried this - instead I loop over all bodies each time I need an inertia value along a certain queried direction.)

So I propose you start by trying just a constant value for all 3 inertia values, treating the hand as a sphere, and see whether accumulated mass and COM alone already improve your situation.
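As a sketch of that accumulation (illustrative Python; the per-body tuple layout and the single scalar "sphere style" inertia are assumptions, not engine code):

```python
def combined_body(bodies):
    """Approximate a multi body system as one rigid body.

    Returns total mass, mass-weighted centre of mass, and a single
    scalar inertia (sphere approximation, as suggested above).
    Each body is a tuple: (mass, position_xyz, scalar_local_inertia).
    """
    total_mass = sum(m for m, _, _ in bodies)
    com = [sum(m * p[i] for m, p, _ in bodies) / total_mass
           for i in range(3)]
    # Parallel axis theorem with scalar inertia: I += m * d^2,
    # where d is each body's distance from the combined COM.
    inertia = 0.0
    for m, p, i_local in bodies:
        d2 = sum((p[k] - com[k]) ** 2 for k in range(3))
        inertia += i_local + m * d2
    return total_mass, com, inertia
```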

Have you tried using a ‘fixed joint’? You could control a single body like you did before, but with collisions disabled, and have a fixed joint from this single body to the root body of the actual multi body hand. (Never used PhysX, so no idea how well this works.)

Looking closer at this ‘Configurable Joint’, I think it offers the proper solution.

The proposals above do not account for the weight of a body you lift up, and this really becomes a problem if you lift up a multi body system like a ragdoll.

I see the conf joint has options to set target velocity or target position. This way the constraint solver of PhysX would solve the problem for you, including all the fingers and the lifted ragdoll bodies. And that's what a physics engine should do.

But i'm unsure if given target velocity can be set directly from the VR inputs, because usually a joint is between two bodies.

I'm using the Newton physics engine, and there it works by setting one of those two bodies to null, so no parent. Ah - I see it works the same way here; quote:

"Connected Body: The Rigidbody object which the joint connects to. You can set this to None to indicate that the joint attaches to a fixed position in space rather than another Rigidbody."

So this should be exactly what you want: set None as the parent, the hand root as the child, and the velocity/position target from VR should then drive the hand as correctly as PhysX can manage.

But i guess that's what you already do and it does not work? (Using velocity should work better than using position)

Hey, thanks very much for all the replies. I'll hopefully have some time to work on the problem soon, and will post my results here.

Not sure if you got anywhere with this.

One way to solve the problem without having to iterate over all the fingers and objects attached to the hand, and anything else that's going on, is to solve an optimisation problem by making multiple solves of the upcoming timestep (or, more likely, something like up to 0.5 seconds into the future) for different control inputs. In your case you have 6 control inputs (vel/angVel of the hand, or perhaps position/orientation of a target).

For example, you could do a couple of iterations of gradient descent:

  1. Record the state s0 of the system at time t0
  2. Simulate out to t1 with controls c
  3. Evaluate the new state with some cost function (i.e. deviation of the hand from its target).
  4. roll back the simulation to state s0
  5. Simulate out to t1 with controls c+delta - e.g. delta is a tiny change to the vel.x control
  6. Evaluate the new state with the cost function - by comparing with the value in step 3 you get d(cost)/d(delta)
  7. Go back to 4 and do it again with a different (orthogonal) delta.
  8. At the end of all this you get a “direction” in which you can adjust your controls that results in cost decreasing.
  9. Now do a line search along that direction until you find a minimum cost.
  10. This gives you a new control c - go back to step 2 and repeat a few more times (re-evaluating the baseline cost for the new controls).
  11. Eventually call it a day, and you will have an improved control, and a state s1 at time t1 which approximately minimises the cost.
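The steps above could be sketched like this (illustrative Python; `simulate` and `cost` are placeholders for your engine rollout and scoring function, and a fixed descent step stands in for the line search of step 9 to keep the sketch short):

```python
def optimise_controls(simulate, cost, state0, controls, dt,
                      delta=1e-3, step=0.1, iterations=3):
    """Finite-difference gradient descent over control inputs.

    simulate(state, controls, dt) -> new state (must not mutate state0,
    so "rolling back" is just re-simulating from state0); cost(state)
    scores the result. All names are placeholders, not a real API.
    """
    c = list(controls)
    for _ in range(iterations):
        base = cost(simulate(state0, c, dt))          # steps 1-3
        grad = []
        for k in range(len(c)):                       # steps 4-7
            trial = list(c)
            trial[k] += delta                         # perturb one control
            grad.append((cost(simulate(state0, trial, dt)) - base) / delta)
        # step 8 onward: move controls against the gradient
        c = [ck - step * gk for ck, gk in zip(c, grad)]
    return c
```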

You will have spotted that this is obviously a lot more expensive than a single simulation step - however if you've only got a couple of hands to worry about you should be able to easily do this in real time - especially as most computers will run at least 8 threads in parallel.

In your case I would imagine that gradient descent is enough on its own, because the controls it needs to find are just a modification of your initial guess. If the system needs to explore more in order to find a good set of controls, then you can use more stochastic sampling before doing gradient descent, or use something like CMA (there are libraries that implement that).

For example, this is doing something similar - except on a whole character. All the motion is generated as a result of trying to minimise the deviation of the character from a static standing pose. There is no code that encapsulates any idea of “avoidance” or “standing” - it all just emerges from the physics and the cost function. Obviously it's a bit wobbly, as I wanted it to run in real time (on an 8-year-old PC) and couldn't use many iterations - but controlling a whole character is a lot harder than controlling a couple of hands with appendages, so I'm confident it would work OK for you.


@mrrowl Interesting approach : )

… gives me opportunity to show my guy once more. Unfortunately still no time to continue work on this, but one day i will… : )

Following up on this (finally!):
For our uses, I managed to get away with tweaking the values enough to minimize the swaying while maintaining stability. I also used some heuristics based on gameplay behaviour, like limiting the digit velocity when grabbing and releasing objects to prevent them from moving too quickly and pushing away nearby objects. Overall, I was pretty lucky to get something that works well in-game without having to dig much deeper than the original algorithm…

…but that might not last forever. To improve on this, I think @mrrowl and @joej are right on the money with an iterative approach. I'm confident it should work well in Unity too, since I recently purchased this package (for character physics) and it's doing more or less exactly what @mrrowl's post describes (I'm already using an optimization algorithm for aim assist, so I really should have thought of that!). Thanks to both of you for your great replies.

If I do end up revisiting the controls, I'll make sure to follow up with my results.

Hehe, looking at the videos for this Unity plugin: the more realistically the characters behave, the more fun it is to watch bad things happen to them. Triggers true malicious joy :D

This topic is closed to new replies.
