Have you ever come across questions like:

  • How to make physics and animation work together in Unity3D?
  • How to combine physic movement with animation?
  • How to animate physics?
  • How to physicate animation?

Typical questions folks post on forums, usually getting nothing but verbal diarrhea in response

And so on… Gamedev forums are flooded with such questions. Which means people run into this problem and have to deal with it. Perhaps you've been in a similar situation yourself. And I guess you found no solution! Perhaps…

So, what's the problem? In short: the problem is Unity itself – in 2019 it still has no appropriate way to combine keyframe-animated movement and physically-produced movement. And yes, I'm aware of kinematic rigid bodies in Unity, the 'Animate Physics' option in the update mode list and so on, but that stuff either doesn't work at all or works terribly. So, I guess, we need a working tool which allows combining animation and physics. I won't say I've invented one, but here's my approach.

First, we want a predefined sequence in which values and other properties change as we set them. For this purpose we can use Unity's animation system; the alternatives are far more painful. So we can use the default animation system, but only to modify those values which are not touched by the physics engine. Second, the animation system lets us perform various tricks on the Rigidbody component: we can change the rigid body's mass, its drag coefficient, even set or remove constraints. Sounds neat. However, when it comes to changing its position, we are still stuck. But we can do all of this to another object and somehow link our object to it. For example, we can drive one object with animations and make another object follow it while being moved by physics forces. Strange, huh? Nope. Ever seen a cat being fooled by a laser pointer? I think we can borrow this idea…

Not only funny, but also very useful as a concept

So, basically, we can use one object as a target, and another object (the one we need to move) will have to move toward this target. This is the easiest case: we only need to know the exact distance between the object and the target, and then we can apply some amount of force to move the object by this value. Nothing extraordinary here.

Well, here’s my plan…

All the target's transformations are performed by an animation sequence. There's nothing more to it: the target is just an empty gameobject with an Animator component attached (with the corresponding animation sequences). The object's setup looks a bit different. First, the object has a collider and a rigidbody, which let it interact in a physically-driven manner. However, that's not enough: we have to write a custom component (or use an existing one) to be able to move the object. So I made one and called it LBRBPhysFloating.

Component layout for the ‘object’

This component may look sophisticated, but all it does is apply the needed amount of force to the object. The word 'floating' here stands for the floating kind of movement, nothing more. Anyway, the key idea of this component is the following: we find the target's location, subtract our rigidbody's location from it, and then apply a proportional amount of force. The corresponding snippet is below; the full listing is here.

protected void PerformMovement()
{
 Vector3 delta;
 //Get the distance to the target
 delta = GetFloatLocation() - _rigidbody.position;
 //The condition is not strictly necessary, the AddForce call is what matters
 if (delta.magnitude >= 0.1f)
  _rigidbody.AddForce (delta.normalized * Mathf.Clamp (delta.magnitude, 0, MaxVelocity), ForceMode.VelocityChange);
 //This strange condition exists only to let the object 'float'
 if (Vector3.Dot(_rigidbody.velocity.normalized, delta.normalized) < 0)
  _rigidbody.AddForce (-_rigidbody.velocity.normalized * Mathf.Clamp (_rigidbody.velocity.magnitude, 0, MaxVelocity), ForceMode.VelocityChange);
}

Basically, you can put any code at the top, but the AddForce call must stay in its place

Of course, everything should be called from the FixedUpdate method. You may also want to look up ForceMode and ForceMode.VelocityChange, which does all the physics-based magic, and perhaps learn about the differences between ForceMode.VelocityChange and ForceMode.Acceleration. This is explained here.
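To make the wiring explicit, here is a minimal sketch of how the call could be driven from the physics step. This is my illustration, not the actual LBRBPhysFloating source; it assumes the component exposes the PerformMovement method shown above.

```csharp
using UnityEngine;

// Hypothetical sketch: a MonoBehaviour that drives the movement from
// the physics step. PerformMovement is assumed to be the method above.
public class FloatingDriver : MonoBehaviour
{
    // FixedUpdate runs in lockstep with the physics simulation,
    // so forces applied here stay consistent across frame rates.
    void FixedUpdate()
    {
        // AddForce with ForceMode.VelocityChange directly changes the
        // velocity, ignoring the rigidbody's mass; calling it from
        // Update instead would make the motion frame-rate dependent.
        PerformMovement();
    }

    protected void PerformMovement()
    {
        // ... the force-applying code from the snippet above ...
    }
}
```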

So, what have we got? Let's see. Here I've made a simple scene with all the necessary content: the green gameobject is our 'object' (the one to move), the red ellipse is our 'target'. As I move the target around (in play-in-editor mode, thanks, Unity), the object follows it. The most important thing here is that the object handles physical interactions with other objects (see the second gif).

So, here it is…

The next important thing is to animate the target, so you won't have to drag it around yourself every time, of course. In this example I wanted to use the simple, plain, built-in Animator component. Why not? Nothing more to it: I just made a simple trajectory and… it worked! It worked nicely; I didn't even expect such a satisfying result. You can watch the video below to admire it.



Hey. Have you ever tried to develop a complex behaviour system for your project? I mean a system that provides some really complex functionality, for example one that handles a character's actions. Just a bit more complicated than handling the 'famous triplet' of actions: run, jump and shoot. Well, I tried to (some time ago), but it turned out not very successful. So I think this time it could be a success!

So, why would one need a complex behaviour system in a project? Reasons may differ, but in most cases one needs an integrated, solidly working system. That means all parts should be logically bound to each other, and every aspect of the character-world interaction should be clear and failsafe. For example, if we have a character that can pick things up (like taking objects from the floor and placing them into the inventory), then in most cases we would want to restrict any movement, and maybe some other interactions with the environment, while that happens. Why? Simply because not doing so may cause a lot of bugs, ruin the seamlessness of gameplay and so on. Yes, the easiest way is to jump in with "I'll just add some if-statements to the code and everything will work fine". But no. No, no, no! Nein, nein, nein!!!

It just won't work! Why? Because we obviously have several independent systems working simultaneously: the physics system (which handles movement), the animation system (which handles animation), and your gameplay-related systems (which handle items, inventory and so on). Binding them together with several lines of code is just impossible… in many cases. I mean it. You'd have to write code checking all kinds of conditions: whether the character is moving or not, whether the item is moving or not, whether another animation is playing and so on. Doing this the 'plain' and 'simple' way leads to either bloated code files or intermixed code (where logically different things mix together). And you'll also have a hard time debugging that code.

Thus, we have to build a different kind of system: one that first handles all the groundwork at a lower level of abstraction, and only then does the gameplay-related stuff, which appears only at a higher level of abstraction. There are many models that could potentially be implemented this way, but in my opinion (and not only mine) the State Machine model fits best.

I'm aware of the tons of State Machine implementations out there (a good half of them in the Asset Store), but most of them are very strange and inconvenient, and the others I just don't like personally. So I decided to make my own State Machine-based behaviour system. Time to reinvent the wheel!

The idea is pretty simple: implement a state machine where each state is an action, and the transition rules are defined as permissions for each action to be activated or not. However, I decided to step away from the classic implementation scheme: I made each state independent, with its own transition rules. So, the base class is LBTransitiveAction, which handles all the basic stuff like activating and deactivating itself, following predefined transitions (links to other actions). Every action is derived from this base class, so all the useful stuff is implemented directly in the subclass, while all the low-level stuff is kept in the base class. All actions are stored in a special component, LBActionManager, which is attached to the gameobject (the character).
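As an illustration of the scheme above, here is a rough sketch of what such a self-contained action could look like. This is my reconstruction of the idea, not the actual LBTransitiveAction source; all member names and signatures here are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of a state-machine action that carries its own
// transition rules, in the spirit of LBTransitiveAction (names assumed).
public abstract class TransitiveActionSketch : ScriptableObject
{
    // Actions this one is allowed to hand control over to
    public List<TransitiveActionSketch> Transitions = new List<TransitiveActionSketch>();

    public bool IsActive { get; private set; }

    // An action activates only if the currently active one permits it
    public bool TryActivate(TransitiveActionSketch current)
    {
        if (current != null && !current.Transitions.Contains(this))
            return false;
        if (current != null)
            current.Deactivate();
        IsActive = true;
        OnActivated();
        return true;
    }

    public void Deactivate()
    {
        IsActive = false;
        OnDeactivated();
    }

    protected virtual void OnActivated() { }
    protected virtual void OnDeactivated() { }

    // Called by the action manager every tick while this action is active
    public virtual void TickActive(float deltaTime) { }
}
```

The point of keeping the transition list inside each action is exactly the independence described above: adding a new action doesn't require touching a central transition table.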

For example, I made LBCharacterAnimatedAction, which handles animations on characters (many actions depend on animations, so they are derived from this one). It has only one significant method, the ActivateAnimation function, which (obviously) plays the animation from the Animator. It also checks every tick whether the animation has ended or has changed to another one, in the TickActive function, which is called every tick while the action is active. It also carries the CreateAssetMenu attribute, which allows it to be created in the editor (a rather useful thing).
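Here is a sketch of how such an animation-driven action could look. Again, this is not the real LBCharacterAnimatedAction code: the field names, the base-class members and the exact end-of-animation check are my assumptions.

```csharp
using UnityEngine;

// Hypothetical sketch in the spirit of LBCharacterAnimatedAction.
// The CreateAssetMenu attribute makes instances creatable from the
// editor's Assets > Create menu, as described in the article.
[CreateAssetMenu(menuName = "Actions/Animated Action")]
public class AnimatedActionSketch : ScriptableObject
{
    public string AnimationState; // state name in the Animator controller

    private Animator _animator;
    public bool IsActive { get; private set; }

    // Plays the configured animation state on the given Animator
    public void ActivateAnimation(Animator animator)
    {
        _animator = animator;
        _animator.Play(AnimationState);
        IsActive = true;
    }

    // Called every tick while active: deactivate once the Animator
    // has left the expected state (the animation ended or crossfaded)
    public void TickActive(float deltaTime)
    {
        var info = _animator.GetCurrentAnimatorStateInfo(0);
        if (!info.IsName(AnimationState))
            IsActive = false;
    }
}
```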

The most interesting thing here is what Unity allows the user to do in the editor. Put shortly: using editor features may either lead to a total mess or produce an elegant solution. The core idea of my implementation is to make actions available at design time, so the designer could fix existing actions or create new ones from a certain template. Yes, 'template' is the key word here. At design time, you can go to the create menu, select a template for a new action, and then fill it with concrete content like transitions, animations and so on. Finally, you add this action to an action manager to make it work.

Here's my character state sheet. Well, a part of it: the one that defines the grounded and airborne actions available at the moment. As you can see, I've made it really fine-grained, so there are eight basic actions controlling the character's ground-air movement. However, most of the actions (or states) are instanced from a template. And yes, I haven't made any real graphical representation of my system (like other people's graphs or visual-programming tools), so this is the only thing to look at for now.

A character state sheet with explanations. It's a pretty old sample, though; there have been some changes and updates since.

So, the whole thing seems to work. Animations are a bit janky, and there are some physics bugs (you know they're not rare in Unity). But I've played around with this for some time… and I can say it's far better than the previous attempt. Separate states make the system easier to control and expand, and separating animation between states makes it smoother, allowing blending and crossfading.