Vert Pushing - An Introduction to 3D Animation for Games

Bringing characters and objects to life is the main task of an animator, but the process of breathing life into a pile of pixels and verts involves many facets that span the gap between art and code.

It all starts with a spark: the inspiration or idea for a new character, creature, or … well, anything really. After that spark, the idea needs to be made tangible in the form of a concept before being modeled and textured. To make the rigger’s job easier, modelers build characters in a neutral pose (the T-pose), which allows for better deformation in areas like the shoulders, hips, elbows, and knees. Once that is complete, the model is ready for the rigger to get their hands dirty.

(Here is a simple model of a cartoony eyeball I made. No specific concept was used in this case. Instead of a T-pose, I modeled the eyelids in a fairly neutral resting state to allow for better deformation.)

While not all animators do their own rigging, a rig is the keystone of any good animation. Without one, a character is just a sack of polygons with lots of potential. Rigging starts with building the character’s joints, i.e., its bones, and good anatomical reference is always helpful when placing them. The human body is a complex machine driven by 206 bones, 360 joints, 640 muscles, and miles of veins and arteries. Game characters, by contrast, are hollow shells of polygonal geometry driven by anywhere from one to a couple hundred strategically placed joints that are meticulously skinned to the model’s vertices. Skinning, or weight painting, defines the region of the mesh that a given joint can influence and determines how the model will deform. The rigging artist’s job isn’t done after creating the joints, though.

(Placing joints and painting weights require thoughtful planning of the desired look, and a basic understanding of anatomy. I knew I needed the eye to be able to blink and look around. I also added a few joints to the brow and cheek to help the eye emote.)
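To make the skinning idea concrete, here is a minimal sketch of linear blend skinning, the weighted-matrix blend most engines use to turn painted weights into deformation. The function name, the 2D-ish setup, and the example matrices and weights are all illustrative, not taken from any particular tool.

```python
# A minimal sketch of linear blend skinning: each joint's transform pulls
# the vertex, and the painted weights decide how much each joint contributes.
import numpy as np

def skin_vertex(rest_pos, joint_matrices, weights):
    """Blend a vertex's rest position through each influencing joint.

    joint_matrices -- 4x4 matrices (joint's current world transform times
                      the inverse of its bind-pose transform)
    weights        -- painted weights for this vertex; they should sum to 1.0
    """
    pos = np.append(rest_pos, 1.0)            # homogeneous coordinate
    blended = np.zeros(4)
    for matrix, weight in zip(joint_matrices, weights):
        blended += weight * (matrix @ pos)    # each joint pulls its share
    return blended[:3]

# Example: a vertex influenced 70/30 by a rotating joint and a static one.
identity = np.eye(4)
rotate_z = np.array([[0.0, -1.0, 0.0, 0.0],
                     [1.0,  0.0, 0.0, 0.0],
                     [0.0,  0.0, 1.0, 0.0],
                     [0.0,  0.0, 0.0, 1.0]])  # 90-degree rotation about Z
print(skin_vertex(np.array([1.0, 0.0, 0.0]), [rotate_z, identity], [0.7, 0.3]))
# -> [0.3, 0.7, 0.0]: the vertex lands between the two joints' answers
```

The vertex ends up somewhere between where each joint would individually put it, and that blend ratio is exactly what painting weights controls.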


After the joints have been placed and skinned to the mesh, the rigger creates a set of animation controls that the animator can use to start bringing the character to life. A control rig can be thought of as the strings on a puppet: the rigger defines how a control object drives a joint, and the animator then manipulates the control, which drives the joint, which drives the mesh. A good control rig gives an animator precise handles on the character and allows for highly expressive animation.

(Here you can see that the red control object is driving the movement of the eye joint which has been bound to the mesh. Moving the control object results in movement of the eye. I then set up more advanced controls so that the eyelid joints move slightly with the eye as it rotates to create a lens “pushing” effect, and mapped the blink to the scale of the yellow control object.)
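As a rough illustration of those driver relationships, here is a toy sketch in plain Python. Nothing here is a real rigging API; the Transform class, the follow ratio, and the closed-lid angle are made up to mirror the eye setup described above.

```python
# A toy "control rig": control objects drive joints through simple driver
# relationships, much like constraints and driven keys in a rigging tool.
class Transform:
    def __init__(self, name):
        self.name = name
        self.rotation = 0.0   # degrees, a single axis for simplicity
        self.scale = 1.0

def evaluate_rig(eye_ctrl, eye_joint, lid_joint):
    # The control drives the eye joint directly.
    eye_joint.rotation = eye_ctrl.rotation
    # The eyelid follows a fraction of the eye's rotation (the "lens push"),
    # and the blink amount is mapped to the control's scale.
    follow = 0.2                       # hypothetical follow ratio
    blink_closed_angle = -60.0         # hypothetical fully-closed lid angle
    blink = 1.0 - eye_ctrl.scale       # scale 1.0 = open, 0.0 = closed
    lid_joint.rotation = eye_joint.rotation * follow + blink * blink_closed_angle

ctrl = Transform("eye_ctrl")
eye, lid = Transform("eye_joint"), Transform("upper_lid_joint")
ctrl.rotation, ctrl.scale = 30.0, 0.5   # look up a bit, half blink
evaluate_rig(ctrl, eye, lid)
print(eye.rotation, lid.rotation)       # 30.0 and 30*0.2 + 0.5*-60 = -24.0
```

The animator only ever touches the control; everything downstream of it is the rigger’s plumbing.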

Now the animation can begin! The actual process of animating for games comes in many forms, but it starts from a simple set of steps. The first is using the control rig to pose the character and set keyframes. Keyframes are the core of an animation: they are the defining poses that determine the timing and spacing of a movement. After creating a pose and setting a keyframe, the process repeats, always keeping in mind the frame before and the next pose you want to make.

Game characters require full movesets composed of hundreds of individual animations that can be played based on player input or context. Whether you are creating looping walk cycles, object interactions, or full cinematic sequences, the process of animating doesn’t change much, but game animators have to be adaptable and always keep the game’s design in mind. A character running through a field in a dream world would probably not run the same way as one escaping a burning building. Game animations are also heavily reused by nature and must hold up when seen thousands of times. To break up some of the monotony of heavily reused animations, additive animations and animation layers can be a saving grace. There are many different styles and approaches to animation, but it is always important to keep the basic principles in mind. (I’ll go into the principles of animation another time.)

(After setting some keys on the animation controls we can start to see the model come to life.)
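Under the hood, the poses you key are just values on curves that get sampled over time. Here is a minimal sketch of that sampling, using linear interpolation for clarity; real animation curves are usually Bezier or Hermite splines, and the blink track and its frame numbers here are hypothetical.

```python
# A minimal sketch of sampling a keyframed value between poses.
def sample(keys, time):
    """keys: list of (frame, value) pairs, sorted by frame."""
    if time <= keys[0][0]:
        return keys[0][1]            # clamp before the first key
    if time >= keys[-1][0]:
        return keys[-1][1]           # clamp after the last key
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= time <= f1:
            t = (time - f0) / (f1 - f0)   # 0..1 between the two keys
            return v0 + (v1 - v0) * t

# Three keys define a blink: where the keys sit is the timing,
# how the values change between them is the spacing.
blink_keys = [(0, 1.0), (5, 0.0), (10, 1.0)]   # open -> closed -> open
print(sample(blink_keys, 2.5))   # 0.5, halfway into the blink
```

Additive animations and layers, mentioned earlier, build on the same idea: a delta curve is sampled the same way and added on top of the base pose.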

After a character has been created and animated, the skeletal mesh (the mesh and its skinned joints) and the animations (the joints with animation data baked onto them) are imported into the game engine. As with many aspects of game development, the animation pipeline is always changing and improving. Animators are being given more and more control, and thus more and more responsibility. One major improvement to the animation pipeline in most modern engines (Unity 3D, UE4, UDK, CryEngine) is the state machine. A state machine is a graph of animation states connected by transitions that fire based on input. Animators and programmers work together to set up the logic connections inside the state machine, using a set of parameters to define which animations play in response to user input. State machines can get quite complex, but they are a fast and efficient way to visualize everything needed to make your character run, jump, swim, and crawl.

(Example of a simple state machine from Unity’s documentation.)
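For a sense of what those logic connections look like, here is a minimal state machine sketch in Python. The states, parameter names, and transition conditions are illustrative; in Unity these would be Animator states and parameters wired up in the editor rather than written as code.

```python
# A minimal animation state machine: named states plus transitions that
# are gated by parameter conditions, checked once per update.
class AnimStateMachine:
    def __init__(self, start):
        self.state = start
        self.transitions = []   # (from_state, to_state, condition)

    def add(self, src, dst, condition):
        self.transitions.append((src, dst, condition))

    def update(self, params):
        for src, dst, condition in self.transitions:
            if self.state == src and condition(params):
                self.state = dst   # a real engine would crossfade here
                break

sm = AnimStateMachine("Idle")
sm.add("Idle", "Run",  lambda p: p["speed"] > 0.1)
sm.add("Run",  "Idle", lambda p: p["speed"] <= 0.1)
sm.add("Idle", "Jump", lambda p: p["jump_pressed"])
sm.add("Run",  "Jump", lambda p: p["jump_pressed"])

sm.update({"speed": 3.0, "jump_pressed": False})
print(sm.state)   # Run
```

Each arrow in an editor’s state machine graph is essentially one of those transition entries: a source state, a destination state, and the parameter condition that triggers it.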


Animation is an important aspect of modern games and an area that has seen major improvements in recent years. It is an exciting part of game development that always offers fresh challenges and fun solutions to a wide range of problems.

Ryan Mohler, Animator
