This blog continues from Part 1.
As covered in the last research blog, the pipeline for a game character can be non-linear in order to increase efficiency. This is true for animation as well: many aspects can be worked on before and during the modelling of the character.
On a quick side note, I will briefly discuss the differences between animation for film and animation for games. Animation for video games is quite different from that of films or movies, as games are an interactive form of entertainment as opposed to a passive one (Sanders, 2015). The animation itself is meant to be interacted with, not just viewed. In addition, the camera is not locked down and directed as it is in a film (Masters, 2013). This means that the animation must look good and the curves must be smooth from all possible angles (Masters; Sanders). Additionally, the transitions between every possible action combination must be considered. This is quite different from films, in which animators can ‘hide’ certain aspects of the animation or ‘cheat’ (for example, by breaking the rig in a way which looks good from a particular angle). Of course, certain parts of a game, such as cut scenes, might be passive, and in larger studios these animations are handled by a separate “cinematic” department (Dahl, 2015).
Additionally, video game animation tends to be heavily focused on body mechanics due to the media itself (Masters, 2015).
There are a variety of aspects that need to be considered before planning the gameplay animation. These considerations range from the type of game you are creating to the constraints involved with the project. For example, the animation process will be extremely different if the game is 2D as opposed to 3D. Additionally, animations will vary with the type of camera used – third person will differ from isometric (Masters, 2013).
One really important thing to consider is the responsiveness of the gameplay (Masters, 2013). As Tobias Dahl (2015) stated: “Gameplay comes first!” For example, a fast-paced military shooter demands an instantaneous response while a puzzle game may not. How responsive an action needs to be will impact the timing and the amount of anticipation in an animation (Masters; Sanders). Nothing is more frustrating to a player than pressing the attack button and having the character slowly draw their sword. Due to this, animations not only need to be responsive but also fun and engaging (Dahl, 2015).
Another major aspect to consider is the style the game is trying to achieve. For example, many AAA game companies try to create characters and environments with a high level of realism (Micu, 2013). However, this can be extremely time consuming to create manually, so methods such as motion capture are used to assist animation (Masters, 2013).
The level of interaction in the gameplay is also important to consider (Sanders, 2015). If the level of interaction is high this can be a huge strain on the animators. Dahl (2015) states that it is better to use short cycles than long sequences. Cutting down on the variety and length of animations can be achieved through the following considerations: Can the player use / interact with a wide variety of things? Do these interactions require unique animations? Can we blend or layer animations to achieve this? What can be reused?
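The blending idea mentioned above can be sketched in a few lines. This is a minimal illustration only: the joint names are made up, and real engines blend rotations with quaternion slerp and per-bone layer masks rather than the plain lerp used here.

```python
def blend_poses(pose_a, pose_b, weight):
    """Linearly interpolate two poses, given as joint -> angle (degrees).

    weight 0.0 returns pose_a, 1.0 returns pose_b. This keeps two short
    cycles reusable instead of authoring every in-between by hand.
    """
    return {joint: (1.0 - weight) * pose_a[joint] + weight * pose_b[joint]
            for joint in pose_a}

# Blending an idle pose halfway toward a walk pose.
idle = {"spine": 0.0, "left_knee": 5.0}
walk = {"spine": 10.0, "left_knee": 45.0}
mid = blend_poses(idle, walk, 0.5)  # {"spine": 5.0, "left_knee": 25.0}
```

By sweeping `weight` over time, one idle cycle and one walk cycle can cover the whole transition between them, which is exactly the kind of reuse Dahl recommends.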
Additional constraints include: platform, poly-count, software, engine, real-time rendering, processing power, programming, application of physics, and, of course, the triple constraints of time, money and quality.
Once the constraints have been taken into consideration, the direction of the animation will need to be decided. The type of game, the desired style and the constraints will all factor into the animation direction (Dahl, 2015). A good example of two games with varying art direction is the pair of current Fallout games: Fallout 4 and Fallout Shelter. Both are set in the same world with the same lore. However, Fallout 4 is a high-powered PC and console game with realistic 3D while Fallout Shelter is a 2.5D mobile game with a cartoonish style. As games they have very different goals and different constraints dictated by gameplay and platform. These differences make for very different animation styles, but both suit their respective games.
By now, the direction and style of the animation should be clear. From here, a comprehensive list can be written up. This list should break down all actions that need to be animated into their respective segments. Each segment will be correctly named according to a naming convention and should be categorized as looping or forward. This will aid both the animators and the game programmers.
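As a toy illustration of such a breakdown list, here is a small Python sketch. The `character_action_type` convention and the `loop`/`fwd` suffixes are hypothetical, not a standard; the point is only that a machine-checkable naming scheme lets programmers tell looping and forward clips apart automatically.

```python
import re

# Hypothetical convention: character_action_type, where the suffix marks
# the clip as "loop" (cycled) or "fwd" (played once, forward only).
CLIP_NAME = re.compile(r"^[a-z]+_[a-z]+_(loop|fwd)$")

def is_looping(clip_name):
    """True for looping clips, False for forward clips; reject bad names."""
    if not CLIP_NAME.match(clip_name):
        raise ValueError(f"clip name breaks convention: {clip_name}")
    return clip_name.endswith("_loop")

is_looping("hero_run_loop")    # True: cycled while the input is held
is_looping("hero_attack_fwd")  # False: plays once per button press
```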
To do this, consider the goals for the animation: what is it trying to achieve? What is its purpose? What are the gameplay constraints or demands? How long does it need to be? For example, a ‘tank’ character will have slower attacks (longer animations) that exaggerate weight and force to demonstrate strength (Dahl, 2015).
The next stage is animation. This is an iterative process, meaning that it is repeated several times, each cycle improving on the last and bringing the animation closer to the final result. Working from the breakdown list, animators will find or create reference images and videos (Dahl, 2015). From this the animation is blocked out “quick and dirty”. Animation may be hand keyed in programs such as Maya or 3ds Max, or it may first come from motion capture footage and be adjusted to suit. As I have not used motion capture, I typically use stepped keys when blocking as I find that it helps me hit dynamic poses. The animation is then exported and moved into the game engine, in this case Unreal. Any bugs or major issues are tweaked. Then the animation is implemented and tested. In this stage the timing, responsiveness, and general feel are examined and the animation is tested from all angles. It is then reviewed (this may be a group process) and the next iteration begins. This may require going back to reference or re-shooting motion capture footage. The process is repeated until the animation is finalized and ready to be implemented (Dahl, 2015).
As mentioned above, you will be exporting the animated model many times throughout the multiple animation iterations. This is done by simply exporting the model as an FBX file with the following settings ticked: Smoothing Groups, Triangulate, Animation, Baked Animation, Deformed Model, Skins and Blend Shapes (Epic Games, 2015). This will allow the character to be correctly and easily imported into the Unreal Engine.
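Because the export is repeated every iteration, those flags are easy to forget. The sketch below captures them as a plain pre-export checklist in Python; it uses the checkbox names from the text and is not a call into Maya's or Max's actual export API.

```python
# FBX settings Unreal expects ticked on export (Epic Games, 2015).
REQUIRED_FBX_SETTINGS = (
    "Smoothing Groups", "Triangulate", "Animation",
    "Baked Animation", "Deformed Model", "Skins", "Blend Shapes",
)

def missing_settings(export_options):
    """Return any required flags that are not ticked in export_options."""
    return [s for s in REQUIRED_FBX_SETTINGS
            if not export_options.get(s, False)]

opts = {s: True for s in REQUIRED_FBX_SETTINGS}
missing_settings(opts)             # [] -> safe to export
opts["Baked Animation"] = False
missing_settings(opts)             # ["Baked Animation"]
```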
As demonstrated on the Epic Games (2015) website, to import the animated model into the Unreal Engine, simply import the FBX with the above settings. Fully implementing the model as a controllable character with blend spaces and particle effects requires the use of Blueprints, the Unreal alternative to scripting (Epic Games, 2015). As this is a rather large topic in its own right, I plan to research and write it up separately.
Autodesk. (2014). Export a Scene to Unreal Engine 4. Retrieved from
Dahl, T. (2015). Action: The Animator’s Process [Video]. Retrieved from
Epic Games. (2015). Creating a Blend Space. Retrieved from
Epic Games. (2015). FBX Best Practices. Retrieved from
Epic Games. (2015). FBX Animation Pipeline. Retrieved from
Epic Games. (2015). Setting Up a Character. Retrieved from
Micu, V. (2013). Jonathan Cooper on Taking Chances, Being Pushed Out of Your Comfort Zone, And Assassin’s Creed III. Retrieved from
Masters, M. (2011). From the 80s to Now: The Evolution of Animation in Video Games. Retrieved from
Masters, M. (2013). How Animation for Games is Different from Animation for Movies. Retrieved from
Sanders, A. (2015). Animating for Video Games vs. Animating for Movies. Retrieved from
Skyrim Screenshot [Image]. (2012). Retrieved from
Wyatt, D. (2015). The Art of Cutscenes. Retrieved from