
You can play the game!

I have fixed up the last couple of issues with my game (exit button, reset button, horrible lighting and bloom) and have put a playable build up online.


You can download and play the game here:


Integrating Sound with Animation Inside Unreal

I chose to integrate sound with my run animation inside the Unreal engine. I did this through the Blueprint and notify systems. Firstly, I grabbed a couple of free sound effects and imported them into my project (quickly learning that a whole bunch of file types are not supported).

Then I dropped in some background ambience. This was very easy but required a blueprint setup because I wanted it to loop.


I then opened up my run animation, worked out where the sound should be played and set a ‘notify’ on the correct frame. This notify allowed me to attach a sound file (in this case a footstep) to the animation frame where she makes contact with the ground.

In addition to this, I added a gong sound effect to the beacons activating. This was a little more complicated and required using Blueprints (Unreal’s node-based scripting) to get it working properly.


Finally, I made a musical sound play once all the beacons had been found. Again, this was much more complicated. Ben and I went through a couple of tutorials to work out how exactly that could be done.

The video below shows how the sounds work in the ‘game’.

These sound effects are far from perfect, but they do help make the scene feel complete.

Setting Up a Character, Environment, Interactables, Particles and GUI in Unreal


Having finally finished my character and created some very rough, very basic animations, I began the process of setting everything up in Unreal. As I hope to work on games in the future, and because the game industry in Australia requires you to be an all-rounder, I decided to start from scratch and set up a character without using any of the templates. To do this I followed a comprehensive tutorial on the Unreal site.


The movement and control setup for my character

Through this tutorial I learnt how to set up a character, a camera, controls and animation using Blueprints (Unreal’s version of scripting). In hindsight I should have set up a different type of camera as the one I currently have highlights the lack of strafing animation. I did look into changing this but it required much more complicated Blueprint work and I currently lack the time to learn this.


The camera setup

The animation system is really cool and allows you to smoothly blend between different animations depending on the player’s speed or direction. As I just had a shitty idle animation and a basic run, my blend space was really simple.
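The blend space itself was built visually in Unreal, but the underlying idea is simple weight interpolation. Below is an illustrative sketch in Python (not Unreal code); the axis range of 0 for idle to 600 for full run is a made-up example value:

```python
# Illustrative sketch of a 1D blend space: per-animation weights are
# picked from the player's current speed along a single axis.

def blend_weights(speed, run_speed=600.0):
    """Return (idle_weight, run_weight) for a 1D idle-to-run blend."""
    t = max(0.0, min(speed / run_speed, 1.0))  # clamp position to [0, 1]
    return (1.0 - t, t)

# At half of full run speed the pose is an even mix of both animations.
print(blend_weights(300.0))  # (0.5, 0.5)
```

Unreal evaluates something like this every frame and blends the sampled poses by those weights, which is why the transition between idle and run looks smooth.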


The blend space between the idle and run

While this was all relatively straightforward, it was time-consuming and fiddly. For example, after everything was finally set up, my animations would not work. I checked and double-checked everything but nothing fixed the issue. Finally I found a single node that had not been linked up.


Once I had a moving character, I decided to create an environment for her to move through. At first I planned to use just a basic plane but decided a small labyrinth would be much cooler. I quickly created a labyrinth in 3DsMax and imported it into the game.


I then set about creating collision boxes for all the walls and the floor. This was quite fiddly and took some time as I had to individually place and size each collision box. Additionally, the camera seemed to interact weirdly with the collision boxes (like shaking when the player went too close to a wall).


Individually placed collision boxes (purple)

This was due to the collision boxes being slightly larger than the walls. I was then informed that, with a single click, you can use an object’s faces as its collision mesh, though this is much heavier when playing the game. As my game is very light to run, I decided to use this method anyway. This basically fixed the issues with the camera.


Finally, I was able to run around the maze.

Interactables and Particles

At this point I decided the game was too empty and I needed something for players to interact with. Using shapes (Unreal’s equivalent of primitives), I placed several pyramids around the maze as something for the player to find.


I wanted a glowy particle effect to be triggered when the player got close to a pyramid. To do this, I first created my own particle effect. Using the default fire as a base, I stripped out all the emitters except for the embers and tweaked the settings until I was happy with it.


Particle editor and viewport in Unreal

I then set up a collision box as a trigger around the pyramid. I set it up so that once the player tripped the trigger, the particles would activate and the sound of a gong would play.
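In the project this is a handful of Blueprint nodes, but the logic boils down to a one-shot trigger. Here is a minimal sketch in Python (the class and effect names are made up for illustration):

```python
# Sketch of the beacon trigger logic: effects fire only the first time
# the player enters the trigger volume, and repeats are ignored.

class Beacon:
    def __init__(self):
        self.activated = False

    def on_overlap(self, other):
        """Called when something enters the trigger box."""
        if other != "player" or self.activated:
            return []                      # ignore repeats and non-players
        self.activated = True
        return ["activate_particles", "play_gong"]

beacon = Beacon()
print(beacon.on_overlap("player"))  # ['activate_particles', 'play_gong']
print(beacon.on_overlap("player"))  # [] -- already activated
```

Guarding on the `activated` flag is what stops the gong replaying every time the player walks past the same pyramid.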


Beacon with particles and collision box

Graphic User Interface (GUI)

Finally, I wanted to include some UI elements into the game to give it some sort of objective. This was done almost entirely through the blueprint system. Below is the final UI setup I have used in the game.


Firstly, I added a ‘counter’ that essentially counted how many beacons were active out of the total number of beacons. (The UI and interactables are set up in a way so that I can add more beacons and the UI and end state will adjust to suit). This required some complex blueprint work.
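The Blueprint version of this counter is node-based, but the idea is easy to sketch in Python (function and field names are illustrative only): the total is derived from however many beacons exist, so adding more beacons automatically adjusts the UI and the end state.

```python
# Sketch of the beacon counter: count activated beacons against the
# total present in the level, and flag completion when they match.

def hud_state(beacons):
    found = sum(1 for b in beacons if b)   # b is True once activated
    total = len(beacons)
    return {
        "counter": f"{found} / {total}",
        "complete": found == total,
    }

print(hud_state([True, False, False]))  # {'counter': '1 / 3', 'complete': False}
print(hud_state([True, True, True]))    # {'counter': '3 / 3', 'complete': True}
```

The `complete` flag is the same condition that reveals the end-game text and plays the final musical sting.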


Then, with Ben’s help, I added some end game text. It simply says “All pyramids found!”. The trick was to have it hidden until the number of beacons activated matched the total number of beacons. Again, this was a little more complicated.


Lastly, I set up a timer so that the player can see how long it took them to complete the game. Once again I used blueprints to complete this.
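The timer Blueprint accumulates elapsed time each tick and formats it for the HUD; a minimal sketch of that formatting step in Python:

```python
# Sketch of the HUD timer readout: convert accumulated seconds into
# a zero-padded minutes:seconds string for display.

def format_timer(elapsed_seconds):
    minutes, seconds = divmod(int(elapsed_seconds), 60)
    return f"{minutes:02d}:{seconds:02d}"

print(format_timer(83.4))  # 01:23
```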


Final Outcome

With all of these elements in place, I had created an extremely basic game. It is not perfect, and not even a vertical slice of a real game. However, I have achieved my goal: complete a game character production pipeline from design to implementation.

Below is a video of the gameplay as it currently stands. I still need to add a couple of things like a start menu, exit button and a skybox.

Outputting Files that Work in the Unreal Engine

Through the course of my game character project I have kept the following technical specifications in mind so that my assets work efficiently in the Unreal game engine.


Throughout my entire game character project I have been mindful of how my assets will import and run in a game engine. For this reason I have kept everything to a relatively low poly count. My character is around 3000 polys, which I could have reduced further, especially in the eyes and horns.

My environment (a quickly created labyrinth) was originally over 8000 polys, which I managed to reduce to around 300 polys.

While the poly count is not so critical in my mini demo/test, it was good practice for the future as larger projects will require stricter poly counts.

The Importance of Resetting X-Forms

I had a couple of issues with importing my labyrinth file into the Unreal engine as I had forgotten to reset my X-forms. Once I had worked this out and fixed it everything went smoothly but it is something I will need to remember for the future.

Exporting and Importing FBX Files

Once my character was finished and had a basic run and idle animation, I exported it into the Unreal engine. I did this by exporting the two animations as separate FBX files with the animation baked in. This worked well when I imported the animations: when importing FBX files, Unreal asks you if it is a skeletal mesh and, if so, whether it shares a skeleton with one already in the project.

This allowed me to use the same skeleton for the run and idle animations, which enabled me to create a blend space between the two.

Exporting and importing the labyrinth was even simpler, as I exported it as an FBX (with no animation) and imported it as a static mesh.

Iterative Files

The Unreal engine can update assets currently in the scene with a newly imported asset. All that is required is for the new file to have the same name (you can also do it manually if the files are named differently). For this reason I kept a standard naming convention for my files, allowing me to quickly and easily update my textures and animations.
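The name-matching behaviour is why the convention matters; a tiny sketch of the idea (the asset names here are made-up examples, not files from the project):

```python
# Sketch: an import replaces an existing asset only when the names
# match exactly, so stable names make iterative updates automatic.

def find_asset_to_update(project_assets, imported_name):
    """Return the existing asset a reimport would replace, if any."""
    return imported_name if imported_name in project_assets else None

assets = {"Tahlia_Run", "Tahlia_Idle", "T_Tahlia_Diffuse"}
print(find_asset_to_update(assets, "T_Tahlia_Diffuse"))     # T_Tahlia_Diffuse
print(find_asset_to_update(assets, "T_Tahlia_Diffuse_v2"))  # None
```

A file exported with a suffix like `_v2` would import as a brand-new asset instead of updating the old one, which is exactly what a consistent convention avoids.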

Game Character: Skinning

So, after finishing the rig, I began the horrible next step: skinning. I hate skinning because it is fiddly, tedious and never seems to work for me.

However, Steve told me about the Geodesic Voxel Binding and Heatmap tools in 3DsMax 2016. These tools are not included in the main download of Max, but you can get them if you install Service Pack 1 and then Extension 1.

The Geodesic voxel binding is magic and literally saved me an hour or so of correcting vertex weights. It is included inside the Skin modifier; that is, you simply add a Skin modifier and add the bones as you normally would. With no settings adjusted, the modifier will attempt to mold the mesh to the bones, but this is usually very inaccurate.

To use the Geodesic voxel binding, you scroll down to Weight Properties, select Voxel and click the little “…” box on the right. Another dialogue box should appear. With this you can control the falloff of the binding and the maximum number of bones that can influence any given vertex (if you leave it at 0, the program will work it out for you). You can also adjust the “accuracy” of the binding, 64 being the lowest.

I tested this on my model with the default settings at the lowest resolution.

As you can see, it has already fixed one of the biggest issues. Of course, it still needs much adjusting. I fiddled around with the voxel binding settings until I found something that worked and applied that to my model at a high resolution. From there I adjusted the individual vertex weights in the problem areas until I had fixed all the issues.

Below is a short video of my skinned model:

After uploading this, I noticed an issue with the belt and the hip moving inwards and clipping through the body. I have since fixed these problems.

Game Character: Animation Friendly Rig

I am continuing to work through the production pipeline of creating a game character. Having finished the modelling stage, I moved onto the next step: rigging.

Initially I had hoped to create a skeleton in Maya and then create a rig. However, as I have not used Maya before, this proved extremely difficult and frustrating, and I was running out of time. So, in order to be able to finish this pipeline this Trimester, I have instead created a custom CAT rig in 3DsMax. This was much more efficient as I have worked with CAT rigs before.

I began by adding a CAT parent. Then I added a hub bone (the pelvis) and some legs. The CAT system is really effective because if you create one leg in full you can simply copy and “paste mirror” it for the other leg.

I then continued to add bones for the rest of the body, including all the finger bones and an additional bone for her bag. I made sure to colour the bones in a way that makes it easy to see what is what: the left side is pink, the right side is green, central bones are blue and the bag is yellow.

From here, I added up-nodes, gizmos and IK targets to allow for easy animation. Again, I made sure to keep the same colour system. I like to make sure that the controllers are larger than the model, so that there will be no issues when animating (such as being unable to find a finger gizmo).

I created extra gizmos around the knee and elbow up-nodes so they are easy to see and grab. Additionally, I used squares for the knees and circles for the elbows, as I have had issues in the past when they get mixed up.

I added text to the hand IK targets and made them slightly brighter shades of green and pink so that they are easy to find and use.

Finally, I added a rectangle gizmo to the shoulders. These help to show what position the shoulders are currently in (as horizontal is the neutral or starting position).

With that, my custom, animation-friendly rig is finished and ready for me to use.


How Limitations of the Medium Influenced Video Game Art

The history of video game graphics is relatively short, spanning just 57 years (Brown, 2015). Compared to film or photography, games are still in their infancy. However, video games have developed dramatically in this short time, both graphically and as a form of entertainment.

They have developed from simple mechanics displayed with moving light…


Tennis For Two, heralded as the grandfather of video games, was developed in 1958 on an analogue computer using a cathode-ray tube and oscilloscope to display the game.

…to photorealistic 3D characters and worlds.


Hellblade, currently still in development, uses detailed 3D scanning in combination with modelling and texturing to create realistic graphics.

Because of the medium itself, video game art is constrained by graphical capacity and hardware limitations. Due to this, game art tends to change and adapt with technological advancements. However, some styles maintain popularity over time.


Rogue Legacy (2013) utilises 2D pixel sprites in a randomly generated play space.

To see how video game art has adapted and changed over time, and how this influences current practices, we must go back to the beginning.


In the early days, video game graphics were extremely limited. During the early 1970’s, games were confined to simple shapes and a two-tone palette of black and white (Brown, 2015; C.L., 2011).


Pong (1972) is a famous example of this.

The ‘art’ was merely a representation of the vague narrative given to the game’s mechanics. In this sense, game art of this era was about maximum communication with minimum graphics.


Space Invaders (1978) uses slightly more complicated images.

The limitations of black and white prevented detailed game art at this time. It wasn’t till the late 70’s that development of arcade hardware allowed for colour (Brown, 2015; C.L., 2011).


Although not the first, Galaxian (1979) was considered the first successful game to use colour.

This allowed for multi-coloured sprites, providing artists with a larger range of tools and allowing for more detailed games. By the 1980’s, coloured pixel graphics were considered the norm (Brown, 2015). Although vector graphics were used for some games, the ability of pixels to render complex scenes with detailed, filled shapes secured their dominance (Brown, 2015).


Asteroids (1979) is probably the most famous vector graphics game.

During the 80’s, the majority of video games were using 2D coloured sprites to depict characters and enemies (Brown, 2015). As the hardware developed beyond 8-bit, so too did the complexity of the graphics. However, game art was still about working with or around the limitations of the hardware to convey enough information to the player (Brown, 2015). Due to this, characters were created with simple, bold designs and limited movement (Cobbett, 2009). Characters often had only a few sets of animation with little to no follow through or anticipation.


Mario (1985) famously wears a hat because his hair was too hard to animate.

Over the course of the 80’s, more colours became available to artists and sprites became more detailed and complex (C.L., 2011). As hardware capabilities increased, games were able to have more detailed environments and backgrounds (Brown, 2015). This allowed artists to develop complete worlds with distinct aesthetics.


Golden Axe (1989) featured detailed characters and environments with an isometric view, allowing the player to move in four directions.

Towards the end of the 80’s and through the early 90’s, some developers began the awkward transition into 3D graphics. In these early days, 3D graphics were limited to wireframe rendering (Cobbett, 2009). Much like the early days of games, artists were forced to reduce complexity and favour communication through simple forms.


Elite (1984) is considered a pioneer of 3D with its wireframe visuals.

Graphic capability eventually improved beyond wireframe, allowing 3D models to have flat shading, but it was considered ugly compared to the detailed 2D graphics at the time.


With stunning graphics, detailed characters and a wide variety of animation, Street Fighter II (1991) still holds up today and secured the ongoing popularity of 2D fighting games.

Similarly, during the 90’s, 2D games were experimenting with multimedia technology like digitised sprites and full motion video (Cobbett, 2009). Digitised sprites were considered a new wave by some and became popular thanks to games like Mortal Kombat (Brown, 2015). However, full motion video, due to compression and resolution limitations, was quickly dropped.


Mortal Kombat (1992) featured ‘realistic’ graphics and gore, causing the controversy that led to its success.

Despite its general ugliness, 3D was quickly becoming popular but the hardware was not up to scratch (Brown, 2015). To compensate, many games incorporated 2D sprites in a 3D world.


With a 3D environment, 3D lighting and 2D sprites, Doom (1993) was extremely impressive at the time and paved the way for first-person shooters as we know them.

This paved the way for the next era of game art.


From the mid to late 90’s, hardware developed enough to allow fully 3D games. This provided another change and challenge for artists: the characters, animations and environments must look good from all angles and at an extremely low poly count (Brown, 2015).


Mario 64 (1996) maintains a pleasing aesthetic and is considered a pioneer of true 3D.

Another consideration artists and animators had to make was the reactivity and speed of the animations (Brown, 2015). As fast-paced first-person shooters rose in popularity, consumers expected believable yet fast actions. Artists had to trade believability and anticipation for reactivity.


Heralded as the first true 3D first-person shooter, Quake (1996) was critically acclaimed at the time.

As 3D games became dominant, two distinct streams emerged: realistic and stylised (Brown, 2015).


Regardless of aesthetic, realism is often touted as the best (Brown, 2015). This is not always true. In fact, looking back at older games, those with ‘realistic’ graphics (for the time) feel outdated and often fall into the uncanny valley.


Seaman (1999) is a virtual pet game and potentially the creepiest game known to man.

In reaction to this, a lot of games were created with stylised graphics. This was often done through the use of cel shading and stylised or ‘cartoony’ characters.


Jet Set Radio (2000) used cel shading and bright colours to create a stylised aesthetic.

Currently, we can produce extreme realism in terms of visuals, lighting and physics.


The latest Tomb Raider (2015) features realistic hair simulation rendered in real-time.

While video game art is still bound by graphical and hardware limitations, it is no longer forced to have maximum communication for minimum visuals (Cobbett, 2009).


So how does this long, detailed and well-researched history influence current practices for creating realistic video game art?

Well, as mentioned before, video games do not have the same limitations that they once had. For realistic games, we face a new issue. Games need to feel realistic: players will expect everything from reload animations and dynamic grass simulation to varied action, hit and death animations. Additionally, they will want reactivity and speed, which sometimes opposes the realism.

Assassin’s Creed games are renowned for detailed and varied parkour movement.

While this is achievable, it might be well out of scope, forcing artists and developers to find ways to cheat or work around it. This has changed the way art and animation are created within the industry.

One method artists use to create diverse environments quickly and efficiently is modular development of assets:

In order to achieve ‘true realism’, many companies have begun using motion capture as a more efficient way to get realistic animation (Dahl, 2015).


Motion capture for The Last of Us (2013).

And even facial motion capture for subtle expressions.

Other methods, such as digital scanning, are being used to achieve photorealistic 3D models (Ninja Theory, 2015).


Body scan for Hellblade (still in development).

While polygon count is still an issue, it is no longer the major limiting factor. Models too detailed to be featured in the game can be baked out as a normal map and projected onto a lower-poly model (Ward, 2013).

Additionally, extreme texture detail can be achieved with the help of software such as the Quixel Suite.

The development of realistic games has always been tied to technology and will continue to be. The future of game art will depend on the next leap or trend in video games themselves.


Many current games prefer to employ a stylised aesthetic. This might be due to a multitude of factors:

  • To avoid the bleeding edge and eventual aging of realism
  • To be able to run on portable devices such as mobile
  • To stay within a smaller, indie budget
  • To have a particular art style
  • Because it suits the game better

The current popularity of indie or ‘retro-like’ games has seen a rise in 2D stylised graphics.


VVVVVV (2010) is a critically acclaimed pixel puzzle platformer.

Current technology allows these sorts of games to run at lightning-fast speeds, giving them a competitive edge over their realistic peers.


Skullgirls (2012), a fast-paced fighting game, uses beautiful, 2D animation.

Additionally, lessons from the history of games allow these to be created with a high degree of fidelity and a modern understanding of game design (Brown, 2015).


FEZ (2012) allows players to move in 3D with a ‘2D’ pixel aesthetic.

Similarly, some games break the mould and experiment with new forms of stylisation.


Called ‘1-bit’ or ‘dither-punk’, Return of the Obra Dinn (in development) returns to a monochromatic style with 3D graphics.

This is an exciting era of video games. The indie development scene currently gains as much attention as AAA titles and there is a balance between realistic and stylised games.

I don’t know where video game art will venture next, but I am happy to be along for the journey.


Brown, S. (2015). A Brief History of Graphics [Video]. Retrieved from

C.L. (2011). The Colourful History of Video Games. Retrieved from

Cobbett, R. (2009). The Evolution of Gaming Graphics. Retrieved from

Dahl, T. (2015). Action: The Animator’s Process [Video]. Retrieved from

Masters, M. (2014). From the 80’s to Now: The Evolution of Animation in Video Games. Retrieved from

Moss, R. (2015). Lucas Pope and the rise of the 1-bit ‘dither punk’ aesthetic. Retrieved from

Ninja Theory. (2015). Hellblade Development Diary 17: A New Body [Video]. Retrieved from

Ward, A. (2011). How to create character models for games: 18 top tips. Retrieved from

Ward, A. (2013). Game Character Creation Series. Retrieved 2nd October, 2015, from–cg-31010

Game Character: New Hair

I had previously created the hair for my character out of splines and a hair modifier. However, this was not working well and it would not export to Maya (or Unreal). So I redid her hair in polygons.


This was done very quickly and is quite basic. I would love to have time to fix this but I really don’t think I will be able to. However, it is working at the moment.

Game Character: Initial Textures

I have created a set of initial textures for my character. This is basically a draft that will help me to place the lines / paint later on. Hopefully, if I have enough time this trimester, I will iterate on it to get a painterly style.

How it looks on the model:


Game Character: Final Design for Texturing

Below is the final character design with details and colour palette:


I will be using this as a reference when I begin to texture my model properly. I chose an earthy, forest colour palette and gave her some tattoos to add a bit of detail to the model.

Game Character Unwrap and Base Texture

Having finished my model, I began the unwrapping process. I chose to unwrap symmetrically as this will save time during texturing, and I am running out of time…

I didn’t really have any issues doing this, except for the hands. The hands were extremely fiddly and I think this was because of my shitty hand modelling. Otherwise, it was all pretty easy to unwrap. Below is the unwrapped character with the checkerboard texture applied:

The maps are not packed perfectly but, as I said, I am trying not to spend too much time on this stage:

From here, I created an earthy / forest colour palette and filled in the base colours for my texture.

I will be treating the textures as an iterative process and will return to them after the rigging and blocking out is complete.

Game Character: Final Details for Model

Today I added the last details and finalised my model. Firstly, I had to do something about her face – it just looked wrong and a little scary. The tutorial I followed to create her face was for a realistic male model; I think this is why she looks so odd here.

From here I made several adjustments to the existing model: I made the nose thinner, moved the corners of the mouth up, moved the eyes in a bit and adjusted their shape, and adjusted her jaw line. Finally, and most crucially, I added eyelashes and eyebrows.

I think this is already much better and should (hopefully) only improve with texturing.

Next, I created her horns. This took me quite a long time and a lot of messing around. Luckily, Steve showed me how to use the “Extend along Spline” tool in class. This worked out quite well and was easy to use.

Lastly, I worked on her hair. For this I tried several different methods: extending along a spline, box modelling the strands and rendering splines as polys. None of these worked well. Finally, I found a tutorial on using splines and the Hair and Fur modifier. To begin with, I created some splines:

Then I added the Hair and Fur modifier and played around with it. The modifier is a bit fiddly, but the dynamics are really cool.

It still needs some work: the hair still seems to clip through the head a little, it needs a couple more splines and I need to adjust the settings so it is not so stringy. However, I like how it is working at the moment and think I will definitely use this method. Additionally, I want to use some nice hair shaders and materials.

[Sorry for the shitty GIF, I don’t have Premiere on my home computer]

From here I can finally start the unwrapping, texturing and rigging stages.

Game Character: Modelling

Using my model sheet (below), I began to model my character.


Before I began, I looked at several different tutorials on how to model a character. Because of these tutorials, I started by using cylinders for the torso, arms and legs. This was new for me as I am used to box modelling and I found that it worked out much better. I will definitely be using this technique in the future.


At this stage I had finished the torso (with smoothing groups) and was at the point of connecting the arms to the shoulders. I was using symmetry mode at this point (and through most of the process).


From there I continued, adding a waist and legs.


This was the final body mesh minus the hands, feet and head. At this stage I went back to the joints and fixed up the topology of the knee, wrist and elbow joints.


After this, I began modelling the hands. In the tutorials, the hands were modelled separately and attached afterwards.


In my opinion, the hands worked out OK considering I have never modelled hands before. Again, I adjusted the topology to give the knuckles proper joints.

From here I blocked in the feet. I took less care with these, as the feet will be seen the least.

Finally, I began to work on the face and head of the character. I have never modelled a character’s face in such detail before, so it took me much, much longer than expected. Unfortunately, my model sheet was lacking detail in the face, which definitely hindered my workflow. However, I followed an excellent tutorial which helped me a whole lot. The tutorial began the face with several cylinder caps, which I adjusted to suit.

After literally hours, I had finished the head. She still looks scary / horrifying. I am not sure if this is due to the lack of hair (which I will be completing later), the lack of eyebrows or simply my inexperienced and fumbling attempt at modelling a face.

head4I added a coloured multi/sub-object texture in Max to help see how the final product would look.

Finally, I added the clothing edges and her bag and belt.

At this point I am happy with her model and design. Now I can start the process of giving her a skeleton and rig. I will add her hair and horns later on, as these are not essential for the rig.

Tutorials used for reference:

Taylor, J. (2013). Maya Character modeling tutorial, part 2 – Hands and Feet [Video]. Retrieved from

Taylor, J. (2014). Maya HEAD MODELING for ANIMATION tutorial [Video]. Retrieved from

Taylor, J. (2015). MAYA 2016 FEMALE BODY character modeling tutorial [Video]. Retrieved from

Ward, A. (2013). Game Character Creation Series. Retrieved 2nd October, 2015, from–cg-31010

Game Character: Animation Planning

For my game character I plan on animating four basic actions: idle, walk, run and jump. These animations are the minimum requirement for most video games and would also fulfill the brief. From these actions I created my animation breakdown list (which can be found in my student folder):


Game Character: Concept and Design

For my game character production pipeline, I plan on creating a character that would suit a stylized RPG. I wanted to design an earth-magic user so I could use earthy tones and green particle effects (for the magic). Below are my concepts for Tahlia:

I realize that this is a basic-ass character design and the horns don’t work unless she is a full satyr etc., but I was mainly inspired by the comic book series Saga and how Fiona Staples creates humanoid characters with animal features:

Moving on, I created the following model sheet based on my initial concept:

I usually find that model sheets do not include a back view of the character. However, I have included it as I find it helpful, especially for the placement of muscles. From here I will begin the process of modelling.


Marcotte, J., & Staples, F. (2014). Interview: Saga Artist Fiona Staples.

Game Character Production Pipeline (Part 2)

This blog continues from Part 1.

This blog will cover the animation and implementation sections of the pipeline. Again, I will be working through this pipeline in a linear manner as I am working on it alone.

As covered in the last research blog, the pipeline for a game character can be non-linear in order to increase efficiency. This is true for the animation side of things as many aspects can be worked on before and during the modelling of the character.

On a quick side note, I will briefly discuss the differences between animation for film and animation for games. Animation for video games is quite different from film as it is an interactive form of entertainment as opposed to a passive one (Sanders, 2015). The animation itself is meant to be interacted with, not just viewed. In addition to this, the camera is not locked down and directed as it is in a film (Masters, 2013). This means the animation must look good and the curves must be smooth from all possible angles (Masters; Sanders). Additionally, the transitions between every possible action combination must be considered. This is quite different from film, in which animators can ‘hide’ certain aspects of the animation or ‘cheat’ (for example, by breaking the rig in a way that looks good from a particular angle). Of course, certain parts of a game, such as cut scenes, might be passive, and in larger studios these animations are handled by a separate “cinematic” department (Dahl, 2015).

Additionally, video game animation tends to be heavily focused on body mechanics due to the nature of the medium itself (Masters, 2015).

There are a variety of aspects that need to be considered before planning the gameplay animation. These considerations range from the type of game you are creating to the constraints involved with the project. For example, the animation process will be extremely different if the game is 2D as opposed to 3D. Additionally, animations will vary with the type of camera used – third person will differ from isometric (Masters, 2013).

One really important thing to consider is responsiveness to the gameplay (Masters, 2013). As Tobias Dahl (2015) stated: “Gameplay comes first!” For example, a fast-paced military shooter demands an instantaneous response while a puzzle game may not. How responsive an action needs to be will impact the timing and the amount of anticipation in an animation (Masters; Sanders). Nothing is more frustrating to a player than pressing the attack button and having the character slowly draw their sword. Due to this, animations not only need to be responsive but also fun and engaging (Dahl, 2015).

Another major aspect to consider is the style the game is trying to achieve. For example, many AAA game companies try to create characters and environments with a high level of realism (Micu, 2013). However, this can be extremely time consuming to create manually, so methods such as motion capture are used to assist animation (Masters, 2013).

The level of interaction in the gameplay is also important to consider (Sanders, 2015). If the level of interaction is high, this can be a huge strain on the animators. Dahl (2015) states that it is better to use short cycles than long sequences. Cutting down on the variety and length of animations can be achieved by asking the following questions: Can the player use or interact with a wide variety of things? Do these interactions require unique animations? Can we blend or layer animations to achieve this? What can be reused?
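To illustrate the idea of blending animations, the sketch below lerps between two poses. This is a deliberately simplified, hypothetical example (real engines blend full joint transforms, typically interpolating rotations as quaternions); here each pose is just a dictionary of joint angles.

```python
def blend_poses(pose_a, pose_b, alpha):
    """Linearly blend two poses (joint name -> angle in degrees).

    alpha = 0.0 returns pose_a, alpha = 1.0 returns pose_b.
    A simplified illustration of how engines mix two animation clips.
    """
    return {joint: (1 - alpha) * pose_a[joint] + alpha * pose_b[joint]
            for joint in pose_a}

# Halfway between a straight leg and a fully bent knee:
mid_pose = blend_poses({"knee": 0.0}, {"knee": 90.0}, 0.5)
```

Reusing two short cycles and blending between them like this is exactly why short cycles are cheaper than authoring long bespoke sequences.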

Additional constraints include: platform, poly-count, software, engine, real-time rendering, processing power, programming, application of physics, and, of course, the triple constraints of time, money and quality.


Once the constraints have been taken into consideration, the direction of the animation will need to be decided. The type of game, the desired style and the constraints will all factor into the animation direction (Dahl, 2015). A good example of two games with varying art direction is the current pair of Fallout games: Fallout 4 and Fallout Shelter. Both are set in the same world with the same lore. However, Fallout 4 is a high-powered PC and console game with realistic 3D while Fallout Shelter is a 2.5D mobile game with a cartoonish style. As games they have very different goals and different constraints dictated by gameplay and platform. These differences make for very different animation styles, but both styles of animation suit their respective game.

By now, the direction and style of the animation should be clear. From here, a comprehensive list can be written up. This list should break down all the actions that need to be animated into their respective segments. Each segment should be correctly named according to a naming convention and should be categorized as looping or forward. This will aid both the animators and the game programmers.
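As an illustration, such a breakdown list can be kept in a simple structured form. The clip names and the `<character>_<action>_<variant>` convention below are hypothetical, not from any particular studio:

```python
# Hypothetical animation breakdown list: each entry names a clip using a
# <character>_<action>_<variant> convention and flags whether it loops
# (e.g. idle, run) or plays forward once (e.g. attack, death).
ANIMATION_BREAKDOWN = [
    {"name": "hero_idle_01", "type": "loop"},
    {"name": "hero_run_01", "type": "loop"},
    {"name": "hero_attack_light_01", "type": "forward"},
    {"name": "hero_death_01", "type": "forward"},
]

def looping_clips(breakdown):
    """Return the names of all clips that should be set to loop in-engine."""
    return [clip["name"] for clip in breakdown if clip["type"] == "loop"]
```

Keeping the list in one machine-readable place means the animators and the programmers are always working from the same clip names.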

To do this, consider the goals for the animation: what is it trying to achieve? What is its purpose? What are the gameplay constraints or demands? How long does it need to be? For example, a ‘tank’ character will have slower attacks (longer animations) that exaggerate weight and force to demonstrate strength (Dahl, 2015).

The next stage is animation. This is an iterative process, meaning that it is repeated several times, each cycle improving on the last and bringing the animation closer to the final result. Working from the breakdown list, animators will find or create reference images and videos (Dahl, 2015). From this the animation is blocked out “quick and dirty”. Animation may be hand keyed in programs such as Maya or Max, or it may first come from motion capture footage and be adjusted to suit. As I have not used motion capture, I typically use stepped keys when blocking as I find that it helps with creating dynamic poses. The animation is then exported and moved into the game engine, in this case Unreal. Any bugs or major issues are fixed. Then the animation is implemented and tested. In this stage the timing, responsiveness and general feel are examined and the animation is tested from all angles. It is then reviewed (this may be a group process) and the next iteration begins. This may require going back to reference or re-shooting motion capture footage. This process is repeated until the animation is finalized and ready to be implemented (Dahl, 2015).

As mentioned above, you will be exporting the animated model many times throughout the multiple animation iterations. This is done by simply exporting the model as an FBX file with the following settings ticked: Smoothing Groups, Triangulate, Animation, Baked Animation, Deformed Model, Skins and Blend Shapes (Epic Games, 2015). This will allow the character to be correctly and easily imported into the Unreal Engine.
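Because the model is exported many times across iterations, it helps to record those export settings in one place and sanity-check them before each export. The helper below is an illustrative sketch of that checklist, not actual Maya or Unreal API code:

```python
# The FBX export options recommended for Unreal, as listed above.
REQUIRED_FBX_OPTIONS = {
    "Smoothing Groups", "Triangulate", "Animation", "Baked Animation",
    "Deformed Model", "Skins", "Blend Shapes",
}

def missing_options(enabled):
    """Return any required FBX export options that are not enabled.

    An empty result means the export checklist is satisfied.
    """
    return REQUIRED_FBX_OPTIONS - set(enabled)

# Example: an export with only two boxes ticked is flagged as incomplete.
gaps = missing_options({"Triangulate", "Animation"})
```

Running a check like this before each export iteration catches the classic mistake of re-exporting with a box accidentally unticked.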

As demonstrated on the Epic Games (2015) website, to import the animated model into the Unreal Engine you simply import the FBX with the above settings. Fully implementing the model as a controllable character with blend spaces and particle effects requires the use of Blueprints, Unreal’s node-based alternative to traditional scripting (Epic Games, 2015). As this is a rather large topic in its own right, I plan to research and write it up separately.


Autodesk. (2014). Export a Scene to Unreal Engine 4. Retrieved from

Dahl, T. (2015). Action: The Animator’s Process [Video]. Retrieved from

Epic Games. (2015). Creating a Blend Space. Retrieved from

Epic Games. (2015). FBX Best Practices. Retrieved from

Epic Games. (2015). FBX Animation Pipeline. Retrieved from

Epic Games. (2015). Setting Up a Character. Retrieved from

Masters, M. (2011). From the 80s to Now: The Evolution of Animation in Video Games. Retrieved from

Masters, M. (2013). How Animation for Games is Different from Animation for Movies. Retrieved from

Micu, V. (2013). Jonathan Cooper on Taking Chances, Being Pushed Out of Your Comfort Zone, And Assassin’s Creed III. Retrieved from

Sanders, A. (2015). Animating for Video Games vs. Animating for Movies. Retrieved from

Skyrim Screenshot [Image]. (2012). Retrieved from

Wyatt, D. (2015). The Art of Cutscenes. Retrieved from

Game Character Production Pipeline (Part 1)

With my specializations, I will be combining the modelling and animation projects into a single production pipeline for a game character. Therefore, I have researched the most common industry practices regarding this pipeline.

Through my research, I have tried to piece together how the production pipeline would work. For the specialization project, I will be working alone. This means my production pipeline will be straight-forward and look like the one above.


However, depending on the level of detail, time frame and team size, different people can be working on different things simultaneously in order to save time and be efficient.

Before production on the game character begins, the art director should work with the game designers to create an art bible, write some lore, define character abilities, etc., in order to help define what the characters should look like and how they fit into the game world. From here, concept art for the character can be created. When creating concepts, the priority is speed and quantity in order to explore a variety of different ideas and looks (Anhut, 2014).

According to Anhut (2014), there are some common misconceptions about concept art. A lot of art labeled as “concept art” is actually created after the final character design has been finalized, for promotion and marketing. This confusion between actual concept art and promo art can cause workflow and time issues, as the concept artists are forced to create “publishable” concept art (Anhut, 2014). For this reason, it is essential to create concept art quickly in order to design interesting characters that suit the game.

When the character design has been defined, a turnaround sheet is created. This image should be suitable for modelling: the character’s shapes and outlines should be clear, with enough detail to model from but no unnecessary lighting, line-work or coloring. The turnaround sheet will be brought into the modelling program and used as a reference.

To begin the character modelling, a base mesh is created. Usually this is built out of ‘primitives’ and adjusted so that it has the basic shape of the character with the lowest amount of detail possible (Ward, 2013).

From there, detail is added to the base mesh in order to create the hi-res (and high-poly) version of the model. There are two common ways to add this detail; the method you choose will depend on your skill set, your familiarity with different software, and the software and tools you have access to. One way is through subdivision surface modelling in Max, Maya or Blender (Ward, 2013). Ward (2013) states that this method is really efficient as the base model, the hi-res model and the retopologizing can all be handled in a single program. An alternative is to use sculpting programs such as ZBrush. This method can achieve an extremely high level of detail but may make for more complex topology (Antonio, 2010). From my research, it seems that both approaches are equally popular.

Once the hi-res version is complete, it is saved as a separate file.

The model is then retopologized. This is the process of simplifying a model and removing excess geometry (Ward, 2013). For example, if a shirt was added on top of the torso, the ‘skin’ beneath the shirt can be removed. There are multiple plugins and external tools that do this and can help with workflow. During this process, it is important to test the normal map (which will be generated from the hi-res model). If the topology has been simplified or changed too much, the normal map will not work (Ward, 2013). In addition to this, it is important to check that the joints can still deform correctly. Once this stage is complete, the game-ready model is finished.

At this stage the final game model is unwrapped (Ward, 2013). This is done by adding seams and relaxing the UV shells. This is pretty standard, and how you go about it will mostly depend on symmetry and level of detail.

Using the UV map, the model can be textured according to the character design. This can be one of the most time-consuming parts of the pipeline, depending on your level of detail.

Using the hi-res model, we can generate normal, specular, crevice and AO maps and bake them onto the low-res model (Ward, 2013). This allows detail to be ‘added’ to the model without the topology being adjusted. Once again, different games may require different maps.
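Since several maps get baked per character, a consistent file-naming scheme saves confusion later when hooking them up to materials. The suffix convention below is a hypothetical example, not a standard:

```python
# Hypothetical suffix convention for the baked texture maps mentioned above.
MAP_SUFFIXES = {
    "normal": "_N",
    "specular": "_S",
    "ambient_occlusion": "_AO",
    "crevice": "_CAV",
}

def texture_filename(character, map_type, ext="tga"):
    """Build a texture file name like 'heroine_N.tga' for a given map type."""
    return f"{character}{MAP_SUFFIXES[map_type]}.{ext}"
```

With a scheme like this, everyone on the team can tell at a glance which baked map a file contains and which character it belongs to.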

Depending on the materials of the character, shaders can then be applied to great effect.

The next stage is building the character’s skeleton out of bones in Max or Maya (Ward, 2013). This can get very complicated, so a simplified version may be good enough for a game character. Depending on your team, this can be started by another team member once the low-poly model is complete in order to increase the efficiency of the workflow.

So finally the character should be modeled and movable. In order to ensure that the mesh does not break during animation, the model must be skinned to the skeleton (Ward, 2013). This can be handled in Max or with “Paint Skin Weights” in Maya (Ward, 2013).
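The core rule that skinning tools enforce is that each vertex's bone weights must sum to 1, so the vertex is fully (and only fully) accounted for by its influencing bones. A minimal sketch of that normalization, purely to illustrate the rule:

```python
def normalize_weights(weights, tolerance=1e-6):
    """Scale a vertex's per-bone weights so they sum to 1.

    `weights` maps bone name -> raw influence. Tools like Maya's
    'Paint Skin Weights' do this automatically; this sketch only
    illustrates the underlying constraint.
    """
    total = sum(weights.values())
    if total < tolerance:
        raise ValueError("vertex has no influence from any bone")
    return {bone: w / total for bone, w in weights.items()}

# A vertex influenced by two bones, painted with raw values:
vertex = normalize_weights({"spine": 0.5, "hip": 1.5})
```

If the weights did not sum to 1, the vertex would drift away from the surface as the bones move, which is exactly the "mesh breaking" problem skinning is meant to prevent.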

The last step before animation is to set up an animation-friendly rig. This is done by setting up controls and FK/IK targets for the limbs and joints (Ward, 2013). Additionally, depending on the game and the level of detail, a full facial rig may be added. At this stage the game character should be fully ready for animation and implementation.

Referring back to the workflow chart, it can be seen how much time can be saved by having different people work on different things simultaneously. This can help streamline a project by using time efficiently and by uncovering issues earlier rather than later.


Alchemist Model [Image]. (2014). Retrieved from

Anhut, A. (2014). Let’s Get Real About Concept Art. Retrieved 2nd October, 2015, from

Antonio, L. (2010). Character Creation for Videogames. Retrieved 2nd October, 2015, from

Crimson Viper [Image]. (2009). Retrieved from

Diamant, R. & Simantov, J. (2011). Uncharted 2: Character Pipeline. Retrieved from

Fisher, A. (2013). Create a Game Character. Retrieved 2nd October, 2015, from

Michelle, L. (2011). Female Character for Games. Retrieved 2nd October, 2015, from

Simantov, J. & Yates, J. (2011). Uncharted Animation Workflow. Retrieved 2nd October, 2015, from

Street Fighter Concepts [Image]. (2009). Retrieved from

Ward, A. (2011). How to create character models for games: 18 top tips. Retrieved from

Ward, A. (2013). Game Character Creation Series. Retrieved 2nd October, 2015, from