I have fixed up the last couple of issues with my game (exit button, reset button, horrible lighting and bloom) and have put a playable build up online.
I chose to integrate sound with my run animation inside the Unreal engine. I did this through the Blueprint and notify systems. Firstly, I grabbed a couple of free sound effects and imported them into my project (quickly learning that a whole bunch of file types are not supported).
Then I dropped in some background ambience. This was very easy but required a blueprint setup because I wanted it to loop.
I then opened up my run animation, worked out where the sound should be played and set a ‘notify’ on the correct frame. This notify allowed me to attach a sound file (in this case a footstep) to the animation frame where she makes contact with the ground.
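For anyone curious what a notify actually does, the logic boils down to something like this. This is a rough Python sketch of the idea, not actual Unreal code — the engine's AnimNotify system handles all of this for you, and the times here are made up:

```python
# Sketch of how an animation notify fires: a sound event triggers when
# playback crosses the notify's time. Handles the loop wrap-around of a
# looping run cycle. Hypothetical values; Unreal does this internally.

def fired_notifies(notify_times, prev_time, curr_time, anim_length):
    """Return the notify times crossed between two playback samples."""
    if curr_time >= prev_time:
        return [t for t in notify_times if prev_time < t <= curr_time]
    # Playback wrapped around the loop point.
    return [t for t in notify_times if t > prev_time or t <= curr_time]

# A 1-second run cycle with a footstep notify at each ground contact.
footsteps = [0.25, 0.75]
print(fired_notifies(footsteps, 0.2, 0.3, 1.0))  # the 0.25 footstep fires
```

The point is that the notify is tied to the animation's timeline, so the footstep always lands on the frame where the foot touches the ground, no matter how the animation is blended or scaled.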
In addition to this, I added a gong sound effect to the beacons activating. This was a little more complicated and required using Blueprints (Unreal’s node based scripting) to get it working properly.
Finally, I made a musical sound play once all the beacons had been found. Again, this was much more complicated. Ben and I went through a couple of tutorials to work out how exactly that could be done.
These sound effects are not perfect at all but they do help make the scene feel complete.
Having finally finished my character and created some very rough, very basic animations, I began the process of setting everything up in Unreal. As I hope to work on games in the future, and because the game industry in Australia requires you to be an all-rounder, I decided to start from scratch and set up a character without using any of the templates. To do this I followed a comprehensive tutorial on the Unreal site.
Through this tutorial I learnt how to set up a character, a camera, controls and animation using Blueprints (Unreal’s version of scripting). In hindsight I should have set up a different type of camera as the one I currently have highlights the lack of strafing animation. I did look into changing this but it required much more complicated Blueprint work and I currently lack the time to learn this.
The animation system is really cool and allows you to smoothly blend between different animations depending on the player’s speed or direction. As I just had a shitty idle animation and a basic run, my blend space was really simple.
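At its simplest, a one-dimensional blend space is just a weighting function over the player's speed. Here is a minimal Python sketch of the idea — the threshold value is made up, and in practice Unreal computes the weights for you inside the Blend Space asset:

```python
# Rough sketch of what a 1D blend space does: weight idle vs run
# animations by movement speed. run_speed is a hypothetical value.

def blend_weights(speed, run_speed=600.0):
    """Return (idle_weight, run_weight) for a given movement speed."""
    t = max(0.0, min(1.0, speed / run_speed))  # clamp to [0, 1]
    return (1.0 - t, t)

print(blend_weights(0.0))    # standing still: all idle
print(blend_weights(300.0))  # half speed: even blend of idle and run
print(blend_weights(600.0))  # full speed: all run
```

With more animations (walk, strafe, jump) the blend space just gains more sample points and a second axis, but the principle is the same.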
While this was all relatively straightforward, it was time consuming and fiddly. For example, after everything was finally set up, my animations would not work. I checked and double checked everything but nothing fixed the issue. Finally I found a single node that had not been linked up.
Once I had a moving character, I decided to create an environment for her to move through. At first I planned to use just a basic plane but decided a small labyrinth would be much cooler. I quickly created a labyrinth in 3DsMax and imported it into the game.
I then set about creating collision boxes for all the walls and the floor. This was quite fiddly and took some time as I had to individually place and size each collision box. Additionally, the camera seemed to interact weirdly with the collision boxes (like shaking when the player went too close to a wall).
This was due to the collision boxes being slightly larger than the walls. I was then informed that, with a single click, you can use an object's faces as its collision mesh, though this is much heavier when playing the game. As my game is very light to run, I decided to use this method. This basically fixed the issues with the camera.
Finally, I was able to run around the maze.
At this point I decided that the game was too empty and I needed something for players to interact with. Using the shapes (Unreal's equivalent of primitives), I placed several pyramids around the maze as something for the player to find.
I wanted a glowy particle effect to be triggered when the player got close to a pyramid. To do this, I first created my own particle effect. Using the default fire as a base, I stripped out all the emitters except for the embers and tweaked the settings until I was happy with it.
I then set up a collision box as a trigger around the pyramid. I set it up so that once the player tripped the trigger, the particles would activate and the sound of a gong would play.
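The Blueprint behind each beacon is essentially an overlap event with a "has this already been found?" check, so the gong only plays once. Here is a rough Python sketch of that logic (the names are made up — in the game this is all nodes, not code):

```python
# Sketch of the beacon trigger logic: first overlap activates the
# particles and plays the gong; later overlaps do nothing.

class Beacon:
    def __init__(self):
        self.active = False

    def on_player_overlap(self, play_sound, activate_particles):
        if self.active:
            return  # already found: ignore repeat overlaps
        self.active = True
        activate_particles()
        play_sound("gong")

sounds = []
beacon = Beacon()
beacon.on_player_overlap(sounds.append, lambda: None)
beacon.on_player_overlap(sounds.append, lambda: None)  # ignored
print(sounds)  # the gong played exactly once
```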
Finally, I wanted to include some UI elements into the game to give it some sort of objective. This was done almost entirely through the blueprint system. Below is the final UI setup I have used in the game.
Firstly, I added a ‘counter’ that essentially counted how many beacons were active out of the total number of beacons. (The UI and interactables are set up in a way so that I can add more beacons and the UI and end state will adjust to suit). This required some complex blueprint work.
Then, with Ben’s help, I added some end game text. It simply says “All pyramids found!”. The trick was to have it hidden until the number of beacons activated matched the total number of beacons. Again, this was a little more complicated.
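Distilled down, the counter and the end-game text are driven by the same two numbers: beacons found and total beacons. A small Python sketch of that logic (hypothetical names — in the game this lives in the UI Blueprint):

```python
# Sketch of the UI logic: the counter text and end-text visibility are
# derived from the beacon list, so adding more beacons needs no UI changes.

def ui_state(beacons):
    """beacons is a list of booleans: True if that beacon is active."""
    found = sum(1 for b in beacons if b)
    total = len(beacons)
    return {
        "counter": f"{found} / {total}",
        "show_end_text": total > 0 and found == total,
    }

print(ui_state([True, False, False]))  # counter reads 1 / 3, text hidden
print(ui_state([True, True, True]))    # all found: end text shows
```

Deriving everything from the beacon list is what makes the setup scale: dropping a fourth pyramid into the level automatically updates the counter and the end condition.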
Lastly, I set up a timer so that the player can see how long it took them to complete the game. Once again I used blueprints to complete this.
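The timer itself is just the elapsed game time formatted for display. A minimal sketch of that formatting, assuming the engine hands you seconds since the level started:

```python
# Sketch of the timer display: elapsed seconds -> "MM:SS" string.

def format_timer(elapsed_seconds):
    minutes, seconds = divmod(int(elapsed_seconds), 60)
    return f"{minutes:02d}:{seconds:02d}"

print(format_timer(0))     # 00:00
print(format_timer(83.7))  # 01:23
```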
With all of these elements in, I had created an extremely basic game. It is not perfect and not even a vertical slice of a real game. However, I have achieved my goal: complete a game character production pipeline from design to implementation.
Below is a video of the gameplay as it currently is. I need to still add in a couple of things like a start menu, exit button and a skybox.
Through the course of my game character project I have kept the following technical specifications in mind so that my assets work efficiently in the Unreal game engine.
Throughout my entire game character project I have been mindful of how my assets will import and run in a game engine. For this reason I have kept everything relatively low poly. My character is around 3000 polys, which I should have reduced further, especially in the eyes and horns.
I had a couple of issues with importing my labyrinth file into the Unreal engine as I had forgotten to reset my X-forms. Once I had worked this out and fixed it everything went smoothly but it is something I will need to remember for the future.
Once my character was finished and had a basic run and idle animation, I exported it into the Unreal engine. I did this by exporting the two animations as separate FBX files with the animation baked in. This worked well when I imported the animations: when importing FBX files Unreal asks you if it is a skeletal mesh and, if so, whether it shares a skeleton with one already in the project.
The Unreal engine has the ability to update assets currently in the scene with a newly imported asset. All that is required is for it to have the same file name (you can also do it manually if you have differently named files). For this reason I kept a standard naming convention for my files allowing me to quickly and easily update my textures and animations.
So, after finishing the rig, I began the horrible next step: skinning. I hate skinning because it is fiddly, tedious and never seems to work for me.
However, Steve told me about the Geodesic Voxel Binding and Heatmap tools in 3DsMax 2016. These tools are not included in the main download of Max but you can get them if you install Service Pack 1 and then Extension 1.
The Geodesic voxel binding is magic and literally saved me an hour or so of correcting vertex weights. It is included inside the Skin modifier. That is, you simply add a Skin modifier and add the bones (as you normally would). With no settings adjusted, the modifier will attempt to mold the mesh to the bones but is usually very inaccurate.
To use the Geodesic voxel binding you scroll down to Weight Properties, select Voxel and click the little “…” box on the right. Another dialogue box should appear. With this you can control the falloff of the binding and the maximum number of bones that can have an influence on any given vertex (if you leave it at 0 the program will work it out for you). You can also adjust the “accuracy” of the binding, 64 being the lowest.
As you can see it has already fixed one of the biggest issues. Of course, it still needs much adjusting. I fiddled around with the voxel binding settings until I found something that worked and applied that to my model at a high resolution. From there I adjusted the individual vertex weights in the problem areas until I had fixed all the issues.
Below is a short video of my skinned model:
After uploading this, I noticed an issue with the belt and the hip moving inwards and clipping through the body. I have since fixed these problems.
I am continuing to work through the production pipeline of creating a game character. Having finished the modelling stage, I moved onto the next step: rigging.
Initially I had hoped to create a skeleton in Maya and then create a rig. However, as I have not used Maya before, this proved extremely difficult and frustrating, and I was running out of time. So, in order to be able to finish this pipeline this Trimester, I have instead created a custom CAT in 3DsMax. This was much more efficient as I have worked with CAT rigs before.
I began by adding a CAT parent. Then I added a hub bone (the pelvis) and some legs. The CAT system is really effective because if you create one leg in full you can simply copy and “paste mirror” for the other leg.
I then continued to add bones for the rest of the body, including all the finger bones and an additional bone for her bag. I made sure to colour the bones in a way that makes it easy to see what is what: the left side is pink, the right side is green, central bones are blue and the bag is yellow.
From here, I added up-nodes, gizmos and IK targets to allow for easy animation. Again, I made sure to keep the same colour system. I like to make sure that the controllers are larger than the model, so that there will be no issues when animating (such as being unable to find a finger gizmo).
I created extra gizmos around the knee and elbow up-nodes, so they are easy to see and grab. Additionally, I used squares for the knees and circles for the elbows, as I have had issues in the past when they get mixed up.
The history of video game graphics is relatively short, spanning just 57 years (Brown, 2015). Compared to film or photography, games are still in their infancy. However, video games have dramatically developed in this short amount of time, both graphically and as a form of entertainment.
They have developed from simple mechanics displayed with moving light…
…to photorealistic 3D characters and worlds.
Because of the medium itself, video game art is constrained by graphical capacity and hardware limitations. Due to this, game art tends to change and adapt with technological advancements. However, some styles maintain popularity over time.
To see how video game art has adapted and changed over time, and how this influences current practices, we must go back to the beginning.
In the early days of video games, the graphics were extremely limited. During the early 1970’s, games were limited to simple shapes and a polar palette of black and white (Brown, 2015; C.L., 2011).
The ‘art’ was merely a representation of the vague narrative given to the game’s mechanics. In this sense, game art of this era was about maximum communication with minimum graphics.
The limitations of black and white prevented detailed game art at this time. It wasn’t till the late 70’s that development of arcade hardware allowed for colour (Brown, 2015; C.L., 2011).
This allowed for multi-coloured sprites, providing artists with a larger range of tools and allowing for more detailed games. By the 1980’s, coloured pixel graphics were considered the norm (Brown, 2015). Although vector graphics were used for some games, the ability of pixels to render complex scenes with detailed, filled shapes secured their dominance (Brown, 2015).
During the 80’s, the majority of video games were using 2D coloured sprites to depict characters and enemies (Brown, 2015). As the hardware developed beyond 8-bit, so too did the complexity of the graphics. However, game art was still about working with or around the limitations of the hardware to convey enough information to the player (Brown, 2015). Due to this, characters were created with simple, bold designs and limited movement (Cobbett, 2009). Characters often had only a few sets of animation with little to no follow through or anticipation.
Over the course of the 80’s, more colours became available to artists and sprites became more detailed and complex (C.L., 2011). As hardware capabilities increased, games were able to have more detailed environments and backgrounds (Brown, 2015). This allowed artists to develop complete worlds with distinct aesthetics.
Towards the end of the 80’s and through the early 90’s, some developers began the awkward transition into 3D graphics. In these early days, 3D graphics were limited to wireframe rendering (Cobbett, 2009). Much like the early days of games, artists were forced to reduce complexity and favour communication through simple forms.
Graphic capability eventually improved beyond wireframe, allowing 3D models to have flat shading, but it was considered ugly compared to the detailed 2D graphics at the time.
Similarly, during the 90’s, 2D games were experimenting with multimedia technology like digitised sprites and full motion video (Cobbett, 2009). Digitised sprites were considered a new wave by some and became popular thanks to games like Mortal Kombat (Brown, 2015). However, full motion video, due to compression and resolution limitations, was quickly dropped.
Despite its general ugliness, 3D was quickly becoming popular but the hardware was not up to scratch (Brown, 2015). To compensate, many games incorporated 2D sprites in a 3D world.
This paved the way for the next era of game art.
From the mid to late 90’s, hardware developed enough to allow fully 3D games to be made. This provided another change and challenge for artists: characters, animations and environments had to look good from all angles and at extremely low poly counts (Brown, 2015).
Another consideration that artists and animators had to make was the reactivity and speed of the animations (Brown, 2015). As fast-paced first person shooters rose in popularity, consumers came to expect believable yet fast actions. Artists had to trade believability and anticipation for responsiveness.
As 3D games became dominant, two distinct streams emerged: realistic and stylised (Brown, 2015).
Regardless of aesthetic, realism is often touted as the best (Brown, 2015). This is not always true. In fact, looking back at older games, those with ‘realistic’ graphics (for the time) feel outdated and often fall into the uncanny valley.
In reaction to this, a lot of games were created with stylised graphics. This was often done through the use of cel shading and stylised or ‘cartoony’ characters.
Currently, we can produce extreme realism in terms of visuals, lighting and physics.
While video game art is still bound by graphical and hardware limitations, it is no longer forced to aim for maximum communication with minimum visuals (Cobbett, 2009).
So how does this long, detailed and well researched history influence the current practices for creating realistic video game art?
Well, as mentioned before, video games do not have the same limitations that they once had. For realistic games, we face a new issue. Games need to feel realistic: players will expect everything from reload animations and dynamic grass simulation to varied action, hit and death animations. Additionally, they will want reactivity and speed, which sometimes opposes the realism.
Assassin’s Creed games are renowned for detailed and varied parkour movement.
While this is achievable it might be well out of scope, forcing the artists and developers to find ways to cheat or work around this. This has changed the way the art and animation is created within the industry.
One method that artists use to create diverse environments quickly and efficiently, is through modular development of assets:
And even facial motion capture for subtle expressions.
While polygon count is still an issue, it is no longer the major limiting factor. Models too detailed to be featured in the game can be baked out as a normal map and projected onto a lower poly model (Ward, 2013).
Many current games prefer to employ a stylised aesthetic. This might be due to a multitude of factors:
The current popularity in indie or ‘retro-like’ games has seen a rise in 2D stylised graphics.
Current technology allows these sorts of games to run at lightning-fast speeds, giving them a competitive edge over their realistic peers.
Additionally, lessons from the history of games allow these to be created with a high degree of fidelity and a modern understanding of game design (Brown, 2015).
Similarly, some games break the mould and experiment with new forms of stylisation.
This is an exciting era of video games. The indie development scene currently gains as much attention as AAA titles and there is a balance between realistic and stylised games.
I don’t know where video game art will venture to next but I am happy to be along for the journey.
Brown, S. (2015). A Brief History of Graphics [Video]. Retrieved from
C.L. (2011). The Colourful History of Video Games. Retrieved from
Cobbett, R. (2009). The Evolution of Gaming Graphics. Retrieved from
Dahl, T. (2015). Action: The Animator’s Process [Video]. Retrieved from
Masters, M. (2014). From the 80’s to Now: The Evolution of Animation in Video Games. Retrieved from
Moss, R. (2015). Lucas Pope and the rise of the 1-bit ‘dither punk’ aesthetic. Retrieved from
Ninja Theory. (2015). Hellblade Development Diary 17: A New Body [Video]. Retrieved from
Ward, A. (2011). How to create character models for games: 18 top tips. Retrieved from
Ward, A. (2013). Game Character Creation Series. Retrieved 2nd October, 2015, from
I had previously created the hair for my character out of splines and a hair modifier. However, this was not working so well and it would not export to Maya (or Unreal). So I redid her hair in polygons.
This was done very quickly and is quite basic. I would love to have time to fix this but I really don’t think I will be able to. However, it is working at the moment.
Having finished my model, I began the unwrapping process. I chose to unwrap symmetrically as this will save time during texturing and I am running out of time…
I didn’t really have any issues doing this, except for the hands. The hands were extremely fiddly and I think this was because of my shitty hand modelling. Otherwise, it was all pretty easy to unwrap. Below is the unwrapped character with the checkerboard texture applied:
Today I added the last details and finalised my model. Firstly, I had to do something about her face – it just looked wrong and a little scary. The tutorial I followed to create her face was for a realistic male model; I think this is why she looks so odd here.
From here I made several adjustments to the existing model: I made the nose thinner, moved the corners of the mouth up, moved the eyes in a bit and adjusted their shape and adjusted her jaw line. Finally, and most crucially, I added eyelashes and eyebrows.
Next, I created her horns. This took me quite a long time and a lot of messing around. Luckily, Steve showed me how to use the “Extend along Spline” tool in class. This worked out quite well and was easy to use.
Lastly, I worked on her hair. For this I tried several different methods: extending along a spline, box modelling the strands and rendering splines as polys. None of this worked well. Finally, I found a tutorial on using splines and the Hair and Fur modifier. To begin with I created some splines:
It still needs some work: the hair still seems to clip through the head a little, it needs a couple more splines and I need to adjust the settings so it is not so stringy. However, I like how it is working at the moment and think that I will definitely use this method. Additionally, I want to use some nice hair shaders and materials.
From here I can finally start the unwrapping, texturing and rigging stages.
Using my model sheet (below), I began to model my character.
Before I began, I looked at several different tutorials on how to model a character. Because of these tutorials, I started by using cylinders for the torso, arms and legs. This was new for me as I am used to box modelling and I found that it worked out much better. I will definitely be using this technique in the future.
At this stage I had finished the torso (with smoothing groups) and was at the point of connecting the arms to the shoulders. I was using symmetry mode at this point (and through most of the process).
From there I continued, adding a waist and legs.
This was the final body mesh minus the hands, feet and head. At this stage I went back and fixed up the topology of the knee, wrist and elbow joints.
After this, I began modelling the hands. In the tutorials, they started by modelling the hands separately and attaching them after.
In my opinion, the hands worked out OK considering I have never modeled hands before. Again, I adjusted the topology to give the knuckles proper joints.
Finally, I began to work on the face and head of the character. I have never modeled a character’s face in such detail before so it took me much, much longer than expected. Unfortunately, my model sheet was lacking detail in the face which definitely hindered my workflow. However, I followed an excellent tutorial which helped me a whole lot. The tutorial began the face with several cylinder caps which I adjusted to suit.
After literally hours, I had finished the head. She still looks scary / horrifying. I am not sure if this is the lack of hair (which I will be completing later), lack of eyebrows or simply my inexperienced and fumbled attempt at modeling a face.
Taylor, J. (2013). Maya Character modeling tutorial, part 2 – Hands and Feet [Video]. Retrieved from
Taylor, J. (2014). Maya HEAD MODELING for ANIMATION tutorial [Video]. Retrieved from
Taylor, J. (2015). MAYA 2016 FEMALE BODY character modeling tutorial [Video]. Retrieved from
Ward, A. (2013). Game Character Creation Series. Retrieved 2nd October, 2015, from
For my game character I plan on animating four basic actions: idle, walk, run and jump. These animations are the minimum requirement for most video games and would also fulfill the brief. From these actions I created my animation breakdown list (which can be found in my student folder):
For my game character production pipeline, I plan on creating a character that would suit a stylized RPG. I wanted to design an earth-magic user so I could use earthy tones and green particle effects (for the magic). Below are my concepts for Tahlia:
I realize that this is a basic ass character design and the horns don’t work unless she is a full satyr etc. but I was mainly inspired by the comic book series Saga and how Fiona Staples creates humanoid characters with animal features:
I usually find that model sheets do not include a back view of the character. However, I have included one as I find it helpful, especially for the placement of muscles. From here I will begin the process of modelling.
Marcotte, J and Staples, F. (2014). Interview: Saga Artist Fiona Staples.
This blog continues from Part 1.
As covered in the last research blog, the pipeline for a game character can be non-linear in order to increase efficiency. This is true for the animation side of things as many aspects can be worked on before and during the modelling of the character.
On a quick side note, I will briefly discuss the differences between animation for film and for games. Animation for video games is quite different from animation for film because games are an interactive form of entertainment as opposed to a passive one (Sanders, 2015). The animation itself is meant to be interacted with, not just viewed. In addition, the camera is not locked down and directed as it is in a film (Masters, 2013). This means that the animation must look good and the curves must be smooth from all possible angles (Masters, 2013; Sanders, 2015). Additionally, the transitions between every possible action combination must be considered. This is quite different from film, in which animators can ‘hide’ certain aspects of the animation or ‘cheat’ (for example, by breaking the rig in a way which looks good from a particular angle). Of course, certain parts of a game, such as cut scenes, might be passive, and in larger studios these animations are handled by a separate “cinematic” department (Dahl, 2015).
Additionally, video game animation tends to be heavily focused on body mechanics due to the media itself (Masters, 2015).
There are a variety of aspects that need to be considered before planning the gameplay animation. These considerations range from the type of game you are creating to the constraints involved with the project. For example, the animation process will be extremely different if the game is 2D as opposed to 3D. Additionally, animations will vary with the type of camera used – third person will differ from isometric (Masters, 2013).
One really important consideration is the responsiveness of the gameplay (Masters, 2013). As Tobias Dahl (2015) stated: “Gameplay comes first!” For example, a fast-paced military shooter demands an instantaneous response while a puzzle game may not. How responsive an action needs to be will impact the timing and the amount of anticipation in an animation (Masters, 2013; Sanders, 2015). Nothing is more frustrating to a player than pressing the attack button and having the character slowly draw their sword. Due to this, animations not only need to be responsive but also fun and engaging (Dahl, 2015).
Another major aspect to consider is the style the game is trying to achieve. For example, many AAA game companies try to create characters and environments with a high level of realism (Micu, 2013). However, this can be extremely time consuming to create manually, so methods such as motion capture are used to assist animation (Masters, 2013).
The level of interaction in the gameplay is also important to consider (Sanders, 2015). If the level of interaction is high this can be a huge strain on the animators. Dahl (2015) states that it is better to use short cycles than long sequences. Cutting down on the variety and length of animations can be achieved through the following considerations: Can the player use / interact with a wide variety of things? Do these interactions require unique animations? Can we blend or layer animations to achieve this? What can be reused?
Additional constraints include: platform, poly-count, software, engine, real-time rendering, processing power, programming, application of physics, and, of course, the triple constraints of time, money and quality.
Once the constraints have been taken into consideration, the direction of the animation will need to be considered. The type of game, desirable style and constraints will all factor into the animation direction (Dahl, 2015). A good example of two games with varying art direction are the current Fallout games: Fallout 4 and Fallout Shelter. Both are set in the same world with the same lore. However, Fallout 4 is a high-powered PC and console game with realistic 3D while Fallout Shelter is a 2.5D mobile game with a cartoonish style. As games they have very different goals and different constraints dictated by gameplay and platform. These differences make for very different animation styles but both styles of animation suit their respective game.
By now, the direction and style of the animation should be clear. From here, a comprehensive list can be written up. This list should break down all actions that need to be animated into their respective segments. Each segment will be correctly named according to a naming convention and should be categorized as looping or forward. This will aid both the animators and game programmers.
To do this consider the goals for the animation: what is it trying to achieve? What is its purpose? What are the gameplay constraints or demands? How long does it need to be? For example, a ‘tank’ character will have slower attacks (longer animations) that exaggerate weight and force to demonstrate strength. (Dahl, 2015).
The next stage is animation. This is an iterative process, meaning that it is repeated several times, each cycle improving on the last and bringing the animation closer to the final result. Going from the breakdown list, animators will find or create reference images and videos (Dahl, 2015). From this the animation is blocked out “quick and dirty”. Animation may be hand keyed in programs such as Maya or Max, or it may first come from motion capture footage and be adjusted to suit. As I have not used motion capture, I typically use stepped keys when blocking as I find that it helps with hitting dynamic poses. It is then exported and moved into the game engine, in this case Unreal. Any bugs or major issues are tweaked. Then the animation is implemented and tested. In this stage the timing, responsiveness, and general feel is examined and the animation is tested from all angles. It is then reviewed (this may be a group process) and the next iteration begins. This may require going back to reference or re-shooting motion capture footage. This process is repeated until the animation is finalized and ready to be implemented (Dahl, 2015).
As mentioned above, you will be exporting the animated model many times throughout the multiple animation iterations. This is done by simply exporting the model as an FBX file with the following settings ticked: Smoothing Groups, Triangulate, Animation, Baked Animation, Deformed Model, Skins and Blend Shapes (Epic Games, 2015). This will allow the character to be correctly and easily imported into the Unreal Engine.
As demonstrated on the Epic Games (2015) website, to import the animated model into the Unreal Engine you simply import the FBX with the above settings. Fully implementing the model as a controllable character with blend spaces and particle effects requires the use of Blueprints, the Unreal alternative to scripting (Epic Games, 2015). As this is a rather large topic in its own right, I plan to research and write it up separately.
Autodesk. (2014). Export a Scene to Unreal Engine 4. Retrieved from
Dahl, T. (2015). Action: The Animator’s Process [Video]. Retrieved from
Epic Games. (2015). Creating a Blend Space. Retrieved from
Epic Games. (2015). FBX Best Practices. Retrieved from
Epic Games. (2015). FBX Animation Pipeline. Retrieved from
Epic Games. (2015). Setting Up a Character. Retrieved from
Micu, V. (2013). Jonathan Cooper on Taking Chances, Being Pushed Out of Your Comfort Zone, And Assassin’s Creed III. Retrieved from
Masters, M. (2011). From the 80s to Now: The Evolution of Animation in Video Games. Retrieved from
Masters, M. (2013). How Animation for Games is Different from Animation for Movies. Retrieved from
Sanders, A. (2015). Animating for Video Games vs. Animating for Movies. Retrieved from
Skyrim Screenshot [Image]. (2012). Retrieved from
Wyatt, D. (2015). The Art of Cutscenes. Retrieved from
With my specializations, I will be combining the modelling and animation projects into a single production pipeline for a game character. Therefore, I have researched the most common industry practices regarding this pipeline.
Through my research, I have tried to piece together how the production pipeline would work. For the specialization project, I will be working alone. This means my production pipeline will be straight-forward and look like the one above.
However, depending on the level of detail, time frame and team size, different people can be working on different things simultaneously in order to save time and be efficient.
Before production on a game character begins, the art director should work with the game designers to create an art bible, write some lore, define character abilities and so on, in order to pin down what the characters should look like and how they fit into the game world. From here, concept art for the character can be created. When creating concepts, the priority is speed and quantity in order to explore a variety of different ideas and looks (Anhut, 2014).
According to Anhut (2014), there are some common misconceptions about concept art. A lot of art labelled as “concept art” is actually created after the character design has been finalized, for promotion and marketing. This confusion between actual concept art and promo art can cause workflow and time issues, as concept artists are forced to produce “publishable” concept art (Anhut, 2014). For this reason, it is essential to create concept art quickly in order to design interesting characters that suit the game.
When the character design has been defined, a turnaround sheet is created. This image should be suitable for modelling: the character’s shapes and outlines should be clear, with enough detail to model from but no unnecessary lighting, line-work or coloring. The turnaround sheet is brought into the modelling program and used as a reference.
To begin the character modelling, a base mesh is created. Usually this is built out of ‘primitives’ and adjusted so that it has the basic shape of the character with the lowest amount of detail possible (Ward, 2013).
From there, detail is added to the base mesh to create the hi-res (and high-poly) version of the model. There are two common ways to add this detail, and the choice depends on your skill set, your familiarity with different software, and the tools you have access to. One way is through subdivision surface modelling in Max, Maya or Blender (Ward, 2013). Ward (2013) states that this method is very efficient, as the base model, the hi-res model and the retopology can all be handled in a single program. The alternative is to use a sculpting program such as ZBrush. This method can achieve an extremely high level of detail but may produce more complex topology (Antonio, 2010). From my research, both approaches seem equally popular.
Once the hi-res version is complete, it is saved as a separate file.
The model is then retopologized. This is the process of simplifying the model and removing excess geometry (Ward, 2013). For example, if a shirt was added on top of the torso, the ‘skin’ beneath the shirt can be removed. There are multiple plugins and external tools that handle this and can help with workflow. During this process, it is important to test the normal map (which will be generated from the hi-res model): if the topology has been simplified or changed too much, the normal map will not work (Ward, 2013). It is also important to check that the joints can still deform correctly. Once this stage is complete, the game-ready model is finished.
At this stage the final game model is unwrapped (Ward, 2013). This is done by adding seams and relaxing the UV maps. The process is fairly standard, and the approach mostly depends on the model’s symmetry and level of detail.
Using the hi-res model, we can generate normal, specular, crevice and AO maps and bake them onto the low-res model (Ward, 2013). This allows detail to be ‘added’ to the model without the topology being adjusted. Once again, different games may require different maps.
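To make the baking step a little more concrete: a tangent-space normal map stores the hi-res surface normals as colours. The snippet below is just an illustration of that remapping (not code from any of the tools mentioned), and it explains why normal maps have their characteristic light-blue base colour.

```python
# Illustration of how a baked tangent-space normal map encodes detail:
# each hi-res surface normal (x, y, z, each in [-1, 1]) is remapped
# into an 8-bit RGB colour in [0, 255]. Hypothetical helper, not from
# the post or any baking tool.

def encode_normal(nx, ny, nz):
    """Map a unit normal from [-1, 1] per axis to an 8-bit RGB triple."""
    return tuple(round((c + 1.0) * 0.5 * 255) for c in (nx, ny, nz))

# A normal pointing straight out of the surface (0, 0, 1) encodes to
# (128, 128, 255): the familiar flat-blue colour of a normal map.
flat = encode_normal(0.0, 0.0, 1.0)
```

This is why an area of uniform blue in the baked map means “no extra detail here”, and any deviation from blue tilts the lighting as if the low-poly surface still had the hi-res geometry.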
The next stage is building the character’s skeleton out of bones in Max or Maya (Ward, 2013). This can get very complicated, so a simplified skeleton is often sufficient for a game character. Depending on the team, this stage can be started by another team member once the low-poly model is complete, increasing the efficiency of the workflow.
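Structurally, a skeleton is just a parent-child hierarchy of bones: rotating a parent carries every child with it. The sketch below models that idea with illustrative joint names I have made up; it is not how Max or Maya store their data, just the underlying structure.

```python
# Hypothetical sketch of a simplified game skeleton as a parent-child
# hierarchy, the structure you build bone-by-bone in Max or Maya.
# Joint names are illustrative, not from the post.

SKELETON = {
    "root": None,            # pelvis; no parent
    "spine": "root",
    "head": "spine",
    "upper_arm_l": "spine",
    "forearm_l": "upper_arm_l",
    "hand_l": "forearm_l",
}

def chain_to_root(joint, skeleton):
    """Walk up the parents to the root; rotating any joint in this
    chain moves the given joint, which is why hierarchy order matters."""
    chain = [joint]
    while skeleton[joint] is not None:
        joint = skeleton[joint]
        chain.append(joint)
    return chain

arm_chain = chain_to_root("hand_l", SKELETON)
```

Keeping the hierarchy shallow and simple is exactly the “simplified skeleton” trade-off mentioned above: fewer bones means cheaper animation and easier skinning.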
Finally, the character should be modelled and movable. To ensure that the mesh does not break during animation, the model must be skinned to the skeleton (Ward, 2013). This can be handled in Max, or with “Paint Skin Weights” in Maya (Ward, 2013).
The last step before animation is to set up an animation-friendly rig. This is done by creating controls and FK/IK targets for the limbs and joints (Ward, 2013). Additionally, depending on the game and level of detail, a full facial rig may be added. At this stage the game character should be fully ready for animation and implementation.
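For context on what an IK target actually does: instead of rotating each joint by hand (FK), the animator moves a target and the solver computes the joint angles so the limb reaches it. For a two-bone limb (upper arm plus forearm reaching for a hand target) there is a classic analytic solution via the law of cosines. This is a simplified planar sketch of that math, not the rig setup from Max or Maya, which also handles pole vectors and 3D orientation for you.

```python
import math

# Minimal planar two-bone IK sketch: given bone lengths l1, l2 and the
# distance to the target, solve the shoulder and elbow angles with the
# law of cosines. Illustrative only; real rigs solve this in 3D.

def two_bone_ik(l1, l2, target_dist):
    """Return (shoulder, elbow) interior angles in radians."""
    # Clamp the reach so an out-of-range target just straightens the limb.
    d = max(abs(l1 - l2), min(l1 + l2, target_dist))
    elbow = math.acos((l1**2 + l2**2 - d**2) / (2 * l1 * l2))
    shoulder = math.acos((l1**2 + d**2 - l2**2) / (2 * l1 * d))
    return shoulder, elbow

# Equal bones reaching exactly full length: the limb ends up straight
# (elbow interior angle = pi, shoulder offset = 0).
shoulder, elbow = two_bone_ik(1.0, 1.0, 2.0)
```

This is why IK is so convenient for foot plants and hand contacts: the animator keys one target position instead of two or more joint rotations per frame.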
Referring back to the workflow chart, it can be seen how much time can be saved by having different people work on different tasks simultaneously. This streamlines a project by using time efficiently and by surfacing issues earlier rather than later.
Alchemist Model [Image]. (2014). Retrieved from
Anhut, A. (2014). Let’s Get Real About Concept Art. Retrieved 2nd October, 2015, from
Antonio, L. (2010). Character Creation for Videogames. Retrieved 2nd October, 2015, from
Crimson Viper [Image]. (2009). Retrieved from
Diamant, R. & Simantov, J. (2011). Uncharted 2: Character Pipeline. Retrieved from
Fisher, A. (2013). Create a Game Character. Retrieved 2nd October, 2015, from
Michelle, L. (2011). Female Character for Games. Retrieved 2nd October, 2015, from
Simantov, J. & Yates, J. (2011). Uncharted Animation Workflow. Retrieved 2nd October, 2015, from
Street Fighter Concepts [Image]. (2009). Retrieved from
Ward, A. (2011). How to create character models for games: 18 top tips. Retrieved from
Ward, A. (2013). Game Character Creation Series. Retrieved 2nd October, 2015, from