It seems that every year, the games released look better and better, yet the hardware they run on, the consoles, rarely improves. Even a PC user only upgrades every so often. Without a more powerful system, how do games keep looking better and better?
The answer is optimization. As new generations of hardware hit the market, the game development industry is bombarded with new “next gen” tools to take advantage of the extra power. As time goes on, tricks are discovered and shared, giving artists ways to optimize their props. These shortcuts let artists push the envelope on what is possible, since they are always applying past lessons to new projects.
While readily available technology and artist skill will always play a factor in the visual aspect of games, they are not the sole causes of the evolution of game graphics.
After the model is created, UVed, and the texture and other appropriate maps are made, it needs to be assembled in the game engine, which serves as a framework for the game world and allows artists, programmers, and animators to bring their work together into a full game. In the game engine’s level editor, a 3D modeler will place the props they’ve made around the level, apply the appropriate maps and textures, and use lighting tools to set the mood of the level, bringing the game world to life.
While parts of games may be procedurally generated, as in Minecraft, or sculpted in something like ZBrush or Mudbox rather than built with polygonal modeling, this basic workflow is used across the industry to create assets and props for games.
Normal mapping, as discussed in a previous post, is a way of applying normal information from a high poly mesh to a low poly mesh, to give it more detail. But what are normals? How do they work?
Normals, or surface normals, are simply the direction a surface faces. Each side of a cube has a unique normal, as they all face in different directions. Since high poly models are much more detailed than low poly models, they have much richer normal information. By seeing where this normal information differs, 3D modeling software can figure out where to apply new normals to a surface (even if it is not its natural, default normal) and where to leave it the same. While this does not affect the actual shape of the model, it does affect the way light reacts when hitting the model, since the directions its faces, or sections of its faces, point have been changed by the normal map in the computer’s eye.
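To make “the direction the surface faces” concrete, here is a minimal sketch (not from any particular engine or modeling package) of how a face normal can be computed: the cross product of two edge vectors of a triangle gives a vector perpendicular to its surface, which is then scaled to unit length. The triangle coordinates are arbitrary example values.

```python
# Minimal face-normal sketch: cross product of two triangle edges.

def subtract(a, b):
    # Component-wise vector subtraction.
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    # Cross product: a vector perpendicular to both a and b.
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    # Scale the vector to length 1 so only direction remains.
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def face_normal(p0, p1, p2):
    # Two edges sharing p0; their cross product points out of the face.
    return normalize(cross(subtract(p1, p0), subtract(p2, p0)))

# A triangle lying flat on the XZ plane faces straight up (+Y).
print(face_normal((0, 0, 0), (0, 0, 1), (1, 0, 0)))  # (0.0, 1.0, 0.0)
```

A cube works the same way: each of its six sides yields a different normal, which is exactly why light hits each side differently.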
Although the order in which the high poly and low poly models are created is interchangeable depending on the workflow (creating an extremely detailed model and stripping detail from it, or creating a simple model and adding detail to it), their exact place in the workflow often comes down to personal preference. As the name suggests, a high poly model is created with a large amount of detail, which would be undesirable in a real-time rendering situation but can be beneficial in several other ways, as explained below.
4) UV Mapping
UV mapping is one of the most important parts of the 3D workflow. It involves taking a 3D model that exists in 3D “XYZ coordinate” space, and unwrapping and unfolding it so that it sits perfectly in 2D “UV coordinate” space. The easiest way to think of this process is backwards origami. By unfolding the model and laying it flat, the 3D artist creates a 2D image that can be drawn on. Since every polygon, edge, and vertex is laid out in this 2D space, each section of the image is also associated with a location in 3D space.
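One detail worth spelling out: UV coordinates run from 0 to 1, so they are independent of texture resolution. To look up a texel, a renderer scales U and V by the image dimensions. The sketch below is a simplified illustration of that lookup (real engines also handle filtering and wrap modes); note that V conventionally runs bottom-to-top while image rows run top-to-bottom, so V is flipped.

```python
# Simplified UV-to-pixel lookup: scale the 0..1 UV coordinates by the
# texture dimensions, flipping V because image rows count from the top.

def uv_to_pixel(u, v, width, height):
    col = min(int(u * width), width - 1)          # U maps to the column
    row = min(int((1.0 - v) * height), height - 1)  # V (flipped) maps to the row
    return (col, row)

# On a 1024x1024 texture, the UV origin (0, 0) is the bottom-left pixel,
# and the center of UV space lands in the middle of the image.
print(uv_to_pixel(0.0, 0.0, 1024, 1024))  # (0, 1023)
print(uv_to_pixel(0.5, 0.5, 1024, 1024))  # (512, 512)
```

This is why painting on the flattened 2D layout ends up exactly where you expect on the 3D model: every point on the unwrapped image has a fixed UV address that the mesh’s vertices refer back to.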
My name is Jason Moran, I am 21 years old, and I was born and raised in Northern Virginia. I have a background in CAD and BIM modeling, as well as working with laser scan data. I am majoring in Game Art and Design. I have tested, demonstrated, and trained 3D modeling software professionally in the past.
I came to Ex’pression College because I wanted to be in a place closer to the heart of the game industry, where I could network and be surrounded by people with similar interests and skills, both of which were nearly impossible in Virginia.
Creatively, I am very interested in the “illusion” of 3D art. I love the idea that a few meshes and textures/materials can be assembled in a game engine in a way that creates the illusion of a world filled with history to the player.
In ten years, I would like to be making kickass 3D art, and getting paid for it. I would also like to create video tutorials for total beginners in 3D art, to spread knowledge of 3D art as well as answer the questions that I myself had starting out. I would love to be a part of actually making a great Spider-Man game.
Some of my biggest influences are the Borderlands series, as well as BioWare titles such as Knights of the Old Republic and Mass Effect. I enjoy Assassin’s Creed as well (the first one is the best in my opinion; the lack of quality in some of the newer ones is a shame).
At the moment I am working on learning to create a complete prop for use in a video game, including creating high/low poly meshes, baking normal maps, and making bitmap textures as well as specular maps and, eventually, even PBR materials. I’m already a pretty good modeler.
Deconstructing the major components of 3D art for Games & Animation