by Marcus Wilkinson
A huge part of any game is the graphics that go into it. This normally involves three steps: modelling, painting and animating.
The modelling stage is where the shape of the 3D model is created. This can be done in several ways; the most common is to model with NURBS and then convert the result to polygons. Almost all computer graphics, whether rendered in real time (as in gameplay) or pre-rendered (as in cut-scenes or film effects), use polygons, specifically surfaces broken down into triangles.
NURBS stands for Non-Uniform Rational B-Splines. A NURBS surface is defined by a set of control points, and the surface fits these points in a smooth, continuous manner, allowing much smoother shapes than direct polygon modelling. When it comes to rendering, the surfaces are broken down into polygons, usually by the renderer.
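To give a feel for how control points define a smooth curve, here is a minimal sketch of the Cox-de Boor recursion for a plain (non-rational) B-spline; full NURBS additionally weight each control point. The control points and knot vector below are purely illustrative.

```python
# Evaluate a B-spline curve via the Cox-de Boor recursion.
# (Illustrative sketch only; NURBS add per-point weights on top of this.)

def basis(i, k, t, knots):
    """Value of the i-th degree-k B-spline basis function at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    left = knots[i + k] - knots[i]
    if left > 0:
        value += (t - knots[i]) / left * basis(i, k - 1, t, knots)
    right = knots[i + k + 1] - knots[i + 1]
    if right > 0:
        value += (knots[i + k + 1] - t) / right * basis(i + 1, k - 1, t, knots)
    return value

def curve_point(t, control, knots, degree):
    """Blend the control points by their basis function values at t."""
    x = sum(basis(i, degree, t, knots) * cx for i, (cx, _) in enumerate(control))
    y = sum(basis(i, degree, t, knots) * cy for i, (_, cy) in enumerate(control))
    return (x, y)

# A quadratic curve over four 2D control points, with a clamped knot vector.
control = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
knots = [0, 0, 0, 1, 2, 2, 2]
print(curve_point(1.0, control, knots, 2))   # midpoint of the curve: (2.0, 2.0)
```

Note that the curve passes near, not through, the interior control points: the basis functions blend them continuously, which is what gives spline surfaces their smoothness.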
Traditional polygon modelling involves manipulating points in space (vertices) which define individual polygons. It is often done by starting from a simple shape (e.g. a cube), then repeatedly sub-dividing the model and re-modelling. Sub-dividing increases the number of vertices while maintaining the shape of the model, and detail is then added at this finer level.
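One round of sub-division can be sketched as follows: each triangle is split into four by inserting edge midpoints, which raises the vertex count without changing the shape. This is a simplified stand-in; real packages use schemes such as Catmull-Clark, which also smooth the surface.

```python
# Midpoint subdivision of a triangle mesh: every triangle becomes four.
# Vertices are (x, y, z) tuples; triangles are index triples into the list.

def midpoint(a, b):
    return tuple((p + q) / 2 for p, q in zip(a, b))

def subdivide(vertices, triangles):
    """Return a refined copy of the mesh with midpoints inserted on each edge."""
    new_verts = list(vertices)
    edge_mid = {}                     # edge (i, j) -> index of its midpoint
    def mid_index(i, j):
        key = (min(i, j), max(i, j))  # shared edges get one shared midpoint
        if key not in edge_mid:
            edge_mid[key] = len(new_verts)
            new_verts.append(midpoint(vertices[i], vertices[j]))
        return edge_mid[key]
    new_tris = []
    for a, b, c in triangles:
        ab, bc, ca = mid_index(a, b), mid_index(b, c), mid_index(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return new_verts, new_tris

verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
v2, t2 = subdivide(verts, tris)
print(len(v2), len(t2))   # 6 vertices, 4 triangles
```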
This stage adds the colour information to the model. It normally involves creating various types of 2D maps (images) and applying them to the model. For example, a texture map holds colour information, while a displacement map displaces the vertices of the model. Other types of map exist mostly to improve the realism of the model. Some of these include:
Bump Maps: give the illusion of surface depth (unlike displacement maps, which actually adjust the vertex positions)
Specular Maps: adjust the strength of specular highlights across the model, e.g. simulating rusty patches on a car, which would have duller highlights than the surrounding paintwork
These maps are assigned to the model, and advanced modelling packages allow the texture to be edited or ‘painted’ whilst applied to the model.
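The effect of a specular map can be sketched with a simple lighting calculation: the map's value at a surface point scales the specular term, so low-valued ('rusty') texels get dull highlights. The vector maths is hand-rolled here for clarity; in practice this kind of code runs in a fragment shader on the GPU, and the colours and shininess value are illustrative.

```python
# Blinn-Phong shading with the specular term modulated by a specular map sample.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def shade(normal, light_dir, view_dir, base_colour, spec_map_value, shininess=32):
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    # Half-vector specular highlight, scaled by the specular map sample.
    h = normalize(tuple(a + b for a, b in zip(l, v)))
    specular = spec_map_value * max(dot(n, h), 0.0) ** shininess
    return tuple(min(diffuse * c + specular, 1.0) for c in base_colour)

# Same red surface, lit head-on: shiny paint vs a rusty patch.
shiny = shade((0, 0, 1), (0, 0, 1), (0, 0, 1), (0.8, 0.1, 0.1), spec_map_value=1.0)
rusty = shade((0, 0, 1), (0, 0, 1), (0, 0, 1), (0.8, 0.1, 0.1), spec_map_value=0.1)
print(shiny)   # bright white highlight
print(rusty)   # much duller highlight
```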
Once the model is set up for rendering, it may need to be animated. Animating 3D characters has been compared to acting, because the animator must get inside the mind of the character in order to give it a convincing personality.
There are many tools to aid in animation and make motion look more realistic. For example, physics simulations can produce a bouncing ball, wobbling soft bodies, rippling water and much more. All of these require careful setting up, and the common belief that 'the computer does it all for you' is misplaced here.
A typical animation task is creating a walk cycle. This is often accomplished using motion capture, which records the movement of a real-life actor and applies it to the model. Sometimes, though, the animator must create the walk cycle by hand, posing the limbs, torso and so on at particular points in time. The software then interpolates between these key poses, creating a smooth animation. To preserve realism, constraints are applied to joints to stop elbows bending backwards, feet passing through shins and so on. Setting up these constraints is called rigging, and it is usually performed just before animating.
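The interpolation step can be sketched as follows: the animator keys a joint's angle at a few times, and the software blends between neighbouring keys. Linear blending is used here for simplicity; real packages use spline interpolation and apply the rigging constraints on top. The knee-angle keys are made up for illustration.

```python
# Keyframe interpolation for one joint of a walk cycle.
# Each key is (time in seconds, knee angle in degrees).
KEYS = [(0.0, 0.0), (0.5, 45.0), (1.0, 0.0)]

def sample(keys, t):
    """Return the joint angle at time t by blending the surrounding keys."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, a0), (t1, a1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)      # 0 at the earlier key, 1 at the later
            return a0 + u * (a1 - a0)     # linear blend between the two keys
    return keys[-1][1]

print(sample(KEYS, 0.25))   # halfway to the first key: 22.5
```

A joint constraint from rigging would then be a clamp applied to the sampled value, e.g. limiting the knee to the range it can physically bend through.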
Facial animation, such as speaking, is often accomplished using morphing. Several versions of the same head are created in extremes of various poses: mouth wide open, the shape the mouth makes for an 'fff' sound, and various other phonemes and expressions. These are 'morph targets', and blending between them creates the animation.
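The blending itself amounts to moving every vertex some fraction of the way from the neutral pose toward the target pose. The two-vertex 'head' below is a toy stand-in for a real mesh, and the mouth-open target is hypothetical.

```python
# Morph-target (blend shape) interpolation: each target stores a full set of
# vertex positions for an extreme pose; a weighted blend gives the in-between.

NEUTRAL = [(0.0, 0.0), (1.0, 0.0)]          # neutral pose vertex positions
MOUTH_OPEN = [(0.0, -1.0), (1.0, -1.0)]     # hypothetical 'mouth wide open' target

def blend(neutral, target, weight):
    """Move each vertex `weight` of the way from the neutral to the target pose."""
    return [tuple(n + weight * (t - n) for n, t in zip(nv, tv))
            for nv, tv in zip(neutral, target)]

half_open = blend(NEUTRAL, MOUTH_OPEN, 0.5)
print(half_open)   # [(0.0, -0.5), (1.0, -0.5)]
```

Animating the weight from 0 to 1 over a few frames produces the mouth-opening motion; speech is built by sequencing blends between the phoneme targets.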