Section 1 of the figure shows a flat surface; if it's illuminated by a direct light, every part of it will have the same color. Section 2, on the other hand, shows a bumpy surface; if it's illuminated by a direct light, the parts that face the light appear lighter and the ones that oppose the light appear darker. This effect is desired when you are modeling surfaces that aren't smooth, like a rock or sand. Section 3 shows the normals of the faces of the bumpy surface. These normals determine how dark the surface is when a light illuminates it.
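The role the normals play can be sketched with a simple Lambert-style diffuse term: brightness is the dot product of the surface normal and the direction toward the light, clamped at zero. This is a minimal 2D sketch; the facing/opposing normals are the ones from section 2a, while the light direction is an assumption chosen for illustration:

```python
def lambert_brightness(normal, light_dir):
    """Diffuse brightness of a point on the surface: the cosine of the
    angle between its unit normal and the unit direction toward the
    light, clamped to zero for parts that oppose the light."""
    dot = normal[0] * light_dir[0] + normal[1] * light_dir[1]
    return max(0.0, dot)

# A light shining diagonally from the upper left (2D, y is "up").
light = (-0.71, 0.71)

flat = (0.0, 1.0)        # normal of every point of the flat surface
facing = (-0.44, 0.90)   # bumpy-surface normal tilted toward the light
opposing = (0.44, 0.90)  # bumpy-surface normal tilted away from it

# facing gets the most light, opposing the least:
# lambert_brightness(facing, light) > lambert_brightness(flat, light)
#                                   > lambert_brightness(opposing, light)
```

With this light, the flat surface gets one uniform brightness everywhere, while the bumpy surface's normals spread the brightness out, which is exactly the visual difference between sections 1 and 2.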
Now, if what determines how the surface is lit is its normals, why bother creating the complex geometry of a rock? We can save both content creation and GPU work by setting the flat surface's normals to those of the bumpy surface, as shown in section 4, and letting the illumination algorithm light the flat surface as shown in section 5.
The surface in section 2a starts with an inclination, with a normal like the one in the left part of section 2a; then it has a declination, with a normal like the one in the right part of section 2a; then it repeats this pattern, so the normals alternate between those two.
A normal's components are defined in the [-1,1] interval, but an image can only store values in the [0,255] interval; both intervals are shown in section 2b. The conversion of the values in 2a can be seen in 2d: what it means is that 0.9 in the [-1,1] interval is 243 in the [0,255] interval.
The normals in the [0,255] interval are said to be in 'texture space'. In 2c, you can see the texture-space versions of the normals in section 2a; for example, the left normal of 2a is [-0.44,0.90], which in texture space is [72,243], which is the color shown in the left part of 2c. Notice that since we are simplifying the problem to 2D, the blue component of the color remains 0.
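The conversion from the [-1,1] interval to texture space can be sketched as below. The n*128 + 128 scaling is an assumption that reproduces the figure's numbers (72 and 243); other codebases use (n + 1) / 2 * 255 instead, which differs by at most one step:

```python
def to_texture_space(component):
    """Map one normal component from [-1, 1] to a byte in [0, 255],
    clamping the top so that 1.0 doesn't overflow to 256."""
    return min(255, int(round(component * 128 + 128)))

def encode_normal(normal):
    """2D normal -> RGB color. The blue channel stays 0 because we
    simplified the problem to 2D."""
    x, y = normal
    return (to_texture_space(x), to_texture_space(y), 0)

# The left normal of 2a:
# encode_normal((-0.44, 0.90)) -> (72, 243, 0)
```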
Section 3 of the figure above shows, in colors, the normal map that represents the bumpy surface of section 1.
Suppose we want to build a bumpy box like the one shown in 4; for that we would need to apply the normal map shown in 1 to the box shown in 2. The result should be 3, but thinking like a computer, we are declaring that it should just replace the normals, and this simple replacement can be seen in 5 (if you don't believe it, go through each of the colors in 5 and draw the normal that corresponds to it in the map in 1).
Normal mapping is done at the pixel level: to decide the final color of a pixel on the screen, the normal for that pixel is obtained from the map, and the lighting calculations are performed with it. The trick is to 'fix' the normal before the lighting calculations so that we obtain 3 instead of 5.
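Per pixel, that amounts to: sample the map, decode the color back into a normal, and shade with it. A minimal sketch using the conventions from earlier in this text; note that without the 'fix', this is exactly the naive replacement that produces 5:

```python
def shade_pixel(map_color, light_dir):
    """Shade one screen pixel: decode its normal from the map color
    (texture space bytes back to [-1, 1]), then apply a diffuse
    lighting step. Using the map normal as-is, without fixing it,
    yields image 5 rather than 3."""
    nx = (map_color[0] - 128) / 128.0
    ny = (map_color[1] - 128) / 128.0
    d = nx * light_dir[0] + ny * light_dir[1]
    return max(0.0, d)

# The color [72, 243] decodes to roughly [-0.44, 0.90]; lit from
# straight above, the brightness is just the decoded y component.
brightness = shade_pixel((72, 243), (0.0, 1.0))
```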
To fix the normals we use a change of frame. A very famous 2D frame is the coordinate system where the horizontal axis is the [1,0] vector and the vertical one is the [0,1] vector; this frame is represented as A in the figure. In general, a 2D vector can be described as amounts of a frame's horizontal and vertical vectors. For example, a vector made of 0.44 of the horizontal vector and 0.9 of the vertical one is [0.44,0.9] in frame A.
Another frame is the one shown in B, where the horizontal and vertical vectors aren't the standard axes but arbitrary vectors. In it, the vertical axis is [0.71,0.71] and the horizontal one is [0.71,-0.71]. In frame B, the vector [0.44,0.9] is [0.95,0.33].
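When the frame's axes are unit length and perpendicular, as they are in frame B, the change of frame is just a pair of dot products. A minimal sketch, using the exact value √2/2 ≈ 0.71 for the axes; the sign and ordering of the resulting components depend on how the figure orients its axes, so the example only checks that the round trip recovers the original vector and that the component magnitudes match the text's 0.95 and 0.33:

```python
import math

S = math.sqrt(2) / 2   # ~0.71, the rounded value used in the text

# Frame B from the figure (both axes unit length, perpendicular).
B_HORIZONTAL = (S, -S)
B_VERTICAL = (S, S)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def to_frame(v, horizontal, vertical):
    """Coordinates of v in an orthonormal frame: how much of the
    frame's horizontal and vertical vectors make up v."""
    return (dot(v, horizontal), dot(v, vertical))

def from_frame(coords, horizontal, vertical):
    """Rebuild the original vector from its frame coordinates."""
    h, v = coords
    return (h * horizontal[0] + v * vertical[0],
            h * horizontal[1] + v * vertical[1])
```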
The solution to the problem is to find a frame for each face; this frame is known as the tangent space of the face. Note in the figure that the box has 4 faces and that each has its own tangent space. We then convert the normals in the map from texture space to the tangent space of the face before applying lighting.
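The fix can be sketched as: decode the map color, then re-express the result as amounts of the face's tangent and normal vectors. Everything here is a simplified 2D sketch under stated assumptions; which map channel aligns with the tangent and how the tangent is oriented are conventions that vary, and BetaCell's actual routines are not shown:

```python
def from_texture_space(byte):
    """Inverse of the map encoding: a byte in [0, 255] back to [-1, 1]."""
    return (byte - 128) / 128.0

def map_normal_to_world(color, tangent, face_normal):
    """'Fix' a map normal before lighting: decode it from texture
    space, then re-express it as amounts of the face's tangent and
    normal vectors (the face's tangent space)."""
    tx = from_texture_space(color[0])   # amount along the face's tangent
    ty = from_texture_space(color[1])   # amount along the face's normal
    return (tx * tangent[0] + ty * face_normal[0],
            tx * tangent[1] + ty * face_normal[1])

# Top face of the box: tangent (1,0), normal (0,1) -> the decoded
# normal comes out unchanged.
top = map_normal_to_world((72, 243), (1.0, 0.0), (0.0, 1.0))

# Right face: tangent (0,1), normal (1,0) -> the same map color now
# yields a normal rotated to bulge away from that face.
right = map_normal_to_world((72, 243), (0.0, 1.0), (1.0, 0.0))
```

This is why the same map can wrap around all 4 faces of the box: each face's tangent space rotates the decoded normals so they point outward relative to that face, producing 3 instead of 5.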
BetaCell has routines that calculate the tangent space of the faces of
a mesh so that you don't have to mess with this.