[update: included a short demo video]
Normal maps are standard fare: a great and simple technique to simulate three-dimensional surface structure and detail on low-polygon objects.
Often a static texture is enough, but sometimes the surface changes in a way that can't be represented by joint movements or blendshapes: fine creases in cloth, wrinkles in skin and so on. For these things it would be nice to have different normal maps which could be blended together at runtime. But how to do that?
AREAS OF INFLUENCE
When trying to divide the wrinkles and creases in a human face into groups that can change more or less independently, we end up with something like this:
- Above eyebrows
- Between the eyebrows
- Around the eyes
- Sides of the mouth
- Chin area
But there's a nice trick. These creases occur perpendicular to the direction of movement. And every part of the face has one main direction in which it moves:
The skin on the forehead moves up or down, never to the left or right. Same with the skin at the outer side of the eyes or the mouth.
So there is not much overlap between our independent crease areas.
CREATING TEXTURE MAPS
Knowing this, we can simply create one normal map which holds all dynamic creases and wrinkles and bulges and whatnot. To access them individually at runtime, we need to separate them via additional texture maps. The number of texture maps required depends on how many different shapes we want to implement. Each shape takes one texture channel, plus one extra channel to be able to split them into left/right:
Seven + one fits perfectly into two RGBA textures. I use two 256×256 maps (64KB each), but I guess it would work with an even lower resolution.
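To make the packing concrete, here is one possible channel layout, sketched in Python. The exact assignment of areas to channels isn't spelled out above, so the names (including the two placeholder areas) are purely illustrative:

```python
# Hypothetical assignment of crease areas to texture channels;
# the actual layout may differ.
CHANNEL_LAYOUT = {
    "blendMaskA": {"r": "above_eyebrows", "g": "between_eyebrows",
                   "b": "around_eyes", "a": "sides_of_mouth"},
    "blendMaskB": {"r": "chin_area", "g": "extra_area_1",
                   "b": "extra_area_2", "a": "left_right_split"},
}

channels = [name for tex in CHANNEL_LAYOUT.values() for name in tex.values()]
# Seven crease areas + one left/right split = eight channels = two RGBA textures.
assert len(channels) == 8
assert "left_right_split" in channels
```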
SUM OF SOME NORMALS
Now for the shader part! First we need our textures (normalMap, creaseMap, blendMaskA, blendMaskB, plus all the other maps you may need), and additionally we need some variables to control the influence of each crease area. I use three float4's to pass this information to the shader.
In the pixel shader we sample each of the textures, multiply each channel with the corresponding control variable and add everything up. Something like this:
half blendLeft = inputA.r * maskA.g + inputA.b * maskA.b + (..);
half blendRight = inputA.g * maskA.g + inputA.a * maskA.b + (..);
half blendFinal = blendLeft * maskB.a + blendRight * (1 - maskB.a);
After that the crease normal is multiplied by the blendFinal value and added to our regular normalMap. Unfortunately normal maps can't simply be added, so I use this equation:
half3 N = half3(normalMap.ar + creaseMap.ar * blendFinal - 0.5 * blendFinal, 1);
N = normalize(N);
If you wonder what normalMap.ar is:
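To see the blend in isolation, here is the same arithmetic as a small CPU-side Python sketch, for a single pixel. This is only a reference for the math, not engine code; the `(..)` terms for the remaining crease areas are dropped, and the variable names mirror the shader above:

```python
import math

def blend_normal(normal_xy, crease_xy, maskA, maskB, inputA):
    """CPU reference of the shader blend for one pixel.

    normal_xy, crease_xy: (x, y) tangent/binormal components in [0, 1]
    maskA, maskB: (r, g, b, a) samples of the two blend-mask textures
    inputA: (r, g, b, a) control values (left/right pairs per area)
    """
    aR, aG, aB, aA = inputA
    mR, mG, mB, mA = maskA
    # Weight each crease area by its control value; left and right
    # use separate control channels but the same area masks.
    blend_left = aR * mG + aB * mB    # + further areas omitted here
    blend_right = aG * mG + aA * mB   # + further areas omitted here
    # maskB.a selects between the left and right half of the face.
    blend_final = blend_left * maskB[3] + blend_right * (1 - maskB[3])
    # Shift the crease normal around its neutral value (0.5) and add it.
    nx = normal_xy[0] + crease_xy[0] * blend_final - 0.5 * blend_final
    ny = normal_xy[1] + crease_xy[1] * blend_final - 0.5 * blend_final
    # Rebuild a unit normal with z = 1 before normalization, as in the shader.
    length = math.sqrt(nx * nx + ny * ny + 1.0)
    return (nx / length, ny / length, 1.0 / length)
```

A useful sanity check: a neutral crease texel (0.5, 0.5) leaves the base normal unchanged for any blend value, because the `- 0.5 * blendFinal` term cancels the added contribution exactly.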
In my normal maps I store only the tangent and binormal components, because this way DXT5 compression can be used without visible artifacts (one channel is moved to the alpha channel).
PRERENDERING THE NORMALMAP
For my project I need to access the normal map multiple times in two different shaders (n is the number of lights used):
- for the diffuse-light-baking: n+1 passes
- for the specular part: n passes
If, for example, two light sources affect the object, the whole normal-map-blending-thing would be calculated 5(!) times. So I chose to prerender the normal map in an extra pass at the beginning of each frame. A renderTexture is then passed to my regular shaders instead of the static normalMap.
Additionally, this process can be optimized further. The facial expression of a normal human being doesn't change very often, perhaps only when the person is talking. So the pre-rendering of the normal map can be triggered only when the facial expression, and with it the blend parameters, actually changes.
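That dirty-flag idea can be sketched in a few lines of Python. The render callback here stands in for the extra render pass; the class and method names are mine for illustration, not an engine API:

```python
class WrinkleNormalCache:
    """Re-run the normal-map prerender only when the blend params change."""

    def __init__(self, render_blended_normal_map):
        # Callback that performs the actual prerender pass (placeholder).
        self._render = render_blended_normal_map
        self._last_params = None
        self._texture = None
        self.render_count = 0  # exposed for demonstration

    def get_normal_map(self, blend_params):
        # Only re-render when the facial expression actually changed.
        params = tuple(blend_params)
        if params != self._last_params:
            self._last_params = params
            self.render_count += 1
            self._texture = self._render(params)
        return self._texture
```

With this in place, an idle face costs nothing extra per frame; the prerender only fires on frames where an animation or dialogue actually moves the blend values.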
If you have questions, feel free to leave a comment! :)