Interactive demo of my skin shader:
(requires the Unity Web Player plugin) If you see only a black head, your graphics card probably does not support Shader Model 3 (only Nvidia GeForce 6, ATI Radeon X1000 or newer).
For older graphics cards I captured a video of the demo:
Ok, now some more details on the skin shading implementation:
First, all the shadowmaps are calculated; this is done automatically by Unity.
STRETCH CORRECTION MAP
After that, a map is rendered that holds information about how much the UV layout is stretched compared to the real dimensions. This is done by unwrapping the mesh into texture space (see below) and saving the screen-space derivatives of the world coordinates in the U and V directions, which gives a rough estimate of the stretching.
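The idea can be sketched on the CPU. In the real shader the derivatives come from ddx()/ddy() of the interpolated world position while rendering in texture space; here, finite differences over neighbouring texels stand in for them. All numbers and names are illustrative, not from the original implementation:

```python
# Estimate UV stretch: how much world-space distance one texel step
# covers in the U and V directions. Large values = stretched UVs,
# which later widen the blur radius to keep it uniform on the skin.

def length(v):
    return sum(c * c for c in v) ** 0.5

def stretch(world_here, world_next_u, world_next_v, texel_size):
    """Return (stretch_u, stretch_v): world distance per unit UV,
    approximated from the world positions of adjacent texels."""
    du = [b - a for a, b in zip(world_here, world_next_u)]
    dv = [b - a for a, b in zip(world_here, world_next_v)]
    return length(du) / texel_size, length(dv) / texel_size

# One texel step (0.01 in UV) covers 2 cm in U but 5 cm in V,
# so this patch is stretched more in the V direction:
su, sv = stretch((0.0, 0.0, 0.0), (0.02, 0.0, 0.0),
                 (0.0, 0.05, 0.0), 0.01)
```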
Rendering in texture space:
To project the mesh into texture space, a vertex shader is used. The position of each vertex is set to its UV coordinate, and the world coordinates are stored in an extra texture coordinate. This way the mesh is rendered in its UV layout, and the fragment shader can still access the "real" vertex position for lighting and similar tasks.
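A minimal sketch of that trick in Python (the original is a Cg vertex shader in Unity; the names here are made up): the output position is derived from the UV coordinate, mapped from [0,1] to clip space [-1,1], while the world position simply rides along as an extra interpolator.

```python
# "Render in texture space": vertex position comes from the UV,
# world position is carried through for the fragment stage.

def unwrap_vertex(uv, world_pos):
    """Map a UV in [0,1]^2 to clip space [-1,1]^2; pass the world
    position along unchanged so lighting can still use it."""
    clip = (uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0)
    return {"clip_pos": clip, "world_pos": world_pos}

v = unwrap_vertex((0.25, 0.75), (1.0, 2.0, 3.0))
# v["clip_pos"] == (-0.5, 0.5); v["world_pos"] is untouched
```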
The next step is to calculate the irradiance into a texture. For this, the mesh is rendered with a replacement shader that uses the vertex shader described above. In a first pass, the fragment shader calculates the ambient light (see my post about using ambient cubemaps). After that, the irradiance for each light source shining onto the mesh is added. Here a regular Lambert model is used, meaning the light intensity is directly proportional to the cosine of the angle between the surface normal and the light's direction of incidence.
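The Lambert term is simple enough to write out; here is a plain-Python version with vectors as tuples (a sketch of the math, not the shader code itself):

```python
# Lambert lighting: intensity proportional to the cosine of the
# angle between surface normal and light direction, clamped to zero
# so surfaces facing away from the light receive nothing.

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def lambert(normal, to_light, light_color, intensity=1.0):
    n = normalize(normal)
    l = normalize(to_light)
    ndotl = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * ndotl * intensity for c in light_color)

# A light 60 degrees off the normal gives cos(60) = 0.5:
irr = lambert((0, 0, 1), (0, 3 ** 0.5, 1), (1.0, 1.0, 1.0))
```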
The surface normal can be modified by a normal map. In my implementation I use the normal map only for the light sources, but not for the ambient light, for performance reasons. The cubemap has so little variation that a normal map would hardly change the result (I haven't tested this, but I will when finishing the skin shading system).
The Gaussian blur of the lightmap needs a lot of samples to get good results. In my case I use 5x5 samples, so a total of 25 texture accesses would be required. By using the common technique of splitting the process into a horizontal and a vertical blur, the number of texture accesses can be reduced to only 10 (5 + 5) at the cost of an additional draw call. This gets even more efficient with more samples (e.g. 8x8 = 64 accesses become 8 + 8 = 16).
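The separability argument can be verified on a tiny image: because a 5x5 Gaussian kernel is the outer product of a 5-tap 1D kernel, one horizontal and one vertical pass (5 + 5 taps per pixel) produce exactly the same result as the full 5x5 pass (25 taps per pixel). The binomial kernel weights below are illustrative, not the demo's actual weights:

```python
# Separable Gaussian blur: H pass then V pass equals the full 2D
# blur, with far fewer texture reads. Borders are clamped to edge.

K1 = [1, 4, 6, 4, 1]   # 5-tap 1D binomial kernel
NORM = sum(K1)         # 16

def blur_h(img):
    h, w = len(img), len(img[0])
    return [[sum(K1[k] * img[y][min(max(x + k - 2, 0), w - 1)]
                 for k in range(5)) / NORM
             for x in range(w)] for y in range(h)]

def blur_v(img):
    h, w = len(img), len(img[0])
    return [[sum(K1[k] * img[min(max(y + k - 2, 0), h - 1)][x]
                 for k in range(5)) / NORM
             for x in range(w)] for y in range(h)]

def blur_2d(img):      # full 25-tap reference version
    h, w = len(img), len(img[0])
    return [[sum(K1[j] * K1[k]
                 * img[min(max(y + j - 2, 0), h - 1)]
                      [min(max(x + k - 2, 0), w - 1)]
                 for j in range(5) for k in range(5)) / (NORM * NORM)
             for x in range(w)] for y in range(h)]

img = [[float((x + y) % 3) for x in range(8)] for y in range(8)]
sep = blur_v(blur_h(img))   # 10 taps per pixel
ref = blur_2d(img)          # 25 taps per pixel
# sep and ref agree to floating-point precision
```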
I use four differently blurred lightmaps to compute the final (diffuse) brightness of the skin. The blur radii range from almost zero to a few millimeters and are scaled by the stretchmap to achieve a uniform sampling width. Each blur level is built from the previous one, where the first is actually just the unblurred lightmap.
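Combining the blur levels boils down to a per-channel weighted sum: red light diffuses furthest in skin, so the wider blur levels typically get more weight in the red channel. The weights below are made up for illustration; they are not the values used in the demo.

```python
# Combine four blur levels (narrow to wide) into the diffuse skin
# color, with separate RGB weights per level.

def combine_levels(levels, weights):
    """levels: list of RGB tuples, one per blur level (narrow to
    wide); weights: matching RGB weights summing to 1 per channel."""
    out = [0.0, 0.0, 0.0]
    for rgb, w in zip(levels, weights):
        for c in range(3):
            out[c] += rgb[c] * w[c]
    return tuple(out)

# Wider blurs are darker at this texel; red weighs them more:
levels = [(1.0, 1.0, 1.0), (0.8, 0.8, 0.8),
          (0.5, 0.5, 0.5), (0.3, 0.3, 0.3)]
weights = [(0.2, 0.4, 0.6), (0.2, 0.3, 0.3),
           (0.3, 0.2, 0.1), (0.3, 0.1, 0.0)]
diffuse = combine_levels(levels, weights)
```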
Nvidia's Human Head demo uses the same technique, but with many more samples and blur levels. The result looks amazing, but at a significant performance cost.
FINAL RENDER ON SCREEN
After all this, all the maps needed for the final on-screen rendering are ready.
After that, the specular light is added for all the lights (using a Fresnel term and a prebaked BRDF lookup texture).
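As a rough idea of what such a Fresnel term looks like, here is the common Schlick approximation. This is a stand-in for what the prebaked lookup texture encodes, not the formula actually baked in the demo:

```python
# Schlick's Fresnel approximation: reflectance rises from f0 at
# normal incidence toward 1.0 at grazing angles. f0 ~ 0.028 is a
# value often quoted for skin; treat it as an assumption here.

def fresnel_schlick(cos_theta, f0=0.028):
    """cos_theta: cosine of the angle between view direction and
    surface normal (1.0 = head-on, 0.0 = grazing)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

head_on = fresnel_schlick(1.0)   # == f0
grazing = fresnel_schlick(0.0)   # == 1.0
```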
The color map could also be multiplied into the lightmap before blurring, instead of doing this at the end. That way the color map does not just "sit on top" but gains a little depth. However, it also loses all its details:
Enough written for today! :)
Next part: skin shading - stretch correction