# Basic Shading With OpenGL

I remember that before I started with OpenGL or graphics programming I saw all these nice colorful images without actually knowing that I was looking at normal maps. When I began writing shaders I also didn't know common ways to check whether the results of my shading calculations were actually correct. Sometimes your lighting might look correct, but at some point you may realize that you were mixing different spaces or made some other silly mistake. Therefore I present here some snippets which might be useful for someone starting to write their own shaders. If you have comments or suggestions on ways to debug your shaders, please drop me a mail.

*Check your normals*

One of the most important things to check is that your normals are correct. They need to be normalized, and they must be converted into the space in which you are doing your lighting calculations. When doing lighting calculations, always make sure that all your vectors are in the same space. Also normalize the vectors that are meant to be used as direction vectors.

I find it convenient to work in view space and therefore convert the
normals to this space using a normal matrix. As normal matrix I simply
use the upper 3x3 matrix of the modelview matrix. The modelview matrix
is `view_matrix * model_matrix` and basically moves you into
view space. You can only use the upper 3x3 matrix if your model matrix
is a rigid body transformation. A rigid transformation of a vector space
preserves distances between every pair of points. You'll sometimes see
something like:

```glsl
mat3 normal_matrix = transpose(inverse(mat3(modelview_matrix)));
```

For rigid body transformations the `inverse` and `transpose` cancel out:
the inverse of a rotation is its transpose, so transposing that gives you
back the original matrix. When you only have rotations, translations and
uniform scaling you therefore don't need the `transpose` and `inverse`
calls, but can directly use the upper 3x3 of the modelview matrix.

The image below shows an example of normals that are not converted into view space. You can clearly see that the normals change as the model rotates, which means the light attenuation would change as well. When you see your normals shifting from pinkish to greenish, you know that they are not in view space.

*Normals not multiplied by normal matrix or mat3(modelviewmatrix)*

The shader code which created the above version, simplified to only the important parts:

```glsl
// VERTEX SHADER
// --------------------------------------
#version 150

uniform mat4 u_pm;    // projection matrix
uniform mat4 u_vm;    // view matrix
uniform mat4 u_mm;    // model matrix
in vec4 a_pos;        // vertex position in
in vec3 a_norm;       // normal in
out vec3 v_norm;      // normal out
out mat4 v_mv;        // modelview matrix out
out vec3 v_pos;       // vertex position out

void main() {
  gl_Position = u_pm * u_vm * u_mm * a_pos;
  v_mv = u_vm * u_mm;
  v_norm = a_norm;
  v_pos = vec3(v_mv * a_pos); // our position in eye coords
}
```

```glsl
// FRAGMENT SHADER
// --------------------------------------
#version 150

uniform mat4 u_vm;
out vec4 fragcolor;
in vec3 v_norm;
in vec3 v_pos;
in mat4 v_mv;

void main() {
  // note: we directly use the normal as a color, without converting it to eye coords
  fragcolor.rgb = 0.5 + 0.5 * v_norm;
}
```

When you multiply your normals by the upper 3x3 matrix of your modelview matrix, all colors should look blueish/purplish and not green/reddish, as in the image below. Note that you can only use the upper 3x3 of your modelview matrix when it contains rigid transformations (rotations and translations, possibly with uniform scaling). When you have non-uniform scaling you should provide a separate normal matrix, the inverse-transpose, and use that instead.

The fragment shader that created the above image, which correctly multiplies the normals by the upper 3x3 of the modelview matrix, looks like the snippet below. Note that this is the correct version, as we want all our calculations done in the same space. For the vertex shader, see the previous snippet.

```glsl
// FRAGMENT SHADER
#version 150

uniform mat4 u_vm;
out vec4 fragcolor;
in vec3 v_norm; // in object coords
in vec3 v_pos;  // in eye coords
in mat4 v_mv;   // modelview matrix

void main() {
  vec3 n = normalize(mat3(v_mv) * v_norm); // here we transform our normal to eye space
  fragcolor.rgb = 0.5 + 0.5 * n;
}
```

Here are a couple of other images which may help you debug your shaders. First, check your normals and whether you are converting them into the same space where you do the lighting calculations. Above I convert everything into eye space (also called view space or camera space).

*Debugging*

First you can draw your normals with `fragcolor.rgb = 0.5 + 0.5 * n;` and check that the
colors stay purple/blueish and don't turn red/greenish. If they stay nicely blue/purple
everything is fine; otherwise something is off.

Secondly you can use `vec3(1.0)` as your light direction vector. If your normals are incorrect
you will get black spots in your result, like in the image below. Compare it with the second
image below, where the normals are correctly converted to the same (eye) space.

When you have correctly moved the normals into the same space, you'll see something like the image below. Notice how the light also brightens the back side of the monkey? This is correct, because we do not move the position of the light that is shining onto the monkey.

Another image with specular and diffuse lighting where we do not convert the normals to eye space. Again notice how the back of the monkey's head is not lit up by the light.

The correct version, where we use specular and diffuse shading:

*Shader with Diffuse and Specular Shading with and w/o Half Vector*

Vertex Shader

```glsl
#version 150

uniform mat4 u_pm;
uniform mat4 u_vm;
uniform mat4 u_mm;
in vec4 a_pos;
in vec3 a_norm;
out vec3 v_norm;
out mat4 v_mv;
out vec3 v_pos;

void main() {
  gl_Position = u_pm * u_vm * u_mm * a_pos;
  v_mv = u_vm * u_mm;
  v_norm = a_norm;
  v_pos = vec3(v_mv * a_pos);
}
```

Fragment Shader

```glsl
#version 150

#define USE_HALF_VECTOR 1

uniform mat4 u_vm;
out vec4 fragcolor;
in vec3 v_norm;
in vec3 v_pos;
in mat4 v_mv;

void main() {
  vec3 spec = vec3(0.0);
  vec3 n = normalize(mat3(v_mv) * v_norm); // upper 3x3 of the modelview matrix moves the normal into eye space

  // diffuse
  vec3 s = normalize(vec3(1.0, 1.0, 1.0)); // light direction; normalized because it is used as a direction vector
  float sdn = max(dot(n, s), 0.0);

#if USE_HALF_VECTOR
  vec3 v = normalize(-v_pos);
  vec3 h = normalize(v + s); // half vector between view and light direction
  if (sdn > 0.0) {
    spec = pow(max(dot(h, n), 0.0), 13.0) * vec3(1.0, 0.0, 1.0) * 5.0;
  }
#else
  vec3 v = normalize(-v_pos);
  vec3 r = reflect(-s, n);
  if (sdn > 0.0) {
    spec = pow(max(dot(r, v), 0.0), 3.0) * vec3(1.0, 0.0, 1.0);
  }
#endif

  fragcolor.a = 1.0;
  fragcolor.rgb = vec3(0.0, 0.2, 0.6) * sdn + 0.4 * spec;
}
```