website articles
on ambient occlusion


Realtime ambient occlusion, the holy grail of today's realtime rendering... we seem to be very close to getting something that looks similar enough to it. With the explosion of screen space techniques in 2006 the door was opened to all sorts of variations of ambient occlusion techniques. Since the z-buffer is a partial repository of the scene geometry, non-local information has been available (although incomplete) for years, so non-local effects like ambient occlusion could have been done much earlier. I believe the paper by Thomas Luft, Carsten Colditz and Oliver Deussen was the first one to point to the screen space ambient occlusion technique, but it somehow went unnoticed by most realtime cg coders. The paper that caught most people's attention was this one by P. Shanmugam and O. Arikan, although it was too slow for real applications. It was Crytek's paper that convinced most of us that it was possible to use screen space techniques. However, there are many ways to do ambient occlusion in realtime. For example, I find especially relevant this paper (chapter 9) by Alex Evans (aka Statix) from 2006, based on 3d textures.

I have used ambient occlusion in several projects since I first learned about it, in many flavours: raytraced, per vertex precomputed, screen space (a few variations), object space... Here is a short summary of them, with images and videos.



per pixel, dynamic, offline :: object space :: brute force raytraced

The easy and slow way to do ambient occlusion is to use its definition, perhaps with some distance attenuation (to avoid completely black scenes in interiors). At the time (2005) my raytracer could cast two million rays per second on a single core machine, but even so the amount of time needed to render these noise-free images was quite considerable (in the order of 5 minutes). I tried some shortcuts based on the variance of the cast shadow rays (like first casting only 64 rays, computing the variance, and if it was high, casting 64 additional rays, and so on) without much success.
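
Stripped down, the loop is roughly the following (a sketch in C-like code; traceRay() and cosineDirection() are made-up names for whatever the raytracer provides, and the exponential is the distance attenuation mentioned above):

#include <math.h>

struct vec3 { float x, y, z; };

float traceRay( const vec3 & org, const vec3 & dir );           // assumed: distance to closest hit (huge value on miss)
vec3  cosineDirection( const vec3 & nor, unsigned int & seed ); // assumed: cosine weighted hemisphere sampler

float bruteForceAO( const vec3 & pos, const vec3 & nor, int numRays, float falloff )
{
    unsigned int seed = 1234u;
    float occ = 0.0f;
    for( int i=0; i<numRays; i++ )
    {
        const vec3  dir = cosineDirection( nor, seed );
        const float t   = traceRay( pos, dir );   // in practice, offset pos a little along nor first
        occ += expf( -falloff*t );                // distance attenuation: far occluders count less
    }
    return 1.0f - occ/float(numRays);             // 1 = fully open, 0 = fully occluded
}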

image
image
image
image


per pixel, dynamic, realtime :: object space :: raytracer

One day I quickly implemented a system pretty much ripped from Alexander Keller's Instant Radiosity method. The idea was to cast one single ambient occlusion ray per pixel instead of a few hundred. The result was stored in a separate buffer instead of being used directly for shading. Later, the (very noisy) ambient occlusion buffer was smart-blurred (taking the z-buffer jumps into account) and composited with the color buffer. I didn't have the time to tweak the smart blur parameters (that's why the ugly halos around the objects in the screenshot), but it definitely was realtime (just like the raytracer itself).
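
The depth-aware blur was conceptually something like this (just a sketch, not the original code; the buffer names and the zTolerance threshold are illustrative):

#include <math.h>

void smartBlurAO( const float * aoBuffer, const float * zBuffer, float * result,
                  int width, int height, int radius, float zTolerance )
{
    for( int y=0; y<height; y++ )
    for( int x=0; x<width;  x++ )
    {
        const float zc = zBuffer[ y*width + x ];
        float sum = 0.0f, wsum = 0.0f;
        for( int j=-radius; j<=radius; j++ )
        for( int i=-radius; i<=radius; i++ )
        {
            const int xx = x+i, yy = y+j;
            if( xx<0 || xx>=width || yy<0 || yy>=height ) continue;
            if( fabsf( zBuffer[ yy*width + xx ] - zc ) > zTolerance ) continue;  // don't blur across z jumps
            sum  += aoBuffer[ yy*width + xx ];
            wsum += 1.0f;
        }
        result[ y*width + x ] = (wsum>0.0f) ? sum/wsum : aoBuffer[ y*width + x ];
    }
}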

image


per vertex, static, realtime :: object space :: precomputed in gpu

The next experiment with ambient occlusion was the Paradise 64k intro in 2004 (although I never included it in the final version), and one year later in 195/95/256. The idea is described in this article. In short, for each vertex an ambient occlusion value is computed on the GPU in a preprocess phase. For that, a camera is placed at each vertex, pointing in the direction of the normal of that vertex, and the scene is rendered with a big field of view angle. The background can be drawn in white and the objects in black, and then the average pixel color of the color buffer gives the occlusion value.
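
As a sketch, the per-vertex preprocess could look like this (setCameraAt(), renderSceneBlackOnWhite() and readColorBuffer() are made-up helper names standing in for the actual engine calls, and the field of view is just an example value):

#include <vector>

struct vec3 { float x, y, z; };

void setCameraAt( const vec3 & pos, const vec3 & dir, float fovDegrees );  // assumed
void renderSceneBlackOnWhite( void );                                      // assumed
void readColorBuffer( std::vector<float> & pixels );                       // assumed: grayscale readback

float precomputeVertexAO( const vec3 & vertex, const vec3 & normal )
{
    setCameraAt( vertex, normal, 140.0f );    // camera on the vertex, looking along the normal, wide fov
    renderSceneBlackOnWhite();                // background white, occluders black

    std::vector<float> pixels;
    readColorBuffer( pixels );

    float sum = 0.0f;                         // the average brightness is the amount of visible background
    for( size_t i=0; i<pixels.size(); i++ ) sum += pixels[i];
    return sum/float(pixels.size());          // 1 = fully open, 0 = fully occluded
}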

image
image
image
image
video and executable demo


per pixel, dynamic, realtime :: object space :: analytic

One day in 2006 I thought about the ambient occlusion definition once again. An integral. What if the integral could be evaluated instead of just estimated by sampling methods? Given that the maths we humans have developed so far are quite primitive, only simple equations can be analytically integrated. For example, the equations of spheres and planes are easy to integrate. Therefore I took pencil and paper and derived some formulas to compute exact ambient occlusion values for some basic geometric configurations. There is a series of articles on the subject, like this one or this one.
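
As an example, the simplest such configuration is a single sphere; ignoring horizon clipping and other refinements, its occlusion of a point follows from its projected solid angle, which fits in a couple of lines:

#include <math.h>

struct vec3 { float x, y, z; };

// occlusion cast by a sphere (center c, radius r) onto a point p with normal n
float sphereOcclusion( const vec3 & p, const vec3 & n, const vec3 & c, float r )
{
    const vec3  d  = { c.x-p.x, c.y-p.y, c.z-p.z };
    const float l  = sqrtf( d.x*d.x + d.y*d.y + d.z*d.z );
    const float nl = (n.x*d.x + n.y*d.y + n.z*d.z)/l;   // cosine of the angle between normal and sphere
    const float oc = fmaxf( nl, 0.0f )*(r*r)/(l*l);     // projected solid angle over pi
    return fminf( oc, 1.0f );                           // 0 = no occlusion, 1 = fully occluded
}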

video


per vertex, dynamic, realtime :: object space :: analytic

This was my first approach to object space realtime ambient occlusion, at the beginning of 2007. It was all done on the CPU (SSE coding for speed-up) and I could shade a few hundred thousand vertices per frame. The computations were not exact, but a good approximation. It was analytical, so it only worked with spheres, cubes, planes and cylinders (that is, objects with simple equations). The idea is to be able to evaluate the proximity to the geometry at any point in space. With this information, one can sample a few points around the surface point to be shaded, "see" how far or close each one is from the geometry, and "guess" how occluded or exposed to free space the point is. It's not real ambient occlusion, but hey, ambient occlusion doesn't reflect reality either, it's just a trick :)
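
One possible way of turning those proximity samples into an occlusion value is to march a few points along the normal and penalize geometry that is closer than it should be (a sketch; distanceToScene() stands for the analytical distance evaluation, and the step size, weights and strength are just tunable parameters):

#include <math.h>

struct vec3 { float x, y, z; };

float distanceToScene( const vec3 & p );   // assumed: analytic distance to the nearest sphere/cube/plane/cylinder

float proximityAO( const vec3 & pos, const vec3 & nor, float stepSize, float strength )
{
    float occ = 0.0f;
    float w   = 1.0f;
    for( int i=1; i<=5; i++ )
    {
        const float h = stepSize*float(i);                               // walk away from the surface
        const vec3  q = { pos.x+nor.x*h, pos.y+nor.y*h, pos.z+nor.z*h };
        occ += w*( h - distanceToScene(q) );    // geometry closer than expected means occlusion
        w   *= 0.5f;                            // nearby samples matter more than far ones
    }
    return fmaxf( 0.0f, fminf( 1.0f, 1.0f - strength*occ ) );
}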

image
image


per pixel, dynamic, realtime :: screen space :: zbuffer

After Crytek published the paper about SSAO I jumped into the technique quickly. My first implementation was too slow and had a big overhead, but it was good enough to make a realtime demo, write an article and open an interesting thread in gamedev. This was the middle of 2007; I later improved the technique.
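
For reference, the skeleton of this kind of ssao looks something like the following (written as CPU code for clarity; sampleDepth(), reconstructPosition(), projectToScreen() and kernel[] are placeholder names for the actual shader resources, and depth is assumed to grow with distance from the camera):

struct vec2 { float x, y; };
struct vec3 { float x, y, z; };

float sampleDepth( const vec2 & uv );                    // assumed: z-buffer lookup
vec3  reconstructPosition( const vec2 & uv, float z );   // assumed: view space position from depth
vec2  projectToScreen( const vec3 & p );                 // assumed: view space -> screen uv
extern vec3 kernel[16];                                  // assumed: random offsets inside a unit sphere

float ssao( const vec2 & uv, float radius )
{
    const vec3 p = reconstructPosition( uv, sampleDepth(uv) );

    float occ = 0.0f;
    for( int i=0; i<16; i++ )
    {
        // take a sample around the pixel's position...
        const vec3 s = { p.x + kernel[i].x*radius,
                         p.y + kernel[i].y*radius,
                         p.z + kernel[i].z*radius };
        // ...project it back to the screen and compare with the depth stored there
        const float zb = sampleDepth( projectToScreen(s) );
        if( zb < s.z ) occ += 1.0f;   // some surface is in front of the sample: occluded
    }
    return 1.0f - occ/16.0f;
}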

image
image
image
video or executable demo


per pixel, dynamic, realtime :: screen space :: zbuffer

Of course I also tried the z-buffer blurring technique, but I only got cartoon-rendering-style halos and borders. I tried to remove them using all sorts of tricks (z variance detectors included). No way. The technique was incredibly fast of course; it was running at full framerate on my Radeon 9800 (pixel shaders 2.0).
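
For completeness, the core of that depth blurring idea is roughly this (a sketch; zBlurred is assumed to be a pre-blurred copy of the z-buffer): pixels that sit behind their blurred neighbourhood get darkened, and the blur leaking across depth discontinuities is precisely where the halos come from.

#include <math.h>

void depthUnsharpAO( const float * zBuffer, const float * zBlurred, float * ao,
                     int numPixels, float strength )
{
    for( int i=0; i<numPixels; i++ )
    {
        const float d = zBuffer[i] - zBlurred[i];                 // > 0: pixel is behind its neighbourhood
        ao[i] = 1.0f - fminf( 1.0f, fmaxf( 0.0f, d*strength ) );
    }
}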

image
image
image
image
video


per pixel, dynamic, realtime :: screen space :: zbuffer

One day a workmate of mine (Flavien Brebion) downloaded the Crysis multiplayer demo. I don't like games at all - neither playing them nor programming them - but having a look at the ssao was a must. What I did right after watching it for 5 minutes was to have a look at the shader code (yep, it's all "there"). I had a look at their ssao and understood that it's best to do the sphere sampling without any divisions/projections, just in a kind of clip space. I didn't implement their ssao, but just my own thing, now that I had a new view on the subject thanks to them. I only spent one day trying to implement it, so I never really tuned the parameters, but I got some interesting results. Some months later I simplified the shader and made a 4 kilobyte demo (kindercrasher) with it.
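
The difference with my previous ssao is easiest to see in code: the sample offsets are added directly to the pixel's screen coordinates and stored depth, with no per-sample reprojection (again just a sketch; sampleDepth() and kernel[] are placeholder names, and depth is assumed to grow with distance):

struct vec2 { float x, y; };
struct vec3 { float x, y, z; };

float sampleDepth( const vec2 & uv );   // assumed: z-buffer lookup
extern vec3 kernel[16];                 // assumed: random offsets inside a unit sphere

float ssaoClipSpace( const vec2 & uv, float z, float radius )
{
    float occ = 0.0f;
    for( int i=0; i<16; i++ )
    {
        // offset screen position and reference depth directly, no projections or divisions
        const vec2  suv = { uv.x + kernel[i].x*radius, uv.y + kernel[i].y*radius };
        const float sz  = z    + kernel[i].z*radius;
        if( sampleDepth(suv) < sz ) occ += 1.0f;   // the stored surface is in front of the sample
    }
    return 1.0f - occ/16.0f;
}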

image
image
image
image
video or executable demo


per pixel, dynamic, realtime :: screen space :: zbuffer

I have some ideas about improving the ssao to completely remove the halo effects. Since k-buffers are still to come, one cannot do perfect in/out analysis, but tricks can be done :)