smooth iteration count for generalized Mandelbrot sets


It seems people have been using the so-called smooth-iteration-count formula to render the Mandelbrot set for a while now, which produces a smooth, continuous gradient of color in the exterior of the M set, as opposed to the old-school discrete iteration count algorithm, but few know where the formula comes from. In this article I'll quickly explain how the formula is derived and how it generalizes to M sets and polynomial fractals of any positive integer degree.

Smooth vs classic iteration count for f(z) = z²+c

The basics

Let's render the fractal generated by a polynomial of the form

f(z) = a_d·z^d + a_(d-1)·z^(d-1) + ... + a_1·z + a_0

with d a positive integer (the degree of the polynomial) and real, complex or quaternionic coefficients a. To make things easier to manipulate, but without loss of generality, we will assume a_d = 1. We are interested in studying the dynamics introduced by the iteration of such a polynomial map for different values of the coefficients a, which is a d dimensional problem. In particular, we want to find the divergent, periodic (including convergent) and chaotic sets of the space when points are repeatedly mapped and re-mapped and re-re-mapped by f(z).

Traditionally, to simplify the study and the visualization of such sets, all of the coefficients a except for one are considered constant. This one dimensional problem (on complex numbers, which are two dimensional themselves) is now easily displayed on a two dimensional screen. For example, for the traditional Mandelbrot set one sets d = 2, a_2 = 1 and a_1 = 0, and only a_0 is considered variable (renamed to c), giving f(z) = z² + c. Another similar polynomial is f(z) = z^d + c, for an arbitrary degree d.

Note that we are playing with two different planes here: the parameter plane where c lives, the one we are making a picture of, and the dynamical plane, which is different for each choice of c that we make, where z lives (and the Julia sets). In the dynamical plane, the points that satisfy f'(z) = 0 are called the critical points; these are the points where the mapping behaves funny (it contracts the plane so much it cannot be reversed!). There are of course d-1 such points (counted with multiplicity), one for each root of the equation f'(z) = 0. Because all of our polynomials defined above satisfy f'(0) = 0, we'll be paying special attention to the behavior of our map f(z) under iterations starting with z0 = 0, which generates the critical orbit and determines the shape of the associated Julia set. This is the reason the classic Mandelbrot set and all of the fractals generated by the polynomials defined above start their iterations with z = 0.
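As a small sanity check of the critical-point claim (this is my own illustration, not part of the article, and the function name is mine), one can differentiate the polynomial numerically and look at the roots of f'(z). For f(z) = z^d + c the derivative is d·z^(d-1), so all d-1 critical points sit at z = 0:

```python
import numpy as np

def critical_points(coeffs):
    """coeffs are a_0 .. a_d of f(z); returns the roots of f'(z) = 0,
    i.e. the critical points of the map."""
    f = np.polynomial.Polynomial(coeffs)
    return f.deriv().roots()

# f(z) = z^3 + c with, say, c = 0.5: coefficients [0.5, 0, 0, 1].
# f'(z) = 3z^2, so both critical points are at z = 0.
print(critical_points([0.5, 0.0, 0.0, 1.0]))  # two roots at 0
```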

This critical orbit can converge, become periodic, become chaotic, or diverge to infinity (which in our one dimensional complex case is the exterior of a disk of infinitely big radius). In classical renders of these fractal images, c points that produce orbits not diverging to infinity are marked black, and otherwise points are colored according to how many iterations or applications of f(z) it took to detect that the orbit is divergent.

To determine such a condition, one has to make the following observation: given a sufficiently big (distant from the origin) point z, the map f(z) can be approximated by

f(z) ≈ z^d

since the leading term of power d will grow way faster than the lower powers of z. That means that after a certain radius or distance from the origin, f(z) will only push the orbit to infinity, meaning the interesting orbits, and hence values of c and the fractal itself, are situated around the origin of the plane.

One can compute exactly what the actual threshold radius B that we need to check against is (you can read this article to see how to compute it); the result is

B = 2^(1/(d-1))

For example, for the traditional Mandelbrot set d = 2 and therefore B = 2, which is the value used in the traditional rendering of the Mandelbrot set (you have probably seen this value squared, to avoid having to take the square root in the computation of the length of z). However, for the smooth iteration count computation we are going to really leverage the approximation for f(z) just mentioned, which requires us to work a little bit closer to infinity than a distance of 2 from the origin. We don't really need to go that far, though; something in the order of 50 will be sufficient, since by then the leading term of f(z) is relatively powerful enough compared to the others for the approximation to work. But regardless of the value of B, each point in the orbit of f(z) associated with a different value of c, or fc(z) if you want, is compared against it for an "escape" - a point that became bigger than B and therefore is guaranteed to diverge to infinity. Usually the number of iterations or applications of f is recorded and displayed as a color, which creates a series of color "bands".
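As a reference, the classic (integer) escape-time count described above can be sketched in Python like this (the function name, the bailout radius and the iteration cap are my own choices, not prescribed by the article):

```python
def classic_iteration_count(c, B=2.0, max_iter=200):
    """Integer escape-time count for f(z) = z^2 + c, starting from the
    critical point z0 = 0. Returns max_iter if the orbit never exceeded
    the bailout radius B (c is then likely inside the M set)."""
    z = 0.0 + 0.0j
    for n in range(max_iter):
        z = z * z + c           # one application of f
        if abs(z) > B:          # escaped: orbit guaranteed to diverge
            return n
    return max_iter

# c = 0 is in the Mandelbrot set, c = 1 escapes after a few iterations
print(classic_iteration_count(0.0 + 0.0j))  # → 200 (never escapes)
print(classic_iteration_count(1.0 + 0.0j))  # → 2 (0 → 1 → 2 → 5, escapes)
```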

The technique

Now that we understand the setup, let's dive into the actual smooth iteration count formula. The observation is that once a point zn in the orbit has become bigger than B and escaped, it is not the same if it barely escaped by becoming just slightly bigger than B or if it actually escaped by a lot. We can measure this strength and remap or normalize it to the interval [0, 1] so that we can use it as the fractional part of our otherwise discrete iteration counter, giving us effectively a smooth gradation between the escape bands.

The lower bound for such a remap is simply B: any |zn| (adding "n" now for the index in the orbit or iteration) exactly of size B will map to 0. The upper bound happens where a point zn surpassed B by so much that it almost didn't need the current iteration to escape, but could almost have done it in the previous one. In that case, |zn| is going to be really close to B^d, since according to our approximation that's where the next ring of escaping points will be (remember we are far enough from the origin, and each iteration or application of f(z) raises the magnitude to the power d). So, basically, we have to remap values of |zn| that are B to 0 and B^d to 1. We can achieve this simply by taking the logarithm of |zn| in base B, which will remap |zn| to the range [1, d]:

h(zn) = log_B |zn|
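A quick numeric check of the endpoints of this remap (my own sketch, with B = 50 and d = 2 as in the discussion above):

```python
import math

def h(z_abs, B):
    # logarithm of |zn| in base B, remapping [B, B^d] to [1, d]
    return math.log(z_abs) / math.log(B)

B, d = 50.0, 2
print(h(B, B))     # lower bound |zn| = B   maps to 1.0
print(h(B**d, B))  # upper bound |zn| = B^d maps to d = 2.0
```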

then doing a linear transform to bring it back to [0, 1], and finally subtracting this from our integer number of iterations n to get a smooth iteration count:

s = n - (log_B |zn| - 1) / (d - 1)

This produces a smooth color gradient; you can check it by implementing it yourself. However, it's not as uniform as we would wish, the reason being that h(zn) is itself growing in powers of d with each extra iteration. To undo this exponential growth and linearize with respect to distance we can once again take a logarithm, the logarithm of h in base d, which also remaps [1, d] to [0, 1]:

s = n - log_d( log_B |zn| )
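Putting the pieces together, the smooth count for the family f(z) = z^d + c can be sketched in Python as follows (again, the function name and the bailout value of 50 are my own choices; the formula is the one just derived):

```python
import math

def smooth_iteration_count(c, d=2, B=50.0, max_iter=200):
    """Smooth escape-time count for f(z) = z^d + c, starting from z0 = 0.
    Returns None if the orbit did not escape within max_iter iterations."""
    z = 0.0 + 0.0j
    for n in range(1, max_iter + 1):
        z = z**d + c
        if abs(z) > B:
            # h = log_B |zn| lies in [1, d]; log_d(h) remaps it to [0, 1]
            h = math.log(abs(z)) / math.log(B)
            return n - math.log(h) / math.log(d)
    return None

# c = 1 escapes on iteration 5 (orbit 1, 2, 5, 26, 677), so the smooth
# count lies between 4 and 5
print(smooth_iteration_count(1.0 + 0.0j))
```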

This produces the desired smooth iteration count (see image at the top of the article). And of course, it not only works for the classic Mandelbrot set; it also works for any of the polynomial generalizations we proposed, due to the strength of the leading term of the polynomial near infinity. For example, for a higher degree polynomial of the family above we get the following smooth coloring:

Smooth iteration count

Classic (integer) iteration count

You can find source code for this technique here:

One note on implementation: it is possible to run the smooth iteration formula on the square of the length of zn, which avoids one square root computation (not much of a relative saving, really). Also, if you are rendering the classic Mandelbrot set and d is 2, then you can replace the natural logarithms in the formula above by logarithms in base 2 and avoid the last division by ln d, and since the threshold B is a constant, another one of the logarithms can be avoided as well:

s = n - log2( log2 |zn| ) + log2( log2 B )

making the final code (again, only for the classic Mandelbrot set), see this example extracted from here:

// compute the smooth iteration count for a point in the plane
float IterateMandelbrot( in vec2 c )
{
    const float B = 256.0;

    float n = 0.0;
    vec2 z  = vec2(0.0);
    for( int i=0; i<200; i++ )
    {
        z = vec2( z.x*z.x - z.y*z.y, 2.0*z.x*z.y ) + c; // z = z² + c
        if( dot(z,z)>(B*B) ) break;
        n += 1.0;
    }

    //float sn = n - log(log(length(z))/log(B))/log(2.0); // smooth iteration count
    float sn = n - log2(log2(dot(z,z))) + 4.0;  // equivalent, optimized smooth iteration count
    return sn;
}
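As a cross-check of the optimized line above (this is my own verification, not part of the original code), the three forms of the d = 2 formula agree: the general n - log_2(log_B |zn|), the base-2 rewrite with the constant log2(log2 256) = 3, and the squared-length variant, where log2(|zn|²) = 2·log2|zn| contributes an extra +1, turning the constant into +4:

```python
import math

B  = 256.0
zn = 1234.5   # some escaped magnitude |zn| > B; n omitted since it cancels

# general form: -log_2( log_B |zn| )
general   = -math.log(math.log(zn) / math.log(B)) / math.log(2.0)

# base-2 rewrite: -log2(log2 |zn|) + log2(log2 B), with log2(log2 256) = 3
optimized = -math.log2(math.log2(zn)) + 3.0

# squared-length variant as used in the GLSL code: dot(z,z) = |zn|^2
squared   = -math.log2(math.log2(zn * zn)) + 4.0

print(general, optimized, squared)  # all three agree
```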