Crop circles and other rounding errors

My first attempt at adding fixed-point calculation to my fractal revealed a strange phenomenon:



Alien intelligence at work? No, there’s a perfectly rational scientific explanation: buggy arithmetic.

Fixed-point fractals

When writing complicated mathematical programs it’s sensible to use an existing, thoroughly tested library. Failing that, it’s advisable to read some tutorials or peruse a couple of textbooks. At the very least, writing a prototype with a generous assortment of unit tests would barely suffice.

Or you can just make it up as you go along, and tweak it until it looks right. This was the approach I chose to take. (Although I have read a couple of textbooks in the distant past, too.) I had a hunch that using fixed-point arithmetic might be faster, so with gusto I began to implement it, using ordinary 32-bit integers.

Let’s assume we want fixed accuracy to 1/S = 0.001. Then we treat each integer variable as representing the number of thousandths in the value (S = 1000). Each value x is mapped to the integer x \cdot S. Then arithmetic (sufficient for Mandelbrot computation) is as follows, where x \cdot S, etc. are fixed-point values as stored in variables, and N, etc. are the native values (e.g. floating point):

Operation                        Implemented                 Explanation
x := N                           x \cdot S := N \times S     Scale a value to convert it between formats
N := x                           N := (x \cdot S)/S          Unscale to convert back
x \cdot S + y \cdot S            x + y                       Addition does not affect scaling
x \cdot S - y \cdot S            x - y                       Nor does subtraction
(x \cdot S) \times (y \cdot S)   (x \times y)/S              Need to correct for double-scaling
x \cdot S < y \cdot S            x < y                       Values retain relative proportions under scaling
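
For concreteness, here is how those rules might look in C. This is a minimal sketch of my own, assuming S = 1000 as above; the type and names (fixed_t, to_fixed, and so on) are illustrative rather than taken from the actual program.

    #include <stdint.h>

    #define S 1000  /* scale factor: each unit represents 1/1000 */

    typedef int32_t fixed_t;

    static fixed_t to_fixed(double n)   { return (fixed_t)(n * S); }  /* x := N */
    static double  to_native(fixed_t x) { return (double)x / S; }     /* N := x */

    /* Addition, subtraction and comparison work directly on the scaled values. */
    static fixed_t fx_add(fixed_t x, fixed_t y) { return x + y; }
    static fixed_t fx_sub(fixed_t x, fixed_t y) { return x - y; }
    static int     fx_lt (fixed_t x, fixed_t y) { return x < y; }

    /* Multiplication must correct for double-scaling. Deliberately naive:
       the intermediate x * y can overflow 32 bits; the pre-scaling trick
       described next avoids that. */
    static fixed_t fx_mul_naive(fixed_t x, fixed_t y) { return x * y / S; }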

In order to perform multiplication without resorting to variables wider than 32 bits, we pre-emptively scale the multiplicands. Each side is scaled down by \sqrt S (that is, x \cdot S / \sqrt S = x \cdot \sqrt S). The product is therefore x \cdot \sqrt S \times y \cdot \sqrt S = xy (\sqrt S)^2 = xy \cdot S, which is in the correct form. Because it is more straightforward in C to obtain S from \sqrt S than vice versa, \sqrt S is the configurable parameter to the fixed-point arithmetic definition.
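
In C, that might look like the following sketch (again my illustration; \sqrt S is set to a placeholder 1000 here, so S = 10^6; the next paragraphs work out better choices):

    #include <stdint.h>

    #define SQRT_S 1000              /* the configurable parameter */
    #define S (SQRT_S * SQRT_S)      /* straightforward to derive in C */

    typedef int32_t fixed_t;

    /* Divide each operand down from x*S to x*sqrt(S) first, so that the
       product (x*sqrt(S)) * (y*sqrt(S)) = x*y*S is already correctly
       scaled and never needs an intermediate wider than 32 bits. */
    static fixed_t fx_mul(fixed_t x, fixed_t y)
    {
        return (x / SQRT_S) * (y / SQRT_S);
    }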

The next question is, what value of \sqrt S is optimal? We have 32 bits to work with in a variable; the Mandelbrot set is contained within a circle of radius 2, centered on the origin. We could therefore have \sqrt S = \sqrt{2^{30} - 1}, which maps each integer value between \pm 2^{31} to a real value between \pm 2.0. Unfortunately, this doesn’t yield a very interesting picture. Pixel colours are determined by the number of iterations before the value exceeds that boundary; if every representable value is within that boundary, all pixels will be black.

In accordance with the officially sanctioned methodology (described above), I took a wild guess at a reasonable value. The first escaped value for each pixel needs to be representable. Within the circle of radius 2, the most extreme case of this is the pixel at c = 2.0+0i: when it escapes, z = c^2 + c = 2^2 + 2 = 6. A scaling factor that leaves 6 representable is \sqrt S = \sqrt{2^{31}/6} = 18918. This will result in noise for the areas outside that circle — for some of them, even the initial mapping from pixels to points will exceed the range of our fixed-point variables. But that noise is distant from the interesting parts of the set and can be ignored (and with a bit of special-case programming, could simply be excised).

This might work for a pixel precisely at 2.0+0i. It does not work for pixels beyond that radius. And it doesn’t work for closer pixels, either! A point at, say, c = 1.9+0i will require one more iteration before it escapes: z_2 = 1.9^2 + 1.9 = 5.51, and when it does escape, z_2^2 = 30.3601 must also be representable. As we reduce the semi-scaling factor to \sqrt S = \sqrt{2^{31}/30.3601} = 8410, the largest area of noise shrinks to a crescent between 1.9 and 2.0.

That crescent disappears entirely when we cater for points arbitrarily close to 2.0, with \sqrt S = \sqrt{2^{31}/36} = 7723.
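
To spell out the limiting case behind that value: as c approaches 2 along the real axis,

    z_1 = c \to 2, \quad z_2 = c^2 + c \to 6, \quad z_2^2 \to 36, \quad \sqrt S = \sqrt{2^{31}/36} \approx 7723.5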

Mysterious minibrots

Before we put the chapter of naive fixed-point math behind us, we should take a closer look at some of those patterns.

Those crop circles contain copies of the Mandelbrot set! I admit I’m dumbfounded by this. In the correct Mandelbrot fractal, there is no detail outside the circle of radius 2.0.

Somehow, the residues of those values — divided by the scaling factor 18918^2 and modulo the bit precision 2^{31} — contrive to draw the same fractal.

Effects of precision

Now that fixed-point calculation is working, we can ask a couple of questions:

  1. Is it much faster than floating point calculation?
  2. Does it have noticeably different accuracy from floating point?

The answer to the first question is yes, it is faster:

Mfunc        Mode                     PPS (pixels/second)
LOOP_INT     32-bit fixed-point       186115
LOOP_FLOAT   32-bit floating point    127242
LOOP         64-bit floating point    105646

It should be noted that some of that speed advantage comes at a cost to accuracy. Firstly, multiplication is done within the 32-bit range, so the effective range is only 16 bits. Secondly, rounding is always done by simple truncation. Addressing either of these would likely require additional integer instructions.
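
For instance, replacing truncation with rounding in the pre-scaling step might look like this (a sketch of mine, not the program’s code), at the cost of a few extra integer operations per multiply:

    #include <stdint.h>

    #define SQRT_S 7723

    typedef int32_t fixed_t;

    /* Round-to-nearest division (half away from zero), instead of C's
       default truncation toward zero. */
    static fixed_t div_round(fixed_t a, fixed_t d)
    {
        return (a >= 0 ? a + d / 2 : a - d / 2) / d;
    }

    static fixed_t fx_mul_rounded(fixed_t x, fixed_t y)
    {
        return div_round(x, SQRT_S) * div_round(y, SQRT_S);
    }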

SIMD modes for 32-bit fixed and floating point can process twice as many pixels in parallel as SIMD for 64-bit. (I still have not implemented SIMD fixed point, so there are no benchmarks for that particular comparison.)

What about apparent precision? Here are the benchmark screenshots from the first two modes:

The fixed-point output breaks down at this zoom level. There are three distinct kinds of artifact shown:

  1. Point perturbation. The fractal does not have a coherent outline, because successive truncation of values in the calculation has perturbed pixels in a chaotic fashion. In many cases adjacent pixels will be subject to truncation at different points, and will diverge.
  2. Pixel drift. The fractal as a whole has undergone a shift towards the south-west in fixed-point mode. The reason for this is unknown; there is enough precision for individual points within the image to undergo distinct calculations. Perhaps it is due to a larger-scale truncation that occurs before individual points diverge.
  3. Larger cells on the fractal. The limits of precision (to 1/7723) imply that multiple adjacent pixels will map to the same point in the fixed-point representation. If we continue to zoom in, the cells will simply get bigger; the sketch below estimates how quickly.
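
How quickly do those cells grow? A rough estimate of my own, using the 1/7723 figure and assuming a 1024-pixel-wide view spanning 4.0 units at zoom 1:

    #include <stdio.h>

    int main(void)
    {
        double cell = 1.0 / 7723.0;          /* effective precision of a pre-scaled multiplicand */
        double pixel_at_1x = 4.0 / 1024.0;   /* assumed: 1024-px screen spanning 4.0 units */
        for (double zoom = 1; zoom <= 1024; zoom *= 4)
            printf("zoom %5.0fx: ~%.2f pixels per cell\n",
                   zoom, cell / (pixel_at_1x / zoom));
        return 0;
    }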

It is possible that integer mode is useful as a faster method for calculating unzoomed fractals. But we should be careful about that drift. If there is a zoom threshold at which integer mode is replaced by floating point mode, it should be before the point where the drift is larger than a few pixels. Otherwise the user may be surprised when they zoom and are suddenly in a different neighbourhood. (This example shows a drift of only about 1/4 of the screen, so the user will not get lost. But it may disrupt them if they are aiming for a precisely focussed sequence of images.)

The 32-bit and 64-bit floating point results do not differ substantially in that benchmark. If we zoom in a little more, they begin to:

An interesting effect of floating point is that the cells (pixels that map to the same point) are rectangular, not square. Floating point numbers store a constant number of bits of precision, but the corresponding scale of those bits *floats*. A zoom point that is close to the origin, such as 0.00101011, can use more bits to distinguish adjacent pixels than one further away, e.g. at 0.10101011. In that example, 2 bits are wasted storing that additional 0.1. The cost of storing leading digits is the base-2 logarithm of their magnitude. If these values represented coordinates in the fractal, four times as many adjacent pixels would map together in one dimension as in the other, because pixels along the larger coordinate are deprived of 2 bits in which to be distinguished from each other.
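
To see that factor of four concretely, here is a small experiment of my own: print the gap to the next representable 32-bit float (one ULP) near each of those example values.

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float a = 0.16796875f;  /* 0.00101011 in binary */
        float b = 0.66796875f;  /* 0.10101011 in binary */
        /* nextafterf returns the adjacent representable float, so the
           difference is the local resolution (one ULP). */
        printf("resolution near %.8f: %g\n", a, nextafterf(a, 1.0f) - a);
        printf("resolution near %.8f: %g\n", b, nextafterf(b, 1.0f) - b);
        return 0;
    }

The second gap comes out four times the first, matching the 2 wasted bits.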

This visible difference between 32 and 64 bit calculations raises a question: are there novel parts of the Mandelbrot set we’re missing out on, because of 64-bit precision limits?  It is a subjective question; I think it is likely.  On the other hand I’m not sure I have noticed anything special that demands 64-bit floating point over 32-bit.  The additional precision so far is most useful for getting a crisp image at higher zoom levels; I have not uncovered anything at those zoom levels that hints at something new and unusual beyond them.  (Yet! Super-high-precision computation modes may well be part of Fractal’s destiny.)


1 Response to Crop circles and other rounding errors

  1. Jonathan Deneault says:

    I had the same crop-circle formations and I found what was causing them. The Q4.28 fixed-point format is enough for updating your x and y values, but not enough for testing the exit condition (if x^2 + y^2 > 4). My workaround was to compute x^2 + y^2 using raw 64-bit values (which are effectively Q8.56).
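
For concreteness, that exit-condition fix might look like this in C (a sketch based on the comment above; the helper name is mine):

    #include <stdint.h>

    /* x and y are stored as Q4.28, i.e. int32_t with 28 fraction bits.
       Squaring into 64 bits yields Q8.56, wide enough to hold values
       up to the escape bound without wrapping. */
    static int escaped(int32_t x, int32_t y)
    {
        int64_t x2 = (int64_t)x * x;            /* Q8.56 */
        int64_t y2 = (int64_t)y * y;            /* Q8.56 */
        return x2 + y2 > ((int64_t)4 << 56);    /* 4.0 in Q8.56 */
    }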
