Thought it looked cool and just wanted to have some fun (after exams).
The Mandelbrot set is a set of complex numbers: those values c for which repeatedly applying z ← z² + c, starting from z = 0, keeps z bounded.
Since it is a set of complex numbers, it can be plotted on the complex plane.
For starters, the escape time algorithm seems like the easiest to implement (see "Plotting algorithms for the Mandelbrot set" on Wikipedia).
Basically, for each point c, we repeat the update z ← z² + c until |z| exceeds 2 (after which the point is guaranteed to diverge) or a cap of MAX_ITERATION iterations is reached, and record the number of iterations taken.
Then, there are many ways to colour the iteration counts of the points. But... I'm just going to let Matplotlib do it for me.
And this is my first ever image of the Mandelbrot set!
(Much thanks to matplotlib for the colours.)
Here's an example in pseudocode:
import numpy as np
import matplotlib.pyplot as plt

MAX_ITERATION = 100

def escape(c, z=complex(0, 0)):
    # Count iterations of z <- z*z + c until |z| reaches 2 or the cap.
    i = 0
    while abs(z) < 2 and i < MAX_ITERATION:
        z = z * z + c
        i += 1
    return i

def image(xs, ys):
    # Escape time of every pixel on the xs-by-ys grid.
    return [[escape(complex(x, y))
             for x in xs] for y in ys]

if __name__ == '__main__':
    im = image(np.linspace(-2.5, 0.5, 100),
               np.linspace(-1.5, 1.5, 100))
    plt.imshow(im)
    plt.show()
My end goal was a zoom animation of the Mandelbrot set, so I had to find a way to generate a zoom sequence. The zoom sequence specifies the coordinates of the window (left, right, bottom, top) of each frame of the animation.
There are many ways to do the zoom sequence, but here is the approach I took:
Specifically, given
- the sequence of window widths desired (w_0, w_1, w_2, ...), and
- the sequence of window centres desired (c_0, c_1, c_2, ...),

simplifying, frame n's window is just (c_n - w_n/2, c_n + w_n/2) on each axis: left and right from the x-centre and the width, bottom and top from the y-centre and the height. I eventually settled on one such zoom sequence for the animation.
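As a sketch, a window generator along these lines might look like the following (zoom_windows and its parameters are my own hypothetical names, assuming a fixed centre and a geometric shrink factor r per frame):

def zoom_windows(centre, w0, h0, r, n_frames):
    # Geometric zoom: the window shrinks by a factor of r each frame,
    # keeping the same centre throughout.
    cx, cy = centre
    for n in range(n_frames):
        w, h = w0 * r ** n, h0 * r ** n
        yield (cx - w / 2, cx + w / 2, cy - h / 2, cy + h / 2)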
Experimenting with various zoom sequences took quite long to render, so I had various ideas to speed up the computation. Since each image (and each pixel) can be computed separately, it is an embarrassingly parallel workload.
With the help of Python's multiprocessing module, the frames of the animation can be split across multiple CPU cores and rendered separately, as sketched below.
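A minimal sketch of how that might look, reusing image from earlier and the hypothetical zoom_windows helper above (the centre, ratio, and frame count are placeholder values):

from multiprocessing import Pool

import numpy as np

def render_frame(window):
    # window is (left, right, bottom, top); reuses image() from earlier.
    left, right, bottom, top = window
    return image(np.linspace(left, right, 1000),
                 np.linspace(bottom, top, 1000))

if __name__ == '__main__':
    # Placeholder zoom: fixed centre, 5% shrink per frame, 240 frames.
    windows = list(zoom_windows((-0.75, 0.0), 3.0, 3.0, 0.95, 240))
    with Pool() as pool:  # defaults to one worker per CPU core
        frames = pool.map(render_frame, windows)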
I remembered that GPUs have many more processing cores than their CPU counterparts, which raises the question of whether GPUs can be used to parallelise this workload too.
GPUs, though, work on the idea of SIMD (single instruction, multiple data), while computing the escape time of each pixel requires a different number of iterations, so it isn't obvious how this could be done.
Instead, a tradeoff, I thought, would be to iterate the step update (z ← z² + c) a fixed MAX_ITERATION times for every point, regardless of whether the point had already "diverged".
Since we continue the step update past the bailout, we no longer stop at the exact escape iteration. It is still possible to compute the escape time, though, by checking the bailout condition on each iteration (for all points, even those that have already bailed out) and counting the iterations for which each point remains within the bailout radius.
Pseudocode of the escape time algorithm for SIMD:
from itertools import accumulate, repeat

import torch

def step_(z, c):
    # In-place update z <- z*z + c, avoiding a fresh allocation per step.
    z *= z
    z += c
    return z

def escapes(n, z, xs, ys, dtype=torch.complex64):
    # One complex c per pixel, row by row; 'mps' targets the Apple GPU.
    cs = [complex(x, y) for y in ys for x in xs]
    cs = torch.tensor(cs, dtype=dtype, device='mps')
    zs = torch.full_like(cs, z)  # inherits dtype and device from cs
    # Apply the step update n times to all points at once (SIMD-style),
    # regardless of whether individual points have bailed out.
    zss = accumulate(repeat(cs, n), step_, initial=zs)
    # Escape time = number of iterations with |z| < 2; each mask is
    # evaluated before step_ mutates the tensor in place again.
    ms = sum(z_n.abs() < 2 for z_n in zss)
    return ms.cpu().reshape(len(ys), len(xs))
I repeated the same benchmark above with PyTorch (GPU) and NumPy (CPU). However, at the time of writing, PyTorch does not support 128-bit complex numbers on the mps backend, so I had to use its 64-bit variant.
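For reference, a sketch of what the NumPy (CPU) counterpart could look like under the same fixed-iteration scheme (escapes_np and its arguments are my naming, not anything fixed above):

import numpy as np

def escapes_np(n, xs, ys, dtype=np.complex64):
    # Build the grid of c values, one per pixel.
    cs = (xs[np.newaxis, :] + 1j * ys[:, np.newaxis]).astype(dtype)
    zs = np.zeros_like(cs)
    ms = np.zeros(cs.shape, dtype=np.int64)
    # Escaped points overflow to inf/nan; their |z| < 2 test then stays
    # False, so their counts simply stop increasing.
    with np.errstate(over='ignore', invalid='ignore'):
        for _ in range(n):
            zs = zs * zs + cs
            ms += np.abs(zs) < 2
    return ms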
Numba can help accelerate Python code by JIT (just-in-time) compiling it to machine code.
import numba
import numpy as np
import matplotlib.pyplot as plt

MAX_ITERATION = 100

@numba.njit
def escape(c, z=complex(0, 0)):
    # Same escape-time loop as before, now compiled to machine code.
    i = 0
    while abs(z) < 2 and i < MAX_ITERATION:
        z = z * z + c
        i += 1
    return i

@numba.njit
def image(xs, ys):
    return [[escape(complex(x, y))
             for x in xs] for y in ys]

if __name__ == '__main__':
    im = image(np.linspace(-2.5, 0.5, 1000),
               np.linspace(-1.5, 1.5, 1000))
    plt.imshow(im)
    plt.show()
Numba is very fast.
But I suppose there are more variations I could test, including languages other than Python. I read online that using GPUs should actually be the fastest.
Some images I took along the way...
The final zoom animation is on the release page.
The entire animation took around 4 hours to generate (on an M2 Air with 8 cores).
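I won't claim this is exactly how the final video was produced, but as a rough sketch, the rendered frames could be stitched together with Matplotlib's animation API (assuming frames is the list of 2-D escape-time arrays from earlier, and that ffmpeg is installed for the mp4 writer):

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
img = ax.imshow(frames[0])
ax.set_axis_off()

def update(frame):
    # Swap in the next frame's data; colours are normalised to frame 0.
    img.set_data(frame)
    return [img]

anim = FuncAnimation(fig, update, frames=frames, interval=1000 / 30)
anim.save('mandelbrot_zoom.mp4', fps=30)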
This project took longer than I wanted it to. I thought it would have been simpler. But overall I'm happy with the result.