So I'm trying to compute the Mandelbrot set at very large magnifications and have run into a bit of a brick wall.
Python's built-in float paired with a Numba JIT gave me plenty of speed.
But after switching over to decimal.Decimal, my function started returning 0 for every field. Turning off Numba's JIT fixed the wrong results, but performance has plummeted to unusable levels (~30 seconds for 100 iterations over a 64x64 field at a precision of 6 digits).
My code:
def mandelbrot_point(creal, cimag, maxiter):
    real = creal
    imag = cimag
    for n in range(maxiter):
        real2 = real * real
        imag2 = imag * imag
        if real2 + imag2 > 4.0:
            # Point escaped: return the iteration count.
            return n
        imag = 2 * real * imag + cimag
        real = real2 - imag2 + creal
    # Never escaped within maxiter: treat as inside the set.
    return 0
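For reference, this is roughly how I drive that function over a grid with decimal (a simplified sketch, not my exact code: the render helper and the corner coordinates here are made up for illustration, and the point function is repeated so the snippet runs on its own; the real thing zooms much deeper):

```python
from decimal import Decimal, getcontext

getcontext().prec = 6  # my "precision of 6"

def mandelbrot_point(creal, cimag, maxiter):
    # Same function as above, repeated so this snippet is self-contained.
    real = creal
    imag = cimag
    for n in range(maxiter):
        real2 = real * real
        imag2 = imag * imag
        if real2 + imag2 > 4.0:
            return n
        imag = 2 * real * imag + cimag
        real = real2 - imag2 + creal
    return 0

def render(xmin, xmax, ymin, ymax, size=64, maxiter=100):
    # Sample a size x size grid of the complex plane; all
    # arithmetic stays in Decimal except the loop indices.
    step_x = (xmax - xmin) / size
    step_y = (ymax - ymin) / size
    return [[mandelbrot_point(xmin + i * step_x, ymin + j * step_y, maxiter)
             for i in range(size)]
            for j in range(size)]

field = render(Decimal("-2"), Decimal("1"), Decimal("-1.5"), Decimal("1.5"))
```

Without the JIT this is the part that takes ~30 seconds per 100 iterations; with the JIT enabled it returns all zeros.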