I'm comparing two versions of the Fibonacci routine in Python 3:
import functools

@functools.lru_cache()
def fibonacci_rec(target: int) -> int:
    if target < 2:
        return target
    res = fibonacci_rec(target - 1) + fibonacci_rec(target - 2)
    return res

def fibonacci_it(target: int) -> int:
    if target < 2:
        return target
    n_1 = 1  # F(2)
    n_2 = 1  # F(1)
    for n in range(3, target + 1):
        new = n_2 + n_1
        n_2 = n_1
        n_1 = new
    return n_1
The first version is recursive, with memoization (thanks to lru_cache). The second is simply iterative.
I then benchmarked the two versions and I'm slightly surprised by the results:
In [5]: %timeit fibonacci_rec(1000)
82.7 ns ± 2.94 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [6]: %timeit fibonacci_it(1000)
67.5 µs ± 2.1 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
The iterative version is waaaaay slower than the recursive one. Of course the first run of the recursive version takes a lot of time (to cache all the results), and the recursive version uses more memory (to store the cached values). But I wasn't expecting such a difference in runtime. Shouldn't the recursive version pay some overhead for the function calls, compared to just iterating over numbers and swapping variables?
As you can see, timeit invokes the function many times to get a reliable measurement. The LRU cache of the recursive version is not cleared between invocations, so after the first run, fibonacci_rec(1000) is simply returned from the cache without doing any computation.
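If you want to verify this, functions decorated with functools.lru_cache expose cache_info() and cache_clear(), so you can confirm the cache hits and re-time the recursive version with a cold cache. Here is a minimal sketch using the timeit module instead of the %timeit magic; the loop counts are arbitrary, and the recursion limit is raised because a cold recursive call to fibonacci_rec(1000) needs roughly a thousand nested frames, which exceeds CPython's default limit:

import functools
import sys
import timeit

sys.setrecursionlimit(5000)  # a cold call to fibonacci_rec(1000) recurses ~1000 levels deep

@functools.lru_cache()
def fibonacci_rec(target: int) -> int:
    if target < 2:
        return target
    return fibonacci_rec(target - 1) + fibonacci_rec(target - 2)

fibonacci_rec(1000)                # first call actually computes and fills the cache
print(fibonacci_rec.cache_info())  # later identical calls show up here as hits

# Warm cache: each timed call is just a cache lookup, which is where the
# nanosecond-scale figure comes from.
warm = timeit.timeit("fibonacci_rec(1000)", globals=globals(), number=100_000)
print(f"warm cache: {warm / 100_000 * 1e9:.0f} ns per call")

# Cold cache: clearing inside the timed statement forces a full recomputation
# on every run, which is the number that is comparable to fibonacci_it(1000).
cold = timeit.timeit(
    "fibonacci_rec.cache_clear(); fibonacci_rec(1000)",
    globals=globals(),
    number=1_000,
)
print(f"cold cache: {cold / 1_000 * 1e6:.1f} µs per call")

The cold-cache timing is the apples-to-apples comparison with the iterative version, since both then do the full computation on every call.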