Reputation: 149973
The documentation for @functools.lru_cache
provides an example of computing Fibonacci numbers using a cache to implement a dynamic programming technique:
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
I've seen this approach used for solving various programming puzzles. Does it have the same time/space complexity as the 'standard' iterative dynamic programming approach, e.g.:
def fib(n):
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
Also, are there any downsides to using the recursive approach?
Upvotes: 1
Views: 1471
Reputation: 152725
The following compares the two functions you've shown. In general these observations don't necessarily hold for every recursive or iterative approach to computing the Fibonacci numbers, only for the implementations shown in the question.
The constant factors might be completely different. It's likely that on a first call (with a cold cache) the iterative approach will be faster than the recursive one, even though both are O(n). This is just a guess, based on my experience (not actual tests) that list indexing is much faster than calling a function.
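If you'd rather measure than guess, here is a minimal timing sketch with timeit; the names fib_rec and fib_iter are just local copies of the two definitions from the question, and the cache is cleared before each recursive run so it does real work every time:

import timeit
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_rec(n):
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    if n < 2:
        return n
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

n = 300
# Clear the cache before each run, otherwise every run after the
# first would be a single O(1) cache hit instead of real work.
t_rec = timeit.timeit(lambda: (fib_rec.cache_clear(), fib_rec(n)), number=100)
t_iter = timeit.timeit(lambda: fib_iter(n), number=100)
print(f"recursive: {t_rec:.4f}s  iterative: {t_iter:.4f}s")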
Both require O(n) additional memory; however, the recursive approach keeps that memory permanently allocated (the cache lives as long as the decorated function does), while the iterative approach frees its list as soon as the function returns.
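You can inspect and release that retained memory with the cache_info() and cache_clear() helpers that lru_cache attaches to the wrapped function, for example:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):  # same definition as in the question
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(100)
print(fib.cache_info())  # currsize=101: every intermediate result is still held
fib.cache_clear()        # explicitly drop the cached entries
print(fib.cache_info())  # currsize=0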
The recursion limit of Python. When the argument gets too big and the cache isn't already filled, the recursive approach will fail with a RecursionError because of that limit, for example fib(500). There is no such limit (other than running out of memory) for list indexing.
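A small demonstration; the exact threshold depends on your Python version and on sys.getrecursionlimit() (typically 1000), so this sketch uses a comfortably large argument. The warming loop at the end is one way to work around the limit while keeping the recursive definition:

import sys
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):  # same definition as in the question
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(sys.getrecursionlimit())  # typically 1000

try:
    fib(3000)  # cold cache: roughly 3000 nested calls
except RecursionError:
    print("hit the recursion limit")

# Workaround: fill the cache from the bottom up, so that no
# single call ever recurses more than a couple of levels deep.
for i in range(3001):
    fib(i)
print(fib(3000))  # now answered from the cache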
Upvotes: 0
Reputation: 80232
It should have the same complexity as the memoization (top-down) kind of dynamic programming.
The iterative method with step-by-step table filling (bottom-up dynamic programming) might have slightly different complexity, because memoization remembers only the parameter sets actually needed to build the final solution.
This difference is not important for the Fibonacci or factorial examples, but it can matter for tasks with a sparse subproblem table (when it's hard to predict which entries will be needed later), as in the sketch below.
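An illustration with a made-up recurrence f(n) = f(n // 2) + f(n // 3), chosen only because its subproblem table is sparse: top-down memoization computes just the states reachable from n (polylogarithmically many), whereas a naive bottom-up table would fill all n + 1 entries:

from functools import lru_cache

# Hypothetical recurrence with a sparse subproblem table:
# f(0) = 1, f(n) = f(n // 2) + f(n // 3)
@lru_cache(maxsize=None)
def f(n):
    if n == 0:
        return 1
    return f(n // 2) + f(n // 3)

f(10**9)
# Only the states reachable from 10**9 were ever computed:
# a few hundred cache entries rather than the 10**9 + 1 slots
# a bottom-up table would allocate.
print(f.cache_info().currsize)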
Upvotes: 3