Reputation: 340
I've been playing with the Wolfram Language and noticed something: the same function, written in different ways, can perform very differently in terms of time.
Consider these two functions:
NthFibonacci[num_] :=
  If[num == 0 || num == 1, Return[1],
    Return[NthFibonacci[num - 1] + NthFibonacci[num - 2]]
  ]

Fibn[num_] := {
  a = 1;
  b = 1;
  For[i = 0, i < num - 1, i++,
    c = a + b;
    a = b;
    b = c;
  ];
  Return[b];
}
NthFibonacci[30]
takes around 5 seconds to evaluate.
Fibn[900000]
also takes around 5 seconds to evaluate.
So does the built-in Fibonacci[50000000].
I simply can't understand why there are such differences in speed between the three. In theory, recursion should be more or less equivalent to a for loop. What is causing this?
Upvotes: 1
Views: 129
Reputation: 6989
The issue noted by @pjs can be addressed to a degree by having the recursive function remember prior values. (Eliminating the If helps too.)
Clear[NthFibonacci]
NthFibonacci[0] = 1
NthFibonacci[1] = 1
NthFibonacci[num_] :=
NthFibonacci[num] = NthFibonacci[num - 1] + NthFibonacci[num - 2]
NthFibonacci[300] // AbsoluteTiming
{0.00201479, 3.59 10^62}
Cleaning up your loop version as well (you should almost never use Return in Mathematica):
Fibn[num_] := Module[{a = 1, b = 1, c},
  Do[c = a + b; a = b; b = c, {num - 1}]; b]
Fibn[300] // AbsoluteTiming
{0.000522175, 3.59 10^62}
You can see that the recursive form is slower, but not horribly so. (Note the recursive form also hits a recursion depth limit at around 1000.)
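If you do need larger arguments with the memoized definition, one workaround is to raise $RecursionLimit locally while the cache is being built up. A sketch, not part of the original code; the limit 10^4 and the argument 2000 are just illustrative choices:

Block[{$RecursionLimit = 10^4},
  NthFibonacci[2000]] // AbsoluteTiming
(* the first call recurses ~2000 deep; later calls up to 2000 just hit the cache *)

Another common trick is to build the cache up in stages, e.g. NthFibonacci /@ Range[500, 2000, 500], so that no single call recurses too deeply.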
Upvotes: 0
Reputation: 77860
There are many ways to implement recursion; the Fibonacci function is a lovely example. As pjs already pointed out, the classic, double-recursive definition grows exponentially. The base is
φ = (1 + sqrt(5)) / 2 = 1.618...
Your NthFibonacci implementation works this way. It's order φ^n, meaning that for large n, calling f(n+1) takes φ times as long as f(n).
The gentler approach computes each functional value only once in the stream of execution. Instead of exponential time, it takes linear time, meaning that calling f(2n) takes about twice as long as calling f(n).
There are other approaches. For instance, Dynamic Programming (DP) keeps a cache of previous results. In pjs's f(4) example, a DP implementation would compute f(2) only once; the second call would see that the result of the first call was already in the cache and return it rather than making further calls to f(0) and f(1). This tends toward linear time.
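In Wolfram Language terms, the cache can also be kept explicitly. A minimal sketch (the names fibCache and fibDP are mine, and I assume the question's base cases f(0) = f(1) = 1):

fibCache = <|0 -> 1, 1 -> 1|>;
fibDP[n_] :=
  If[KeyExistsQ[fibCache, n],
    fibCache[n],                                (* cache hit: reuse the stored value *)
    fibCache[n] = fibDP[n - 1] + fibDP[n - 2]]  (* cache miss: compute once, store, return *)

fibDP[30]  (* 1346269 with these base cases *)

Each argument is computed at most once, so the number of additions grows linearly with n.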
There are also implementations that make checkpoints, such as caching f(k) and f(k+1) for k divisible by 1000. These save time by providing a starting point not too far below the desired value, giving an upper bound of 998 iterations to reach the needed value.
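A rough sketch of that checkpoint idea in Wolfram Language (the names, the spacing of 1000, and the use of the built-in Fibonacci to fill the table are illustrative choices of mine, using the standard indexing Fibonacci[0] = 0, Fibonacci[1] = 1):

(* precompute f(k) and f(k+1) at every multiple of 1000 up to some bound *)
checkpoints = Association @ Table[
    k -> {Fibonacci[k], Fibonacci[k + 1]}, {k, 0, 100000, 1000}];

(* walk forward from the nearest checkpoint at or below n *)
fibFromCheckpoint[n_Integer?NonNegative] :=
  Module[{k = 1000 Quotient[n, 1000], a, b, c},
    {a, b} = checkpoints[k];
    Do[c = a + b; a = b; b = c, {n - k}];
    a]

fibFromCheckpoint[54321] == Fibonacci[54321]  (* True *)

In practice you would build the checkpoint table once, by whatever means, and amortize its cost over many queries.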
Ultimately, the fastest implementations use the direct computation (at least for larger numbers) and work in constant time.
φ = (1+sqrt(5)) / 2 = 1.618...
ψ = (1-sqrt(5)) / 2 = -.618...
f(n) = (φ^n - ψ^n) / sqrt(5)
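A sketch of that in Wolfram Language, using exact arithmetic so the ψ^n term cancels exactly (this follows the standard indexing of the built-in Fibonacci, i.e. f(1) = f(2) = 1, not the shifted indexing in the question):

phi = (1 + Sqrt[5])/2;
psi = (1 - Sqrt[5])/2;
binetFib[n_Integer] := Simplify[(phi^n - psi^n)/Sqrt[5]]

binetFib[50] == Fibonacci[50]  (* True *)

With machine-precision floats instead of exact arithmetic, rounding error eventually makes the formula unreliable for large n.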
Upvotes: 1
Reputation: 19855
It's because the recursive version you present does lots and lots of repeated calculations. Build a tree of the function calls to see what I mean. Even for an argument as small as 4, look at how many function calls are generated to get to a base case down each chain of the logic.
                  f(1)
                 /
             f(2)
            /    \
        f(3)      f(0)
       /    \
      /      f(1)
     /
 f(4)
     \
      \      f(1)
       \    /
        f(2)
            \
             f(0)
With your recursion, the number of function calls grows exponentially with the argument num.

By contrast, your looped version grows linearly in num. It doesn't take a very large value of n before n is a lot less work than 2^n.
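You can watch that growth directly by counting calls. A quick sketch (the counter and the name countedFib are mine, mirroring the question's definition with both base cases returning 1):

calls = 0;
countedFib[n_] := (calls++;
  If[n == 0 || n == 1, 1, countedFib[n - 1] + countedFib[n - 2]])

Table[(calls = 0; countedFib[n]; calls), {n, 10, 25, 5}]
(* {177, 1973, 21891, 242785} *)

Each step of 5 in the argument multiplies the number of calls by roughly φ^5 ≈ 11, which is exactly the exponential blow-up shown in the tree above.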
Upvotes: 3