Reputation: 7574
Wikipedia says this about dynamic programming:
In mathematics, computer science, economics, and bioinformatics, dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure. When applicable, the method takes far less time than other methods that don't take advantage of the subproblem overlap (like depth-first search).
and also from Introduction to Algorithms (Cormen) I have learnt that dynamic programming is a method for avoiding repeated computations that have already been performed once. In layman's terms: if you're going to compute something again and again, better store it somewhere.
Applying this to Fibonacci, I could write its algorithm as follows:
arr[3] = {1,1,1} // first two entries hold values, last keeps track of how far we've computed
fibbDyn(n){
    if (n <= 1 || arr[2] > n) return n; // return on base conditions
    else {
        res = arr[0] + fibbDyn(n-1);
        arr[0] = arr[1];
        arr[1] = res;
        arr[2] += 1; // one more value computed
        return res;
    }
}
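For comparison, the same "store it somewhere" idea in its conventional top-down (memoized) form, sketched in C; the array name `memo` and its size bound are my choices, not part of the original code:

```c
#define MAXN 64
static long memo[MAXN]; /* 0 doubles as "not computed yet"; fib(0)=0 is caught by the base case */

long fib_memo(int n) {
    if (n <= 1) return n;              /* base cases: fib(0)=0, fib(1)=1 */
    if (memo[n] != 0) return memo[n];  /* reuse a previously stored answer */
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}
```

Each fib_memo(k) is computed at most once; every later request for it is answered from the table.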
I believe this algorithm follows the spirit of dynamic programming, as it reduces the extra computations done by the original recursive Fibonacci version:
fibb(n){
    if (n <= 1) return n;
    else return fibb(n-1) + fibb(n-2);
}
as here, due to the two separate calls at each recursive step (return fibb(n-1) + fibb(n-2)), many computations are repeated.
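This repetition is easy to measure with a small sketch that instruments the naive recursion with a call counter (the counter array and the function name are illustrative additions, not part of the original code):

```c
static long calls[64]; /* calls[k] counts how many times fibb(k) is invoked */

long fibb_counted(int n) {
    calls[n]++;                       /* record this invocation */
    if (n <= 1) return n;
    return fibb_counted(n - 1) + fibb_counted(n - 2);
}
```

Computing fibb_counted(10) alone invokes fibb(1) fifty-five times; in general the number of repeated calls grows exponentially with n, which is exactly the waste memoization removes.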
An iterative solution will probably look like:
int FibonacciIterative(int n)
{
    if (n == 0) return 0;
    if (n == 1) return 1;
    int prevPrev = 0;
    int prev = 1;
    int result = 0;
    for (int i = 2; i <= n; i++)
    {
        result = prev + prevPrev;
        prevPrev = prev;
        prev = result;
    }
    return result;
}
So my question is: will an iterative solution to the Fibonacci problem be classified as dynamic programming?
My reasoning for disagreeing is that an iterative solution doesn't exhibit overlapping subproblems the way a recursive solution does. In an iterative solution, no redundant or repetitive computations are made, so it shouldn't count as dynamic programming.
Relevant articles: optimal substructure, overlapping subproblems, dynamic programming.
Upvotes: 0
Views: 1600
Reputation: 23
On the Wikipedia page for Dynamic Programming, read the section "Dynamic programming in computer programming". It explains the two approaches: Top Down, which falls out as a recursive formulation of the problem, and Bottom Up, in which we iteratively build solutions to bigger problems and store the current solutions in a table. In this case a table is not required, and the job can be done using two variables.
Thus, in the Iterative Approach, only two variables are used to hold the values of the subproblems: prev (the i-1 th value) and prevPrev (the i-2 th value). Here, prev and prevPrev are used to find the solution of the ith iteration (the bigger problem).
result = prev + prevPrev;
is nothing but the result of the ith iteration, which equals prev (i-1) plus prevPrev (i-2). Thus, reuse of subproblems takes place in the iterative approach too. This is the Bottom Up approach of dynamic programming, and the recursive approach is the Top Down approach of dynamic programming.
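To make the connection to the question's terminology explicit, here is a sketch of the Bottom Up version with the table left in, before the two-variable space optimization (the array bound is an assumption of mine, not from the answer):

```c
long fib_table(int n) {
    long table[64];   /* assumes 0 <= n < 64 */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2]; /* each entry reuses two stored subproblems */
    return table[n];
}
```

Since table[i] only ever reads table[i-1] and table[i-2], the last two entries are all that must be kept, which gives exactly the prev/prevPrev form.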
Upvotes: 1
Reputation: 64904
Yes. That's just a special case of Bottom Up dynamic programming. You're allowed to discard table entries that you know you will never use again; in the case of Fibonacci that means you only need to keep 2 entries, and then you can forget it was ever a table and just use two named variables. So it ends up looking different, almost too simple. But the structure of that algorithm is still DP. The overlapping subproblems that you say aren't there are still there, because every result is used twice (once while it's in prev, again when it's in prevPrev), except at the end. Of course no redundant computations are made, but that's the idea of DP: remove redundant computation by reuse.
There is a general "plan of attack" for problems that allow dynamic programming, namely: express the solution as a recurrence over subproblems; pick an order of evaluation in which every subproblem is computed before anything that depends on it; then fill in a table in that order, keeping only the entries that will still be needed.
In the case of Fibonacci, what happened is that the order is trivial (that's not even particularly uncommon, but it makes it look as if we're "not really doing anything special"), and the dependencies never go back more than 2 places, so the only part of the table that has to be remembered is the previous two cells. So applying all that, you get the well-known iterative algorithm. That doesn't mean it's not DP anymore, it means that the DP was extremely successful.
As for the properties (optimal substructure, overlapping subproblems), they're properties of the problem; they don't go away no matter how you decide to solve it. But you can still see them in the iterative algorithm, as I pointed out in the first paragraph.
Upvotes: 1