Reputation: 331
I saw the recursive dynamic programming solution to the 0-1 knapsack problem here. I memoized it and came up with the following code.
private static int knapsack(int i, int W, Map<Pair<Integer, Integer>, Integer> cache)
{
    if (i < 0) {
        return 0;
    }
    Pair<Integer, Integer> pair = new Pair<>(i, W);
    if (cache.containsKey(pair)) {
        return cache.get(pair);
    }
    int result;
    if (weights[i] > W) {
        result = knapsack(i - 1, W, cache);
    } else {
        result = Math.max(knapsack(i - 1, W, cache),
                          knapsack(i - 1, W - weights[i], cache) + values[i]);
    }
    cache.put(pair, result);
    return result;
}
Can someone explain to me why the time complexity should be O(nW), where n is the number of items and W is the weight capacity?
Upvotes: 3
Views: 32952
Reputation: 1
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class Knapsack {
    double price;
    double kg;
    double pricePerKg;

    Knapsack(double price, double kg, double pricePerKg) {
        this.price = price;
        this.kg = kg;
        this.pricePerKg = pricePerKg;
    }

    // Compute the value density (price per kg) of each item.
    public static List<Double> pricePerKg(List<Double> price, List<Double> kg) {
        List<Double> pricePerKg = new ArrayList<>();
        for (int i = 0; i < price.size(); i++) {
            pricePerKg.add(price.get(i) / kg.get(i));
        }
        return pricePerKg;
    }

    public String toString() {
        return String.format("%s,%s,%s", price, kg, pricePerKg);
    }

    public static void main(String[] args) {
        double W = 50;
        List<Double> price = new ArrayList<>();
        price.add(60.00);
        price.add(120.00);
        price.add(100.00);
        price.add(1000.00);
        List<Double> kg = new ArrayList<>();
        kg.add(10.00);
        kg.add(30.00);
        kg.add(20.00);
        kg.add(100.00);
        List<Double> pricePerKg = pricePerKg(price, kg);
        List<Knapsack> list = new ArrayList<>();
        for (int i = 0; i < pricePerKg.size(); i++) {
            list.add(new Knapsack(price.get(i), kg.get(i), pricePerKg.get(i)));
        }
        // Sort items by descending price per kg, then take them greedily.
        Collections.sort(list, new Comparator<Knapsack>() {
            @Override
            public int compare(Knapsack o1, Knapsack o2) {
                return Double.compare(o2.pricePerKg, o1.pricePerKg);
            }
        });
        System.out.println(list.toString().replaceAll(",", "\n"));
        double weightCarry = 0, currentWeight = 0, sum = 0;
        for (Knapsack item : list) {
            weightCarry += item.kg;
            if (weightCarry < W) {
                sum += item.kg * item.pricePerKg;             // take the whole item
                currentWeight = weightCarry;
            } else {
                sum += (W - currentWeight) * item.pricePerKg; // take a fraction
                break;
            }
        }
        System.out.println("Fractional knapsack=" + sum);
    }
}
The time complexity is O(n log n), dominated by the sort. (Note that this greedy approach solves the fractional knapsack problem, not the 0-1 knapsack the question asks about.)
Upvotes: -1
Reputation: 1406
A recursive dynamic programming algorithm can be represented by a subproblem graph.
The subproblem graph has one vertex per distinct subproblem, and a directed edge for each recursive call; the edges represent the dependencies between subproblems.
The runtime of the dynamic algorithm = (time to solve each subproblem)*(number of unique subproblems)
Typically,
the cost = (outdegree of each vertex)*(number of vertices)
For knapsack,
The outdegree of each vertex is at most 2 = O(1), because each subproblem is solved by trying at most two options (take the item or skip it).
Now if we examine the subproblems, a pattern emerges:
The subproblem `(n,W)` depends on `(n-1,W)` and `(n-1,W-w(n))`.
`(n-1,W)` depends on `(n-2,W)` and `(n-2,W-w(n-1))`.
`(n-1,W-w(n))` depends on `(n-2,W-w(n))` and `(n-2,W-w(n)-w(n-1))`.
Note that for each of the n items, the remaining capacity can take at most W distinct values (1 to W).
(Compare this with the pattern of the memoized Fibonacci recursion, extended by one dimension.)
So there are at most n*W unique subproblems.
And since we solve each subproblem only once, the running time = O(1) * O(nW) = O(nW).
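The n*W bound on distinct subproblems can be checked empirically. The sketch below (the item weights, values, and capacity are an instance I made up, and the `(i, w)` pair is packed into a `long` key instead of a `Pair` class) runs the memoized recursion and reports how many states were actually cached, compared with the n*W bound:

```java
import java.util.HashMap;
import java.util.Map;

public class StateCount {
    static int[] weights = {12, 7, 11, 8, 9};
    static int[] values  = {24, 13, 23, 15, 16};

    static int solve(int i, int w, Map<Long, Integer> cache) {
        if (i < 0) return 0;
        long key = (long) i * 1_000_000 + w;   // encode the (i, w) pair as one long
        Integer hit = cache.get(key);
        if (hit != null) return hit;           // subproblem already solved
        int best = weights[i] > w
                ? solve(i - 1, w, cache)       // item i cannot fit
                : Math.max(solve(i - 1, w, cache),
                           solve(i - 1, w - weights[i], cache) + values[i]);
        cache.put(key, best);
        return best;
    }

    public static void main(String[] args) {
        int W = 26;
        Map<Long, Integer> cache = new HashMap<>();
        int best = solve(weights.length - 1, W, cache);
        System.out.println("optimal value = " + best);      // 51 for this instance
        System.out.println("cached states = " + cache.size()
                + " (bound n*W = " + weights.length * W + ")");
    }
}
```

Each cached entry corresponds to one vertex of the subproblem graph, and each vertex is expanded at most once with constant work, which is exactly the O(1) * O(nW) accounting above.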
Upvotes: 3
Reputation: 58441
Yeah, that's one of the reasons I don't like recursion. You can almost always rewrite a recursive algorithm into one that only uses loops and no recursion. Here is what your algorithm looks like with for loops only:
A[0,j] = 0 for j = 0, 1, ..., W
For i = 1, 2, ..., n
    For j = 0, 1, ..., W
        A[i,j] = max(A[i-1,j], A[i-1,j-weight[i]] + value[i])  // drop the second term when j < weight[i]
Return A[n,W]
Here it is obvious that the complexity is O(nW): the two nested loops perform n(W+1) constant-time iterations. I will leave it up to you to compare this code with yours.
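A direct Java transcription of that pseudocode might look like the following (the class name, 1-indexed arrays with a padding slot at index 0, and the small test instance are my own choices, not from the answer):

```java
public class BottomUpKnapsack {
    // weight[i] and value[i] describe item i, 1-indexed as in the pseudocode;
    // index 0 of both arrays is unused padding.
    static int knapsack(int[] weight, int[] value, int W) {
        int n = weight.length - 1;
        int[][] A = new int[n + 1][W + 1];   // row 0 is already all zeros: no items, no value
        for (int i = 1; i <= n; i++) {
            for (int j = 0; j <= W; j++) {
                A[i][j] = A[i - 1][j];                       // skip item i
                if (j >= weight[i]) {                        // take item i if it fits
                    A[i][j] = Math.max(A[i][j], A[i - 1][j - weight[i]] + value[i]);
                }
            }
        }
        return A[n][W];
    }

    public static void main(String[] args) {
        int[] weight = {0, 2, 3, 4, 5};
        int[] value  = {0, 3, 4, 5, 6};
        System.out.println(knapsack(weight, value, 8)); // prints 10 (items of weight 3 and 5)
    }
}
```

The two nested loops make the n(W+1) iteration count, and hence the O(nW) bound, immediately visible.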
Upvotes: 4
Reputation: 46960
It's more obvious if you think through what the table would look like in a tabular (bottom-up) implementation of the DP. It has items on one axis and achievable weight on the other, with one row per possible integer weight. For some weight sets, the table must be filled densely to find the optimal answer, and those same weight sets will force Omega(nW) pairs into your memo hash; since each entry takes constant time to compute, the total running time is the same Omega(nW). It's a good exercise to think through how to construct these costly weight sets, so I'll leave that to you.
Upvotes: 4