Reputation: 18198
The knapsack problem can be solved in O(n²V) time, where V = max(v[i], i = 1,…,n) denotes the maximum value of any item. If we "change units" by a rounding parameter θ = (ε/n)·V and consider modified values y[i] = ceil(v[i]/θ), we obtain a running time of O(n³/ε) for any fixed ε > 0.
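For concreteness, the rounding step can be sketched as follows. This is my own illustrative sketch (the function name knapsack_fptas, the min-weight DP, and the example inputs are not from any particular source); it pairs the rounding y[i] = ceil(v[i]/θ) with a dynamic program over total rounded value:

```python
import math

def knapsack_fptas(values, weights, capacity, eps):
    # Hypothetical sketch of the value-rounding scheme described above.
    # theta = (eps/n) * V shrinks the value range, so the DP over total
    # rounded value takes roughly O(n^3 / eps) time instead of O(n^2 * V).
    n = len(values)
    theta = eps * max(values) / n
    rounded = [math.ceil(v / theta) for v in values]  # y[i] = ceil(v[i]/theta)

    # dp[s] = minimum weight of a subset whose rounded values sum to s
    total = sum(rounded)
    INF = float("inf")
    dp = [0] + [INF] * total
    for y, w in zip(rounded, weights):
        for s in range(total, y - 1, -1):
            dp[s] = min(dp[s], dp[s - y] + w)

    # Best rounded value achievable within the weight capacity, scaled
    # back to the original units (an estimate of the optimum, not an
    # exact value, because of the ceiling in the rounding step).
    best = max(s for s in range(total + 1) if dp[s] <= capacity)
    return best * theta
```

The inner DP runs over at most sum(y[i]) ≤ n·ceil(n/ε) states per item, which is where the O(n³/ε) bound comes from.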
In the book Algorithm Design by Jon Kleinberg, they state:
The dependence on the desired precision ε is not polynomial, as the running time includes 1/ε rather than log(1/ε).
First of all, the whole idea behind the term pseudo-polynomial is not exactly clear to me, even though I have read some articles about it. But beyond this knowledge gap, I mainly wonder: why would the dependence on ε be polynomial if the running time included log(1/ε) instead of 1/ε?
Upvotes: 2
Views: 336
Reputation: 65468
Whether a problem is polynomial-time or not depends on how its instances are encoded. For example, I can define a problem PADDED-SATISFIABILITY whose valid instances consist of a Boolean formula with n variables followed by 2^n junk bits. Thanks to the padding, PADDED-SATISFIABILITY is solvable in polynomial time (unlike, presumably, SATISFIABILITY, which is NP-hard).
When the instances involve integers, it is customary to assume an encoding where the number of bits used to represent a positive integer is about log2 of that integer. A pseudo-polynomial-time algorithm, in this context, is one that becomes polynomial-time if we switch from a binary representation of positive integers to unary, so that they require exponentially (!) more bits to represent.
If we make the (frequent) assumption that 1/ε is a positive integer, then Kleinberg is pointing out that a polynomial dependence would be on log 1/ε, since that is approximately the number of bits required to specify that parameter in binary. The number of bits in a unary representation would be approximately 1/ε.
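To make that last point concrete, here is a small sketch of my own (not from Kleinberg) comparing the two encoding lengths for 1/ε, assuming it is a positive integer:

```python
def binary_length(k):
    # number of bits in the binary representation, about log2(k)
    return k.bit_length()

def unary_length(k):
    # a unary representation uses one symbol per unit
    return k

inv_eps = 1000  # e.g. eps = 0.001
print(binary_length(inv_eps))  # prints 10
print(unary_length(inv_eps))   # prints 1000
```

So a running time containing a 1/ε factor is exponential in the length of the binary encoding of 1/ε, whereas a log(1/ε) factor would be polynomial (indeed, linear) in it.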
Upvotes: 1