Reputation: 199
I'm trying to understand part of a homework question, but it might as well be Greek to me...
Let's say we have coins x_1, x_2, x_3, ..., x_n, where x_1 = 1 always.
We want to give a certain amount of money using a minimum number of coins.
Then we use dynamic programming.
And now I don't understand this: c(i,j) = min { c(i-1,j), 1 + c(i,j-x_i) }, where c(i,j) is the minimal number of coins needed to return the amount j.
Upvotes: 0
Views: 729
Reputation: 178411
c(i,j-x_i) is the minimal number of coins needed to reach the value j-x_i using only the first i coins x_1, x_2, ..., x_i (this is the induction hypothesis; it is exactly what the recursive formula guarantees).
Thus, 1 + c(i,j-x_i) is the minimal number of coins to reach j once we have decided to use a coin of value x_i: one coin for x_i itself, plus the minimal number of coins needed for the remaining j - x_i.
From this, c(i,j) = min { c(i-1,j), 1 + c(i,j-x_i) } is actually choosing "what is best" exhaustively: either we do not use coin x_i at all (the term c(i-1,j)), or we use it at least once (the term 1 + c(i,j-x_i)).
Taking the minimum of the two covers all possibilities, and therefore guarantees that c(i,j) is minimal.
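To make the recurrence concrete, here is a minimal sketch in Python that fills the c(i,j) table bottom-up exactly as the formula describes (the function name min_coins, the 0-based list x, and the example denominations are my own illustration, not from the question):

```python
def min_coins(x, amount):
    """Minimum number of coins to pay `amount`, given denominations x[0..n-1]
    with x[0] == 1 (so every amount is reachable).
    c[i][j] = fewest coins making value j using only the first i denominations
    (note: x_i in the formula corresponds to x[i-1] here)."""
    n = len(x)
    INF = float("inf")
    c = [[INF] * (amount + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        c[i][0] = 0                      # amount 0 needs no coins
    for i in range(1, n + 1):
        for j in range(1, amount + 1):
            c[i][j] = c[i - 1][j]        # option 1: don't use coin x_i at all
            if x[i - 1] <= j:            # option 2: use one coin x_i, then solve j - x_i
                c[i][j] = min(c[i][j], 1 + c[i][j - x[i - 1]])
    return c[n][amount]

print(min_coins([1, 3, 4], 6))  # 2, since 6 = 3 + 3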
Upvotes: 1