Reputation: 1627
I have coded a greedy recursive algorithm to find the minimum number of coins that make a given change. Now I need to estimate its time complexity. As the algorithm has nested "if"s depending on the same i (n * n), with the inner block halving the recursive call (log2(n)), I believe the correct answer could be O(n*log(n)), resulting from the following calculation:
n * log2(n) * O(1)
Please, give me your thoughts on whether my analysis is correct and feel free to also suggest improvements on my greedy recursive algorithm.
This is my recursive algorithm:
coins = [1, 5, 10, 21, 25]
coinsArraySize = len(coins)
change = 63
pickedCoins = []
def findMin(change, i, pickedCoins):
    if i >= 0:
        if change >= coins[i]:
            pickedCoins.append(coins[i])
            findMin(change - coins[i], i, pickedCoins)
        else:
            findMin(change, i - 1, pickedCoins)

findMin(change, coinsArraySize - 1, pickedCoins)
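For reference, here is the posted code wrapped up so it runs on its own. On the question's example it fills pickedCoins with the greedy selection, which (as the discussion below notes) is not always minimal: 21 + 21 + 21 would use only three coins.

```python
coins = [1, 5, 10, 21, 25]

def findMin(change, i, pickedCoins):
    # Greedy: keep taking coins[i] while it fits, else move to the next
    # smaller coin. Each call makes at most one recursive call.
    if i >= 0:
        if change >= coins[i]:
            pickedCoins.append(coins[i])
            findMin(change - coins[i], i, pickedCoins)
        else:
            findMin(change, i - 1, pickedCoins)

picked = []
findMin(63, len(coins) - 1, picked)
print(picked)  # [25, 25, 10, 1, 1, 1] -- six coins
```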
Upvotes: 2
Views: 721
Reputation: 70725
What is n? The runtime depends on both the amount and the specific coins. For example, suppose you have a million coins, 1 through 1,000,000, and try to make change for 1. The code will go a million recursive levels deep before it finally finds the largest coin it can use (1). Same thing in the end if you have only one coin (1) and try to make change for 1,000,000: then you find the coin at once, but go a million levels deep picking that coin a million times.
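Both pathological cases are easy to measure by counting steps. This sketch (my own instrumentation, not the asker's code) replaces the recursion with the equivalent loop and counts iterations, which track the recursion depth:

```python
def count_calls(change, coins):
    # Same control flow as the question's findMin, but as a loop with a
    # counter: each iteration either uses coins[i] or steps i down by one.
    calls = 0
    i = len(coins) - 1
    while i >= 0:
        calls += 1
        if change >= coins[i]:
            change -= coins[i]
        else:
            i -= 1
    return calls

# Many coins, tiny amount: must step down through all 10,000 coins.
print(count_calls(1, list(range(1, 10_001))))  # → 10001
# One coin, large amount: picks the coin 10,000 times.
print(count_calls(10_000, [1]))                # → 10001
```

Either the amount or the coin index shrinks on every step, so the work is linear in amount + number of coins, not n * log(n) in any single n.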
Here's a non-recursive version that improves on both of those: use binary search to find the next usable coin, and once a suitable coin is found use it as often as possible.
def makechange(amount, coins):
    from bisect import bisect_right
    # assumes `coins` is sorted, and that coins[0] > 0
    right_bound = len(coins)
    result = []
    while amount > 0:
        # Find largest coin <= amount.
        i = bisect_right(coins, amount, 0, right_bound)
        if not i:
            raise ValueError("don't have a coin <=", amount)
        coin = coins[i-1]
        # How many of those can we use?
        n, amount = divmod(amount, coin)
        assert n >= 1
        result.extend([coin] * n)
        right_bound = i - 1
    return result
It still takes O(amount) time if asked to make change for a million with the only coin being 1, but only because it has to build a result list with a million copies of 1. If there are a million coins and you ask for change for 1, though, it's O(log2(len(coins))) time. The first could be slashed by changing the output format to a dict, mapping a coin to the number of times that coin is used. Then the first case would be cut to O(1) time.
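The dict-returning variant suggested above might look like this (a sketch of that change, with my name makechange_counts; it is not code from the answer):

```python
from bisect import bisect_right

def makechange_counts(amount, coins):
    # Like makechange, but returns {coin: count}, so a run of a million
    # 1s costs one dict entry instead of a million list entries.
    right_bound = len(coins)
    result = {}
    while amount > 0:
        i = bisect_right(coins, amount, 0, right_bound)
        if not i:
            raise ValueError("don't have a coin <=", amount)
        coin = coins[i - 1]
        n, amount = divmod(amount, coin)
        result[coin] = n
        right_bound = i - 1
    return result

print(makechange_counts(63, [1, 5, 10, 21, 25]))  # {25: 2, 10: 1, 1: 3}
print(makechange_counts(1_000_000, [1]))          # {1: 1000000}
```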
As is, the time it takes is proportional to the length of the result list, plus some (usually trivial) time for a number of binary searches equal to the number of distinct coins used. So "a bad case" is one where every coin needs to be used; e.g.,
>>> coins = [2**i for i in range(10)]
>>> makechange(sum(coins), coins)
[512, 256, 128, 64, 32, 16, 8, 4, 2, 1]
That's essentially O(n + n log n) where n is len(coins).
As @Stef noted in a comment, the greedy algorithm doesn't always find a minimal number of coins. That's substantially harder. The usual approach is via dynamic programming, with worst case O(amt * len(coins)) time. But that's also best case: it works "bottom up", finding the smallest number of coins to reach 1, then 2, then 3, then 4, ..., and finally amt.
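For comparison, the standard bottom-up DP is short (a generic textbook sketch under my own name mincoins_dp, not part of the answer):

```python
def mincoins_dp(amt, coins):
    # best[a] = fewest coins summing to a; inf means "unreachable so far".
    INF = float("inf")
    best = [0] + [INF] * amt
    for a in range(1, amt + 1):        # every amount 1..amt is computed,
        for c in coins:                # hence best case == worst case
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amt] if best[amt] != INF else None

print(mincoins_dp(63, [1, 5, 10, 21, 25]))  # 3  (21 + 21 + 21)
print(mincoins_dp(3, [2]))                  # None (no combination works)
```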
So I'm going to suggest a different approach, using breadth-first tree search, working down from the initial amount until reaching 0. Worst-case O() behavior is the same, but best-case time is far better. For the comment's
mincoins(10000, [1, 2000, 3000])
case it looks at fewer than 20 nodes before finding the optimal 4-coin solution. Because it's a breadth-first search, it knows there's no possible shorter path to the root, so it can stop right away then.
For a worst-case example, try
mincoins(1000001, range(2, 200, 2))
All the coins are even numbers, so it's impossible for any collection of them to sum to the odd target. The tree has to be expanded half a million levels deep before it realizes 0 is unreachable. But while the branching factor at high levels is O(len(coins)), the total number of nodes in the entire expanded tree is bounded above by amt + 1 (pigeonhole principle: the dict can't have more than amt + 1 keys, so any number of nodes beyond that are necessarily duplicate targets and so are discarded as soon as they're generated). So, in reality, the tree in this case grows wide very quickly, but then quickly becomes very narrow and very deep.
Also note that this approach makes it very easy to reconstruct the minimal collection of coins that sum to amt.
def mincoins(amt, coins):
    from collections import deque
    coins = sorted(set(coins))  # increasing, no duplicates
    # Map amount to coin that was subtracted to reach it.
    a2c = {amt: None}
    d = deque([amt])
    while d:
        x = d.popleft()
        for c in coins:
            y = x - c
            if y < 0:
                break     # all remaining coins too large
            if y in a2c:
                continue  # already found cheapest way to get y
            a2c[y] = c
            d.append(y)
            if not y:  # done!
                d.clear()
                break
    if 0 not in a2c:
        raise ValueError("not possible", amt, coins)
    picks = []
    a = 0
    while True:
        c = a2c[a]
        if c is None:
            break
        picks.append(c)
        a += c
    assert a == amt
    return sorted(picks)
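On the question's own data the difference shows up immediately: greedy spends six coins on 63, while the minimum is three (21 + 21 + 21). A condensed check, using my own compact versions of the two approaches (not the code above):

```python
from collections import deque

def greedy_count(amt, coins):
    # Largest-first greedy, as in the question's algorithm.
    used = 0
    for c in sorted(coins, reverse=True):
        n, amt = divmod(amt, c)
        used += n
    return used

def min_coin_count(amt, coins):
    # Breadth-first search over amounts: the first time we reach 0,
    # we have used the fewest possible coins.
    seen = {amt}
    d = deque([(amt, 0)])
    while d:
        x, used = d.popleft()
        if x == 0:
            return used
        for c in coins:
            y = x - c
            if y >= 0 and y not in seen:
                seen.add(y)
                d.append((y, used + 1))
    raise ValueError("not possible")

print(greedy_count(63, [1, 5, 10, 21, 25]))     # 6: 25+25+10+1+1+1
print(min_coin_count(63, [1, 5, 10, 21, 25]))   # 3: 21+21+21
```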
Upvotes: 1
Reputation: 4864
Each recursive call either decreases change by at least 1 or decreases i by 1, and there is no branching (that is, your recursion tree is actually a straight line, so no recursion is actually necessary). Your running time is O(n).
Upvotes: 1