Reputation: 83
I'm practising some Amazon interview coding exercises and I'm stuck with this exercise.
In a nutshell, given a list l and an integer d, I have to split the list into d parts such that the sum of the max of each part is the smallest. For example, with l = [10,2,20,5,15,10,1] and d = 3 it should return 31, as the best option is [10,2,20,5,15],[10],[1] -> 20+10+1 = 31.
Link to the problem: Problem
I thought about splitting the list in two and calling recursively with the first part and d-1, taking the max of the second part (the second part starts with one element and gains one more element in each iteration).
I did this and I'm getting the "maximum recursion depth exceeded while calling a Python object" error. I know the solution to this problem can be found easily, but I want to understand why this approach isn't working.
def getmin(l, d):
    # base case: if the list has d elements, each element will be a sublist
    if len(l) == d:
        return sum(l)
    else:
        minn = float("inf")
        for i in range(1, len(l) - d + 1):
            val = getmin(l[0:-i], d - 1) + max(l[-i:])
            if minn > val:
                minn = val
        return minn
Upvotes: 3
Views: 1180
Reputation: 56993
Your list l is not a 2D list like the one in your written example, so your algorithm seems to be having trouble keeping the input list separate from the result list that stores the cuts made so far. It's much easier to keep a separate list and accumulate the result onto it, treating l as read-only. Not to mention, copying it per recursive call is painful from a performance standpoint.
Before going further, Python recommends never using l as a variable name. Prefer L, nums or lst instead--it's really hard to differentiate 1, i and l. I'll use L for the rest of the post.
A brute force solution similar to yours would be to run a counter from 0 to len(L) and use recursion to try every possibility at every node. At each i, we can either append the current number L[i] to the last bucket or start a new bucket with L[i] as the first element. A small optimization is to only keep the max per bucket instead of all the elements.
Now that you have a brute-force solution, you can slap on a cache to memoize repeated work.
from functools import cache
import random

def min_sum_of_max_elems_from_d_cuts(L, d):
    assert len(L) >= d

    @cache
    def find_best_cut(i, cuts):
        if len(cuts) == d and i == len(L):
            return sum(cuts)
        elif len(cuts) > d or i >= len(L):
            return float("inf")

        best = float("inf")

        if cuts:  # try extending the current cut by L[i]
            new_cuts = cuts[:-1] + (max(cuts[-1], L[i]),)
            best = min(best, find_best_cut(i + 1, new_cuts))

        # try making a new cut at L[i]
        return min(best, find_best_cut(i + 1, cuts + (L[i],)))

    return find_best_cut(i=0, cuts=tuple())

if __name__ == "__main__":
    tests = [
        dict(L=[10,2,20,5,15,10,1], d=3, result=31),
        dict(L=[10,2,20,5,15,10,1], d=5, result=43),
        dict(L=[5,4,2,4,3,4,5,4], d=4, result=16),
        dict(L=[22,12,1,14,17], d=2, result=39),
    ]

    for test in tests:
        assert min_sum_of_max_elems_from_d_cuts(test["L"], test["d"]) == test["result"]

    L = random.Random(42).choices(range(200), k=25)
    assert min_sum_of_max_elems_from_d_cuts(L, d=15) == 1056
I'll leave optimizations, bottom-up DP and coming up with a recurrence to the reader. It looks like the same problem as LeetCode 1335, so the target constraints are 1 <= len(L) <= 300, 0 <= L[i] <= 1000 and 1 <= d <= 10; it'll need way more firepower than this to avoid TLE.
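If you do go bottom-up, one common shape for the recurrence (my sketch, with illustrative names) is dp[k][i] = min over j < i of dp[k-1][j] + max(L[j:i]), i.e. the best way to split the first i elements into k cuts, growing the last cut backwards while tracking its max:
def min_sum_dp(L, d):
    n = len(L)
    INF = float("inf")
    # dp[k][i] = best sum of maxes when the first i elements form exactly k cuts
    dp = [[INF] * (n + 1) for _ in range(d + 1)]
    dp[0][0] = 0
    for k in range(1, d + 1):
        for i in range(1, n + 1):
            running_max = float("-inf")
            # the last cut is L[j:i]; extend it backwards one element at a time
            for j in range(i - 1, -1, -1):
                running_max = max(running_max, L[j])
                if dp[k - 1][j] != INF:
                    dp[k][i] = min(dp[k][i], dp[k - 1][j] + running_max)
    return dp[d][n]
That's O(d * n^2), which comfortably fits the constraints above.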
Upvotes: 3
Reputation: 32244
For the simple case, where d=1, the solution is just the max of the list
def getmin(l, d):
    if d == 1:
        return max(l)
When d is greater than 1, take the smaller of the list's head and tail as its own one-element part, and add it to the result of a recursive call on the rest of the list with d-1
def getmin(l, d):
    if d == 1:
        return max(l)
    if l[0] < l[-1]:
        return l[0] + getmin(l[1:], d-1)
    else:
        return getmin(l[:-1], d-1) + l[-1]
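Checking it against the example from the question:
print(getmin([10, 2, 20, 5, 15, 10, 1], 3))  # 31: [10,2,20,5,15],[10],[1]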
Upvotes: 3