Reputation: 553
So, I want to halve the largest element of a list k times.
I implemented it with heapq, after multiplying everything by -1, so that the min-heap behaves like a max-heap.
def minStoneSum(self, piles: List[int], k: int) -> int:
    h = [-el for el in piles]
    heapq.heapify(h)
    for i in range(k):
        # version 1
        newmax = min(h) >> 1
        heapq.heapreplace(h, newmax)
        # version 2
        # newmax = heapq.heappop(h) >> 1
        # heapq.heappush(h, newmax)
    ...
At first, I was using min() to get the top item (the largest original value, since the heap is negated) together with heapreplace, because according to the documentation, they were faster:
heapq.heapreplace(heap, item)
... This one step operation is more efficient than a heappop() followed by heappush() and can be more appropriate when using a fixed-size heap.
heapq.nsmallest(n, iterable, key=None)
... Also, when n==1, it is more efficient to use the built-in min() and max() functions
However, when I run it with a large initial list and a large k, version 1 takes very long (over 30 s), while the version with heappop() and heappush() runs very fast.
Is heapreplace really slower than heappop and heappush? Or am I doing something else wrong?
Upvotes: 0
Views: 306
Reputation: 553
It seems that the documentation for heapq.nsmallest, and its comment on using min(), does not refer to heaps but to iterables in general.
For a heap, the smallest element is simply h[0].
So the reason the first version is much slower is that min() treats the heap like an ordinary list and scans all n elements, costing O(n) per call, whereas reading h[0] is O(1).
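A quick sketch illustrates this heap invariant: after heapify, the root of the list is the minimum, with no scan needed.

```python
import heapq

h = [9, 4, 7, 1, 8]
heapq.heapify(h)

# After heapify, the root h[0] is the smallest element, available in O(1);
# min(h) scans the whole list in O(n) to find the same value.
assert h[0] == min(h) == 1
```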
After replacing min(h) with h[0], it turns out that heapreplace is indeed a bit faster than heappop followed by heappush.
Upvotes: 1