Reputation: 742
I needed to compute range sums on an array, so I came across the Segment Tree and the Fenwick Tree, and I noticed that both of these trees query and update in the same asymptotic running time. After a bit more research, these two data structures seem to do everything at the same speed. Both use linear memory (the Segment Tree uses about twice as much).
Aside from constant factors in running-time/memory and implementations, is there any reason why I would choose one over the other?
I am looking for an objective answer, like some operation that is faster with one than the other, or maybe some restriction one has that the other does not.
I saw two other StackOverflow questions about this, but the answers just described both data structures rather than explaining when one might be better than the other.
Upvotes: 33
Views: 23340
Reputation: 31
The truth is that a Fenwick tree can be seen, in a certain sense, as one "half" of a segment tree. Specifically, assume n is a power of two; then a segment tree's segments always divide evenly into powers of two, down to the last level.
Note that if a segment starts at zero, we only ever use the segments that are left children in the segment tree. Since computing the sum on an interval [l, r) can be reduced to computing prefix sums (sums on intervals starting at zero), we can limit ourselves to keeping just these left-child segments. Additionally, each kept segment can be uniquely associated with the position of its right endpoint, and the specific layout provided by n being a power of two allows a direct computation of a segment's beginning given its end: that is where the previous segment in the split starts, so we get an explicit way to enumerate the constituent segments. Thus we get a Fenwick tree.
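To make that enumeration concrete, here is a small sketch (function and variable names are mine, not from the answer): in a 1-indexed Fenwick layout, the kept segment ending at index i has length equal to the lowest set bit of i, so stripping that bit jumps to the end of the previous segment in the split.

```python
def prefix_segments(i):
    """List the half-open segments [start, end) whose sums form the prefix [0, i)."""
    segments = []
    while i > 0:
        low = i & -i               # lowest set bit = length of this segment
        segments.append((i - low, i))
        i -= low                   # jump to the end of the previous segment
    return segments

# The prefix [0, 11) decomposes into three of the kept segments:
print(prefix_segments(11))  # [(10, 11), (8, 10), (0, 8)]
```

Summing the stored values of these O(log n) segments gives the prefix sum, which is exactly the Fenwick tree query.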
New questions arise from this view; for those, I refer you to this blog post.
Upvotes: 0
Reputation: 1853
Some additional information:
- A segment tree takes 2n space (a Fenwick tree needs only n).
- You can append to the array in amortized O(log(n)) by doubling the size of the segment tree (just like amortized O(1) arrays). You need to study what the segment tree looks like in memory and then put the new space accordingly (you can't just put all the extra space at one end). Keep in mind that since the segment tree already takes 2n space, every time you double the array you now have 4n space usage.
- In a Fenwick tree you cannot search for a given partial sum in log(n) (to my knowledge); that is, if we are storing partial sums for example, and I want to know what index i evaluates to a partial sum s, that will take log(n)^2. This process is trivial in log(n) for a segment tree, at the 2n storage cost of course.
Edit: you can compute this query in log(n) on a Fenwick tree as well! Here is my implementation:
def find(self, s):
    # Binary lifting: walk down from the highest power of two,
    # greedily extending the prefix while its sum stays <= s.
    bit = self.bit                # 1-indexed Fenwick array
    b = 1
    while b < len(bit):
        b <<= 1
    b >>= 1                       # largest power of two below len(bit)
    index = 0
    cur = 0
    while b > 0:
        # Guard against stepping past the end of the tree.
        if index + b < len(bit) and cur + bit[index + b] <= s:
            index += b
            cur += bit[index]
        b >>= 1
    return (index, cur)
This will return the index and sum that were closest to the target partial sum (will always be <= target). I don't believe this works with negative numbers in the BIT, however.
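For context, here is a standalone version of the same binary-lifting search (function and variable names are assumed, not from the answer), together with a tiny BIT to run it against:

```python
def bit_update(bit, i, delta):
    """Add delta to element i of a 1-indexed Fenwick tree."""
    while i < len(bit):
        bit[i] += delta
        i += i & -i

def bit_find(bit, s):
    """Largest index whose prefix sum is <= s, plus that prefix sum."""
    b = 1
    while b < len(bit):
        b <<= 1
    b >>= 1                       # largest power of two below len(bit)
    index, cur = 0, 0
    while b > 0:
        if index + b < len(bit) and cur + bit[index + b] <= s:
            index += b
            cur += bit[index]
        b >>= 1
    return (index, cur)

bit = [0] * 6                     # 1-indexed BIT over 5 elements
for i, v in enumerate([1, 2, 3, 4, 5], start=1):
    bit_update(bit, i, v)

# Prefix sums are 1, 3, 6, 10, 15; the largest one <= 7 is 6, at index 3.
print(bit_find(bit, 7))  # (3, 6)
```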
Good segment tree writeup: https://cp-algorithms.com/data_structures/segment_tree.html
Upvotes: 3
Reputation: 83
Comment on Harsh Hitesh Shah's answer: the last point against using a Fenwick tree does not hold in general. Proof by counterexample: say we have a Fenwick tree for prefix sums, where the function query(x) returns the prefix sum from the first index 1 up to and including index x. If we would like to compute the sum for some interval [L, R], with 1 < L <= R <= N, we can just take query(R) - query(L-1).
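A short self-contained sketch of that identity (helper names are mine, not from the comment):

```python
def bit_add(bit, i, delta):
    """Add delta to element i of a 1-indexed Fenwick tree."""
    while i < len(bit):
        bit[i] += delta
        i += i & -i

def prefix_query(bit, x):
    """Sum of elements 1..x (query(x) in the comment's notation)."""
    s = 0
    while x > 0:
        s += bit[x]
        x -= x & -x
    return s

bit = [0] * 6                     # 1-indexed BIT over 5 elements
for i, v in enumerate([5, 1, 4, 2, 3], start=1):
    bit_add(bit, i, v)

# Sum over [2, 4] = 1 + 4 + 2 = 7, via two prefix queries:
print(prefix_query(bit, 4) - prefix_query(bit, 1))  # 7
```

This works because addition is invertible; for non-invertible operations like min, the subtraction trick is unavailable.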
Upvotes: 5
Reputation: 182
I found something on cp-algorithms which might help you.
Segment tree -
Fenwick tree -
answers each query in O(log N)
preprocessing done in O(N log N)
Pros: the shortest code, good time complexity
Cons: a Fenwick tree can only be used for queries with L = 1, so it is not applicable to many problems.
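The L = 1 restriction bites for non-invertible operations like min, where two prefix queries cannot be subtracted to get an arbitrary range. A minimal sketch (my own, not from cp-algorithms) of an iterative segment tree that does answer arbitrary range-minimum queries:

```python
import math

def build(a):
    """Bottom-up segment tree for min: leaves at t[n:], internals above."""
    n = len(a)
    t = [0] * (2 * n)
    t[n:] = a
    for i in range(n - 1, 0, -1):
        t[i] = min(t[2 * i], t[2 * i + 1])
    return t

def range_min(t, l, r):
    """Minimum of a[l:r] (0-based, half-open)."""
    n = len(t) // 2
    res = math.inf
    l += n
    r += n
    while l < r:
        if l & 1:                 # l is a right child: take it, move right
            res = min(res, t[l])
            l += 1
        if r & 1:                 # r is a right boundary: step left, take it
            r -= 1
            res = min(res, t[r])
        l >>= 1
        r >>= 1
    return res

t = build([5, 2, 7, 1, 9, 3])
print(range_min(t, 2, 5))  # min of [7, 1, 9] -> 1
```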
Upvotes: 7
Reputation: 7063
I read this on Quora. Hope you find it useful.
Upvotes: 64