Reputation: 1473
I'm not entirely sure what amortized complexity means. Take a balanced binary search tree data structure (e.g. a red-black tree). The cost of a single search is naturally log(N), where N is the number of nodes. But what is the amortized complexity of a sequence of m searches, let's say in ascending order? Is it just log(N)/m?
Upvotes: 1
Views: 3060
Reputation: 3346
Pass in the items to be found all at once.
You can think of it in terms of divide-and-conquer.
Look at the value x in the root node. Binary search for x in your sorted array of m items. Split the array into the items less than x and the items greater than x. (Ignore things equal to x, since you already found it.) Then recurse: search the left subtree with the smaller items and the right subtree with the larger ones.
One worst case: your array of items is just the list of things in the leaf nodes. (n is roughly 2m.) You'd have to visit every node. Your search would cost lg(n) + 2*lg(n/2) + 4*lg(n/4) + .... That's linear. Think of it as doing smaller and smaller binary searches until you hit every element in the array once or twice.
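The recursion above can be sketched as follows. This is a minimal illustration, not code from the answer: `Node` and `batchSearch` are hypothetical names, and the queries are assumed to be sorted.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal BST node, for illustration only.
struct Node {
    int key;
    Node* left = nullptr;
    Node* right = nullptr;
};

// Search a sorted batch of queries[lo, hi) against the subtree at `node`.
// Binary search the root key into the query range, mark it found if
// present, then recurse into each subtree with the matching subrange.
void batchSearch(const Node* node, const std::vector<int>& queries,
                 int lo, int hi, std::vector<bool>& found) {
    if (node == nullptr || lo >= hi) return;
    int mid = std::lower_bound(queries.begin() + lo, queries.begin() + hi,
                               node->key) - queries.begin();
    if (mid < hi && queries[mid] == node->key) {
        found[mid] = true;  // equal to x: already found, so skip it
        batchSearch(node->left, queries, lo, mid, found);
        batchSearch(node->right, queries, mid + 1, hi, found);
    } else {
        batchSearch(node->left, queries, lo, mid, found);
        batchSearch(node->right, queries, mid, hi, found);
    }
}
```

Each level of the tree does binary searches over disjoint, shrinking subranges of the query array, which is where the lg(n) + 2*lg(n/2) + 4*lg(n/4) + ... sum comes from.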
I think there's also a way to do it by keeping track of where you are in the tree after a search. C++'s std::map and std::set return iterators which can move left and right within the tree, and they might have methods which can take advantage of an existing iterator into the tree.
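A sketch of that iterator idea, assuming the queries arrive in ascending order (`searchAscending` is an illustrative name, not a standard function):

```cpp
#include <set>
#include <vector>

// For ascending queries, advance one iterator forward instead of
// searching from the root each time. The iterator only ever moves
// right, so the total walk across all m queries is O(n + m).
std::vector<bool> searchAscending(const std::set<int>& s,
                                  const std::vector<int>& queries) {
    std::vector<bool> found;
    auto it = s.begin();
    for (int q : queries) {
        while (it != s.end() && *it < q) ++it;  // move right in the tree
        found.push_back(it != s.end() && *it == q);
    }
    return found;
}
```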
Upvotes: 0
Reputation: 18368
In the case of a balanced search tree the amortized complexity is equal to the asymptotic one. Each search operation takes O(log n) time, both worst-case and amortized. Therefore for m searches the total complexity will be O(m log n).
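For comparison, m independent searches (each descending from the root, with no batching or iterator reuse) look like this; `countFound` is just an illustrative name:

```cpp
#include <set>
#include <vector>

// m independent searches, each a fresh O(log n) descent from the
// root, for O(m log n) total.
int countFound(const std::set<int>& s, const std::vector<int>& queries) {
    int found = 0;
    for (int q : queries)
        found += s.count(q);  // each count() searches from the root
    return found;
}
```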
Upvotes: 0
Reputation: 7610
Well, you can consider asymptotic analysis as a strict method to set an upper bound for the running time of algorithms, whereas amortized analysis is a somewhat more liberal method.
For example, consider an algorithm A with two statements S1 and S2. The cost of executing S1 is 10 and the cost of S2 is 100. Both statements are placed inside a loop as follows.
n = 0;
while (n < 100)
{
    if (n % 10 != 0)
    {
        S1;
    }
    else
    {
        S2;
    }
    n++;
}
Here S1 executes nine times as often as S2: S1 runs on 90 iterations and S2 on only 10. But asymptotic analysis only considers that the costliest statement, S2, takes 100 time units and sits inside a loop executing 100 times, so its upper limit for the execution time is of the order of 100 * 100 = 10000. Amortized analysis, on the other hand, accounts for how often each statement actually executes: 90 * 10 + 10 * 100 = 1900, an average of 19 per iteration. Thus amortized analysis gives a much tighter estimate of the upper limit for executing an algorithm.
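Tallying the loop's actual cost confirms the gap between the two bounds (S1 costs 10 units, S2 costs 100; `actualCost` is an illustrative name):

```cpp
// Sum the real cost of the loop above: S1 (cost 10) runs when
// n % 10 != 0, i.e. on 90 iterations; S2 (cost 100) on the other 10.
int actualCost() {
    int total = 0;
    for (int n = 0; n < 100; ++n)
        total += (n % 10 != 0) ? 10 : 100;
    return total;  // 90*10 + 10*100 = 1900, vs. the 10000 worst-case bound
}
```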
Upvotes: 1
Reputation: 165
I think it is m*log(N), because you have to do m search operations (each from the root node down to the target node), while the complexity of a single operation is log(N).
EDIT: @user1377000 you are right, I mistook amortized complexity for asymptotic complexity. But I don't think it is log(N)/m... because it is not guaranteed that you can finish all m search operations in O(log N) time.
What is amortized analysis of algorithms? I think this might help.
Upvotes: 0