Reputation: 16837
In Skiena's book on algorithm design, given that the hash table can have at most m buckets and the total number of elements is n, the following worst-case time complexities are observed:
Search: O(n)
Successor: O(n + m)
Why are the two different? Doesn't finding the successor also, in a way, involve searching for the next element?
Upvotes: 1
Views: 1778
Reputation: 11051
Hashing achieves constant-time search at the cost of destroying order. When I search for an element, I hash it (O(1)) and scan the chosen bucket linearly (O(n) in the worst case, since all n elements might have landed in that one bucket while every other bucket is empty).
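As a rough illustration, here is a minimal chained-hash-table sketch in Python (not Skiena's code; the bucket count m and the use of Python's built-in hash are assumptions for illustration). Search only ever touches the one bucket the key hashes to:

```python
# Minimal sketch of a chained hash table; m and hash() are illustrative assumptions.
class ChainedHashTable:
    def __init__(self, m=8):
        self.m = m
        self.buckets = [[] for _ in range(m)]      # m buckets, each a list (chain)

    def insert(self, key):
        self.buckets[hash(key) % self.m].append(key)

    def search(self, key):
        bucket = self.buckets[hash(key) % self.m]  # O(1): hashing picks one bucket
        for item in bucket:                        # O(n) worst case: every key may
            if item == key:                        # have landed in this single bucket
                return item
        return None
```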
When I want the next element after a given one, I have no guarantee that it will be in the same bucket. In fact I have no knowledge about where it is at all. Since I do not know what the successor is yet, I can't hash it to find its bucket. Instead I am forced to look in each bucket (O(m)).
As I scan each bucket I must also examine its items to keep track of the smallest key greater than the given one, which adds up to linear work in the number of items (O(n)). This results in a total complexity of O(n + m).
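For comparison, a hedged successor sketch (assuming the same chained layout as above, with the table stored as a list of buckets, each a list of keys) has to visit every bucket and every stored item:

```python
def successor(buckets, key):
    """Return the smallest stored key strictly greater than key, or None."""
    best = None
    for bucket in buckets:      # every one of the m buckets must be visited: O(m)
        for item in bucket:     # every one of the n stored items is examined: O(n)
            if item > key and (best is None or item < best):
                best = item
    return best                 # total work: O(n + m)
```

Note that even if the table is nearly empty, the m buckets themselves must still be inspected, which is why the bucket count appears in the bound alongside n.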
Upvotes: 6