Reputation: 17
This is not a code question; I am just trying to understand the concept of hash maps and time complexity.
I think I know how HashMaps/Sets etc. work, and I think I understand why HashMap.get runs in constant time. But when we have a very big HashMap, the indices where the values get stored should overlap. When two hash codes resolve to the same index, the values get stored in a linked list at that index, right?
Couldn't it happen that all elements end up stored at one index as a linked list? Shouldn't HashMap.get then run in O(n) in the worst case?
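To make the scenario I mean concrete, here is a minimal sketch (the class name BadKey is just made up for illustration) where every key returns the same hash code, so all entries land in a single bucket:

    import java.util.HashMap;
    import java.util.Map;

    public class CollisionDemo {
        // Hypothetical key whose hashCode is deliberately constant,
        // so every instance maps to the same bucket.
        static final class BadKey {
            final int id;
            BadKey(int id) { this.id = id; }

            @Override
            public int hashCode() { return 42; } // every key collides

            @Override
            public boolean equals(Object o) {
                return o instanceof BadKey && ((BadKey) o).id == id;
            }
        }

        public static void main(String[] args) {
            Map<BadKey, String> map = new HashMap<>();
            for (int i = 0; i < 10_000; i++) {
                map.put(new BadKey(i), "value-" + i);
            }
            // Still returns the right value, but the lookup has to
            // search through the single overcrowded bucket.
            System.out.println(map.get(new BadKey(9_999)));
        }
    }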
Upvotes: 0
Views: 54
Reputation: 18923
Yes, the worst-case time complexity for HashMap.get is O(n), where n is the number of elements stored in one bucket. The worst case occurs when all n elements of the HashMap end up in the same bucket.
However, this can be improved by making each bucket searchable: since Java 8, a bucket that grows beyond a threshold is converted from a linked list into a balanced binary search tree (a red-black tree), so the worst-case lookup becomes O(log n) instead of O(n).
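As a rough sketch of why the tree fallback helps (assuming a Java 8+ HashMap; the class name ComparableBadKey is made up for illustration): when the keys are Comparable, an overcrowded bucket can be kept as a red-black tree ordered by the keys, so even a pathological hash function leaves lookups at O(log n) rather than O(n).

    import java.util.HashMap;
    import java.util.Map;

    public class TreeBucketDemo {
        // Hypothetical key with a constant hashCode but a usable ordering.
        // Because it implements Comparable, a Java 8+ HashMap can store the
        // overcrowded bucket as a red-black tree instead of a plain list.
        static final class ComparableBadKey implements Comparable<ComparableBadKey> {
            final int id;
            ComparableBadKey(int id) { this.id = id; }

            @Override
            public int hashCode() { return 1; } // force every key into one bucket

            @Override
            public boolean equals(Object o) {
                return o instanceof ComparableBadKey && ((ComparableBadKey) o).id == id;
            }

            @Override
            public int compareTo(ComparableBadKey other) {
                return Integer.compare(id, other.id);
            }
        }

        public static void main(String[] args) {
            Map<ComparableBadKey, String> map = new HashMap<>();
            for (int i = 0; i < 100_000; i++) {
                map.put(new ComparableBadKey(i), "value-" + i);
            }
            // The bucket is treeified, so this lookup costs O(log n)
            // rather than a walk over all 100,000 entries.
            System.out.println(map.get(new ComparableBadKey(99_999)));
        }
    }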
Upvotes: 1