Reputation: 5736
The Hash table wiki entry lists its Big O as:
Search: O(n)
Insert: O(n)
Delete: O(n)
while a java HashMap is listed with Big O as:
get: O(1)
put: O(1)
remove: O(1)
Can someone please explain why the Big O differs between the concept and the implementation? I mean, if there is an implementation with a worst case of O(1), then why is there a possibility of O(n) in the concept?
Upvotes: 2
Views: 975
Reputation: 4462
I'm not sure where you found the reported complexity of a Java HashMap, but it is listing the average case, which matches what Wikipedia states on the page you linked.
Upvotes: 0
Reputation: 1109
Because there's a difference between the worst case and the average case, and even Wikipedia lists the O(1) complexity for the average case. Java's HashMap is exactly the same as Wikipedia's hash table, so it is just a documentation issue.
Basically, hash tables compute a numerical value from the object you want to store. That numerical value is roughly used as an index to access the location where the object is stored (leading to O(1) complexity). However, sometimes certain objects may lead to the same numerical value. In that case, those objects are stored in a list at the corresponding location in the hash table, hence the O(n) complexity for the worst case. A minimal sketch of this idea is shown below.
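To make this concrete, here is a minimal sketch of a hash table with separate chaining. The class name `ChainedHashTable` and its layout are made up for illustration; this is not how `java.util.HashMap` is actually implemented, but it shows where the O(1) average and O(n) worst case come from:

```java
import java.util.LinkedList;

// Illustrative chained hash table (not java.util.HashMap's real implementation).
class ChainedHashTable<K, V> {
    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    private final LinkedList<Entry<K, V>>[] buckets;

    @SuppressWarnings("unchecked")
    ChainedHashTable(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) {
            buckets[i] = new LinkedList<>();
        }
    }

    // The hash code is reduced to a bucket index: a single O(1) computation.
    private int indexFor(K key) {
        return Math.abs(key.hashCode() % buckets.length);
    }

    void put(K key, V value) {
        LinkedList<Entry<K, V>> bucket = buckets[indexFor(key)];
        for (Entry<K, V> e : bucket) {
            if (e.key.equals(key)) { e.value = value; return; }
        }
        bucket.add(new Entry<>(key, value));
    }

    // If many keys collide into one bucket, this scan degrades to O(n).
    V get(K key) {
        for (Entry<K, V> e : buckets[indexFor(key)]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```

With a good hash function the buckets stay short, so `get` and `put` are O(1) on average; if every key hashes to the same bucket, the inner loop walks the whole list and the operations become O(n).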
Upvotes: 1
Reputation: 2896
The worst case is O(n) because it is possible that every entry you put into the HashMap produces the same hash value (let's say 10). This produces a conflict for every entry, because every entry ends up at HashMap[10]. Depending on which collision resolution strategy is implemented, the HashMap either keeps a list at index 10 or moves the entry to the next free index.
Nevertheless, when an entry is accessed again, the hash value is used to find the initial index in the HashMap. As it is 10 in every case, the HashMap has to resolve the collision again by checking the colliding entries one by one.
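As a hedged illustration, here is a small demo (the class names `WorstCaseDemo` and `BadKey` are hypothetical) where every key returns the same hash code, so all entries collide in one bucket and lookups have to compare against the colliding entries one by one:

```java
import java.util.HashMap;
import java.util.Map;

public class WorstCaseDemo {
    // Key whose hashCode is always 10, forcing every entry into the same bucket.
    static class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }

        @Override
        public int hashCode() { return 10; }

        @Override
        public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        Map<BadKey, String> map = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            map.put(new BadKey(i), "value" + i); // all 1000 entries collide
        }
        // This lookup must work through the colliding entries rather than
        // jumping straight to the value, which is the O(n) worst case.
        System.out.println(map.get(new BadKey(999)));
    }
}
```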
Upvotes: 1