Reputation: 2102
I need the internal implementation of Hashtable in Java. Could you also add comments to the code explaining how it works?
I only have basic knowledge of the load factor and capacity used in Hashtable, where the load factor is 0.75. Could you explain with a brief example?
I've been stuck on this for a long time.
1> Why is the load factor for Hashtable 0.75 and not some other, varying value? It seems strange; please clarify.
2> Why don't we have a load factor for HashMap?
I don't want the existing code; I'm looking for someone who has written better code than what is actually written.
Upvotes: 0
Views: 4235
Reputation: 533850
The code for most of the JDK comes with the JDK itself; it is in the src.jar file. If you use an IDE, it will automatically link to this jar, so when you navigate into an internal class it will show you the source.
It is worth noting that Hashtable was replaced by the Java 1.2 collections in 1998. I don't recommend you use it unless you have to for legacy libraries.
1> Why is the load factor for Hashtable 0.75 and not some other, varying value? It seems strange.
Not strange at all: the default has to be something, and this value was chosen for the best all-round performance.
I don't want the existing code; I'm looking for someone who has written better code than what is actually written.
What do you mean by better? Have you looked at the collections added with the concurrency library in Java 5.0 (2005)? Something better in some cases is the Trove4j collections, as these support primitives. You could also look at Guava, which has many extensions.
Upvotes: 0
Reputation: 719486
Why don't we have a load factor for HashMap?
We do - see HashMap(int initialCapacity, float loadFactor)
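For example, a minimal sketch of passing both parameters explicitly (the values here are purely illustrative, not recommendations):

```java
import java.util.HashMap;
import java.util.Map;

public class LoadFactorDemo {
    public static void main(String[] args) {
        // 64 initial buckets; resize once the map is 90% full (illustrative numbers).
        Map<String, Integer> map = new HashMap<>(64, 0.90f);
        map.put("answer", 42);
        System.out.println(map.get("answer")); // prints 42
    }
}
```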
Why is the load factor for Hashtable 0.75 and not some other, varying value?
The load factor for a Hashtable is a tunable parameter. The javadoc for HashMap says this about the 0.75 value:
"As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put)."
I understand that the number was determined by empirical testing rather than by theoretical analysis. (A thorough theoretical analysis would be difficult. However, a load factor of 0.75 combined with a good hash function is probably sufficient to keep hash chains down to 1 or 2 in the vast majority of cases, and that results in fast average lookup times.)
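To see why the chains stay short, here is a rough experiment of my own (not JDK code; all names and numbers are illustrative assumptions) that fills a table of buckets to a 0.75 load factor with well-distributed hash codes and measures the chain lengths:

```java
import java.util.Random;

// Rough sketch: distribute keys into buckets at a 0.75 load factor and
// see how long the per-bucket chains get when hash codes are well spread.
public class ChainLengthSketch {
    public static void main(String[] args) {
        int buckets = 1 << 16;                 // 65,536 buckets
        int entries = (int) (buckets * 0.75);  // fill to the default load factor
        int[] chainLength = new int[buckets];

        Random random = new Random(42);
        for (int i = 0; i < entries; i++) {
            int hash = random.nextInt();        // stand-in for a good hashCode()
            int index = hash & (buckets - 1);   // power-of-two masking, like HashMap does
            chainLength[index]++;
        }

        int longest = 0;
        long occupied = 0;
        for (int len : chainLength) {
            if (len > 0) occupied++;
            if (len > longest) longest = len;
        }
        // Most occupied buckets end up holding 1-2 entries; the longest chain stays small.
        System.out.println("average chain in occupied buckets = "
                + (double) entries / occupied + ", longest chain = " + longest);
    }
}
```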
Upvotes: 5
Reputation: 3268
You should never need the internal implementation; that is why Joshua Bloch hid it away from us. I prefer using HashMap from the standard Java Collections API to Hashtable. First, it is faster (not synchronized). Second, Hashtable was in Java before the real collections were added and was only adapted to them afterwards (as far as I know).
This is an excerpt from the JavaDoc of the java.util.HashMap<K, V> class:
An instance of HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the capacity is roughly doubled by calling the rehash method.
I think that is clear. A map with a load factor of 0.75 and initial capacity 100 will accept the first 75 entries without any reallocation. If you then add one more entry, a new block with capacity 200 is allocated and the existing items are copied into it. The number of entries needed before an even bigger block is allocated is now 150.
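As a small sketch of that arithmetic (following the answer's round numbers; the real HashMap rounds capacities up to powers of two, so the exact figures differ):

```java
// Illustration of the resize arithmetic described above, using the answer's numbers.
// Real HashMap capacities are rounded up to powers of two, so actual figures differ.
public class GrowthSketch {
    public static void main(String[] args) {
        int capacity = 100;
        final float loadFactor = 0.75f;

        for (int step = 0; step < 3; step++) {
            int threshold = (int) (capacity * loadFactor);
            System.out.println("capacity=" + capacity
                    + ", resize after " + threshold + " entries");
            capacity *= 2; // capacity roughly doubles on each resize
        }
        // prints thresholds 75 (capacity 100), 150 (capacity 200), 300 (capacity 400)
    }
}
```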
More detail can be seen in the HashMap class's source code.
Upvotes: 1
Reputation: 53516
Well, don't mind the pun, but it's implementation-specific. As long as it conforms to the interface and has the same expected big-O runtime, it can do things however it likes.
That said, here is a link to the GCC version of Java's Hashtable: http://www.google.com/codesearch/p?hl=en#t4cUIrRdV2U/gnu/mingw/gcc-java-3.4.2-20040916-1-src.tar.gz|HvPdZYyCY6Q/gcc-3.4.2-20040916-1/libjava/java/util/Hashtable.java&q=HashTable.java
I don't think I'll be outlining any comments though :)
Upvotes: 1
Reputation: 11463
Your JVM's class libraries come with source code as part of the JDK install; you just have to select it. Java has two different implementations: HashMap and Hashtable. Hashtable has extra synchronization, so if you want the core implementation, look at the source code for HashMap. Beyond that you're on your own.
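If you just want a feel for what such an implementation looks like internally, here is a heavily simplified, commented sketch of a chained hash table (my own illustration of the usual textbook design; the class and method names are made up, and this is not the JDK's actual code):

```java
// A deliberately simplified, unsynchronized chained hash table, written only to
// illustrate the idea. It is not the JDK implementation and omits many details
// (no null-key handling, plain doubling instead of the JDK's growth policy, etc.).
public class SimpleHashTable<K, V> {

    private static final class Entry<K, V> {
        final K key;
        V value;
        Entry<K, V> next;                 // next entry in the same bucket (the "chain")
        Entry(K key, V value, Entry<K, V> next) {
            this.key = key;
            this.value = value;
            this.next = next;
        }
    }

    private Entry<K, V>[] buckets;
    private int size;
    private final float loadFactor = 0.75f;

    @SuppressWarnings("unchecked")
    public SimpleHashTable() {
        buckets = (Entry<K, V>[]) new Entry[16];
    }

    private int indexFor(K key) {
        // Map the hash code onto a bucket index (strip the sign bit first).
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public V get(K key) {
        // Walk the chain in the bucket the key hashes to.
        for (Entry<K, V> e = buckets[indexFor(key)]; e != null; e = e.next) {
            if (e.key.equals(key)) {
                return e.value;
            }
        }
        return null;
    }

    public V put(K key, V value) {
        int index = indexFor(key);
        for (Entry<K, V> e = buckets[index]; e != null; e = e.next) {
            if (e.key.equals(key)) {       // key already present: replace the value
                V old = e.value;
                e.value = value;
                return old;
            }
        }
        buckets[index] = new Entry<>(key, value, buckets[index]); // prepend to the chain
        if (++size > buckets.length * loadFactor) {
            resize();                      // too full: grow and rehash everything
        }
        return null;
    }

    @SuppressWarnings("unchecked")
    private void resize() {
        Entry<K, V>[] old = buckets;
        buckets = (Entry<K, V>[]) new Entry[old.length * 2];
        size = 0;
        for (Entry<K, V> head : old) {
            for (Entry<K, V> e = head; e != null; e = e.next) {
                put(e.key, e.value);       // re-insert into the larger table
            }
        }
    }
}
```

The real Hashtable additionally synchronizes its public methods, which is the main difference pointed out above.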
Upvotes: 1
Reputation: 8560
The source code is freely available and you can download it, or see it here:
http://www.docjar.com/html/api/java/util/Hashtable.java.html
Upvotes: 1
Reputation: 1503329
The source code is available for download. You may even have it already - if you're using Eclipse, hit Ctrl-Shift-T, type "Hashtable" and see whether you can see the source. Downloading the code yourself is definitely preferable to it just being posted here.
If you have any specific questions about the implementation, ask those (either by editing this question or asking another). A general question of "how does Hashtable work" is unlikely to get you very far... although reading up on the general principles couldn't hurt.
Upvotes: 1