User27854

Reputation: 884

Weird Behaviour while using HashMap

I am trying to fix some code that I can't post here, so I have put together a trimmed-down version below.

I am getting unstable output while using HashMap.

HashMap<Integer, String> test = new HashMap<>();
test.put(1, "one");
test.put(2, "one");
test.put(3, "one");
test.put(4, "four");
test.put(5, "one");
test.put(6, "one");
test.put(10, "one");
test.put(19, "one");
test.put(20, "Sixteen");
System.out.println(test);


HashMap<Integer, String> test3 = new HashMap<>(200);
test3.put(1, "one");
test3.put(2, "one");
test3.put(3, "one");
test3.put(4, "four");
test3.put(5, "one");
test3.put(6, "one");
test3.put(10, "one");
test3.put(19, "one");
test3.put(20, "Sixteen");
System.out.println(test3);

Outputs

test --> {1=one, 19=one, 2=one, 3=one, 4=four, 20=Sixteen, 5=one, 6=one, 10=one}
test3--> {1=one, 2=one, 3=one, 4=four, 5=one, 6=one, 10=one, 19=one, 20=Sixteen}---> My desired output. 

Why are the results different even though the input values are the same? How does the ordering, i.e. the way the elements are stored, differ between the two?

I am unable to use the second approach because the size is dynamic; it keeps changing based on the application. Can I use a TreeMap instead and get a consistent output for all values?

Upvotes: 1

Views: 193

Answers (7)

Matt

Reputation: 3760

With HashMaps there is a concept called load factor. The load factor is how full the hashmap can be before it will resize itself. If the load factor is 0.5 (or 50%) then when the hashmap reaches 50% capacity it will resize itself to make room for more elements.

The reason a hashmap generally has a load factor smaller than 100% is because of the way the elements are stored. When you store a key, its hash is computed by a hash function; by default this is based on the hashCode() method of the key, with some extra bit twiddling that is not important at the moment. The thing is, two keys can end up in the same bucket, since hashes are not guaranteed to be unique. When two values land in the same 'bucket' of the hashmap, that is called a collision, and the hashmap needs a strategy to deal with it.

Sometimes the strategy is 'linear probing'. This means, if an element collides, the hashmap will step through the next few buckets looking for an empty one.

Sometimes the strategy is 'chaining', where, if an element collides, the hashmap keeps a linked list in that bucket and appends each colliding element to the list (this is the strategy Java's HashMap uses).

All this means is that when elements collide in a hashmap, it becomes slower to insert and slower to retrieve elements. So to reduce the chance of collisions, the hashmap resizes itself according to the load factor.

On top of all this, as others have mentioned, there is no guarantee of ordering in a basic HashMap. You need to use an implementation like LinkedHashMap that does.
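
As a small, hypothetical illustration of the resizing effect on iteration order (the exact orders printed depend on the JDK's internal hashing, so the comments describe what can happen rather than a guaranteed result):

import java.util.HashMap;
import java.util.Map;

public class ResizeOrderDemo {
    public static void main(String[] args) {
        // Default capacity (16) and default load factor (0.75).
        Map<Integer, String> map = new HashMap<>();
        map.put(19, "one");
        map.put(20, "Sixteen");
        map.put(4, "four");
        System.out.println("before growth: " + map);

        // Adding many more entries pushes the map past its load-factor
        // threshold, so it rehashes everything into a larger table. The
        // relative order of the original keys may change after that.
        for (int i = 100; i < 200; i++) {
            map.put(i, "filler");
        }
        System.out.println("after growth:  " + map);
    }
}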

Upvotes: 2

kasson

Reputation: 45

HashMap with the default initial capacity (16)

{1=one, 2=one, 3=one, 4=four, 5=one, 6=one, 10=one}

Upvotes: 0

Yassin Hajaj

Reputation: 21975

Use a TreeMap if you want the keys to be sorted naturally

The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used.

Map<Integer, String> test = new TreeMap<>();
test.put(1, "one");
test.put(2, "one");
test.put(3, "one");
test.put(4, "four");
test.put(5, "one");
test.put(6, "one");
test.put(10, "one");
test.put(19, "one");
test.put(20, "Sixteen");
System.out.println(test); // Output: {1=one, 2=one, 3=one, 4=four, 5=one, 6=one, 10=one, 19=one, 20=Sixteen}

Upvotes: 0

Thanigai Arasu

Reputation: 413

HashMap doesn't give any guarantees about ordering. If you want to retrieve the elements of the map in the order you inserted them, you can use LinkedHashMap (a short sketch follows below).
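
A minimal sketch of that suggestion, reusing a few of the question's keys (the output in the comment is simply the insertion order, which LinkedHashMap guarantees):

import java.util.LinkedHashMap;
import java.util.Map;

public class InsertionOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap keeps a doubly linked list through its entries,
        // so iteration follows insertion order regardless of capacity.
        Map<Integer, String> test = new LinkedHashMap<>();
        test.put(20, "Sixteen");
        test.put(1, "one");
        test.put(19, "one");
        test.put(4, "four");
        System.out.println(test); // {20=Sixteen, 1=one, 19=one, 4=four}
    }
}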

Upvotes: 1

SpringLearner

Reputation: 13844

Why are the outputs different when the size differs?

This is because, when you call put on a HashMap, indexFor(hash, table.length) is called internally. table.length differs between your two maps: the default is 16, but in your second version you requested a capacity of 200. So the bucket index computed for the same key is different (the sketch below illustrates this).

Please read more in this article
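
To make that concrete, here is a rough sketch of the index computation. The indexFor method below is a simplified stand-in for the internal call mentioned above: the real HashMap also scrambles the hash before masking and rounds a requested capacity of 200 up to the next power of two (256), so treat the printed buckets as illustrative rather than exact.

public class IndexSketch {

    // Simplified bucket-index computation: table lengths are powers of two,
    // so masking with (length - 1) is the same as hash % length.
    static int indexFor(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    public static void main(String[] args) {
        int[] keys = {1, 2, 3, 4, 5, 6, 10, 19, 20};
        for (int key : keys) {
            // For small Integer keys the hash is essentially the key itself.
            System.out.println("key " + key
                    + " -> bucket " + indexFor(key, 16) + " at table length 16, "
                    + "bucket " + indexFor(key, 256) + " at table length 256");
        }
        // With 16 buckets, keys 19 and 20 wrap around into low-numbered
        // buckets and appear among the smaller keys during iteration.
        // With 256 buckets, every key here gets its own bucket, so the
        // iteration order happens to look sorted.
    }
}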

Can I use a TreeMap instead and get a consistent output for all values?

TreeMap guarantees that the keys will be ordered, so you will get {1=one, 2=one, 3=one, 4=four, 5=one, 6=one, 10=one, 19=one, 20=Sixteen}.

You may use LinkedHashMap if you want to retrieve the entries in the order they were inserted.

Upvotes: 5

Ghazanfar

Reputation: 1469

From the Java docs entry for the HashMap class,

This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time.

Upvotes: 0

Shivam

Reputation: 649

That's because

new HashMap<>() constructs an empty HashMap with the default initial capacity (16).

So entries are printed by bucket index 0 through 15, and keys from 16 to 31 wrap around and are treated as indexes 0 through 15 again, which is why the larger keys show up among the smaller ones.

Upvotes: 1
