seroga

Reputation: 31

MaxSizeConfig size with PER_NODE policy

Is MaxSizeConfig broken? I expect the map to contain a maximum of 2000 entries and only then start evicting old entries; however, when I insert 2000 entries, I get 340 evictions.

@Bean
public Config hazelCastConfig() {
    Config config = new Config();
    config.setProperty("hazelcast.logging.type", "slf4j");
    // config.setProperty("hazelcast.partition.count", "2");

    MapConfig mapConfig = new MapConfig()
            .setBackupCount(0)
            .setName("map")
            .setEvictionPolicy(EvictionPolicy.LRU)
            // cap this member at 2000 entries; LRU eviction should kick in once the cap is reached
            .setMaxSizeConfig(new MaxSizeConfig(2000, MaxSizeConfig.MaxSizePolicy.PER_NODE))
            .addEntryListenerConfig(new EntryListenerConfig(new ExampleEntryListener(), false, true));
    config.addMapConfig(mapConfig);

    return config;
}

I found this formula about eviction in the Hazelcast docs: https://docs.hazelcast.org/docs/3.12/manual/html-single/index.html#understanding-map-eviction

partition-maximum-size = max-size * member-count / partition-count

In my case, partition-count=271 and member-count=1 (a Spring Boot app with an embedded Hazelcast instance). So to guarantee that 2000 entries can be stored, max-size needs to be:

max-size = partition-maximum-size * partition-count / member-count

max-size = 2000 * 271 / 1 = 542000

542000 seems like far too big a number for only 2000 entries. When I set max-size=6000, Hazelcast seems to keep at least 2000 entries.
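
As a sanity check on my reading of the formula, here is a minimal back-of-the-envelope calculation (plain Java, nothing Hazelcast-specific; the class name is made up for illustration and the inputs are just the 2000 / 271 / 1 values above):

public class EvictionFormulaCheck {
    public static void main(String[] args) {
        int maxSize = 2000;        // MaxSizeConfig value
        int memberCount = 1;       // single embedded member on TEST
        int partitionCount = 271;  // Hazelcast default

        // formula from the docs, forward direction
        int partitionMaximumSize = maxSize * memberCount / partitionCount;
        System.out.println(partitionMaximumSize);   // 7 entries allowed per partition

        // inverted: the max-size needed so that a single partition could hold all 2000 entries
        int invertedMaxSize = 2000 * partitionCount / memberCount;
        System.out.println(invertedMaxSize);        // 542000
    }
}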

Question: What's wrong with my config/formula? How can I configure Hazelcast to hold strictly X entries PER_NODE? So that when the app is deployed (1 node for TEST, 2 nodes for PRELIVE) and Hazelcast forms a cluster of 1 (or 2) members, there are 2000 entries held inside the cluster and not more, without calculating anything with fancy formulas and just specifying 2000.

Upvotes: 3

Views: 1171

Answers (1)

ali

Reputation: 886

Here is what happens when you configure a max size with PER_NODE:

configuredEntrySize=2000, PER_NODE

totalEntrySize=configuredEntrySize*memberCount (2000 for test environment, 4000 for PRELIVE)

you have 271 partitions by default

perPartitionEntrySize=totalEntrySize/partitionCount (7 for test environment, 14 for PRELIVE)

When you put entries into the map, Hazelcast finds out which partition an entry should be stored in using the hash of the key (hash_of_the_key % partitionCount). If your data is not perfectly uniform, some partitions will end up holding more than 7 entries, hence the evictions.
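
To make the skew concrete, here is a rough, self-contained simulation sketch (the class name and the random stand-in for the key hash are invented for illustration; Hazelcast's real partition hashing works differently, but the 2000 / 271 / 7 numbers match the calculation above):

import java.util.Random;

public class PartitionSkewDemo {
    public static void main(String[] args) {
        int partitionCount = 271;
        int perPartitionMax = 2000 * 1 / 271;   // 7, as derived above
        int[] partitionSizes = new int[partitionCount];
        int evictions = 0;

        Random random = new Random(42);
        for (int i = 0; i < 2000; i++) {
            // stand-in for hash_of_the_key % partitionCount
            int partition = Math.floorMod(random.nextInt(), partitionCount);
            if (partitionSizes[partition] >= perPartitionMax) {
                evictions++;                    // partition already at its cap: an old entry gets evicted
            } else {
                partitionSizes[partition]++;
            }
        }
        System.out.println("evictions: " + evictions);   // a few hundred with these numbers
    }
}

Even a perfectly uniform spread overshoots a cap of 7 in many of the 271 partitions (the average is 2000/271 ≈ 7.38), which is why evictions start well before the map reaches 2000 entries.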

You can think of a Hazelcast IMap as an array of maps, where the size of the array is the partition count. Each member stores some part of this array: say MemberA stores map-0, map-3, map-6..., MemberB stores map-1, map-4, map-7..., and MemberC stores map-2, map-5, map-8... Hazelcast picks a map from this array using the hash of the key and stores your entry in that selected map.

When you call IMap.size(), an operation is sent to each member, the sizes of all these maps are collected, and the total is returned. Since it would be suboptimal to do this for each eviction check, Hazelcast calculates a per-partition max size from what you've configured and uses that number as the max size of each map.
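
Here is a toy sketch of that picture (illustrative only; the class and field names are invented and this is not Hazelcast's actual implementation): the eviction check runs against one small per-partition map, not against the global size.

import java.util.LinkedHashMap;
import java.util.Map;

// toy model of the "array of maps" picture above; names are illustrative only
public class ToyPartitionedMap<K, V> {
    private final int partitionCount;
    private final int perPartitionMaxSize;   // derived from the configured PER_NODE max size
    private final Map<K, V>[] partitions;

    @SuppressWarnings("unchecked")
    public ToyPartitionedMap(int configuredMaxSize, int memberCount, int partitionCount) {
        this.partitionCount = partitionCount;
        // never let the toy cap drop to zero
        this.perPartitionMaxSize = Math.max(1, configuredMaxSize * memberCount / partitionCount);
        this.partitions = new Map[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            // access-ordered LinkedHashMap as a stand-in for per-partition LRU bookkeeping
            partitions[i] = new LinkedHashMap<>(16, 0.75f, true);
        }
    }

    public void put(K key, V value) {
        Map<K, V> partition = partitions[Math.floorMod(key.hashCode(), partitionCount)];
        if (partition.size() >= perPartitionMaxSize && !partition.containsKey(key)) {
            // the cap is checked against THIS small map only, so its least recently
            // used entry is dropped no matter how empty the other partitions are
            K leastRecentlyUsed = partition.keySet().iterator().next();
            partition.remove(leastRecentlyUsed);
        }
        partition.put(key, value);
    }

    public int size() {
        // the "ask every member for its map sizes and sum them" step, collapsed into one JVM
        int total = 0;
        for (Map<K, V> p : partitions) {
            total += p.size();
        }
        return total;
    }
}

In this model the eviction decision is purely local to one small map, which is why the configured PER_NODE value is translated into a per-partition cap instead of being checked against the cluster-wide size on every put.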

Upvotes: 1
