Samrat

Reputation: 1389

Hazelcast persisting and loading data on all nodes

I have a 2-node distributed cache setup which needs persistence configured on both members.

I have MapStore and MapLoader implemented, and the same code is deployed on both nodes.

The MapStore and MapLoader work absolutely fine in a single-member setup, but after another member joins, they continue to work only on the first member: all inserts and updates made by the second member are persisted to disk via the first member.

My requirement is that each member should persist to disk independently, so that the distributed cache is backed up on all members and not just the first one.

Is there a setting I can change to achieve this?

Here is my Hazelcast Spring configuration.

@Bean
public HazelcastInstance hazelcastInstance(H2MapStorage h2mapStore) throws IOException {
    MapStoreConfig mapStoreConfig = new MapStoreConfig();
    mapStoreConfig.setImplementation(h2mapStore);
    mapStoreConfig.setWriteDelaySeconds(0);

    // Load the YAML config from disk if present, otherwise fall back to defaults
    YamlConfigBuilder configBuilder;
    if (new File(hazelcastConfiglocation).exists()) {
        configBuilder = new YamlConfigBuilder(hazelcastConfiglocation);
    } else {
        configBuilder = new YamlConfigBuilder();
    }
    Config config = configBuilder.build();
    config.setProperty("hazelcast.jmx", "true");

    MapConfig mapConfig = config.getMapConfig("requests");
    mapConfig.setMapStoreConfig(mapStoreConfig);

    return Hazelcast.newHazelcastInstance(config);
}

Here is my Hazelcast YAML config. This is placed at /opt/hazlecast.yml, which is picked up by my Spring config above.

hazelcast:
    group:
      name: tsystems
    management-center:
      enabled: false
      url: http://localhost:8080/hazelcast-mancenter
    network:
      port:
        auto-increment: true
        port-count: 100
        port: 5701
      outbound-ports:
        - 0
      join:
        multicast:
          enabled: false
          multicast-group: 224.2.2.3
          multicast-port: 54327
        tcp-ip:
          enabled: true
          member-list:
            - 192.168.1.13
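As an aside on the `tcp-ip` join above: each member usually lists the addresses of all known members, so that whichever node starts first can still form the cluster. A sketch of that shape (the second address, 192.168.1.14, is a placeholder for illustration, not taken from the original post):

```yaml
network:
  join:
    multicast:
      enabled: false
    tcp-ip:
      enabled: true
      member-list:
        - 192.168.1.13
        - 192.168.1.14   # placeholder for the second member's address
```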

Entire code is available here : [https://bitbucket.org/samrat_roy/hazelcasttest/src/master/][1]

Upvotes: 0

Views: 1524

Answers (2)

Samrat

Reputation: 1389

OK, after struggling a lot, I noticed a teeny tiny but critical detail.

Datastore needs to be a centralized system that is accessible from all Hazelcast members. Persistence to a local file system is not supported.

This is absolutely in line with what I was observing: [https://docs.hazelcast.org/docs/latest/manual/html-single/#loading-and-storing-persistent-data]

However, not to be discouraged, I found out that I could use event listeners to do the same thing I needed to do.

@Component
public class HazelCastEntryListner
        implements EntryAddedListener<String, Object>, EntryUpdatedListener<String, Object>,
        EntryRemovedListener<String, Object>, EntryEvictedListener<String, Object>,
        EntryLoadedListener<String, Object>, MapEvictedListener, MapClearedListener {

    @Autowired
    @Lazy
    private RequestDao requestDao;
I created this class and hooked it into the config like so (in EntryListenerConfig, the second argument controls whether the listener receives only locally-owned events, and the third controls whether the event objects carry the entry values):

MapConfig mapConfig = config.getMapConfig("requests");
mapConfig.addEntryListenerConfig(new EntryListenerConfig(entryListner, false, true));
return Hazelcast.newHazelcastInstance(config);

This worked flawlessly; I am able to replicate data to the embedded databases on both nodes.

My use case was to cover HA failover edge cases: during HA failover, the slave node needed to know the working memory of the active node.

I am not using Hazelcast as a cache; rather, I am using it as a data-syncing mechanism.

Upvotes: 0

Neil Stevenson

Reputation: 3150

This might just be bad luck and low data volumes, rather than an actual error.

On each node, try running the localKeySet() method and printing the results.

This will tell you which keys are on which node in the cluster. The node that owns key "X" will invoke the map store for that key, even if the update was initiated by another node.

If you have low data volumes, it may not be a 50/50 data split. At an extreme, 2 data records in a 2-node cluster could both land on the same node. If you have 1,000 data records, it's pretty unlikely that they'll all be on the same node.

So the other thing to try is to add more data and update all of it, to see whether both nodes participate.
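The ownership argument above can be sketched in plain Java. This is an illustration only: real Hazelcast hashes the serialized key with Murmur into one of 271 partitions (the default partition count) and the cluster assigns partition owners; the simple hash and round-robin owner assignment below are stand-ins for that.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionSketch {
    static final int PARTITION_COUNT = 271; // Hazelcast's default partition count

    // Simplified stand-in for Hazelcast's key-to-partition hashing
    static int partitionOf(String key) {
        return Math.floorMod(key.hashCode(), PARTITION_COUNT);
    }

    // Illustrative owner assignment: partitions dealt round-robin to members
    static int ownerOf(String key, int memberCount) {
        return partitionOf(key) % memberCount;
    }

    public static void main(String[] args) {
        // Spread 1,000 keys across a 2-member "cluster" and count per owner
        Map<Integer, List<String>> keysByOwner = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            String key = "key-" + i;
            keysByOwner.computeIfAbsent(ownerOf(key, 2), o -> new ArrayList<>())
                       .add(key);
        }
        // With enough keys, both members end up owning a share; the member
        // that owns a key is the one whose MapStore is invoked for it
        keysByOwner.forEach((owner, keys) ->
                System.out.println("member " + owner + " owns " + keys.size() + " keys"));
    }
}
```

The same idea explains the single-member observation in the question: the MapStore runs on whichever member owns the key, regardless of which member performed the put.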

Upvotes: 0
